Virus Update

First, the bad news:

The United States surpassed its record for covid-19 hospitalizations on Tuesday, with no end in sight to skyrocketing case loads, falling staff levels and the struggles of a medical system trying to provide care amid an unprecedented surge of the coronavirus.

Tuesday’s total of 145,982 people in U.S. hospitals with covid-19 . . . passed the record of 142,273 set on Jan. 14, 2021, during the previous peak of the pandemic in this country.

But the highly transmissible omicron variant threatens to obliterate that benchmark. If models of omicron’s spread prove accurate — even the researchers who produce them admit forecasts are difficult during a pandemic — current numbers may seem small in just a few weeks. Disease modelers are predicting total hospitalizations in the 275,000 to 300,000 range when the peak is reached, probably later this month.

As of Monday, Colorado, Oregon, Louisiana, Maryland and Virginia had declared public health emergencies or authorized crisis standards of care, which allow hospitals and ambulances to restrict treatment when they cannot meet demand [The Washington Post].

In the U.S., there have been 840,000 confirmed deaths, and about 1,700 more people are dying every day (almost all of them unvaccinated). However:

Scientists are seeing signals that COVID-19’s alarming omicron wave may have peaked in Britain and is about to do the same in the U.S., at which point cases may start dropping off dramatically.

The reason: The variant has proved so wildly contagious that it may already be running out of people to infect, just a month and a half after it was first detected in South Africa.

At the same time, experts warn that much is still uncertain about how the next phase of the pandemic might unfold. . . . And weeks or months of misery still lie ahead for patients and overwhelmed hospitals even if the drop-off comes to pass.

“There are still a lot of people who will get infected as we descend the slope on the backside,” said Lauren Ancel Meyers, director of the University of Texas COVID-19 Modeling Consortium, which predicts that reported cases will peak within the week.

The University of Washington’s own highly influential model projects that the number of daily reported cases in the U.S. will crest at 1.2 million by Jan. 19 and will then fall sharply “simply because everybody who could be infected will be infected,” according to [University of Washington professor Ali] Mokdad [ABC News].

What’s happened in South Africa, with omicron as the latest spike: 

Finally, a note from France [The Washington Post]:

In an interview with France’s Le Parisien newspaper, [French President Emmanuel] Macron shared his thoughts about France’s unvaccinated population. He did not mince his words. “The unvaccinated, I really want to piss them off,” Macron said. “And so, we’re going to continue doing so until the end. That’s the strategy.”

The English translation hardly does the comment justice. In French, the verb he used is “emmerder,” which means, quite literally, to cover in excrement. The ire is difficult to translate, but in French it is crystal clear.

Actually, it’s quite easy to translate using the verb form of a different four-letter word — followed by “on them”.

Neuroscientific Mind Reading Is Becoming Surprisingly Easy

Unrelated to the Republican Party’s attack on democracy (it truly is a crisis), the New Yorker has a somewhat unbelievable article about the progress being made using machines and artificial intelligence to read people’s minds. Some excerpts:

During the past few decades, the state of neuroscientific mind reading has advanced substantially. Cognitive psychologists armed with a functional magnetic resonance imaging (fMRI) machine can tell whether a person is having depressive thoughts; they can see which concepts a student has mastered by comparing his brain patterns with those of his teacher. By analyzing brain scans, a computer system can edit together crude reconstructions of movie clips you’ve watched. One research group has used similar technology to accurately describe the dreams of sleeping subjects. In another lab, scientists have scanned the brains of people who are reading the J. D. Salinger short story “Pretty Mouth and Green My Eyes,” in which it is unclear until the end whether or not a character is having an affair. From brain scans alone, the researchers can tell which interpretation readers are leaning toward, and watch as they change their minds.

fMRI machines [haven’t] advanced that much; instead, artificial intelligence has transformed how scientists read neural data.

[Ken Norman of the Princeton Neuroscience Institute explains that] researchers . . . developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.

Norman invited me to watch an experiment in thought decoding. [In] a locked basement lab at P.N.I., a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest.

“We want to get the brain patterns that are associated with different subclasses of scenes,” Norman said.

As the woman watched the slide show, the scanner tracked patterns of activation among her neurons. These patterns would be analyzed in terms of “voxels”—areas of activation that are roughly a cubic millimetre in size. In some ways, the fMRI data was extremely coarse: each voxel represented the oxygen consumption of about a million neurons, and could be updated only every few seconds, significantly more slowly than neurons fire. But, Norman said, “it turned out that that information was in the data we were collecting—we just weren’t being as smart as we possibly could about how we’d churn through that data.” The breakthrough came when researchers figured out how to track patterns playing out across tens of thousands of voxels at a time, as though each were a key on a piano, and thoughts were chords.
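
(A quick aside from me, not the New Yorker: the “keys on a piano” idea is easy to mock up. The sketch below, in Python with invented voxel counts and categories, treats each scan as one long vector of voxel activations and decodes the scene by asking which category’s average pattern the scan sits closest to. It is a toy illustration of the approach, not the Princeton lab’s actual pipeline.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000                      # each scan = one activation value per voxel
categories = ["beach", "cave", "forest"]

# Pretend each scene category evokes a characteristic (noisy) voxel pattern.
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def simulate_scan(category, noise=3.0):
    """One fMRI 'volume': the category's pattern plus measurement noise."""
    return prototypes[category] + rng.normal(scale=noise, size=n_voxels)

# "Training" scans: average a few scans per category to get a centroid.
centroids = {c: np.mean([simulate_scan(c) for _ in range(20)], axis=0)
             for c in categories}

def decode(scan):
    """Nearest-centroid decoding: which category's pattern is this scan closest to?"""
    return min(centroids, key=lambda c: np.linalg.norm(scan - centroids[c]))

print(decode(simulate_scan("cave")))   # usually prints "cave"
```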

The origins of this approach, I learned, dated back nearly seventy years, to the work of a psychologist named Charles Osgood. When he was a kid, Osgood received a copy of Roget’s Thesaurus as a gift. Poring over the book, Osgood recalled, he formed a “vivid image of words as clusters of starlike points in an immense space.” In his postgraduate days, when his colleagues were debating how cognition could be shaped by culture, Osgood thought back on this image. He wondered if, using the idea of “semantic space,” it might be possible to map the differences among various styles of thinking.

Osgood became known not for the results of his surveys but for the method he invented to analyze them. He began by arranging his data in an imaginary space with fifty dimensions—one for fair-unfair, a second for hot-cold, a third for fragrant-foul, and so on. Any given concept, like tornado, had a rating on each dimension—and, therefore, was situated in what was known as high-dimensional space. Many concepts had similar locations on multiple axes: kind-cruel and honest-dishonest, for instance. Osgood combined these dimensions. Then he looked for new similarities, and combined dimensions again, in a process called “factor analysis.”

When you reduce a sauce, you meld and deepen the essential flavors. Osgood did something similar with factor analysis. Eventually, he was able to map all the concepts onto a space with just three dimensions. The first dimension was “evaluative”—a blend of scales like good-bad, beautiful-ugly, and kind-cruel. The second had to do with “potency”: it consolidated scales like large-small and strong-weak. The third measured how “active” or “passive” a concept was. Osgood could use these three key factors to locate any concept in an abstract space. Ideas with similar coördinates, he argued, were neighbors in meaning.
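
(Another aside: Osgood’s trick of collapsing fifty rating scales into a handful of underlying factors is exactly what modern factor-analysis libraries do. A minimal sketch, with invented ratings and scikit-learn standing in for Osgood’s hand calculations:)

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Rows = concepts, columns = bipolar rating scales (good-bad, kind-cruel, large-small, ...).
# Invent 200 concepts rated on 50 scales, secretly driven by 3 latent factors,
# roughly Osgood's "evaluative", "potency" and "activity" dimensions.
latent = rng.normal(size=(200, 3))
loadings = rng.normal(size=(3, 50))
ratings = latent @ loadings + rng.normal(scale=0.5, size=(200, 50))

fa = FactorAnalysis(n_components=3)
coords = fa.fit_transform(ratings)   # each concept -> a point in 3-D "semantic space"

print(coords.shape)                  # (200, 3): every concept located by three coordinates
```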

[Researchers at Bell Labs] used computers to analyze the words in about two thousand technical reports. The reports themselves—on topics ranging from graph theory to user-interface design—suggested the dimensions of the space; when multiple reports used similar groups of words, their dimensions could be combined. In the end, the Bell Labs researchers made a space that was more complex than Osgood’s. It had a few hundred dimensions. Many of these dimensions described abstract or “latent” qualities that the words had in common—connections that wouldn’t be apparent to most English speakers. The researchers called their technique “latent semantic analysis,” or L.S.A.
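
(Latent semantic analysis has a standard modern recipe: build a term-document matrix, then factor it with a truncated singular value decomposition. A minimal sketch, with four made-up “reports” standing in for the Bell Labs corpus:)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

reports = [
    "graph theory and shortest path algorithms",
    "graph coloring and chromatic numbers",
    "menu layout and user interface design",
    "usability testing of the user interface",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(reports)      # documents x terms matrix

lsa = TruncatedSVD(n_components=2)    # the "latent" dimensions
doc_vectors = lsa.fit_transform(X)    # each report -> a point in semantic space

print(doc_vectors.round(2))           # each report becomes 2 coordinates; reports that
                                      # share vocabulary get similar coordinates
```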

In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail.
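
(The king/man/woman arithmetic is easy to reproduce yourself. The gensim library ships small pretrained vector sets; the sketch below assumes you are willing to let it download one on first run, and it uses GloVe vectors rather than Google’s original word2vec model, though the arithmetic is the same.)

```python
import gensim.downloader as api

# A small set of pretrained GloVe word vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-50")

# king - man + woman ~= ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# -> [('queen', 0.85...)] or similar
```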

Other companies, including Apple and Amazon, built similar systems. Eventually, researchers realized that the “vectorization” made popular by L.S.A. and word2vec could be used to map all sorts of things. Today’s facial-recognition systems have dimensions that represent the length of the nose and the curl of the lips, and faces are described using a string of coördinates in “face space.” Chess A.I.s use a similar trick to “vectorize” positions on the board. The technique has become so central to the field of artificial intelligence that, in 2017, a new, hundred-and-thirty-five-million-dollar A.I. research center in Toronto was named the Vector Institute. Matthew Botvinick, a professor at Princeton whose lab was across the hall from Norman’s, and who is now the head of neuroscience at DeepMind, Alphabet’s A.I. subsidiary, told me that distilling relevant similarities and differences into vectors was “the secret sauce underlying all of these A.I. advances”. . . .

In 2001, a scientist named Jim Haxby brought machine learning to brain imaging: he realized that voxels of neural activity could serve as dimensions in a kind of thought space. Haxby went on to work at Princeton, where he collaborated with Norman. The two scientists, together with other researchers, concluded that just a few hundred dimensions were sufficient to capture the shades of similarity and difference in most fMRI data. At the Princeton lab, the young woman watched the slide show in the scanner. With each new image—beach, cave, forest—her neurons fired in a new pattern. These patterns would be recorded as voxels, then processed by software and transformed into vectors. The images had been chosen because their vectors would end up far apart from one another: they were good landmarks for making a map. Watching the images, my mind was taking a trip through thought space, too.
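
(One more aside: “their vectors would end up far apart from one another” is a selection criterion you can compute directly, by ranking candidate stimuli by pairwise distance. A toy sketch with invented 300-dimensional vectors standing in for the decoded category patterns:)

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Invented stand-ins for decoded category vectors ("a few hundred dimensions").
candidates = {name: rng.normal(size=300)
              for name in ["beach", "cave", "forest", "city", "desert"]}

def cosine_distance(u, v):
    return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Rank category pairs from most to least separated; well-separated pairs make
# good "landmarks" for mapping thought space.
pairs = sorted(combinations(candidates, 2),
               key=lambda p: cosine_distance(candidates[p[0]], candidates[p[1]]),
               reverse=True)
print(pairs[:3])
```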

The larger goal of thought decoding is to understand how our brains mirror the world. To this end, researchers have sought to watch as the same experiences affect many people’s minds simultaneously. Norman told me that his Princeton colleague Uri Hasson has found movies especially useful in this regard. They “pull people’s brains through thought space in synch,” Norman said. “What makes Alfred Hitchcock the master of suspense is that all the people who are watching the movie are having their brains yanked in unison. It’s like mind control in the literal sense”. . . .

Norman described another study, by Asieh Zadbood, in which subjects were asked to narrate “Sherlock” scenes—which they had watched earlier—aloud. The audio was played to a second group, who’d never seen the show. It turned out that no matter whether someone watched a scene, described it, or heard about it, the same voxel patterns recurred. The scenes existed independently of the show, as concepts in people’s minds. . . .

Recently, I asked [neuroscientist Adrian Owen] what the new thought-decoding technology means for locked-in patients [who are alive but unable to move or even blink]. [A “bare-bones protocol” is used: for example, the patient is asked to think about tennis, and when the patient does so, it means “yes”, while thinking about walking around the house equals “no”. Then the patient can answer yes-no questions like “Is the pain in the lower half of your body? On the left side?”] Owen said, “I have no doubt that, at some point down the line, we will be able to read minds. People will be able to articulate, ‘My name is Adrian, and I’m British,’ and we’ll be able to decode that from their brain. I don’t think it’s going to happen in probably less than twenty years.”

In some ways, the story of thought decoding is reminiscent of the history of our understanding of the gene. For about a hundred years after the publication of Charles Darwin’s “On the Origin of Species,” in 1859, the gene was an abstraction, understood only as something through which traits passed from parent to child. As late as the nineteen-fifties, biologists were still asking what, exactly, a gene was made of. When James Watson and Francis Crick finally found the double helix, in 1953, it became clear how genes took physical form. Fifty years later, we could sequence the human genome; today, we can edit it.

Thoughts have been an abstraction for far longer. But now we know what they really are: patterns of neural activation that correspond to points in meaning space. The mind—the only truly private place—has become inspectable from the outside.

Unquote.

Ludwig Wittgenstein, among others, argued strenuously that the mind isn’t private at all. We’re all very good at understanding what other people are thinking and talking about. But he’d be amazed at what neuroscientists are able to do.

Will the Future Be Electric?

Should anybody be optimistic about the climate crisis? Noted environmentalist Bill McKibben reviews a new book, Electrify: An Optimist’s Playbook for Our Clean Energy Future by Saul Griffith, an engineer and inventor. The title of the review is “The Future Is Electric”. Here’s McKibben’s summary of Griffith’s playbook: 

Electrification is to climate change as the vaccine is to Covid-19—perhaps not a total solution, but an essential one. [Griffith] begins by pointing out that in the United States, combustion of fossil fuels accounts for 75 percent of our contribution to climate change, with agriculture accounting for much of the rest. . . . The US uses about 101 quadrillion BTUs (or “quads”) of energy a year. . . .

Our homes use about a fifth of all energy [or 20 quads]; half of that is for heating and cooling, and another quarter for heating water. “The pride of the suburbs, the single-family detached home, dominates energy use, with large apartments in a distant second place,” Griffith writes.

The industrial sector uses more energy—about 30 quads—but a surprisingly large percentage of that is spent “finding, mining, and refining fossil fuels.” A much smaller amount is spent running the data centers that store most of the Internet’s data . . .

Transportation uses even larger amounts of energy [40 quads?]—and for all the focus on air travel, passenger cars and trucks use ten times as much.

The commercial sector—everything from office buildings and schools to the “cold chain” that keeps our perishables from perishing—accounts for the rest of our energy use [10 quads?].
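
(The bracketed numbers are my rough readings of Griffith’s breakdown, not his exact figures, but they can at least be sanity-checked against the roughly 101-quad total:)

```python
# Rough US energy use by sector, in quads (quadrillion BTU), per the breakdown above.
# The transportation and commercial figures are my inferences, not Griffith's exact numbers.
sectors = {
    "residential": 20,      # "about a fifth" of ~101 quads
    "industrial": 30,
    "transportation": 40,
    "commercial": 10,
}
print(sum(sectors.values()))   # ~100, consistent with "about 101 quads" a year
```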

If we are to cut emissions in half this decade—an imperative—we’ve got to cut fossil fuel use in big chunks, not small ones. For Griffith, this means leaving behind “1970s thinking” about efficiency: don’t waste time telling people to turn down the thermostat a degree or two, or buy somewhat smaller cars, or drive less. Such measures, he says, can slow the growth rate of our energy consumption, but “you can’t ‘efficiency’ your way to zero”:

Let’s stop imagining that we can buy enough sustainably harvested fish, use enough public transportation, and purchase enough stainless steel water bottles to improve the climate situation. Let’s release ourselves from purchasing paralysis and constant guilt at every small decision we make so that we can make the big decisions well.

“A lot of Americans,” he insists, “won’t agree to anything if they believe it will make them uncomfortable or take away their stuff,” so instead you have to let them keep that stuff, just powered by technology that does less damage.

By “big decisions” he means mandates for electric vehicles (EVs), which could save 15 percent of our energy use. Or electrifying the heat used in houses and buildings: the electric heat pump is the EV of the basement and would cut total energy use 5 to 7 percent if implemented nationwide. LED lighting gets us another 1 or 2 percent. Because electricity is so much more efficient than combustion, totally electrifying our country would cut primary energy use about in half. (And simply not having to find, mine, and refine fossil fuels would reduce energy use by 11 percent.)

Of course, replacing all those gas-powered pickups and oil-fired furnaces with electric vehicles and appliances would mean dramatically increasing the amount of electricity we need to produce overall—in fact, we’d have to more than triple it. We’ve already dammed most of the rivers that can produce hydropower (about 7 percent of our current electric supply); if we’re going to replace coal and natural gas and simultaneously ramp up our supply of electricity, we have three main options: solar, wind, and nuclear power, and according to Griffith “solar and wind will do the heavy lifting.”

That’s primarily because renewable energy sources have become so inexpensive over the past decade. They are now the cheapest ways to generate power, an advantage that will grow as we install more panels and turbines. (By contrast, the price of fossil fuel can only grow: we’ve already dug up all the coal and oil that’s cheap to get at.) According to Griffith’s math, nuclear power is more expensive than renewables, and new plants “take decades to plan and build,” decades we don’t have.

It’s a mistake to shut down existing nuclear plants that are running safely—or as safely as current technology allows—and it’s possible that new designs now on the drawing board will produce smaller, cheaper reactors that eat waste instead of producing it. But for the most part Griffith sides with Mark Jacobson, the environmental engineering professor at Stanford whose team showed a decade ago that the future lay with cheap renewables, an estimation that, though highly controversial at the time, has been borne out by the steady fall in the price of solar and wind power, as well as by the increasing efficiency of batteries to store it.

Griffith devotes more attention to batteries than almost any other topic in this book, and that’s wise: people’s fear of the “intermittency” of renewables (the fact that the sun goes down and the wind can drop) remains a major stumbling block to conceiving of a clean-energy future. Contrary to these fears, each month brings new advances in battery technology. The Wall Street Journal recently reported on the super-cheap batteries being developed that use iron instead of pricey lithium and can store energy for days at a time, making them workhorses for utilities, which will need them to replace backup plants that run on natural gas.

Griffith is good at analogies: we’d need the equivalent of 60 billion batteries a year, each roughly the size of the AAs in your flashlight. That sounds like a lot, but actually it’s “similar to the 90 billion bullets manufactured globally today. We need batteries, not bullets.”

This renewable economy, as Griffith demonstrates, will save money, both for the nation as a whole and for households—and that’s before any calculation of how much runaway global warming would cost. Already the lifetime costs of an electric vehicle are lower than those of gas-powered cars: Consumer Reports estimates they’ll save the average driver $6,000 to $10,000 over the life of a vehicle. Though they cost a little more up front, at least for now, the difference could be overcome with a reasonably small subsidy. And since most people buy a new car every six to seven years, the transition should be relatively smooth, which is why in August President Biden and the Big Three automakers announced their plans for 40 to 50 percent of new sales to be electric by 2030.

That’s still not fast enough—as Griffith makes clear, we’re already at the point where we need every new replacement of any equipment to be electric—but it’s likely to happen much quicker with cars than anything else. A gas furnace lasts twice as long as a car, for instance. And putting solar panels on your roof remains an expensive initial investment, partly because of regulations and paperwork. (Griffith notes that in his native Australia such “soft costs” are less than half of what they are in the US.)

Happily, he provides the formula for success. The federal government needs to do for home and business energy retrofits in this decade what Freddie Mac and Fannie Mae did for homeownership in the last century, except this time accessible to all applicants, not just white ones: provide government-backed mortgages that make it affordable for everyone to acquire this money-saving and hence wealth-building capacity, and in the process jump-start an economy that would create vast numbers of good jobs. “A mortgage is really a time machine that lets you have the tomorrow you want, today,” Griffith writes. “We want a clean energy future and a livable planet, so let’s borrow the money.”

In short, Griffith has drawn a road map for what seems like the only serious chance at rapid progress. His plan won’t please everyone: he has no patience at all with NIMBY opposition to wind turbines and transmission lines. But I don’t think anyone else has quite so credibly laid out a realistic plan for swift action in the face of an existential crisis.

Bayes and What He Hath Wrought

Thomas Bayes was an 18th-century British statistician, philosopher and Presbyterian minister. He’s known today because he formulated Bayes’ Theorem, which has since given rise to Bayesian probability, Bayesian inference, Bayesian epistemology, Bayesian efficiency and Bayesian networks, among other things.

The reason I bring this up is that philosophers, especially the ones who concentrate on logic and the theory of knowledge, often mention something Bayesian, usually in glowing terms. It’s been a source of consternation for me. I’ve tried to understand what the big deal is, but pretty much failed. All I’ve really gotten out of these efforts is the idea that if you’re trying to figure out a probability, it helps to pay attention to new evidence. Duh.

Today, however, the (Roughly) Daily blog linked to an article by geneticist Johnjoe McFadden called “Why Simplicity Works”. In it, he offers a simple explanation of Bayes’ Theorem, which for some reason I found especially helpful. Here goes:

Just why do simpler laws work so well? The statistical approach known as Bayesian inference, after the English statistician Thomas Bayes (1702-61), can help explain simplicity’s power.

Bayesian inference allows us to update our degree of belief in an explanation, theory or model based on its ability to predict data. To grasp this, imagine you have a friend who has two dice. The first is a simple six-sided cube, and the second is more complex, with 60 sides that can throw 60 different numbers. [All things being equal, the odds that she’ll throw either one of the dice at this point are 50/50].

Suppose your friend throws one of the dice in secret and calls out a number, say 5. She asks you to guess which dice was thrown. Like astronomical data that either the geocentric or heliocentric system could account for, the number 5 could have been thrown by either dice. Are they equally likely?

Bayesian inference says no, because it weights alternative models – the six- vs the 60-sided dice – according to the likelihood that they would have generated the data. There is a one-in-six chance of a six-sided dice throwing a 5, whereas only a one-in-60 chance of the 60-sided dice throwing a 5. Comparing likelihoods, then, the six-sided dice is 10 times more likely to be the source of the data than the 60-sided dice.

Simple scientific laws are preferred, then, because, if they fit or fully explain the data, they’re more likely to be the source of it.

Hence, in this case, before your friend rolls one of the dice, there is the same probability that she’ll roll either one. With the new evidence — that she rolled a 5 — the probability changes. To Professor McFadden’s point, the simplest explanation for why she rolled a 5 is that she used the dice with only 6 sides (she didn’t roll 1, 2, 3, 4 or 6), not the dice with 60 sides (she didn’t roll 1, 2, 3, 4, 6, 7, 8, 9, 10, . . . 58, 59 or 60).
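
Here’s the same dice example as a few lines of code: a 50/50 prior over the two dice, updated by the likelihood of rolling a 5 with each. (Just a sketch of the arithmetic, nothing more.)

```python
# Two hypotheses: the 6-sided die or the 60-sided die was thrown.
prior = {"six-sided": 0.5, "sixty-sided": 0.5}

# Likelihood of the observed data (a roll of 5) under each hypothesis.
likelihood = {"six-sided": 1 / 6, "sixty-sided": 1 / 60}

# Bayes' Theorem: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
evidence = sum(unnormalized.values())
posterior = {h: p / evidence for h, p in unnormalized.items()}

print(posterior)   # {'six-sided': ~0.909, 'sixty-sided': ~0.091}: a 10-to-1 ratio
```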

Now it’s easier to understand explanations like this one from the Stanford Encyclopedia of Philosophy:

Bayes’ Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic. Subjectivists, who maintain that rational belief is governed by the laws of probability, lean heavily on conditional probabilities in their theories of evidence and their models of empirical learning. Bayes’ Theorem is central to these enterprises both because it simplifies the calculation of conditional probabilities and because it clarifies significant features of subjectivist positions. Indeed, the Theorem’s central insight — that a hypothesis is confirmed by any body of data that its truth renders probable — is the cornerstone of all subjectivist methodology. . . .

To illustrate, suppose J. Doe is a randomly chosen American who was alive on January 1, 2000. According to the United States Center for Disease Control, roughly 2.4 million of the 275 million Americans alive on that date died during the 2000 calendar year. Among the approximately 16.6 million senior citizens (age 75 or greater) about 1.36 million died. The unconditional probability of the hypothesis that our J. Doe died during 2000, H, is just the population-wide mortality rate P(H) = 2.4M/275M = 0.00873. To find the probability of J. Doe’s death conditional on the information, E, that he or she was a senior citizen, we divide the probability that he or she was a senior who died, P(H & E) = 1.36M/275M = 0.00495, by the probability that he or she was a senior citizen, P(E) = 16.6M/275M = 0.06036. Thus, the probability of J. Doe’s death given that he or she was a senior is PE(H) = P(H & E)/P(E) = 0.00495/0.06036 = 0.082. Notice how the size of the total population factors out of this equation, so that PE(H) is just the proportion of seniors who died. One should contrast this quantity, which gives the mortality rate among senior citizens, with the “inverse” probability of E conditional on H, PH(E) = P(H & E)/P(H) = 0.00495/0.00873 = 0.57, which is the proportion of deaths in the total population that occurred among seniors.

Exactly.
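
For good measure, here’s the J. Doe arithmetic as code, using the SEP’s own numbers:

```python
# Figures from the Stanford Encyclopedia of Philosophy example (in millions of people).
population = 275.0
deaths = 2.4             # all US deaths during 2000
seniors = 16.6           # Americans aged 75 or older
senior_deaths = 1.36     # deaths among seniors

p_h = deaths / population               # P(H): J. Doe died in 2000             ~0.0087
p_e = seniors / population              # P(E): J. Doe was a senior             ~0.0604
p_h_and_e = senior_deaths / population  # P(H & E)                              ~0.0049

p_h_given_e = p_h_and_e / p_e           # mortality rate among seniors          ~0.082
p_e_given_h = p_h_and_e / p_h           # share of all deaths that were seniors ~0.57

# Bayes' Theorem ties the two together: P(H|E) = P(E|H) * P(H) / P(E)
assert abs(p_h_given_e - p_e_given_h * p_h / p_e) < 1e-12

print(round(p_h_given_e, 3), round(p_e_given_h, 2))   # 0.082 0.57
```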

How Being a Right-Wing Creep Can Give Meaning to Your Life

Earlier today, I posted a Twitter thread by David Roberts regarding the so-called “War on Christmas”. He provided context with an excerpt from a New York Times article by Thomas Edsall that discusses some relevant research:

In their September 2021 paper “Exposure to Authoritarian Values Leads to Lower Positive Affect, Higher Negative Affect, and Higher Meaning in Life,” seven scholars . . . write:

Right-wing authoritarianism played a significant role in the 2016 U.S. presidential election. In subsequent years, there have been numerous “alt-right” demonstrations in the U.S., including the 2017 Unite the Right rally in Charlottesville that culminated in a fatal car attack, and the 2021 Capitol Insurrection. In the U.S., between 2016 and 2017 the number of attacks by right-wing organizations quadrupled, . . . constituting 66 percent of all attacks and plots in the U.S. in 2019 and over 90 percent in 2020.

How does authoritarianism relate to immigration? [Jake Womick, one of the co-authors] provided some insight in an email:

Social dominance orientation is a variable that refers to the preference for society to be structured by group-based hierarchies. It’s comprised of two components: group-based dominance and anti-egalitarianism. Group-based dominance refers to the preference for these hierarchies and the use of force/aggression to maintain them. Anti-egalitarianism refers to maintaining these sorts of hierarchies through other means, such as through systems, legislation, etc.

Womick notes that his own study of the 2016 primaries showed that T____ voters were unique compared to supporters of other Republicans in the strength of their

group-based dominance. I think group-based dominance as the distinguishing factor of this group is highly consistent with what happened at the Capitol. These individuals likely felt that the T____ administration was serving to maintain group-based hierarchies in society from which they felt they benefited. They may have perceived the 2020 election outcome as a threat to that structure. As a result, they turned to aggression in an attempt to affect our political structures in service of the maintenance of those group-based hierarchies.

In their paper, Womick and his co-authors ask:

What explains the appeal of authoritarian values? What problem do these values solve for the people who embrace them? The presentation of authoritarian values must have a positive influence on something that is valuable to people.

Their answer is twofold:

Authoritarian messages influence people on two separable levels, the affective level, lowering positive and enhancing negative affect, and the existential level, enhancing meaning in life.

They describe negative affect as “feeling sad, worried or enraged.” Definitions of “meaning in life,” they write,

include at least three components: significance, the feeling that one’s life and contributions matter to society; purpose, having one’s life driven by the pursuit of valued goals; and coherence or comprehensibility, the perception that one’s life makes sense.

In a separate paper, “The Existential Function of Right-Wing Authoritarianism,” [political scientists] provide more detail:

It may seem ironic that authoritarianism, a belief system that entails sacrifice of personal freedom to a strong leader, would influence the experience of meaning in life through its promotion of feelings of personal significance. Yet right-wing authoritarianism does provide a person with a place in the world, as a loyal follower of a strong leader. In addition, compared to purpose and coherence, knowing with great certainty that one’s life has mattered in a lasting way may be challenging. Handing this challenge over to a strong leader and investment in societal conventions might allow a person to gain a sense of symbolic or vicarious significance.

From another vantage point, Womick and his co-authors continue,

perceptions of insignificance may lead individuals to endorse relatively extreme beliefs, such as authoritarianism, and to follow authoritarian leaders as a way to gain a sense that their lives and their contributions matter.

In the authors’ view, right-wing authoritarianism,

despite its negative social implications, serves an existential meaning function. This existential function is primarily about facilitating the sense that one’s life matters. This existential buffering function is primarily about allowing individuals to maintain a sense that they matter during difficult experiences.

In his email, Womick expanded on his work: “The idea is that perceptions of insignificance can drive a process of seeking out groups, endorsing their ideologies and engaging in behaviors consistent with these.”

These ideologies, Womick continued,

should eventually promote a sense of significance (as insignificance is what drove the person to endorse the ideology in the first place). Endorsing right-wing authoritarianism relates to higher meaning in life, and exposing people to authoritarian values causally enhances meaning.