Bayes and What He Hath Wrought

Thomas Bayes was an 18th century British statistician, philosopher and Presbyterian minister. He’s known today because he formulated Bayes’ Theorem, which has since given rise to Bayesian probability, Bayesian inference, Bayesian epistemology, Bayesian efficiency and Bayesian networks, among other things.

The reason I bring this up is that philosophers, especially the ones who concentrate on logic and the theory of knowledge, often mention something Bayesian, usually in glowing terms. It’s been a source of consternation for me. I’ve tried to understand what the big deal is, but pretty much failed. All I’ve really gotten out of these efforts is the idea that if you’re trying to figure out a probability, it helps to pay attention to new evidence. Duh.

Today, however, the (Roughly) Daily blog linked to an article by geneticist Johnjoe McFadden called “Why Simplicity Works”. In it, he offers a simple explanation of Bayes’ Theorem, which for some reason I found especially helpful. Here goes:

Just why do simpler laws work so well? The statistical approach known as Bayesian inference, after the English statistician Thomas Bayes (1702-61), can help explain simplicity’s power.

Bayesian inference allows us to update our degree of belief in an explanation, theory or model based on its ability to predict data. To grasp this, imagine you have a friend who has two dice. The first is a simple six-sided cube, and the second is more complex, with 60 sides that can throw 60 different numbers. [All things being equal, the odds that she’ll throw either one of the dice at this point are 50/50].

Suppose your friend throws one of the dice in secret and calls out a number, say 5. She asks you to guess which dice was thrown. Like astronomical data that either the geocentric or heliocentric system could account for, the number 5 could have been thrown by either dice. Are they equally likely?

Bayesian inference says no, because it weights alternative models – the six- vs the 60-sided dice – according to the likelihood that they would have generated the data. There is a one-in-six chance of a six-sided dice throwing a 5, whereas only a one-in-60 chance of the 60-sided dice throwing a 5. Comparing likelihoods, then, the six-sided dice is 10 times more likely to be the source of the data than the 60-sided dice.

Simple scientific laws are preferred, then, because, if they fit or fully explain the data, they’re more likely to be the source of it.

Hence, in this case, before your friend rolls one of the dice, each is equally likely to be the one she throws. With the new evidence — that she rolled a 5 — the probabilities change. To Professor McFadden’s point, the simplest explanation for why she rolled a 5 is that she used the dice with only 6 sides (she didn’t roll 1, 2, 3, 4 or 6), not the dice with 60 sides (she didn’t roll 1, 2, 3, 4, 6, 7, 8, 9, 10, . . . 58, 59 or 60).
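If it helps to see the numbers, here’s a minimal sketch in Python of the update. The 50/50 prior and the two likelihoods come straight from McFadden’s example; the labels and variable names are mine.

```python
# Bayesian update for the dice example: which die produced the 5?

# Prior: before the throw, each die is equally likely to have been chosen.
prior = {"six-sided": 0.5, "sixty-sided": 0.5}

# Likelihood of throwing a 5 under each hypothesis.
likelihood = {"six-sided": 1 / 6, "sixty-sided": 1 / 60}

# Bayes' Theorem: the posterior is proportional to prior times likelihood.
unnormalized = {die: prior[die] * likelihood[die] for die in prior}
total = sum(unnormalized.values())
posterior = {die: weight / total for die, weight in unnormalized.items()}

print(posterior)
# {'six-sided': 0.909..., 'sixty-sided': 0.0909...}
# i.e. the six-sided die is 10 times more likely to be the source of the 5.
```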

Now it’s easier to understand explanations like this one from the Stanford Encyclopedia of Philosophy:

Bayes’ Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic. Subjectivists, who maintain that rational belief is governed by the laws of probability, lean heavily on conditional probabilities in their theories of evidence and their models of empirical learning. Bayes’ Theorem is central to these enterprises both because it simplifies the calculation of conditional probabilities and because it clarifies significant features of subjectivist positions. Indeed, the Theorem’s central insight — that a hypothesis is confirmed by any body of data that its truth renders probable — is the cornerstone of all subjectivist methodology. . . .

To illustrate, suppose J. Doe is a randomly chosen American who was alive on January 1, 2000. According to the United States Center for Disease Control, roughly 2.4 million of the 275 million Americans alive on that date died during the 2000 calendar year. Among the approximately 16.6 million senior citizens (age 75 or greater) about 1.36 million died. The unconditional probability of the hypothesis that our J. Doe died during 2000, H, is just the population-wide mortality rate P(H) = 2.4M/275M = 0.00873. To find the probability of J. Doe’s death conditional on the information, E, that he or she was a senior citizen, we divide the probability that he or she was a senior who died, P(H & E) = 1.36M/275M = 0.00495, by the probability that he or she was a senior citizen, P(E) = 16.6M/275M = 0.06036. Thus, the probability of J. Doe’s death given that he or she was a senior is P_E(H) = P(H & E)/P(E) = 0.00495/0.06036 = 0.082. Notice how the size of the total population factors out of this equation, so that P_E(H) is just the proportion of seniors who died. One should contrast this quantity, which gives the mortality rate among senior citizens, with the “inverse” probability of E conditional on H, P_H(E) = P(H & E)/P(H) = 0.00495/0.00873 = 0.57, which is the proportion of deaths in the total population that occurred among seniors.

Exactly.
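And, just to check the arithmetic in that passage, here’s a minimal sketch in Python that reproduces it. The figures are the ones quoted above; the variable names are mine.

```python
# The SEP's J. Doe example, using the population figures from the quoted passage.
total_population = 275_000_000   # Americans alive on January 1, 2000
total_deaths = 2_400_000         # deaths during the 2000 calendar year
seniors = 16_600_000             # Americans age 75 or greater
senior_deaths = 1_360_000        # deaths among seniors

p_H = total_deaths / total_population          # P(H)      ~ 0.00873
p_E = seniors / total_population               # P(E)      ~ 0.06036
p_H_and_E = senior_deaths / total_population   # P(H & E)  ~ 0.00495

# P_E(H): probability that J. Doe died, given that he or she was a senior.
p_H_given_E = p_H_and_E / p_E                  # ~ 0.082

# The "inverse" probability P_H(E): proportion of all deaths that were seniors.
p_E_given_H = p_H_and_E / p_H                  # ~ 0.57

# The total population cancels out: P_E(H) is just the proportion of seniors who died.
assert abs(p_H_given_E - senior_deaths / seniors) < 1e-9
```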

The Ethics of “Sweet Illusions and Darling Lies”

Are we morally responsible for what we believe? To some extent, we are. The acceptance of lies and bizarre conspiracy theories by so many of our fellow citizens makes the issue extremely relevant. The philosopher Regina Rini discusses the ethics of belief for the Times Literary Supplement:  

On January 6, the US Capitol building was stormed by a mob, motivated by beliefs that were almost entirely false, absurd and nonsensical: the QAnon conspiracy; the President’s [lies] about massive voter fraud, and the various conspiracy theories that he and his lawyers peddled in support of overturning the election results.

In 1877, the English philosopher William Clifford published a now famous essay, “The Ethics of Belief”, setting out the view that we can be morally faulted for shoddy thinking. Clifford imagines a ship-owner who smothers his doubts about the seaworthiness of a creaky vessel, and adopts the sincere but unjustified belief that it is safe to send passengers across the Atlantic. The ship then sinks. Clifford (himself a shipwreck survivor) asks: don’t we agree that the ship-owner was “verily guilty” of the passengers’ deaths, and that he “must be held responsible for it”? If we agree to this, Clifford continues, then we must also agree that the ship-owner would deserve blame even if the ship hadn’t sunk. It is epistemic carelessness that makes the ship-owner guilty, even if catastrophe is luckily avoided. “The question of right or wrong has to do with the origin of his belief … not whether it turned out to be true or false, but whether he had a right to believe on such evidence as was before him.”

Clifford’s views went out of favour among philosophers for most of a century. Moral evaluation, it was thought, should stop at the mind’s edge. After all, we cannot directly control our beliefs in the way we control our fists. I can’t just decide, here and now, to stop believing that Charles I had a pointy beard . . . And if I can’t control my beliefs, how can I be held accountable for them?

Yet in recent decades, many philosophers have become less impressed by this objection (sometimes called the problem of “doxastic voluntarism”). After all, I can control how I acquire and maintain beliefs by shaping my informational environment. Suppose I do really want to change my beliefs about Charles I’s grooming. I could join a renegade historical society and surround myself with dissenting portraiture. Slowly, indirectly, I can retrain my thoughts and I can be held accountable for choosing to do so.

More to the point, I can also fail to take action to shape my beliefs in healthy ways. The social media era has made this point especially acute, as we can each now curate our own information environment, following sources that challenge our beliefs, or flatter our preconceptions, as we please. Digital epistemic communities are then made up of people who amplify one another’s virtues or vices. Credulously accepting conspiracy stories that vilify my partisan enemies not only dulls my own wits, but encourages my friends to dull theirs. Clifford himself was quite sharp on this point: “Habitual want of care about what I believe leads to habitual want of care in others about the truth of what is told to me … It may matter little to me, in my cloud-castle of sweet illusions and darling lies; but it matters much to Man that I have made my neighbours ready to deceive”.

So far, then, Clifford’s 150-year-old diagnosis seems precisely to explain the epistemic culpability of those who stormed the Capitol on a wave of delusion and lies. But there is a wrinkle here. Clifford thought that credulity – insufficient scepticism toward the claims of others – was the most troubling intellectual vice. But the epistemic shambles of QAnon show a more subtle problem. After all, if there’s anything conspiracy fanatics possess, it is scepticism. They are sceptical of what government officials say, sceptical of what vaccine scientists say, sceptical even of what astronauts say about the shape of the Earth. If anything, they show that critical thinking is a bit like cell division; valuable in proportion, but at risk of harmful metastasis. In the eyes of QAnon devotees, we are the “sheeple” who fail to “do the research” of tumbling down every hyperlinked rabbit-hole.

Conspiracy aficionados are all too willing to think for themselves – that is how they end up believing that Democrats are Satan-worshippers or that 5G phone towers cause Covid-19. And that’s where Clifford’s moralizing – “No simplicity of mind, no obscurity of station, can escape the universal duty of questioning all that we believe” – goes wrong. The ethics of belief should not be a Calvinistic demand for hard epistemic labour. Conspiracists work at least as hard as the rest of us, pinning notes and photos to their bulletin boards late into the night. Hard epistemic labour is just as prone to amplifying epistemic mistakes as overcoming them.

In fact, we should not be focused on individual intellectual virtue at all. The epistemic practices that justify our beliefs are fundamentally interpersonal. Most of our knowledge of the world depends essentially on the say-so of others. Consider: how do you know that I live in Toronto? Well, it says so right at the bottom of this column. But that’s not the same as going to Toronto and seeing me there with your own eyes. So even this simple belief requires trusting the say-so of me or the [Times Literary Supplement].

Perhaps you want to be an uncompromising epistemic individualist, refusing to believe until you’ve verified it yourself? Well, you’ll need to come to Toronto to check. But how will you find Toronto? You can’t use Google Maps (that’s just more say-so from others). Maybe you’ll set out with a compass and enterprising disposition. But how do you know what that compass is pointing to? How do you know where the North Pole is, or how magnetism works? Have you been to the North Pole? Have you done all the magnetism experiments yourself? The list goes on.

No one lives like that. We are all deeply, ineradicably dependent on the say-so of others for nearly all our beliefs about the world. It’s only through a massive division of cognitive labour that we’ve come to know so much. So genuine epistemic responsibility isn’t a matter of doubting all that can be doubted, or only believing what you’ve proven for yourself. It’s a matter of trusting the right other people. That takes wisdom.

Not everyone in the Capitol mob was a QAnon believer. Some were white supremacists aiming to violently uphold a president who refused to condemn their hate. Others were merely insurrection tourists. Still, many do seem to have genuinely believed they were fighting a monstrous regime of Satanic child-harmers. Those beliefs did not appear in a vacuum. A Bellingcat investigation of the social media history of Ashli Babbitt, the woman shot by police while attempting to storm the House of Representatives, suggests that she held relatively mainstream political views until about a year ago, when she veered off into deep QAnon obsession. She put her trust in the wrong people, and all her epistemic labour only made things worse.

That is the most delicate and important lesson to draw from last week’s horror show. QAnon believers are culpable for their bad judgment. But that culpability extends far beyond them, through to everyone whose actions fed their dangerous beliefs. It’s not enough to insist that responsibility falls entirely on the believer, because we are all dependent on others for our knowledge, and we must all trust someone. That mutual reliance means we are all our neighbour’s epistemic keeper.

Most obviously the blame for last week’s catastrophe extends to politicians who cynically courted and channelled [lies] to support their false allegations of election fraud. Donald Trump spoke to the mob moments before their assault, declaring “you’ll never take back our country with weakness. You have to show strength and you have to be strong”, and ordered them to march against the Capitol. That evening, after Congress regained control of its chambers, senators such as Josh Hawley continued to flog “objections and concerns” about the presidential election which had been dismissed by numerous courts and Trump’s own Justice Department.

But manipulative politicians are not the only ones to blame. The culture of the internet played a big role as well. An investigation of QAnon’s origins by the podcast Reply All found that the conspiracy began life as a joke on the ultra-ironic website 4chan. In 2017, “Q” was only one of several fake government source characters being played, tongue-in-cheek, by forum participants who all understood it was a game. Gradually the Q persona became the most popular, and then outsiders – who didn’t get the joke – stumbled onto Q’s tantalizing nonsense. Within a year, thousands of people looking for anything to fill the gap left by their scepticism toward authority developed a sincere belief in Q. Behind the scenes, someone, with cynical political or commercial motives, was happy to oblige.

Unquote. 

Prof. Rini’s analysis sounds right. The next question, of course, is: how should we respond? Millions of people are being immoral with regard to what they believe. Is there anything to be done about it? We have laws against some immoral behavior, like theft and assault. Although we can’t have laws that control what people believe, we can have limited government regulation of the companies that distribute those lies (such as Facebook, Fox News and your local cable TV company). The public can also exert pressure on companies, TV networks, for instance, that give certain politicians and pundits repeated opportunities to lie in public. And in our personal lives, when we hear somebody say something that’s simply not true, we can speak up, even though it’s easier to stay quiet.

Knowledge: A Very Short Introduction by Jennifer Nagel

This entry in the Oxford University Press series of “very short introductions” was recommended on a popular philosophy blog, so I gave it a try. It deals with questions like these:

What is knowledge? What is the difference between just thinking that something is true and actually knowing that it is? How are we able to know anything at all?

This isn’t a general introduction to epistemology, but since that branch of philosophy is also known as “the theory of knowledge”, it comes pretty close. The author doesn’t provide her own answers to the questions above. Instead, she explains the answers given by various philosophers from ancient times to the present. There are chapters on skepticism and the debate between rationalists and empiricists, but the more interesting discussion begins with what’s known as the “Gettier problem”.

Most philosophers have accepted the idea that a belief counts as knowledge if it is both true and justified. Truth isn’t enough. I might believe there are precisely 11 coins in your pocket, and you might actually have 11 coins in your pocket, but unless I have a good reason for believing there are 11, and not some other number, I don’t really know you have 11. I’m just making a lucky guess. For me to know you have 11 coins, I need a reason for thinking that’s how many there are, e.g. I saw you empty your pocket and then put exactly 11 coins back in.

A philosopher named Edmund Gettier wrote a short paper in 1963 that challenged the standard idea that knowledge is the same as true, justified belief. He argued that a belief can be very well-justified and also quite true, but not count as knowledge. For example, I might believe you own a Chevrolet, since you bought my Chevrolet a while back. Then, this morning, I noticed that you drove that same Chevrolet to work. So it’s reasonable for me to believe you own a Chevrolet. Most people would say I know you own one.

But what if you secretly sold your Chevrolet to someone else yesterday, and the buyer said you could borrow it for the day. Furthermore, what if you used the money you got from selling your old Chevrolet yesterday to buy a new one last night? You do, in fact, own a Chevrolet, and I have very good reasons to believe you do, but the Chevrolet you own isn’t the one I saw you drive into the parking lot. Do I actually know you still own a Chevrolet or am I merely making a well-founded but lucky guess? My belief that you own a Chevrolet is true, and justified, but, according to Gettier (and many other philosophers), I don’t actually know you own one. For all I know, you could have sold your Chevrolet and bought a Ford last night, and I’d still be convinced you owned a Chevrolet. It just so happens you bought another Chevrolet, which makes my belief that you own one true, but I’m ignorant of the true situation. I don’t know you still own a Chevrolet. I merely assume you do. And my very reasonable assumption just happens to be true.

Philosophers have been analyzing Gettier’s article and offering ways around it for years, but there is still no general agreement as to what knowledge is. Nor is there general agreement about the other questions Prof. Nagel asks. Personally, I think it’s almost impossible to find simple answers to traditional philosophical questions; that’s why they’ve lingered so long. One reason the answers are so elusive is that philosophers too often try to find “the answer”, arguing that something like knowledge amounts to X or Y, when the best answer is that X, Y and Z, as well as A, B and C, all capture aspects of the problem they’re working on.

So, I recommend Knowledge: A Very Short Introduction, especially if you find topics like the Gettier problem interesting. It’s a good summary of some key issues in the theory of knowledge, although you’ll probably be left with more questions than answers.

Fear of Knowledge: Against Relativism and Constructivism by Paul Boghossian

Boghossian is a professor of philosophy at New York University. This is a short, well-argued book, although its title is misleading. Its subject is doubt about knowledge, or the dismissal of knowledge; the idea that anyone is afraid of knowledge is mentioned only once, on the next-to-last page.

Boghossian’s main target is constructivism: the idea that “knowledge is constructed by societies in ways that reflect their contingent social needs and interests”. He points out that constructivism comes in different varieties. The benign version simply notes that we gather knowledge about topics we’re interested in or need to investigate. He is concerned with versions that lead people, often academics, to say that no group’s or culture’s beliefs are more valid or accurate than anyone else’s. From the epilogue:

There look to be severe objections to each and every version of constructivism about knowledge that we have examined. A constructivism about truth is incoherent. A constructivism about justification is scarcely any better. And there seem to be decisive objections to the idea that we cannot explain belief through epistemic reasons alone.

On the positive side, we failed to find any good arguments for constructivist views…. At its best, … social constructivist thought exposes the contingency of those of our social practices which we had wrongly come to regard as naturally mandated. It does so by relying on the standard canons of good scientific reasoning. It goes astray when it aspires to become a general theory of truth or knowledge. The difficulty lies in understanding why such generalized applications of social construction have come to tempt so many.

He believes that the appeal of constructivism is mainly political, although misguided:

Constructivist views of knowledge are closely linked to such progressive movements as post-colonialism and multiculturalism because they supply the philosophical resources with which to protect oppressed cultures from charges of holding false or unjustified views. [But] if the powerful can’t criticize the oppressed, because the central epistemological categories are inexorably tied to particular perspectives, it also follows that the oppressed can’t criticize the powerful.

Apparently, Boghossian doesn’t recognize the appeal of oppressed groups being on an equal footing with the powerful (“your views are merely a matter of perspective and no more valid than ours”). He concludes:

The intuitive view is that there is a way things are that is independent of human opinion, and that we are capable of arriving at belief about how things are that is objectively reasonable, binding on anyone capable of appreciating the relevant evidence regardless of their social or cultural perspective. Difficult as these notions may be, it is a mistake to think that recent philosophy has uncovered powerful reasons for rejecting them.

What We Have In Mind (Consciousness Again)

Last week, I suggested that consciousness is a type of brain activity, the kind that consists in having a phenomenal field that includes sights, sounds, pains and the internal monologue depicted by authors as the “stream of consciousness”.

I also recommended that we reserve the phrase “conscious of” for the most important things we’re conscious of, things like our everyday surroundings, our feelings and our thoughts, not consciousness itself. This approach would rule out questions like “Are you conscious of consciousness?” that to me seem misguided and misleading. I don’t think we’re conscious of consciousness, but rather conscious of other things.

To say that we’re conscious “of” other things is to say that the components of consciousness represent other things. Thus, some of the brain activity that is consciousness represents things outside our bodies (e.g. trees falling in the forest). Some of it represents things inside our bodies (e.g. heartburn). And some represents things that exist neither inside nor outside our bodies: abstract things like possibilities (e.g. sanity in Washington), fictional characters (Wonder Woman) and ideas (justice or the number twelve).

From an article about dreaming, which is usually considered a kind of consciousness:

One of the main functions of our brain is to constantly create a model of the world around us, a sort of virtual reality that helps us interact with our environment.

When we’re awake, that model is heavily influenced by what we are seeing and hearing and feeling. But during sleep, when there’s not much input from our senses, the brain’s model of the world is more likely to rely on internal information, like memories or expectations.

I’d add that the model is also a model of the world within us and the abstract world of memory, intention and imagination. But thinking of the model our brains create as “a sort of virtual reality” is what I have in mind (that’s a pun). It’s the “sort” of virtual reality that isn’t virtual, however. Patterns of neural activation in the brain (what the model is made of) are quite real. And it’s a model or representation of other things that are quite real too, like falling trees and sprained ankles.

One of the things that makes our conscious model interesting is that it includes events and processes that are strictly or primarily mental, like having a premonition. I don’t know if such things are representations of unconscious mental events and processes. Maybe they aren’t representations at all; maybe they’re patterns of neural activation that don’t refer to or represent anything else. But the evidence suggests that we all have a lot of unconscious brain activity that plays a very large role in what we think and how we feel.

So it would be consistent with the view I’m trying to explain that when you have something like a premonition, what you’re conscious of is a representation of the underlying brain activity (the unconscious premonition processing), as well as any related events in your body (like chills).

To sum up, the position I’ve arrived at seems to be a strange, possibly ridiculous mixture of ideas associated with two great philosophers who are generally seen as opponents: the idealist George Berkeley and the materialist Thomas Hobbes.

Berkeley (1685-1753) argued that nothing exists independently of minds: “To be is to be perceived (or to perceive)”. A person is an immaterial mind or soul. The physical world (the Earth, for example) doesn’t exist independently of our minds. Fortunately, our individual minds are able to get along because God (a kind of super-mind) synchronizes our perceptions. He makes sure that when I perceive a red apple (in my mind), you do too.

Hobbes (1588-1679) argued that nothing exists except physical stuff. We human beings, including our minds, are material things. Even God may be a kind of material being. When I see a red apple, and you see a red apple, therefore, it’s because there’s an apple out there and it’s red. That’s the whole story. 

Where I’ve ended up is to agree with Berkeley that our consciousness has the various elements in it that he called “perceptions” and “ideas”. But I agree with Hobbes that consciousness is a physical phenomenon, a very cool physical phenomenon, but a physical one just the same. And the reason my perceptions usually line up with yours so nicely is that our perceptions represent the same physical world, albeit observed from our individual perspectives.