A Nice Explanation of Quantum Mechanics, with Thoughts on What Makes Science Special

Michael Strevens teaches philosophy at New York University. In his book, The Knowledge Machine: How Irrationality Created Modern Science, he argues that what makes modern science so productive is the peculiar behavior of scientists. From the publisher’s site:

Like such classic works as Karl Popper’s The Logic of Scientific Discovery and Thomas Kuhn’s The Structure of Scientific Revolutions, The Knowledge Machine grapples with the meaning and origins of science, using a plethora of . . . examples to demonstrate that scientists willfully ignore religion, theoretical beauty, and . . . philosophy to embrace a constricted code of argument whose very narrowness channels unprecedented energy into empirical observation and experimentation. Strevens calls this scientific code the iron rule of explanation, and reveals the way in which the rule, precisely because it is unreasonably close-minded, overcomes individual prejudices to lead humanity inexorably toward the secrets of nature.

Here Strevens presents a very helpful explanation of quantum mechanics, while explaining that physicists (most of them anyway) are following Newton’s example when they use the theory to make exceptionally accurate predictions, even though the theory’s fundamental meaning is mysterious (in the well-known phrase, they “shut up and calculate”):

To be scientific simply was to be Newtonian. The investigation of nature [had] changed forever. No longer were deep philosophical insights of the sort that founded Descartes’s system considered to be the keys to the kingdom of knowledge. Put foundational matters aside, Newton’s example seemed to urge, and devote your days instead to the construction of causal principles that, in their forecasts, follow precisely the contours of the observable world. . . .

[This is] Newton’s own interpretation of his method, laid out in a postscript to the Principia’s second edition of 1713. There Newton summarizes the fundamental properties of gravitational attraction—that it increases “in proportion to the quantity of solid matter” and decreases in proportion to distance squared—and then continues:

I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. . . . It is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies and of our sea.

The thinkers around and after Newton got the message, one by one.

[Jumping ahead three centuries:]

According to Roger Penrose, one of the late twentieth century’s foremost mathematical physicists, quantum mechanics “makes absolutely no sense.” “I think I can safely say that nobody understands quantum mechanics,” remarked Richard Feynman. How can a theory be widely regarded both as incomprehensible and also as the best explanation we have of the physical world we live in?

. . . Quantum theory derives accurate predictions from a notion, superposition, that is quite beyond our human understanding. Matter, says quantum mechanics, occupies the state called superposition when it is not being observed [or measured]. An electron in superposition occupies no particular point in space. It is typically, rather, in a kind of “mix” of being in many places at once. The mix is not perfectly balanced: some places are far more heavily represented than others. So a particular electron’s superposition might be almost all made up from positions near a certain atomic nucleus and just a little bit from positions elsewhere. That is the closest that quantum mechanics comes to saying that the electron is orbiting the nucleus.

As to the nature of this “mix”—it is a mystery. We give it a name: superposition. But we can’t give it a philosophical explanation. What we can do is to represent any superposition with a mathematical formula, called a “wave function.” An electron’s wave function represents its physical state with the same exactitude that, in Newton’s physics, its state would be represented by numbers specifying its precise position and velocity. You may have heard of quantum mechanics’ “uncertainty principle,” but forget about uncertainty here: the wave function is a complete description that captures every matter of fact about an electron’s physical state without remainder.

So far, we have a mathematical representation of the state of any particular piece of matter, but we haven’t said how that state changes in time. This is the job of Schrödinger’s equation, which is the quantum equivalent of Newton’s famous second law of motion F = ma, in that it spells out how forces of any sort—gravitational, electrical, and so on—will affect a quantum particle. According to Schrödinger’s equation, the wave function will behave in what physicists immediately recognize as a “wavelike” way. That is why, according to quantum mechanics, even particles such as electrons conduct themselves as though they are waves.
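For readers who want the symbols, here is the standard textbook comparison that paragraph gestures at (my addition, not part of Strevens’s excerpt): Newton’s second law on the left, and the time-dependent Schrödinger equation on the right, where Ψ is the wave function, Ĥ is the Hamiltonian operator encoding the forces (written here for a single particle of mass m in a potential V), and ħ is the reduced Planck constant.

```latex
% Classical vs. quantum dynamics -- standard textbook forms, added for reference only.
\[
  \mathbf{F} = m\mathbf{a}
  \qquad\longleftrightarrow\qquad
  i\hbar\,\frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \hat{H}\,\Psi(\mathbf{r},t),
  \qquad
  \hat{H} = -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r}).
\]
```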

In the early days of quantum mechanics, Erwin Schrödinger, the Austrian physicist who formulated the equation in 1926, and Louis de Broglie, a French physicist—both eventual Nobel Prize winners—wondered whether the waves described by quantum mechanics might be literal waves traveling through a sea of “quantum ether” that pervades our universe. They attempted to understand quantum mechanics, then, using the old model of the fluid.

This turned out to be impossible for a startling reason: it is often necessary to assign a wave function not to a single particle, like an electron, but to a whole system of particles. Such a wave function is defined in a space that has three dimensions for every particle in the system: for a 2-particle system, then, it has 6 dimensions; for a 10-particle system, 30 dimensions. Were the wave to be a real entity made of vibrations in the ether, it would therefore have to be flowing around a space of 6, or 30, or even more dimensions. But our universe rather stingily supplies only three dimensions for things to happen in. In quantum mechanics, as Schrödinger and de Broglie soon realized, the notion of substance as fluid fails completely.
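In standard notation (again my gloss, not Strevens’s), the dimension count comes from the fact that a wave function for N particles is defined on configuration space rather than on ordinary three-dimensional space:

```latex
% Wave function of an N-particle system: one set of three coordinates per particle.
\[
  \Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t),
  \qquad (\mathbf{r}_1, \ldots, \mathbf{r}_N) \in \mathbb{R}^{3N},
\]
% so N = 2 gives a 6-dimensional space and N = 10 gives a 30-dimensional one.
```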

There is a further component to quantum mechanics. It is called Born’s rule, and it says what happens when a particle’s position or other state is measured. Suppose that an electron is in a superposition, a mix of being “everywhere and nowhere.” You use the appropriate instruments to take a look at it; what do you see? Eerily, you see it occupying a definite position. Born’s rule says that the position is a matter of chance: the probability that a particle appears in a certain place is proportional to the degree to which that place is represented in the mix.

It is as though the superposition is an extremely complex cocktail, a combination of various amounts of infinitely many ingredients, each representing the electron’s being in a particular place. Taste the cocktail, and instead of an infinitely complex flavor you will—according to Born’s rule—taste only a single ingredient. The chance of tasting that ingredient is proportional to the amount of the ingredient contained in the mixture that makes up the superposition. If an electron’s state is mostly a blend of positions near a certain atomic nucleus, for example, then when you observe it, it will most likely pop up near the nucleus.
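The cocktail analogy can be made concrete with a toy calculation. The sketch below is mine, not Strevens’s: it treats a superposition as a handful of candidate positions with complex amplitudes, applies Born’s rule (the probability of each outcome is the squared magnitude of its amplitude, suitably normalized), and then “collapses” the state so that every later look returns the same answer. The position labels and amplitude values are invented purely for illustration.

```python
import random

# A toy, discrete "superposition": candidate positions with complex amplitudes.
# (Illustrative only -- a real wave function assigns an amplitude to every point in space.)
superposition = {
    "near the nucleus": 0.95 + 0.20j,
    "a bit farther out": 0.20 + 0.10j,
    "across the room": 0.01 + 0.00j,
}

def born_probabilities(amplitudes):
    """Born's rule: each outcome's probability is |amplitude|^2, normalized to sum to 1."""
    weights = {place: abs(amp) ** 2 for place, amp in amplitudes.items()}
    total = sum(weights.values())
    return {place: w / total for place, w in weights.items()}

def measure(amplitudes):
    """Sample one definite position, then 'collapse' the state onto that outcome."""
    probs = born_probabilities(amplitudes)
    places = list(probs)
    outcome = random.choices(places, weights=[probs[p] for p in places], k=1)[0]
    collapsed = {place: (1 + 0j if place == outcome else 0j) for place in amplitudes}
    return outcome, collapsed

outcome, collapsed_state = measure(superposition)
print("Observed position:", outcome)                # most often "near the nucleus"
print("Post-measurement state:", collapsed_state)   # later "sips" all give the same answer
```

Running measure many times would recover the Born probabilities; measuring the collapsed state returns the same position every time, which is the “particle-like” snap described in the next paragraph.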

One more thing: an observed particle’s apparently definite position is not merely a fleeting glimpse of something more complex. Once you see the particle in a certain position, it goes on to act as though it really is in that position (until something happens to change its state). In mixological terms, once you have sampled your cocktail, every subsequent sip will taste the same, as though the entire cocktail has transformed into a simple solution of this single ingredient. It is this strange disposition for matter, when observed, to snap into a determinate place that accounts for its “particle-like” behavior.

To sum up, quantum mechanical matter—the matter from which we’re all made—spends almost all its time in a superposition. As long as it’s not observed, the superposition, and so the matter, behaves like an old-fashioned wave, an exemplar of liquidity (albeit in indefinitely many dimensions). If it is observed, the matter jumps randomly out of its superposition and into a definite position like an old-fashioned particle, the epitome of solidity.

Nobody can explain what kind of substance this quantum mechanical matter is, such that it behaves in so uncanny a way. It seems that it can be neither solid nor fluid—yet these exhaust the possibilities that our human minds can grasp. Quantum mechanics does not, then, provide the kind of deep understanding of the way the world works that was sought by philosophers from Aristotle to Descartes. What it does supply is a precise mathematical apparatus for deriving effects from their causes. Take the initial state of a physical system, represented by a wave function; apply Schrödinger’s equation and if appropriate Born’s rule, and the theory tells you how the system will behave (with, if Born’s rule is invoked, a probabilistic twist). In this way, quantum theory explains why electrons sometimes behave as waves, why photons (the stuff of light) sometimes behave as particles, and why atoms have the structure that they do and interact in the way they do.

Thus, quantum mechanics may not offer deep understanding, but it can still account for observable phenomena by way of . . . the kind of explanation favored by Newton . . . Had Newton [engaged with scientists like Bohr and Einstein at conferences on quantum mechanics] he would perhaps have proclaimed:

I have not as yet been able to deduce from phenomena the nature of quantum superposition, and I do not feign hypotheses. It is enough that superposition really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the microscopic bodies of which matter is made.

Newton . . .  was the chief architect of modern science’s first great innovation. Rather than deep philosophical understanding, Newton pursued shallow explanatory power, that is, the ability to derive correct descriptions of phenomena from a theory’s causal principles, regardless of their ultimate nature and indeed regardless of their very intelligibility. In so doing, he was able to build a gravitational theory of immense capability, setting an example that his successors were eager to follow.

Predictive power thereby came to override metaphysical insight. Or as the historian of science John Heilbron, writing of the study of electricity after Newton, put it:

When confronted with a choice between a qualitative model deemed intelligible and an exact description lacking clear physical foundations, the leading physicists of the Enlightenment preferred exactness.

So it continued to be, as the development and acceptance of quantum mechanics, as unerring as it is incomprehensible, goes to show. The criterion for explanatory success inherent in Newton’s practice became fixed for all time, founding the procedural consensus that lies at the heart of modern science.

Consciousness and Primitive Feelings

I’ve been thinking lately that all value — whether ethical, aesthetic or practical — comes down to feelings in the end. Was that the right thing to do? Is that a beautiful song? Is this expensive hammer better than the cheaper one? Only if it tends in the past, present or future to make me or you or somebody else have certain feelings.

Below is most of an interview with Mark Solms, a South African psychoanalyst and neuropsychologist, who has a new book out: The Hidden Spring: A Journey to the Source of Consciousness. The Nautilus site gave the interview the title “Consciousness Is Just A Feeling”, although that’s not what Solms says. The interviewer’s questions are in italics:

. . . You made a big discovery that overturned the prevailing theory that we only dream during REM sleep. What did you find?

It was just assumed that when your REM sleep stops, your dreams also stop. But I found that human patients with damage to the part of the brain generating REM sleep nevertheless continue to experience dreams. In retrospect, you realize what a significant methodological error we made. That’s the price we pay for not gathering subjective data. You know, the actual subjective experience of dreams is an embarrassment to science. And this is what my professors had in mind when they were saying, don’t study things like that. But you’re going to be missing something rather important about how the brain works if you leave out half of the available data.

Your interest in Freud is very unusual for a neuroscientist. You actually trained to become a psychoanalyst, and since then, you’ve edited the complete psychological works of Freud.

Yes, and my colleagues were horrified. I had been taught this was pseudoscience. One of them said to me, “You know, astronomers don’t study astrology.” It’s true that psychoanalysis had lost its bearings. Freud was a very well-trained neuroscientist and neurologist, but in successive generations that grounding of psychoanalysis in the biological sciences had been lost. So I can understand where some of the disdain for psychoanalysis came from. But to its credit, it studied the actual lived life of the mind, which was the thing that interested me, and was missing from neuropsychology. So I turned to psychoanalysis to find any kind of systematic attempt to study subjective experience and to infer what kinds of mechanisms lay behind it.

Did we get Freud wrong? Did he have scientific insights that we’ve ignored?

Very much so. I’m not going to pretend that Freud didn’t make some gigantic mistakes. That’s to be expected. He was a pioneer, taking the very first steps in trying to systematically study subjective experience. The reason he made so little progress and abandoned neuroscience was because there weren’t scientific methods by which you could study things. Even the EEG was only brought into common use after the Second World War. So there were no methods for studying in vivo what’s going on in the brain, let alone the methods we have nowadays. But the sum of his basic observations, the centrality of emotion, was how much affective feelings influence cognitive processes. That’s the essence of what psychoanalysis is all about, how our rational, logical, cognitive processes can be distorted by emotional forces.

You founded the new field of “neuropsychoanalysis.” What’s the basic premise of this approach?

The neuropsychology I was taught might as well have been neurobehaviorism. Oliver Sacks famously wrote in 1984 that neuropsychology is admirable, but it excludes the psyche, by which he meant the active living subject of the mind. That really caught my attention. So I wanted to bring the psyche back into neuropsychology. Emotion was just not studied in the neuropsychology of the 1980s. The centrality of emotion in the life of the mind and what lies behind emotion is what Freud called “drive.” Basically, his idea was that unpleasant feelings represent the failures to meet those needs and pleasant feelings represent the opposite. It’s how we come to know how we’re meeting our deepest biological needs. And that idea gives an underpinning to cognition that I think is sorely lacking in cognitive science, pure and simple.

There are huge debates about the science of consciousness. Explaining the causal connection between brain and mind is one of the most difficult problems in all of science. On the one hand, there are the neurons and synaptic connections in the brain. And then there’s the immaterial world of thinking and feeling. It seems like they exist in two entirely separate domains. How do you approach this problem?

Subjective experience—consciousness—surely is part of nature because we are embodied creatures and we are experiencing subjects. So there are two ways in which you can look on the great problem you’ve just mentioned. You can either say it’s impossibly difficult to imagine how the physical organ becomes the experiencing subject, so they must belong to two different universes and therefore, the subjective experience is incomprehensible and outside of science. But it’s very hard for me to accept a view like that. The alternative is that it must somehow be possible to bridge that divide.

The major point of contention is whether consciousness can be reduced to the laws of physics or biology. The philosopher David Chalmers has speculated that consciousness is a fundamental property of nature that’s not reducible to any laws of nature.

I accept that, except for the word “fundamental.” I argue that consciousness is a property of nature, but it’s not a fundamental property. It’s quite easy to argue that there was a big bang very long ago and long after that, there was an emergence of life. If Chalmers’ view is that consciousness is a fundamental property of the universe, it must have preceded even the emergence of life. I know there are people who believe that. But as a scientist, when you look at the weight of the evidence, it’s just so much less plausible that there was already some sort of elementary form of consciousness even at the moment of the Big Bang. That’s basically the same as the idea of God. It’s not really grappling with the problem.

You can certainly find all kinds of correlations between brain function and mental activity. We know that brain damage . . . can change someone’s personality. But it still doesn’t explain causation. As the philosopher John Searle said, “How does the brain get over the hump from electrochemistry to feeling?”

I think we have made that problem harder for ourselves by taking human consciousness as our model of what we mean by consciousness. The question sounds so much more magical. How is it possible that all of this thinking and feeling and philosophizing can be the product of brain cells? But we should start with the far more elementary rudiment of consciousness—feeling. Think about consciousness as just being something to do with existential value. Survival is good and dying is bad. That’s the basic value system of all living things. Bad feelings mean you’re doing badly—you’re hungry, you’re thirsty, you’re sleepy, you’re under threat of damage to life and limb. Good feelings mean the opposite—this is good for your survival and reproductive success.

You’re saying consciousness is essentially about feelings. It’s not about cognition or intelligence.

That’s why I’m saying the most elementary forms of consciousness give us a much better prospect of being able to solve the question you’re posing. How can it happen that a physical creature comes to have this mysterious, magical stuff called consciousness? You reduce it down to something much more biological, like basic feelings, and then you start building up the complexities. A first step in that direction is “I feel.” Then comes the question, What is the cause of this feeling? What is this feeling about? And then you have the beginnings of cognition. “I feel like this about that.” So feeling gets extended onto perception and other cognitive representations of the organism in the world.

Where are those feelings rooted in the brain?

Feeling arises in a very ancient part of the brain, in the upper brainstem in structures we share with all vertebrates. This part of the brain is over 500 million years old. The very telling fact is that damage to those structures—tiny lesions as small as the size of a match head in parts of the reticular activating system—obliterates all consciousness. That fact alone demonstrates that more complex cognitive consciousness is dependent upon the basic affective form of consciousness that’s generated in the upper brainstem.

So we place too much emphasis on the cortex, which we celebrate because it’s what makes humans smart.

Exactly. Our evolutionary pride and joy is the huge cortical expanse that only mammals have, and we humans have even more of it. That was the biggest mistake we’ve made in the history of the neuroscience of consciousness. The evidence for the cortex being the seat of consciousness is really weak. If you de-corticate a neonatal mammal—say, a rat or a mouse—it doesn’t lose consciousness. Not only does it wake up in the morning and go to sleep at night, it runs and hangs from bars, swims, eats, copulates, plays, raises its pups to maturity. All of this emotional behavior remains without any cortex.

And the same applies to human beings. Children born with no cortex, a condition called hydranencephaly—not to be confused with hydrocephaly—are exactly the same as what I’ve just described in these experimental animals. They wake up in the morning, go to sleep at night, smile when they’re happy and fuss when they’re frustrated. Of course, you can’t speak to them, because they’ve got no cortex. They can’t tell you that they’re conscious, but they show consciousness and feeling in just the same way as our pets do.

You say we really have two brains—the brainstem and the cortex.

Yes, but the cortex is incapable of generating consciousness by itself. The cortex borrows, as it were, its consciousness from the brainstem. Moreover, consciousness is not intrinsic to what the cortex does. The cortex can perform high-level, uniquely human cognitive operations, such as reading with comprehension, without consciousness being necessary at all. So why does it ever become conscious? The answer is that we have to feel our way into cognition because this is where the values come from. Is this going well or badly? All choices, any decision-making, has to be grounded in a value system where one thing is better than another thing.

So what is thinking? Can we even talk about the neurochemistry of a thought?

A thought in its most basic form is about choice. If you don’t have to make a choice, then it can all happen automatically. I’m now faced with two alternatives and I need to decide which one I’m going to do. Consciousness enables you to make those choices because it contributes value. Thinking goes on unconsciously until you’re in a state of uncertainty as to what to do. Then you need feeling to feel your way through the problem. The bulk of our cognition—our day-to-day psychological life—goes on unconsciously.

How does memory figure into consciousness?

The basic building block of all cognition is the memory we have. We have sensory impressions coming in and they leave traces which we can then reactivate in the form of cognitions and reassemble in all sorts of complicated ways, including coming up with new ideas. But the basic fabric of cognition is memory traces. The cortex is this vast storehouse of representations. So when I said earlier that cognition is not intrinsically conscious, that’s just saying that memories are, for the most part, latent. You couldn’t possibly be conscious of all of those billions of bits of information you have imbibed during your lifetime. So what is conscious is drawn up from this vast storehouse of long-term memory into short-term working memory. The conscious bit is just a tiny fragment of what’s there.

You say the function of memory is to predict our future needs. And the hippocampus, which we typically regard as the brain’s memory center, is used for imagining the future as well as storing information about the past.

The only point of learning from past events is to better predict future events. That’s the whole point of memory. It’s not just a library where we file away everything that’s happened to us. And the reason why we need to keep a record of what’s happened in the past is so that we can use it as a basis for predicting the future. And yes, the hippocampus is every bit as much for imagining the future as remembering the past. You might say it’s remembering the future.

Wouldn’t a true science of consciousness, of subjective experience, explain why particular thoughts and memories pop into my brain?

Sure, and that’s exactly why I take more seriously than most neuroscientists what psychoanalysts try to do. They ask, Why this particular content for Steve at this point in his life? How does it happen that neurons in my brain generate all of this? I’m saying if you start with the most rudimentary causal mechanisms, you’re just talking about a feeling and they’re not that difficult to understand in ordinary biological terms. Then there’s all this cognitive stuff based on your whole life. How do I go about meeting my emotional needs? And there’s your brain churning out predictions and feeling its way through the problem and trying to solve it.

So this is the premise of neuropsychoanalysis. There’s one track to explain the biology of what’s happening in the brain, and another track is psychological understanding. And maybe I need a psychotherapist to help me unpack why a particular thought suddenly occurs to me.

You’ve just summed up my entire scientific life in a nutshell. I think we need both. . . .

Truth About Truth

Pontius Pilate supposedly asked “Quid est veritas?” What is truth? David Detmer teaches philosophy in Indiana. He was asked about postmodernism and ended up talking about objective truth. Below is a fairly long selection from a longer interview conducted by Richard Marshall at 3:16:

DD: As you know, “postmodernism” is a very loose, imprecise term, which means different things in different contexts. The only aspect of it that I have written about at length concerns a certain stance with regard to truth—more specifically either the denial that there is such a thing as objective truth or else the slightly milder thesis that there might as well be no such thing since, in any case, we (allegedly) have no access to it. It is a stance that is reflected well in Richard Rorty’s complaint that we “can still find philosophy professors who will solemnly tell you that they are seeking the truth, not just a story or a consensus but an accurate representation of the way the world is.” Rorty goes on to call such professors “lovably old-fashioned . . .”

. . . Some of those who thought postmodern truth denial was politically liberatory explained that they thought it enabled one to show that the claims that prop up oppressive political structures are not (simply) true, but rather are to be understood as merely comprising one narrative among others, with no special status. One problem with that, from a political point of view, is that it also entails that the critique of such structures as oppressive is itself also not (simply) true, but rather one narrative among others. . . .

3:16: What do you think postmoderns get wrong and what do they get right . . . ?

DD: Often what they have gotten right is the specifics as to how some specific claim is untrue, or misleading because it is only partially true, because some important thing has been left out. What do they get wrong? Well, consider [Richard] Rorty’s rejection of the notion of objective truth. One of his main arguments is that such a concept is of no help to us in practice, since we have no way to examine reality as it is in itself so as to determine whether or not our beliefs about it are accurate. To put it another way, we have no way of knowing whether or not our beliefs give us information about the way things really are, since “we cannot get outside the range of our lights” and “cannot stand on neutral ground illuminated only by the natural light of reason.” Thus, “there is no way to get outside our beliefs and language so as to find some test other than coherence,” and “there is no method for knowing when one has reached the truth, or when one is closer to it than before.”

The first problem is that of figuring out what such statements mean. Rorty obviously cannot claim that they are objectively true—revelatory of the way things really are, so that anyone who disagreed would be simply mistaken—since such a claim would obviously render him vulnerable to charges of self-refutation. But what, then, does he mean? How, for example, could Rorty, consistent with his strictures regarding the impossibility of knowing the objective truth, know that “we cannot get outside the range of our lights” and “cannot stand on neutral ground illuminated only by the natural light of reason”? Does he just mean that this is how things look from his lights? And how can he know that there is no method for knowing when one has reached the truth, or when one is closer to it than before? Does he know that this view is closer to the truth than is the one that holds that there are methods for knowing when one is closer to the truth than one was before?

At a conference Rorty was once challenged to explain why he would deny that it is objectively true that there was not, at that time, a big green giraffe standing behind him. He replied as follows:

Now about giraffes: I want to urge that if you have the distinction between the idiosyncratic and the intersubjective, or the relatively idiosyncratic and the relatively intersubjective, that is the only distinction you need to take care of real versus imaginary giraffes. You do not need a further distinction between the made and the found or the subjective and the objective. You do not need a distinction between reality and appearance, or between inside and outside, but only one between what you can get a consensus about and what you cannot.

But if it is possible to find out that there really is a consensus about the presence, or lack thereof, of a real giraffe, then why isn’t it also possible, even without such knowledge of a consensus, to find out whether or not there really is a giraffe present? Or, to put it another way, if there is a problem in finding out directly that a giraffe really is or is not present, why does this problem not also carry over to the project of finding out whether or not there really is a consensus about the presence or non-presence of a giraffe? Why are consensuses easier to know about than giraffes? If they aren’t, then what is to be gained, from a practical standpoint, by defining “truth” or “reality” in terms of consensus?

It is as if Rorty were claiming that society’s norms and judgments are unproblematically available to us, when nothing else is. But why would anyone think that it is easier to see, for example, that society judges giraffes to be taller than ants than it is to see that giraffes are taller than ants? If anything, this gets things backwards. I would argue that the category “the way things are” is, over a wide range of cases, significantly more obvious and accessible to us than is the category “what our culture thinks.” Is it a more clear and obvious truth that we think that giraffes are taller than ants than that giraffes are taller than ants? I am quite certain of the latter truth from my own observation, but I have never heard anyone else address their own thoughts on the relative heights of giraffes and ants, let alone discuss their impressions of public opinion on the issue. Similar remarks apply to many elementary moral, mathematical, and logical truths.

Moreover, this problem remains no matter how one understands such phrases as “reality” or “the way things are.” For example, if we understand them in some jacked-up, metaphysical sense, to be expressed with upper-case lettering as Reality-as-it-Really-Is, beyond language or thought or anything human, then, while it is understandable that we might want to deny that we know whether or not a giraffe is “really” present, so should we deny that we know whether or not we “really” have achieved a consensus on the matter. (For notice that knowledge of consensus seems to require knowledge of other minds and their thoughts, and it is unclear why anyone would think that our knowledge of the existence of other minds is any less problematic than is our knowledge of the existence of an independent physical world.)

If, on the other hand, we understand them in a more humdrum sense, merely as meaning that things typically are the way they are no matter what we might think about them, and that some of our thoughts about them are made wrong by the way the things are, then, while it is easy to see how we might be able to gather evidence fully sufficient to entitle us to claim to “know” that we have achieved a consensus on giraffes, so is it clear that we might be able to claim to “know” some things about giraffes, even in the absence of any consensus about, or knowledge of consensus about, such matters. Of course, one could use the jacked-up sense of “reality” when saying that we don’t know what giraffes are “really” like, while simultaneously using the humdrum sense of “reality” when saying that we can nevertheless cope by knowing what our culture’s consensus view of giraffes is, but what would be the sense or purpose of this double standard?

Or again, consider Rorty’s statement that we should be “content to call ‘true’ whatever the upshot of free and open encounters turns out to be,” and that he “would like to substitute the idea of ‘unforced agreement’ for that of ‘objectivity.’” Notice that on this view, in order to know whether or not giraffes are taller than ants we must first know (a) whether or not there is a consensus that giraffes are taller than ants and (b) if there is, whether or not the communication that produced that consensus was free, open, and undistorted. But isn’t it obvious that it is easier to determine whether or not giraffes are taller than ants than it is to determine either (a) or (b)?

. . . At other times Rorty defines “truth” not in terms of consensus, but rather in terms of utility. For example, he characterizes his position as one which “repudiates the idea that reality has an intrinsic nature to which true statements correspond … in favor of the idea that the beliefs we call ‘true’ are the ones we find most useful,” declares that its “whole point is to stop distinguishing between the usefulness of a way of talking and its truth,” and says that it would be in our best interest to discard the notion of “objective truth.” This appears, at first glance, to be a clever way to avoid the problem of self-refutation. As Rorty obviously recognizes that it would be inconsistent for him to claim to have discovered the objective truth that there is no objective truth to discover, he here instead bases his rejection of “objective truth” solely on the claim that such a notion is not useful to us—we would benefit from abandoning it.

But as soon as we ask ourselves whether or not it is indeed true that the notion of objective truth is not useful to us and that we would therefore benefit from discarding it, all of the old problems return. For either we understand this as an objective truth claim, in which case we get a performative contradiction (because we make use of a notion in issuing the very utterance in which we urge that it be discarded), or else we understand it in terms of Rorty’s pragmatist understanding of “truth,” in which case we generate an infinite regress (because the claim that the notion of objective truth is not useful to us would then have to be understood as true only insofar as it is useful to us, and this, in turn, would be true only insofar as it is useful to us, and so on).

And insofar as Rorty’s move to pragmatism is motivated by doubts about our ability to know how things really are, the problem remains unsolved. For any grounds we might have for doubting that we can know whether or not giraffes “really” are taller than ants would easily carry over to our efforts to find out whether or not it “really” is useful to believe that giraffes are taller than ants. On the other hand, any standard of “knowledge” sufficiently relaxed as to allow us to “know” that it is useful to believe that giraffes are taller than ants would also be lax enough to enable us to “know,” irrespective of the issue of the utility of belief, that giraffes are taller than ants.

In short, I regard postmodern truth denial of the sort just described as confused, incoherent, and illogical, as well as, from a political standpoint, worse than useless. One might hope that Dxxxx Txxxx’s very different kind of assault on truth might help to reawaken our awareness of the political importance of truth, and of the value commitments (such as a prioritizing of evidence over opinion, and of realism over wishful thinking) necessary to attain it. 

Parmenides Was Unreal (in the Modern Sense)

Parmenides of Elea doesn’t get much publicity these days. He lived 2,500 years ago on the edge of Greece and only one of his philosophical works survives. It’s a poem usually referred to as “On Nature”. The publicity he happens to get derives from the fact that he helped invent metaphysics, the branch of philosophy that deals with the general nature of reality (as it’s been practiced by philosophers in the Western world ever since).

Parmenides is the subject of the latest entry in a series called “Footnotes to Plato”, a periodic consideration of famous philosophers from The Times Literary Supplement. Here’s a bit of the article:

If Parmenides’ presence in the collective consciousness is relatively dim, it is in part because he is eclipsed by the thinkers he influenced. And then there is the small detail that his opinions are, as Aristotle said, “near to madness”. Let us cut to the chase: Parmenides’ central argument. It is so quick that if you blink, you will miss it. You may need to read the following paragraphs twice.

That which is not – “What-is-not” – he says, is not. Since anything that comes into being would have to come into being out of what-is-not, things cannot come into being. Likewise, nothing can pass away because, in order to do so, it would have to enter the non-existent realm of what-is-not. The notion of beings as generated or perishing is therefore literally unthinkable: it would require of us that we think at once of the same thing that it is and it is not. The no-longer and the not-yet are modes of what-is-not. Consequently, the past and future do not exist either.

All of this points to one conclusion: there can be no change. The empty space necessary to separate one object from another would be another mode of what-is-not, so a multiplicity of beings separated by non-being is ruled out. What-is must be continuous. Since beings cannot be to a greater or lesser degree – this would require what-is to be commingled with the (non-existent) diluent of what-is-not – the universe must be fundamentally homogeneous. And so we arrive at the conclusion that the sum total of things is a single, unchanging, timeless, undifferentiated unity.

All of this is set out in a mere 150 lines, many of which are devoted to the philosopher’s mythical encounter with a Goddess who showed him the Way of Truth as opposed to that of the Way of (mere) Opinion. Scholars have, of course, quarreled over what exactly is meant by this 2,500-year-old text that has reached us by a precarious route. The poem survives only in fragments quoted and/or transcribed by others. The main transmitter was Simplicius, who lived over a thousand years after Parmenides’ death. The earliest sources of Simplicius’ transcriptions are twelfth-century manuscripts copied a further 600 years after he wrote them down.

Unsurprisingly, commentators have argued over Parmenides’ meaning. Did he really claim that the universe was an unbroken unity or only that it was homogeneous? They have also wondered whether he was using “is” in a purely predicative sense, as in “The cat is black”, or in a genuine existential sense, as in “The cat is”. Some have suggested that his astonishing conclusions depend on a failure to distinguish these two uses, which were not clearly separated until Aristotle.

What I took away from my philosophy classes is that Parmenides was a “monist”, someone who thinks that, in some significant sense, Reality Is One. The variety and change we see around us are somehow illusory or unreal or unimportant. One textbook suggests Parmenides believed that “Being or Reality is an unmoving perfect sphere, unchanging, undivided”. A later monist, the 17th-century philosopher Baruch Spinoza, argued that reality consists of a single infinite substance that we call “God” or “Nature”. There are various ways to be a monist.

Well, I’ve read the paragraphs above, the ones that try to lay out Parmenides’s central argument, more than twice. You may share my feeling that the argument doesn’t succeed.

Where I think it goes wrong is that Parmenides treats things that don’t exist too much like things that do.

Although it’s easy to talk about things that don’t exist (e.g. a four-sided triangle or a mountain of gold), that only takes us so far. If I imagine a certain configuration of reality (say, me getting a cold) and what I imagined then becomes real (I do get a cold), the imaginary, unreal state of affairs (getting a real cold in the future) hasn’t actually transformed into a real state of affairs (actually getting a cold). All that’s happened is the reality of me imagining getting a cold has been replaced in the world’s timeline (and my experience) by the reality of me getting a cold. One reality was followed by another. It’s not a literal change from something that didn’t exist into something that did.

Saying that the unreal has become real is a manner of speaking. It shouldn’t be understood as a kind of thing (an imaginary situation) somehow changing its properties or relations in such a way that it becomes another kind of thing (a real situation). Philosophers have a way of putting this: “existence is not a predicate”. They mean that existing isn’t the same kind of thing as being square or purple or between two ferns. Existence isn’t a property or relation that can be predicated of something in the way those properties or relations can be. 

When Parmenides says “what is not” cannot become “what is”, he’s putting “what is not” and “what is” in a single category that we might call “things that are or are not”. That leads him, rather reasonably, to point out that “are not” things can’t become “are” things. It’s reasonable to rule that out, because a transition from an “are not” thing to an “are” thing would be something like spontaneous generation. Putting aside what may happen in the realm of quantum physics, when sub-atomic stuff is sometimes said to instantly pop into existence, the idea that “Something can come from nothing” is implausible even today. Parmenides made use of that implausibility in the 5th century BCE when he argued that what isn’t real can’t change into what’s real, so changes never happen at all.

What Parmenides should have kept in mind is that things that “are not” aren’t really things at all — they’re literally nothing — so they can’t change into something. Change doesn’t involve nothing turning into something. Change occurs when one thing that exists (a fresh piece of bread or an arrangement of atoms) becomes something else that exists (a stale piece of bread or a different arrangement of atoms). Real stuff gets rearranged, and we perceive that as something coming into existence or going out of it, i.e. changing.

So I think Parmenides was guilty of a kind of reification or treating the unreal as real. He puts what doesn’t exist into a realm that’s different from the realm of things that do exist, but right next door to it. Those two realms aren’t next door to each other, however. They’re in totally different neighborhoods, one that’s real and one that’s imaginary. It’s impossible and unnecessary to travel from one realm to the other.

By the way, the gist of the Times Literary Supplement article is that Parmenides “insisted that we must follow the rigours of an argument, no matter how surprising the conclusion – setting in motion the entire scientific world view”. Maybe so. I was more interested in his strange idea that change never happens.

The Ethics of “Sweet Illusions and Darling Lies”

Are we morally responsible for what we believe? To some extent, we are. The acceptance of lies and bizarre conspiracy theories by so many of our fellow citizens makes the issue extremely relevant. The philosopher Regina Rini discusses the ethics of belief for the Times Literary Supplement:  

On January 6, the US Capitol building was stormed by a mob, motivated by beliefs that were almost entirely false, absurd and nonsensical: the QAnon conspiracy; the President’s [lies] about massive voter fraud, and the various conspiracy theories that he and his lawyers peddled in support of overturning the election results.

In 1877, the English philosopher William Clifford published a now famous essay, “The Ethics of Belief”, setting out the view that we can be morally faulted for shoddy thinking. Clifford imagines a ship-owner who smothers his doubts about the seaworthiness of a creaky vessel, and adopts the sincere but unjustified belief that it is safe to send passengers across the Atlantic. The ship then sinks. Clifford (himself a shipwreck survivor) asks: don’t we agree that the ship-owner was “verily guilty” of the passengers’ deaths, and that he “must be held responsible for it”? If we agree to this, Clifford continues, then we must also agree that the ship-owner would deserve blame even if the ship hadn’t sunk. It is epistemic carelessness that makes the ship-owner guilty, even if catastrophe is luckily avoided. “The question of right or wrong has to do with the origin of his belief … not whether it turned out to be true or false, but whether he had a right to believe on such evidence as was before him.”

Clifford’s views went out of favour among philosophers for most of a century. Moral evaluation, it was thought, should stop at the mind’s edge. After all, we cannot directly control our beliefs in the way we control our fists. I can’t just decide, here and now, to stop believing that Charles I had a pointy beard . . . And if I can’t control my beliefs, how can I be held accountable for them?


Yet in recent decades, many philosophers have become less impressed by this objection (sometimes called the problem of “doxastic voluntarism”). After all, I can control how I acquire and maintain beliefs by shaping my informational environment. Suppose I do really want to change my beliefs about Charles I’s grooming. I could join a renegade historical society and surround myself with dissenting portraiture. Slowly, indirectly, I can retrain my thoughts and I can be held accountable for choosing to do so.

More to the point, I can also fail to take action to shape my beliefs in healthy ways. The social media era has made this point especially acute, as we can each now curate our own information environment, following sources that challenge our beliefs, or flatter our preconceptions, as we please. Digital epistemic communities are then made up of people who amplify one another’s virtues or vices. Credulously accepting conspiracy stories that vilify my partisan enemies not only dulls my own wits, but encourages my friends to dull theirs. Clifford himself was quite sharp on this point: “Habitual want of care about what I believe leads to habitual want of care in others about the truth of what is told to me … It may matter little to me, in my cloud-castle of sweet illusions and darling lies; but it matters much to Man that I have made my neighbours ready to deceive”.

So far, then, Clifford’s 150-year-old diagnosis seems precisely to explain the epistemic culpability of those who stormed the Capitol on a wave of delusion and lies. But there is a wrinkle here. Clifford thought that credulity – insufficient scepticism toward the claims of others – was the most troubling intellectual vice. But the epistemic shambles of QAnon show a more subtle problem. After all, if there’s anything conspiracy fanatics possess, it is scepticism. They are sceptical of what government officials say, sceptical of what vaccine scientists say, sceptical even of what astronauts say about the shape of the Earth. If anything, they show that critical thinking is a bit like cell division; valuable in proportion, but at risk of harmful metastasis. In the eyes of QAnon devotees, we are the “sheeple” who fail to “do the research” of tumbling down every hyperlinked rabbit-hole.

Conspiracy aficionados are all too willing to think for themselves – that is how they end up believing that Democrats are Satan-worshippers or that 5G phone towers cause Covid-19. And that’s where Clifford’s moralizing – “No simplicity of mind, no obscurity of station, can escape the universal duty of questioning all that we believe” – goes wrong. The ethics of belief should not be a Calvinistic demand for hard epistemic labour. Conspiracists work at least as hard as the rest of us, pinning notes and photos to their bulletin boards late into the night. Hard epistemic labour is just as prone to amplifying epistemic mistakes as overcoming them.

In fact, we should not be focused on individual intellectual virtue at all. The epistemic practices that justify our beliefs are fundamentally interpersonal. Most of our knowledge of the world depends essentially on the say-so of others. Consider: how do you know that I live in Toronto? Well, it says so right at the bottom of this column. But that’s not the same as going to Toronto and seeing me there with your own eyes. So even this simple belief requires trusting the say-so of me or the [Times Literary Supplement].

Perhaps you want to be an uncompromising epistemic individualist, refusing to believe until you’ve verified it yourself? Well, you’ll need to come to Toronto to check. But how will you find Toronto? You can’t use Google Maps (that’s just more say-so from others). Maybe you’ll set out with a compass and enterprising disposition. But how do you know what that compass is pointing to? How do you know where the North Pole is, or how magnetism works? Have you been to the North Pole? Have you done all the magnetism experiments yourself? The list goes on.

No one lives like that. We are all deeply, ineradicably dependent on the say-so of others for nearly all our beliefs about the world. It’s only through a massive division of cognitive labour that we’ve come to know so much. So genuine epistemic responsibility isn’t a matter of doubting all that can be doubted, or only believing what you’ve proven for yourself. It’s a matter of trusting the right other people. That takes wisdom.

Not everyone in the Capitol mob was a QAnon believer. Some were white supremacists aiming to violently uphold a president who refused to condemn their hate. Others were merely insurrection tourists. Still, many do seem to have genuinely believed they were fighting a monstrous regime of Satanic child-harmers. Those beliefs did not appear in a vacuum. A Bellingcat investigation of the social media history of Ashli Babbitt, the woman shot by police while attempting to storm the House of Representatives, suggests that she held relatively mainstream political views until about a year ago, when she veered off into deep QAnon obsession. She put her trust in the wrong people, and all her epistemic labour only made things worse.

That is the most delicate and important lesson to draw from last week’s horror show. QAnon believers are culpable for their bad judgment. But that culpability extends far beyond them, through to everyone whose actions fed their dangerous beliefs. It’s not enough to insist that responsibility falls entirely on the believer, because we are all dependent on others for our knowledge, and we must all trust someone. That mutual reliance means we are all our neighbour’s epistemic keeper.

Most obviously the blame for last week’s catastrophe extends to politicians who cynically courted and channelled [lies] to support their false allegations of election fraud. Donald Trump spoke to the mob moments before their assault, declaring “you’ll never take back our country with weakness. You have to show strength and you have to be strong”, and ordered them to march against the Capitol. That evening, after Congress regained control of its chambers, senators such as Josh Hawley continued to flog “objections and concerns” about the presidential election which had been dismissed by numerous courts and Trump’s own Justice Department.

But manipulative politicians are not the only ones to blame. The culture of the internet played a big role as well. An investigation of QAnon’s origins by the podcast Reply All found that the conspiracy began life as a joke on the ultra-ironic website 4chan. In 2017, “Q” was only one of several fake government source characters being played, tongue-in-cheek, by forum participants who all understood it was a game. Gradually the Q persona became the most popular, and then outsiders – who didn’t get the joke – stumbled onto Q’s tantalizing nonsense. Within a year, thousands of people looking for anything to fill the gap left by their scepticism toward authority developed a sincere belief in Q. Behind the scenes, someone, with cynical political or commercial motives, was happy to oblige.

Unquote. 

Prof. Rini’s analysis sounds right. The next question, of course, is: how should we respond? Millions of people are being immoral with regard to what they believe. Is there anything to be done about it? We have laws against some immoral behavior, like theft and assault. Although we can’t have laws that control what people believe, we can have limited government regulation of the companies that distribute those lies (such as Facebook, Fox News and your local cable TV company). The public can also exert pressure on companies, TV networks, for instance, that give certain politicians and pundits repeated opportunities to lie in public. And in our personal lives, when we hear somebody say something that’s simply not true, we can speak up, even though it’s easier to stay quiet.