Does Consciousness Reside in the Brain’s Electromagnetic Field?

At bottom, it all seems to be a bunch of fields:

In the modern framework of the quantum theory of fields, a field occupies space, contains energy, and its presence precludes a classical “true vacuum”. This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics [Wikipedia].

So maybe consciousness is a special type of field generated by brains. Johnjoe McFadden is a professor of molecular genetics in England. He’s written about his electromagnetic field theory of consciousness for Aeon:

Just how do the atoms and molecules that make up the neurons in our brain . . . manage to generate human awareness and the power of thought? In answering that longstanding question, most neurobiologists today would point to the information-processing performed by brain neurons. . . . This [begins] as soon as light and sound [reach the] eyes and ears, stimulating . . . neurons to fire in response to different aspects of [the] environment. . . .

Each ‘firing’ event involves the movement of electrically charged atoms called ions in and out of the neurons. That movement triggers a kind of chain reaction that travels from one nerve cell to another via logical rules, roughly analogous to the AND, OR and NOT Boolean operations performed by today’s computer gates, in order to generate outputs such as speech. So, within milliseconds of . . . glancing at [an object], the firing rate of millions of neurons in [the] brain [correlates] with thousands of visual features of the [object] and its [surroundings]. . . .
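[An aside from me, not McFadden: the gate analogy he is drawing here is essentially the old McCulloch–Pitts picture of a neuron as a simple threshold unit. A minimal sketch in Python, purely for illustration:]

```python
def neuron(inputs, weights, threshold):
    # A toy threshold "neuron": fire (1) if the weighted input sum
    # reaches the threshold, otherwise stay silent (0).
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda a, b: neuron([a, b], [1, 1], 2)   # fires only if both inputs fire
OR  = lambda a, b: neuron([a, b], [1, 1], 1)   # fires if either input fires
NOT = lambda a: neuron([a], [-1], 0)           # fires only if its input is silent

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1
```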

Yet information-processing clearly isn’t sufficient for conscious knowing. Computers process lots of information yet have not exhibited the slightest spark of consciousness [note: or so we believe]. Several decades ago, in an essay exploring the phenomenology of consciousness, the philosopher Thomas Nagel asked us to imagine what it’s like to be a bat. This feature of being-like-something, of having a perspective on the world, captures something about what it means to be a truly conscious ‘knower’. In [a] hospital room watching my son’s EEG, I wondered what it was like to be one of his neurons, processing the information [from] the slamming of a door [in the hall]. As far as we can tell, an individual neuron knows just one thing – its firing rate.

It fires or doesn’t fire based on its inputs, so the information it carries is pretty much equivalent to the zero or one of binary computer language. It thereby encodes just a single bit of information. The value of that bit, whether a zero or a one, might correlate with the slamming of a door, but it says nothing about the door’s shape, its colour, its use as a portal between rooms or the noise of its slamming – all features that I’m sure were part of my son’s conscious experience. I concluded that being a single neuron in my son’s brain would not feel like anything.

Of course, you could argue, as neurobiologists usually do, that although a single neuron might know next to nothing, the collection of 100 billion neurons in my son’s brain knew everything in his mind and would thereby feel like something. But this explanation bumps into what’s known as the binding problem, which asks how all the information in millions of widely distributed neurons in the brain comes together to create a single complex yet unified conscious perception of, say, a room . . .

Watching those wiggly lines march across the EEG screen gave me the germ of a different idea, something that didn’t boil down to pure neuronal computation or information-processing. Every time a neuron fires, along with the matter-based signal that travels down its wire-like nerve fibre, it also projects a tiny electromagnetic (EM) pulse into the surrounding space, rather like the signal from your phone when you send a text. So when my son heard the door close, as well as triggering the firing of billions of nerves, its slamming would have projected billions of tiny pulses of electromagnetic energy into his brain. These pulses flow into each other to generate a kind of pool of EM energy that’s called an electromagnetic field – something that neurobiologists have neglected when probing the nature of consciousness.

Neurobiologists have known about the brain’s EM field for more than a century but have nearly always dismissed it as having no more relevance to its workings than the exhaust of a car has to its steering. Yet, since information is just correlation, I knew that the underlying brain EM field tremors that generated the spikes on the EEG screen knew about the slamming of the hospital door, just as much as the neurons whose firing generated those tremors. However, I also had enough physics to know that there was a crucial difference between a million scattered neurons firing and the EM field generated by their firing. The million discrete bits of information held in a million scattered neurons are physically unified within a single brain EM field.

The unity of EM fields is apparent whenever you use wifi. Perhaps you’re streaming a radio documentary . . . on your phone while another family member is watching a movie, and another is listening to streamed music. Remarkably, all this information, whether movies, pictures, messages or music, is instantly available to be downloaded from any point in the vicinity of your router. This is because – unlike the information encoded in discrete units of matter such as computer gates or neurons – EM field information is encoded as immaterial waves that travel at the speed of light from their source to their receiver. Between source and receiver, all those waves encoding different messages overlap and intermingle to become a single EM field of physically bound information with as much unity as a single photon or electron, and which can be downloaded from any point in the field. The field, and everything encoded in it, is everywhere.

While watching my son’s EEG marching across the screen, I wondered what it was like to be his brain’s EM field pulsing with physically bound information correlating with all of his sense perceptions. I guessed it would feel a lot like him.

Locating consciousness in the brain’s EM field might seem bizarre, but is it any more bizarre than believing that awareness resides in matter? Remember Albert Einstein’s equation, E = mc². All it involves is moving from the matter-based right-hand side of the equation to energy located on the left-hand side. Both are physical, but whereas matter encodes information as discrete particles separated in space, energy information is encoded as overlapping fields in which information is bound up into single unified wholes. Locating the seat of consciousness in the brain’s EM field thereby solves the binding problem of understanding how information encoded in billions of distributed neurons is unified in our (EM field-based) conscious mind. It is a form of dualism, but a scientific dualism based on the difference between matter and energy, rather than matter and spirit.

Awareness is then what this joined-up EM field information feels like from the inside. So, for example, the experience of hearing a door slam is what an EM field perturbation in the brain that correlates with a door slamming, and all of its memory neuron-encoded associations, feels like, from the inside.

[Neuroscientists have found that conscious perception is associated with neurons firing in synchrony.] But why? Whether or not neurons fire synchronously should make no difference to their information-processing operations. Synchrony makes no sense for a consciousness located in neurons – but if we place consciousness in the brain’s EM field, then its association with synchrony becomes inevitable.

Toss a handful of pebbles into a still pond and, where the peak of one wave meets the trough of another, they cancel out each other to cause destructive interference. However, when the peaks and troughs line up, then they reinforce each other to make a bigger wave: constructive interference. The same will happen in the brain. When millions of disparate neurons recording or processing features of my desk fire asynchronously, then their waves will cancel out each other to generate zero EM field. Yet when those same neurons fire synchronously, then their waves will line up to cause constructive interference to project a strong EM signal into my brain’s EM field, what I now call the conscious electromagnetic information (cemi) field. I will see my desk.
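[Another aside: the pond analogy can be put into numbers. A minimal sketch, assuming two identical oscillations (the 40 Hz figure is just an illustrative choice, not anything McFadden specifies):]

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)            # one second, sampled 1000 times
wave = np.sin(2 * np.pi * 40 * t)          # a 40 Hz oscillation

synchronous = wave + np.sin(2 * np.pi * 40 * t)           # peaks meet peaks
asynchronous = wave + np.sin(2 * np.pi * 40 * t + np.pi)  # peaks meet troughs

print(round(float(np.max(np.abs(synchronous))), 3))   # ~2.0: constructive interference, a strong signal
print(round(float(np.max(np.abs(asynchronous))), 3))  # ~0.0: destructive interference, the waves cancel
```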

I’ve been publishing on cemi field theory since 2000, and recently published an update in 2020. A key component of the theory is its novel insight into the nature of what we call ‘free will’. . . . Most non-modern people . . . probably believed that [a] supernatural soul was the driver of . . . willed actions. When . . . secular philosophers and scientists exorcised the soul from the body, voluntary actions became just another motor output of neuronal computation – no different from those that drive non-conscious actions such as walking, blinking, chewing or forming grammatically correct sentences.

Then why do willed actions feel so different? In a 2002 paper, I proposed that free will is our experience of the cemi field acting on neurons to initiate voluntary actions. Back then, there wasn’t much evidence for EM fields influencing neural firing – but experiments by David McCormick at Yale University School of Medicine in 2010 and Christof Koch at Caltech in 2011 have demonstrated that neurons can indeed be perturbed by weak, brain-strength, EM fields. At the very least, their experiments suggest the plausibility of a wifi component of neuronal information processing, which I claim is experienced as ‘free will’.

The cemi field theory also accounts for why our non-conscious and conscious minds operate differently. One of the most striking differences between the two is that our non-conscious mind can do many things at once, but we are able to engage in only one conscious task at a time. [Try to] divide a number like 11,357 by 71 while concentrating on a game of chess. Our non-conscious mind appears to be a parallel processor, whereas our conscious mind is a serial processor that can operate only one task at a time.

The cemi field theory accounts for these two modes by first accepting that most brain information-processing – the non-conscious sort – goes solely through its neuronal ‘wires’ that don’t interact through EM fields. This allows different tasks to be allocated to different circuits. In our distant past, all neural computation likely took this parallel-processing neuronal route. . . . However, at some point in our evolutionary history, our ancestors’ skulls became packed with more and more neurons such that adjacent neurons started to interfere with each other through their EM field interactions. Mostly, the interference would have impaired function. Natural selection would then have kicked in to insulate neurons involved in these vital functions.

Occasionally, electrical interference might have been beneficial. For example, the EM field interactions might have conferred the ability to compute with complex joined-up packets of EM field information, rather than mere bits. When this happened, natural selection would have pulled in the other direction, to increase EM field sensitivity. Yet there was also a downside to this way of processing information. Remember the pebbles tossed into the pond: they interfere with one another. Different ideas dropped into the brain’s cemi field similarly interfere with one another. Our conscious cemi-field mind inevitably became a serial computer that can do only one thing at a time.

A Nice Explanation of Quantum Mechanics, with Thoughts on What Makes Science Special

Michael Strevens teaches philosophy at New York University. In his book, The Knowledge Machine: How Irrationality Created Modern Science, he argues that what makes modern science so productive is the peculiar behavior of scientists. From the publisher’s site:

Like such classic works as Karl Popper’s The Logic of Scientific Discovery and Thomas Kuhn’s The Structure of Scientific Revolutions, The Knowledge Machine grapples with the meaning and origins of science, using a plethora of . . .  examples to demonstrate that scientists willfully ignore religion, theoretical beauty, and . . . philosophy to embrace a constricted code of argument whose very narrowness channels unprecedented energy into empirical observation and experimentation. Strevens calls this scientific code the iron rule of explanation, and reveals the way in which the rule, precisely because it is unreasonably close-minded, overcomes individual prejudices to lead humanity inexorably toward the secrets of nature.

Here Strevens presents a very helpful explanation of quantum mechanics, while explaining that physicists (most of them anyway) are following Newton’s example when they use the theory to make exceptionally accurate predictions, even though the theory’s fundamental meaning is mysterious (in the well-known phrase, they “shut up and calculate”):

To be scientific simply was to be Newtonian. The investigation of nature [had] changed forever. No longer were deep philosophical insights of the sort that founded Descartes’s system considered to be the keys to the kingdom of knowledge. Put foundational matters aside, Newton’s example seemed to urge, and devote your days instead to the construction of causal principles that, in their forecasts, follow precisely the contours of the observable world. . . .

[This is] Newton’s own interpretation of his method, laid out in a postscript to the Principia’s second edition of 1713. There Newton summarizes the fundamental properties of gravitational attraction—that it increases “in proportion to the quantity of solid matter” and decreases in proportion to distance squared—and then continues:

I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. . . . It is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies and of our sea.
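[My gloss, not Strevens’s: in modern notation, the properties Newton summarizes just before this passage amount to the familiar inverse-square law,

$$ F = G\,\frac{m_1 m_2}{r^2} $$

where the force grows with the two masses and falls off with the square of the distance between them.]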

The thinkers around and after Newton got the message, one by one.

[Jumping ahead three centuries:]

According to Roger Penrose, one of the late twentieth century’s foremost mathematical physicists, quantum mechanics “makes absolutely no sense.” “I think I can safely say that nobody understands quantum mechanics,” remarked Richard Feynman. How can a theory be widely regarded both as incomprehensible and also as the best explanation we have of the physical world we live in?

. . . Quantum theory derives accurate predictions from a notion, superposition, that is quite beyond our human understanding. Matter, says quantum mechanics, occupies the state called superposition when it is not being observed [or measured]. An electron in superposition occupies no particular point in space. It is typically, rather, in a kind of “mix” of being in many places at once. The mix is not perfectly balanced: some places are far more heavily represented than others. So a particular electron’s superposition might be almost all made up from positions near a certain atomic nucleus and just a little bit from positions elsewhere. That is the closest that quantum mechanics comes to saying that the electron is orbiting the nucleus.

As to the nature of this “mix”—it is a mystery. We give it a name: superposition. But we can’t give it a philosophical explanation. What we can do is to represent any superposition with a mathematical formula, called a “wave function.” An electron’s wave function represents its physical state with the same exactitude that, in Newton’s physics, its state would be represented by numbers specifying its precise position and velocity. You may have heard of quantum mechanics’ “uncertainty principle,” but forget about uncertainty here: the wave function is a complete description that captures every matter of fact about an electron’s physical state without remainder.

So far, we have a mathematical representation of the state of any particular piece of matter, but we haven’t said how that state changes in time. This is the job of Schrödinger’s equation, which is the quantum equivalent of Newton’s famous second law of motion F = ma, in that it spells out how forces of any sort—gravitational, electrical, and so on—will affect a quantum particle. According to Schrödinger’s equation, the wave function will behave in what physicists immediately recognize as a “wavelike” way. That is why, according to quantum mechanics, even particles such as electrons conduct themselves as though they are waves.
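[For reference, and again my addition rather than Strevens’s: the equation he is describing is the time-dependent Schrödinger equation, which for a single particle of mass m moving in a potential V reads

$$ i\hbar\,\frac{\partial \Psi(\mathbf{r},t)}{\partial t} = \left(-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r},t)\right)\Psi(\mathbf{r},t). $$

The left-hand side says how the wave function changes in time; the right-hand side encodes the forces acting on the particle, playing the role that F = ma plays in Newton’s mechanics.]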

In the early days of quantum mechanics, Erwin Schrödinger, the Austrian physicist who formulated the equation in 1926, and Louis de Broglie, a French physicist—both eventual Nobel Prize winners—wondered whether the waves described by quantum mechanics might be literal waves traveling through a sea of “quantum ether” that pervades our universe. They attempted to understand quantum mechanics, then, using the old model of the fluid.

This turned out to be impossible for a startling reason: it is often necessary to assign a wave function not to a single particle, like an electron, but to a whole system of particles. Such a wave function is defined in a space that has three dimensions for every particle in the system: for a 2-particle system, then, it has 6 dimensions; for a 10-particle system, 30 dimensions. Were the wave to be a real entity made of vibrations in the ether, it would therefore have to be flowing around a space of 6, or 30, or even more dimensions. But our universe rather stingily supplies only three dimensions for things to happen in. In quantum mechanics, as Schrödinger and de Broglie soon realized, the notion of substance as fluid fails completely.
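[In symbols, and only as my shorthand for what the passage says: a wave function for N particles is a function on configuration space,

$$ \Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t), $$

which has 3N spatial dimensions, hence 6 for two particles and 30 for ten.]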

There is a further component to quantum mechanics. It is called Born’s rule, and it says what happens when a particle’s position or other state is measured. Suppose that an electron is in a superposition, a mix of being “everywhere and nowhere.” You use the appropriate instruments to take a look at it; what do you see? Eerily, you see it occupying a definite position. Born’s rule says that the position is a matter of chance: the probability that a particle appears in a certain place is proportional to the degree to which that place is represented in the mix.

It is as though the superposition is an extremely complex cocktail, a combination of various amounts of infinitely many ingredients, each representing the electron’s being in a particular place. Taste the cocktail, and instead of an infinitely complex flavor you will—according to Born’s rule—taste only a single ingredient. The chance of tasting that ingredient is proportional to the amount of the ingredient contained in the mixture that makes up the superposition. If an electron’s state is mostly a blend of positions near a certain atomic nucleus, for example, then when you observe it, it will most likely pop up near the nucleus.
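[Once more in symbols, my gloss: Born’s rule says the probability of finding the particle in a region A is the squared magnitude of the wave function integrated over that region,

$$ \Pr(x \in A) = \int_A |\psi(x)|^{2}\,dx, $$

so the places that dominate the “mix” (where |ψ|² is largest) are the places the particle is most likely to show up when observed.]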

One more thing: an observed particle’s apparently definite position is not merely a fleeting glimpse of something more complex. Once you see the particle in a certain position, it goes on to act as though it really is in that position (until something happens to change its state). In mixological terms, once you have sampled your cocktail, every subsequent sip will taste the same, as though the entire cocktail has transformed into a simple solution of this single ingredient. It is this strange disposition for matter, when observed, to snap into a determinate place that accounts for its “particle-like” behavior.

To sum up, quantum mechanical matter—the matter from which we’re all made—spends almost all its time in a superposition. As long as it’s not observed, the superposition, and so the matter, behaves like an old-fashioned wave, an exemplar of liquidity (albeit in indefinitely many dimensions). If it is observed, the matter jumps randomly out of its superposition and into a definite position like an old-fashioned particle, the epitome of solidity.

Nobody can explain what kind of substance this quantum mechanical matter is, such that it behaves in so uncanny a way. It seems that it can be neither solid nor fluid—yet these exhaust the possibilities that our human minds can grasp. Quantum mechanics does not, then, provide the kind of deep understanding of the way the world works that was sought by philosophers from Aristotle to Descartes. What it does supply is a precise mathematical apparatus for deriving effects from their causes. Take the initial state of a physical system, represented by a wave function; apply Schrödinger’s equation and if appropriate Born’s rule, and the theory tells you how the system will behave (with, if Born’s rule is invoked, a probabilistic twist). In this way, quantum theory explains why electrons sometimes behave as waves, why photons (the stuff of light) sometimes behave as particles, and why atoms have the structure that they do and interact in the way they do.

Thus, quantum mechanics may not offer deep understanding, but it can still account for observable phenomena by way of . . . the kind of explanation favored by Newton . . . Had Newton [engaged with scientists like Bohr and Einstein at conferences on quantum mechanics] he would perhaps have proclaimed:

I have not as yet been able to deduce from phenomena the nature of quantum superposition, and I do not feign hypotheses. It is enough that superposition really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the microscopic bodies of which matter is made.

Newton . . .  was the chief architect of modern science’s first great innovation. Rather than deep philosophical understanding, Newton pursued shallow explanatory power, that is, the ability to derive correct descriptions of phenomena from a theory’s causal principles, regardless of their ultimate nature and indeed regardless of their very intelligibility. In so doing, he was able to build a gravitational theory of immense capability, setting an example that his successors were eager to follow.

Predictive power thereby came to override metaphysical insight. Or as the historian of science John Heilbron, writing of the study of electricity after Newton, put it:

When confronted with a choice between a qualitative model deemed intelligible and an exact description lacking clear physical foundations, the leading physicists of the Enlightenment preferred exactness.

So it continued to be, as the development and acceptance of quantum mechanics, as unerring as it is incomprehensible, goes to show. The criterion for explanatory success inherent in Newton’s practice became fixed for all time, founding the procedural consensus that lies at the heart of modern science.

Consciousness and Primitive Feelings

I’ve been thinking lately that all value — whether ethical, aesthetic or practical — comes down to feelings in the end. Was that the right thing to do? Is that a beautiful song? Is this expensive hammer better than the cheaper one? Only if it tends in the past, present or future to make me or you or somebody else have certain feelings.

Below is most of an interview with Mark Solms, a South African psychoanalyst and neuropsychologist, who has a new book out: The Hidden Spring: A Journey to the Source of Consciousness. The Nautilus site gave the interview the title “Consciousness Is Just A Feeling”, although that’s not what Solms says. The interviewer’s questions are in italics:

. . . You made a big discovery that overturned the prevailing theory that we only dream during REM sleep. What did you find?

It was just assumed that when your REM sleep stops, your dreams also stop. But I found that human patients with damage to the part of the brain generating REM sleep nevertheless continue to experience dreams. In retrospect, you realize what a significant methodological error we made. That’s the price we pay for not gathering subjective data. You know, the actual subjective experience of dreams is an embarrassment to science. And this is what my professors had in mind when they were saying, don’t study things like that. But you’re going to be missing something rather important about how the brain works if you leave out half of the available data.

Your interest in Freud is very unusual for a neuroscientist. You actually trained to become a psychoanalyst, and since then, you’ve edited the complete psychological works of Freud.

Yes, and my colleagues were horrified. I had been taught this was pseudoscience. One of them said to me, “You know, astronomers don’t study astrology.” It’s true that psychoanalysis had lost its bearings. Freud was a very well-trained neuroscientist and neurologist, but in successive generations that grounding of psychoanalysis in the biological sciences had been lost. So I can understand where some of the disdain for psychoanalysis came from. But to its credit, it studied the actual lived life of the mind, which was the thing that interested me, and was missing from neuropsychology. So I turned to psychoanalysis to find any kind of systematic attempt to study subjective experience and to infer what kinds of mechanisms lay behind it.

Did we get Freud wrong? Did he have scientific insights that we’ve ignored?

Very much so. I’m not going to pretend that Freud didn’t make some gigantic mistakes. That’s to be expected. He was a pioneer, taking the very first steps in trying to systematically study subjective experience. The reason he made so little progress, and abandoned neuroscience, was that there weren’t scientific methods by which you could study these things. Even the EEG was only brought into common use after the Second World War. So there were no methods for studying in vivo what’s going on in the brain, let alone the methods we have nowadays. But the core of his basic observations was the centrality of emotion: how much affective feelings influence cognitive processes. That’s the essence of what psychoanalysis is all about, how our rational, logical, cognitive processes can be distorted by emotional forces.

You founded the new field of “neuropsychoanalysis.” What’s the basic premise of this approach?

The neuropsychology I was taught might as well have been neurobehaviorism. Oliver Sacks famously wrote in 1984 that neuropsychology is admirable, but it excludes the psyche, by which he meant the active living subject of the mind. That really caught my attention. So I wanted to bring the psyche back into neuropsychology. Emotion was just not studied in the neuropsychology of the 1980s. The centrality of emotion in the life of the mind, and what lies behind emotion, is what Freud called “drive”, which is rooted in our basic biological needs. Basically, his idea was that unpleasant feelings represent failures to meet those needs and pleasant feelings represent the opposite. It’s how we come to know how we’re meeting our deepest biological needs. And that idea gives an underpinning to cognition that I think is sorely lacking in cognitive science, pure and simple.

There are huge debates about the science of consciousness. Explaining the causal connection between brain and mind is one of the most difficult problems in all of science. On the one hand, there are the neurons and synaptic connections in the brain. And then there’s the immaterial world of thinking and feeling. It seems like they exist in two entirely separate domains. How do you approach this problem?

Subjective experience—consciousness—surely is part of nature because we are embodied creatures and we are experiencing subjects. So there are two ways in which you can look on the great problem you’ve just mentioned. You can either say it’s impossibly difficult to imagine how the physical organ becomes the experiencing subject, so they must belong to two different universes and therefore, the subjective experience is incomprehensible and outside of science. But it’s very hard for me to accept a view like that. The alternative is that it must somehow be possible to bridge that divide.

The major point of contention is whether consciousness can be reduced to the laws of physics or biology. The philosopher David Chalmers has speculated that consciousness is a fundamental property of nature that’s not reducible to any laws of nature.

I accept that, except for the word “fundamental.” I argue that consciousness is a property of nature, but it’s not a fundamental property. It’s quite easy to argue that there was a big bang very long ago and long after that, there was an emergence of life. If Chalmers’ view is that consciousness is a fundamental property of the universe, it must have preceded even the emergence of life. I know there are people who believe that. But as a scientist, when you look at the weight of the evidence, it’s just so much less plausible that there was already some sort of elementary form of consciousness even at the moment of the Big Bang. That’s basically the same as the idea of God. It’s not really grappling with the problem.

You can certainly find all kinds of correlations between brain function and mental activity. We know that brain damage . . . can change someone’s personality. But it still doesn’t explain causation. As the philosopher John Searle said, “How does the brain get over the hump from electrochemistry to feeling?”

I think we have made that problem harder for ourselves by taking human consciousness as our model of what we mean by consciousness. The question sounds so much more magical. How is it possible that all of this thinking and feeling and philosophizing can be the product of brain cells? But we should start with the far more elementary rudiment of consciousness—feeling. Think about consciousness as just being something to do with existential value. Survival is good and dying is bad. That’s the basic value system of all living things. Bad feelings mean you’re doing badly—you’re hungry, you’re thirsty, you’re sleepy, you’re under threat of damage to life and limb. Good feelings mean the opposite—this is good for your survival and reproductive success.

You’re saying consciousness is essentially about feelings. It’s not about cognition or intelligence.

That’s why I’m saying the most elementary forms of consciousness give us a much better prospect of being able to solve the question you’re posing. How can it happen that a physical creature comes to have this mysterious, magical stuff called consciousness? You reduce it down to something much more biological, like basic feelings, and then you start building up the complexities. A first step in that direction is “I feel.” Then comes the question, What is the cause of this feeling? What is this feeling about? And then you have the beginnings of cognition. “I feel like this about that.” So feeling gets extended onto perception and other cognitive representations of the organism in the world.

Where are those feelings rooted in the brain?

Feeling arises in a very ancient part of the brain, in the upper brainstem in structures we share with all vertebrates. This part of the brain is over 500 million years old. The very telling fact is that damage to those structures—tiny lesions as small as the size of a match head in parts of the reticular activating system—obliterates all consciousness. That fact alone demonstrates that more complex cognitive consciousness is dependent upon the basic affective form of consciousness that’s generated in the upper brainstem.

So we place too much emphasis on the cortex, which we celebrate because it’s what makes humans smart.

Exactly. Our evolutionary pride and joy is the huge cortical expanse that only mammals have, and we humans have even more of it. That was the biggest mistake we’ve made in the history of the neuroscience of consciousness. The evidence for the cortex being the seat of consciousness is really weak. If you de-corticate a neonatal mammal—say, a rat or a mouse—it doesn’t lose consciousness. Not only does it wake up in the morning and go to sleep at night, it runs and hangs from bars, swims, eats, copulates, plays, raises its pups to maturity. All of this emotional behavior remains without any cortex.

And the same applies to human beings. Children born with no cortex, a condition called hydranencephaly—not to be confused with hydrocephaly—are exactly the same as what I’ve just described in these experimental animals. They wake up in the morning, go to sleep at night, smile when they’re happy and fuss when they’re frustrated. Of course, you can’t speak to them, because they’ve got no cortex. They can’t tell you that they’re conscious, but they show consciousness and feeling in just the same way as our pets do.

You say we really have two brains—the brainstem and the cortex.

Yes, but the cortex is incapable of generating consciousness by itself. The cortex borrows, as it were, its consciousness from the brainstem. Moreover, consciousness is not intrinsic to what the cortex does. The cortex can perform high-level, uniquely human cognitive operations, such as reading with comprehension, without consciousness being necessary at all. So why does it ever become conscious? The answer is that we have to feel our way into cognition because this is where the values come from. Is this going well or badly? All choices, any decision-making, has to be grounded in a value system where one thing is better than another thing.

So what is thinking? Can we even talk about the neurochemistry of a thought?

A thought in its most basic form is about choice. If you don’t have to make a choice, then it can all happen automatically. I’m now faced with two alternatives and I need to decide which one I’m going to do. Consciousness enables you to make those choices because it contributes value. Thinking goes on unconsciously until you’re in a state of uncertainty as to what to do. Then you need feeling to feel your way through the problem. The bulk of our cognition—our day-to-day psychological life—goes on unconsciously.

How does memory figure into consciousness?

The basic building block of all cognition is the memory we have. We have sensory impressions coming in and they leave traces which we can then reactivate in the form of cognitions and reassemble in all sorts of complicated ways, including coming up with new ideas. But the basic fabric of cognition is memory traces. The cortex is this vast storehouse of representations. So when I said earlier that cognition is not intrinsically conscious, that’s just saying that memories are, for the most part, latent. You couldn’t possibly be conscious of all of those billions of bits of information you have imbibed during your lifetime. So what is conscious is drawn up from this vast storehouse of long-term memory into short-term working memory. The conscious bit is just a tiny fragment of what’s there.

You say the function of memory is to predict our future needs. And the hippocampus, which we typically regard as the brain’s memory center, is used for imagining the future as well as storing information about the past.

The only point of learning from past events is to better predict future events. That’s the whole point of memory. It’s not just a library where we file away everything that’s happened to us. And the reason why we need to keep a record of what’s happened in the past is so that we can use it as a basis for predicting the future. And yes, the hippocampus is every bit as much for imagining the future as remembering the past. You might say it’s remembering the future.

Wouldn’t a true science of consciousness, of subjective experience, explain why particular thoughts and memories pop into my brain?

Sure, and that’s exactly why I take more seriously than most neuroscientists what psychoanalysts try to do. They ask, Why this particular content for Steve at this point in his life? How does it happen that neurons in my brain generate all of this? I’m saying if you start with the most rudimentary causal mechanisms, you’re just talking about a feeling and they’re not that difficult to understand in ordinary biological terms. Then there’s all this cognitive stuff based on your whole life. How do I go about meeting my emotional needs? And there’s your brain churning out predictions and feeling its way through the problem and trying to solve it.

So this is the premise of neuropsychoanalysis. There’s one track to explain the biology of what’s happening in the brain, and another track is psychological understanding. And maybe I need a psychotherapist to help me unpack why a particular thought suddenly occurs to me.

You’ve just summed up my entire scientific life in a nutshell. I think we need both. . . .

Truth About Truth

Pontius Pilate supposedly asked “Quid est veritas?” What is truth? David Detmer teaches philosophy in Indiana. He was asked about postmodernism and ended up talking about objective truth. Below is a fairly long selection from a longer interview conducted by Richard Marshall at 3:16:

DD: As you know, “postmodernism” is a very loose, imprecise term, which means different things in different contexts. The only aspect of it that I have written about at length concerns a certain stance with regard to truth—more specifically either the denial that there is such a thing as objective truth or else the slightly milder thesis that there might as well be no such thing since, in any case, we (allegedly) have no access to it. It is a stance that is reflected well in Richard Rorty’s complaint that we “can still find philosophy professors who will solemnly tell you that they are seeking the truth, not just a story or a consensus but an accurate representation of the way the world is.” Rorty goes on to call such professors “lovably old-fashioned . . .”

. . . Some of those who thought postmodern truth denial was politically liberatory explained that they thought it enabled one to show that the claims that prop up oppressive political structures are not (simply) true, but rather are to be understood as merely comprising one narrative among others, with no special status. One problem with that, from a political point of view, is that it also entails that the critique of such structures as oppressive is itself also not (simply) true, but rather one narrative among others. . . .

3:16: What do you think postmoderns get wrong and what do they get right . . . ?

DD: Often what they have gotten right is the specifics as to how some specific claim is untrue, or misleading because it is only partially true, because some important thing has been left out. What do they get wrong? Well, consider [Richard] Rorty’s rejection of the notion of objective truth. One of his main arguments is that such a concept is of no help to us in practice, since we have no way to examine reality as it is in itself so as to determine whether or not our beliefs about it are accurate. To put it another way, we have no way of knowing whether or not our beliefs give us information about the way things really are, since “we cannot get outside the range of our lights” and “cannot stand on neutral ground illuminated only by the natural light of reason.” Thus, “there is no way to get outside our beliefs and language so as to find some test other than coherence,” and “there is no method for knowing when one has reached the truth, or when one is closer to it than before.”

The first problem is that of figuring out what such statements mean. Rorty obviously cannot claim that they are objectively true—revelatory of the way things really are, so that anyone who disagreed would be simply mistaken—since such a claim would obviously render him vulnerable to charges of self-refutation. But what, then, does he mean? How, for example, could Rorty, consistent with his strictures regarding the impossibility of knowing the objective truth, know that “we cannot get outside the range of our lights” and “cannot stand on neutral ground illuminated only by the natural light of reason”? Does he just mean that this is how things look from his lights? And how can he know that there is no method for knowing when one has reached the truth, or when one is closer to it than before? Does he know that this view is closer to the truth than is the one that holds that there are methods for knowing when one is closer to the truth than one was before?

At a conference Rorty was once challenged to explain why he would deny that it is objectively true that there was not, at that time, a big green giraffe standing behind him. He replied as follows:

Now about giraffes: I want to urge that if you have the distinction between the idiosyncratic and the intersubjective, or the relatively idiosyncratic and the relatively intersubjective, that is the only distinction you need to take care of real versus imaginary giraffes. You do not need a further distinction between the made and the found or the subjective and the objective. You do not need a distinction between reality and appearance, or between inside and outside, but only one between what you can get a consensus about and what you cannot.

But if it is possible to find out that there really is a consensus about the presence, or lack thereof, of a real giraffe, then why isn’t it also possible, even without such knowledge of a consensus, to find out whether or not there really is a giraffe present? Or, to put it another way, if there is a problem in finding out directly that a giraffe really is or is not present, why does this problem not also carry over to the project of finding out whether or not there really is a consensus about the presence or non-presence of a giraffe? Why are consensuses easier to know about than giraffes? If they aren’t, then what is to be gained, from a practical standpoint, by defining “truth” or “reality” in terms of consensus?

It is as if Rorty were claiming that society’s norms and judgments are unproblematically available to us, when nothing else is. But why would anyone think that it is easier to see, for example, that society judges giraffes to be taller than ants than it is to see that giraffes are taller than ants? If anything, this gets things backwards. I would argue that the category “the way things are” is, over a wide range of cases, significantly more obvious and accessible to us than is the category “what our culture thinks.” Is it a more clear and obvious truth that we think that giraffes are taller than ants than that giraffes are taller than ants? I am quite certain of the latter truth from my own observation, but I have never heard anyone else address their own thoughts on the relative heights of giraffes and ants, let alone discuss their impressions of public opinion on the issue. Similar remarks apply to many elementary moral, mathematical, and logical truths.

Moreover, this problem remains no matter how one understands such phrases as “reality” or “the way things are.” For example, if we understand them in some jacked-up, metaphysical sense, to be expressed with upper-case lettering as Reality-as-it-Really-Is, beyond language or thought or anything human, then, while it is understandable that we might want to deny that we know whether or not a giraffe is “really” present, so should we deny that we know whether or not we “really” have achieved a consensus on the matter. (For notice that knowledge of consensus seems to require knowledge of other minds and their thoughts, and it is unclear why anyone would think that our knowledge of the existence of other minds is any less problematic than is our knowledge of the existence of an independent physical world.)

If, on the other hand, we understand them in a more humdrum sense, merely as meaning that things typically are the way they are no matter what we might think about them, and that some of our thoughts about them are made wrong by the way the things are, then, while it is easy to see how we might be able to gather evidence fully sufficient to entitle us to claim to “know” that we have achieved a consensus on giraffes, so is it clear that we might be able to claim to “know” some things about giraffes, even in the absence of any consensus about, or knowledge of consensus about, such matters. Of course, one could use the jacked-up sense of “reality” when saying that we don’t know what giraffes are “really” like, while simultaneously using the humdrum sense of “reality” when saying that we can nevertheless cope by knowing what our culture’s consensus view of giraffes is, but what would be the sense or purpose of this double standard?

Or again, consider Rorty’s statement that we should be “content to call ‘true’ whatever the upshot of free and open encounters turns out to be,” and that he “would like to substitute the idea of ‘unforced agreement’ for that of ‘objectivity.’” Notice that on this view, in order to know whether or not giraffes are taller than ants we must first know (a) whether or not there is a consensus that giraffes are taller than ants and (b) if there is, whether or not the communication that produced that consensus was free, open, and undistorted. But isn’t it obvious that it is easier to determine whether or not giraffes are taller than ants than it is to determine either (a) or (b)?

. . . At other times Rorty defines “truth” not in terms of consensus, but rather in terms of utility. For example, he characterizes his position as one which “repudiates the idea that reality has an intrinsic nature to which true statements correspond…in favor of the idea that the beliefs we call ‘true’ are the ones we find most useful,” declares that its “whole point is to stop distinguishing between the usefulness of a way of talking and its truth,” and says that it would be in our best interest to discard the notion of “objective truth.” This appears, at first glance, to be a clever way to avoid the problem of self-refutation. As Rorty obviously recognizes that it would be inconsistent for him to claim to have discovered the objective truth that there is no objective truth to discover, he here instead bases his rejection of “objective truth” solely on the claim that such a notion is not useful to us—we would benefit from abandoning it.

But as soon as we ask ourselves whether or not it is indeed true that the notion of objective truth is not useful to us and that we would therefore benefit from discarding it, all of the old problems return. For either we understand this as an objective truth claim, in which case we get a performative contradiction (because we make use of a notion in issuing the very utterance in which we urge that it be discarded), or else we understand it in terms of Rorty’s pragmatist understanding of “truth,” in which case we generate an infinite regress (because the claim that the notion of objective truth is not useful to us would then have to be understood as true only insofar as it is useful to us, and this, in turn, would be true only insofar as it is useful to us, and so on).

And insofar as Rorty’s move to pragmatism is motivated by doubts about our ability to know how things really are, the problem remains unsolved. For any grounds we might have for doubting that we can know whether or not giraffes “really” are taller than ants would easily carry over to our efforts to find out whether or not it “really” is useful to believe that giraffes are taller than ants. On the other hand, any standard of “knowledge” sufficiently relaxed as to allow us to “know” that it is useful to believe that giraffes are taller than ants would also be lax enough to enable us to “know,” irrespective of the issue of the utility of belief, that giraffes are taller than ants.

In short, I regard postmodern truth denial of the sort just described as confused, incoherent, and illogical, as well as, from a political standpoint, worse than useless. One might hope that Dxxxx Txxxx’s very different kind of assault on truth might help to reawaken our awareness of the political importance of truth, and of the value commitments (such as a prioritizing of evidence over opinion, and of realism over wishful thinking) necessary to attain it. 

Parmenides Was Unreal (in the Modern Sense)

Parmenides of Elea doesn’t get much publicity these days. He lived 2,500 years ago on the edge of the Greek world and only one of his philosophical works survives. It’s a poem usually referred to as “On Nature”. The publicity he happens to get derives from the fact that he helped invent metaphysics, the branch of philosophy that deals with the general nature of reality (as it’s been practiced by philosophers in the Western world ever since).

Parmenides is the subject of the latest entry in a series called “Footnotes to Plato”, a periodic consideration of famous philosophers from The Times Literary Supplement. Here’s a bit of the article:

If Parmenides’ presence in the collective consciousness is relatively dim, it is in part because he is eclipsed by the thinkers he influenced. And then there is the small detail that his opinions are, as Aristotle said, “near to madness”.  Let us cut to the chase: Parmenides’ central argument. It is so quick that if you blink, you will miss it. You may need to read the following paragraphs twice.

That which is not – “What-is-not” – he says, is not. Since anything that comes into being would have to come into being out of what-is-not, things cannot come into being. Likewise, nothing can pass away because, in order to do so, it would have to enter the non-existent realm of what-is-not. The notion of beings as generated or perishing is therefore literally unthinkable: it would require of us that we think at once of the same thing that it is and it is not. The no-longer and the not-yet are modes of what-is-not. Consequently, the past and future do not exist either.

All of this points to one conclusion: there can be no change. The empty space necessary to separate one object from another would be another mode of what-is-not, so a multiplicity of beings separated by non-being is ruled out. What-is must be continuous. Since beings cannot be to a greater or lesser degree – this would require what-is to be commingled with the (non-existent) diluent of what-is-not – the universe must be fundamentally homogeneous. And so we arrive at the conclusion that the sum total of things is a single, unchanging, timeless, undifferentiated unity.

All of this is set out in a mere 150 lines, many of which are devoted to the philosopher’s mythical encounter with a Goddess who showed him the Way of Truth as opposed to the Way of (mere) Opinion. Scholars have, of course, quarreled over what exactly is meant by this 2,500-year-old text that has reached us by a precarious route. The poem survives only in fragments quoted and/or transcribed by others. The main transmitter was Simplicius, who lived over a thousand years after Parmenides’ death. The earliest sources of Simplicius’ transcriptions are twelfth-century manuscripts copied a further 600 years after he wrote them down.

Unsurprisingly, commentators have argued over Parmenides’ meaning. Did he really claim that the universe was an unbroken unity or only that it was homogeneous? They have also wondered whether he was using “is” in a purely predicative sense, as in “The cat is black”, or in a genuine existential sense, as in “The cat is”. Some have suggested that his astonishing conclusions depend on a failure to distinguish these two uses, which were not clearly separated until Aristotle.

What I took away from my philosophy classes is that Parmenides was a “monist”, someone who thinks that, in some significant sense, Reality Is One. The variety and change we see around us is somehow illusory or unreal or unimportant. One textbook suggests Parmenides believed that “Being or Reality is an unmoving perfect sphere, unchanging, undivided”. A later monist, the 17th-century philosopher Baruch Spinoza, argued that reality consists of a single infinite substance that we call “God” or “Nature”. There are various ways to be a monist.

Well, I’ve read the paragraphs above, the ones that try to lay out Parmenides’s central argument, more than twice. You may share my feeling that the argument doesn’t succeed.

Where I think it goes wrong is that Parmenides treats things that don’t exist too much like things that do.

Although it’s easy to talk about things that don’t exist (e.g. a four-sided triangle or a mountain of gold), that only takes us so far. If I imagine a certain configuration of reality (say, me getting a cold) and what I imagined then becomes real (I do get a cold), the imaginary, unreal state of affairs (getting a real cold in the future) hasn’t actually transformed into a real state of affairs (actually getting a cold). All that’s happened is the reality of me imagining getting a cold has been replaced in the world’s timeline (and my experience) by the reality of me getting a cold. One reality was followed by another. It’s not a literal change from something that didn’t exist into something that did.

Saying that the unreal has become real is a manner of speaking. It shouldn’t be understood as a kind of thing (an imaginary situation) somehow changing its properties or relations in such a way that it becomes another kind of thing (a real situation). Philosophers have a way of putting this: “existence is not a predicate”. They mean that existing isn’t the same kind of thing as being square or purple or between two ferns. Existence isn’t a property or relation that can be predicated of something in the way those properties or relations can be. 

When Parmenides says “what is not” cannot become “what is”, he’s putting “what is not” and “what is” in a single category that we might call “things that are or are not”. That leads him, rather reasonably, to point out that “are not” things can’t become “are” things. Ruling that out makes sense, because a transition from an “are not” thing to an “are” thing would be something like spontaneous generation. Putting aside what may happen in the realm of quantum physics, where sub-atomic stuff is sometimes said to instantly pop into existence, the idea that “Something can come from nothing” is implausible even today. Parmenides made use of that implausibility in the 5th century BCE when he argued that what isn’t real can’t change into what’s real, so changes never happen at all.

What Parmenides should have kept in mind is that things that “are not” aren’t really things at all — they’re literally nothing — so they can’t change into something. Change doesn’t involve nothing turning into something. Change occurs when one thing that exists (a fresh piece of bread or an arrangement of atoms) becomes something else that exists (a stale piece of bread or a different arrangement of atoms). Real stuff gets rearranged, and we perceive that as something coming into existence or going out of it, i.e. changing.

So I think Parmenides was guilty of a kind of reification or treating the unreal as real. He puts what doesn’t exist into a realm that’s different from the realm of things that do exist, but right next door to it. Those two realms aren’t next door to each other, however. They’re in totally different neighborhoods, one that’s real and one that’s imaginary. It’s impossible and unnecessary to travel from one realm to the other.

By the way, the gist of the Times Literary Supplement article is that Parmenides “insisted that we must follow the rigours of an argument, no matter how surprising the conclusion – setting in motion the entire scientific world view”. Maybe so. I was more interested in his strange idea that change never happens.