I’m Glad They Agree

If you express an opinion and somebody disagrees, they’ve given you an opportunity to change your mind. If the other person’s opinion is better than yours, you’ve learned something. That’s a positive outcome. There can also be a positive outcome if the other person agrees with you. It makes you feel good (although if you were wrong to begin with, agreement will just make the situation worse). 

I had two instances today where somebody agreed with me. This made me feel good (I’m going with the assumption that I wasn’t wrong to begin with).

First, the philosopher Justin E. H. Smith criticized the idea that we may be living in a computer simulation, in response to David Chalmers’s book Reality+ (my contribution, not as elegant and with a lot fewer words, was “Reality, the Virtual Kind and the Unlikely Kind”):

According to Chalmers’s construal of the “it-from-bit” hypothesis, to be digital is in itself no grounds for being excluded from reality, and what we think of as physical objects may be both real and digital. One is in fact free to accept the first conjunct, and reject the latter, even though they are presented as practically equivalent. I myself am prepared to accept that a couch in VR [virtual reality] is a real couch — more precisely, a real digital couch, or at least that it may be real or reified in consequence of the way I relate to it. But this does not compel me to accept that the couch on which I am currently sitting is digital.

There is a persistent conflation of these two points throughout discussions of the so-called “simulation argument”, which Chalmers treats in several of his works but which is most strongly associated with the name of Nick Bostrom, who in 2003 published an influential article entitled “Are You Living in a Computer Simulation?” … Here I just want to point out one significant feature of it that occurs early in the introduction and that the author seems to hope the reader will pass over smoothly without getting hung up on the problems it potentially opens up. Consciousness, Bostrom maintains, might arise among simulated people if, first of all, “the simulations were sufficiently fine-grained”, and, second of all, “a certain quite widely accepted position in the philosophy of mind is correct.”

What is this widely accepted position, you ask? … It is, namely, the view, which Bostrom calls “substrate-independence”, that “mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences.” Arguments for functionalism or computationalism have been given in the literature, Bostrom notes, and “while it is not entirely uncontroversial, we shall here take it as a given.”

It is of course possible that conscious experiences may be realized in a silicon substrate or in a complex arrangement of string and toilet-paper rolls, just as they may be realized in brains. But do we have any evidence that the arrangements that we have come up with for the machine-processing of information are in principle the kind of arrangements that, as they become more and more complex or fine-grained, cross over into conscious experience? In fact, there is very good reason to think that the appearance of consciousness in some evolved biological systems is the result of a very different sort of developmental history than anything we have seen so far since the dawn of artificial intelligence in the mid-twentieth century….

Unquote.

Second, Michael Tomasky of The New Republic responded to the Republican National Committee’s characterization of what happened on January 6, 2021, as “legitimate political discourse”:

It’s now official: The Republican Party is no longer a political party in any known American sense. Honestly, it hasn’t been for quite some time, but with last week’s resolution condemning Liz Cheney and Adam Kinzinger, the party made it official. We don’t always grasp the historic importance of events in real time, but rest assured that future historians, assuming the United States remains enough of a democracy to have honest ones, will point to Friday, February 4 as a pivotal day in the party’s war on democracy….

The money quote in this episode is the line in the resolution that condemns Cheney and Kinzinger for “participating in a Democrat-led persecution of ordinary citizens engaged in legitimate political discourse.” This is right out of 1984. When The New York Times reported that this meant that the RNC was referring to the January 6 insurrection as “legitimate political discourse,” RNC gauleiter Ronna McDaniel howled that of course she has condemned violence, and the legit discourse business referred to other stuff.

What other stuff, it’s hard to say. The text of the resolution didn’t leave room for interpretation. And the select committee on January 6 is not exactly investigating Republicans across the country who are, say, protesting mask mandates. In fact, it’s not investigating any kind of “discourse.” It’s looking specifically at actions by people on and around the date of the infamous riot….

The truth here is obvious: The party is talking out of both sides of its mouth. The obvious intent with that sentence is to minimize and legitimize what happened on January 6…. And now that T____ himself has said he may pardon everyone charged with January 6–related crimes, it was clear that McDaniel saw her job as aiding [him] in that project: If it’s the official party line that the insurrection was legitimate, then there’s nothing outrageous about pardons.

The Anti-Defamation League recently released a report finding that more than 100 Republican candidates on various ballots in 2022 have explicitly embraced extremism or violence … This is not some aberration that time will correct. It is a storm that will continue to gather strength, because it’s where the action and the money are, and no one in the GOP is opposing it—except the two people who were just essentially read out of the party….

The Republican Party … has become an appendage of T____ dedicated to doing his will and smiting his enemies. I had to laugh at the part of the resolution that denounced Joe Biden for his alleged pursuit of “socialism”….

The Republican Party is further down the road to fascism than the Democrats are to socialism. And when, by the way, might Democrats start saying that? What are you waiting for, people? How much deeper does this crisis have to get before you start telling the American people the truth about what the GOP has become? It’s time to say it and to put Republicans on the defensive…. We are at a moment of historical reckoning…. But Americans won’t know it, Democrats, unless you tell them.

Unquote. 

In other words: “When Do We All Get To Say They’re Fascists?”

Reality, the Virtual Kind and the Unlikely Kind

David Chalmers, the philosopher whose gravestone will probably say he came up with the phrase “the hard problem of consciousness”, has a new book out. It’s called Reality+: Virtual Worlds and the Problems of Philosophy. From the publisher’s blurb:

Virtual reality is genuine reality; that’s the central thesis of Reality+. In a highly original work of “technophilosophy,” David J. Chalmers gives a compelling analysis of our technological future. He argues that virtual worlds are not second-class worlds, and that we can live a meaningful life in virtual reality. We may even be in a virtual world already.

The Three Quarks Daily site linked to an interview Prof. Chalmers gave to promote the book. 

When discussing simulations (like what we could be living in already), it’s helpful to keep in mind that there are at least two kinds. The first kind is what’s usually called “virtual reality”. It can be described as “not physically existing as such, but made by software to appear to do so”. Despite what Chalmers’s interviewer says, this type of virtual reality doesn’t raise a bunch of deep philosophical questions. The machines that created the Matrix in the movies did an amazing job, but from a philosophical perspective, so what? When he was plugged into the Matrix, fully immersed in what Chalmers calls “digital reality”, Neo was still an organism with a physical body. In the future Chalmers envisions, many of us might spend most of our time in a “place” like that. But lots of people play video games. They make friends playing those games, they spend money, they laugh, they cry. So what?

The second kind of virtual reality would look like the Matrix, but it would be very different, so different that it would deserve to be called something other than “virtual reality” (maybe it already is). It’s the kind the philosopher Nick Bostrom referred to in his famous Simulation Argument (quoting from a 2003 article): “You exist in a virtual reality simulated in a computer built by some advanced civilization. Your brain, too, is merely a part of that simulation”.

Bostrom’s argument assumes that “what allows you to have conscious experiences is not the fact that your brain is made of squishy biological matter but rather that it implements a certain computational architecture . . . This assumption is quite widely (although not universally) accepted among cognitive scientists and philosophers of mind”.

Maybe I’m in the minority, but I don’t see any reason to think that consciousness is purely computational and that it could be created on a computer. Presumably, a being could be made out of silicon or whatever and be conscious (feel pain, for example) but I believe it would still require a physical body. Chalmers thinks otherwise, that “algorithmic creatures” that only exist as software running on a computer could be conscious. That assumes something about consciousness that isn’t necessarily true and is much different from saying you could build something like a human using non-standard material.

Does Consciousness Reside in the Brain’s Electromagnetic Field?

At bottom, it all seems to be a bunch of fields:

In the modern framework of the quantum theory of fields, a field occupies space, contains energy, and its presence precludes a classical “true vacuum”. This has led physicists to consider electromagnetic fields to be a physical entity, making the field concept a supporting paradigm of the edifice of modern physics [Wikipedia].

So maybe consciousness is a special type of field generated by brains. Johnjoe McFadden is a professor of molecular genetics in England. He’s written about his electromagnetic field theory of consciousness for Aeon:

Just how do the atoms and molecules that make up the neurons in our brain . . . manage to generate human awareness and the power of thought? In answering that longstanding question, most neurobiologists today would point to the information-processing performed by brain neurons. . . . This [begins] as soon as light and sound [reach the] eyes and ears, stimulating . . . neurons to fire in response to different aspects of [the] environment. . . .

Each ‘firing’ event involves the movement of electrically charged atoms called ions in and out of the neurons. That movement triggers a kind of chain reaction that travels from one nerve cell to another via logical rules, roughly analogous to the AND, OR and NOT Boolean operations performed by today’s computer gates, in order to generate outputs such as speech. So, within milliseconds of . . . glancing at [an object], the firing rate of millions of neurons in [the] brain [correlates] with thousands of visual features of the [object] and its [surroundings]. . . .

Yet information-processing clearly isn’t sufficient for conscious knowing. Computers process lots of information yet have not exhibited the slightest spark of consciousness [note: or so we believe]. Several decades ago, in an essay exploring the phenomenology of consciousness, the philosopher Thomas Nagel asked us to imagine what it’s like to be a bat. This feature of being-like-something, of having a perspective on the world, captures something about what it means to be a truly conscious ‘knower’. In [a] hospital room watching my son’s EEG, I wondered what it was like to be one of his neurons, processing the information [from] the slamming of a door [in the hall]. As far as we can tell, an individual neuron knows just one thing – its firing rate.

It fires or doesn’t fire based on its inputs, so the information it carries is pretty much equivalent to the zero or one of binary computer language. It thereby encodes just a single bit of information. The value of that bit, whether a zero or a one, might correlate with the slamming of a door, but it says nothing about the door’s shape, its colour, its use as a portal between rooms or the noise of its slamming – all features that I’m sure were part of my son’s conscious experience. I concluded that being a single neuron in my son’s brain would not feel like anything.

Of course, you could argue, as neurobiologists usually do, that although a single neuron might know next to nothing, the collection of 100 billion neurons in my son’s brain knew everything in his mind and would thereby feel like something. But this explanation bumps into what’s known as the binding problem, which asks how all the information in millions of widely distributed neurons in the brain comes together to create a single complex yet unified conscious perception of, say, a room . . .

Watching those wiggly lines march across the EEG screen gave me the germ of a different idea, something that didn’t boil down to pure neuronal computation or information-processing. Every time a neuron fires, along with the matter-based signal that travels down its wire-like nerve fibre, it also projects a tiny electromagnetic (EM) pulse into the surrounding space, rather like the signal from your phone when you send a text. So when my son heard the door close, as well as triggering the firing of billions of nerves, its slamming would have projected billions of tiny pulses of electromagnetic energy into his brain. These pulses flow into each other to generate a kind of pool of EM energy that’s called an electromagnetic field – something that neurobiologists have neglected when probing the nature of consciousness.

Neurobiologists have known about the brain’s EM field for more than a century but have nearly always dismissed it as having no more relevance to its workings than the exhaust of a car has to its steering. Yet, since information is just correlation, I knew that the underlying brain EM field tremors that generated the spikes on the EEG screen knew the slamming of the hospital door, just as much as the neurons whose firing generated those tremors. However, I also had enough physics to know that there was a crucial difference between a million scattered neurons firing and the EM field generated by their firing. The information encoded by the million discrete bits of information in a million scattered neurons is physically unified within a single brain EM field.

The unity of EM fields is apparent whenever you use wifi. Perhaps you’re streaming a radio documentary . . . on your phone while another family member is watching a movie, and another is listening to streamed music. Remarkably, all this information, whether movies, pictures, messages or music, is instantly available to be downloaded from any point in the vicinity of your router. This is because – unlike the information encoded in discrete units of matter such as computer gates or neurons – EM field information is encoded as immaterial waves that travel at the speed of light from their source to their receiver. Between source and receiver, all those waves encoding different messages overlap and intermingle to become a single EM field of physically bound information with as much unity as a single photon or electron, and which can be downloaded from any point in the field. The field, and everything encoded in it, is everywhere.

While watching my son’s EEG marching across the screen, I wondered what it was like to be his brain’s EM field pulsing with physically bound information correlating with all of his sense perceptions. I guessed it would feel a lot like him.

Locating consciousness in the brain’s EM field might seem bizarre, but is it any more bizarre than believing that awareness resides in matter? Remember Albert Einstein’s equation, E = mc². All it involves is moving from the matter-based right-hand side of the equation to energy located on the left-hand side. Both are physical, but whereas matter encodes information as discrete particles separated in space, energy information is encoded as overlapping fields in which information is bound up into single unified wholes. Locating the seat of consciousness in the brain’s EM field thereby solves the binding problem of understanding how information encoded in billions of distributed neurons is unified in our (EM field-based) conscious mind. It is a form of dualism, but a scientific dualism based on the difference between matter and energy, rather than matter and spirit.

Awareness is then what this joined-up EM field information feels like from the inside. So, for example, the experience of hearing a door slam is what an EM field perturbation in the brain that correlates with a door slamming, and all of its memory neuron-encoded associations, feels like, from the inside.

But why [does conscious perception go along with synchronous firing]? Whether neurons are firing synchronously should make no difference to their information-processing operations. Synchrony makes no sense for a consciousness located in neurons – but if we place consciousness in the brain’s EM field, then its association with synchrony becomes inevitable.

Toss a handful of pebbles into a still pond and, where the peak of one wave meets the trough of another, they cancel out each other to cause destructive interference. However, when the peaks and troughs line up, then they reinforce each other to make a bigger wave: constructive interference. The same will happen in the brain. When millions of disparate neurons recording or processing features of my desk fire asynchronously, then their waves will cancel out each other to generate zero EM field. Yet when those same neurons fire synchronously, then their waves will line up to cause constructive interference to project a strong EM signal into my brain’s EM field, what I now call the conscious electromagnetic information (cemi) field. I will see my desk.

I’ve been publishing on cemi field theory since 2000, and recently published an update in 2020. A key component of the theory is its novel insight into the nature of what we call ‘free will’. . . . Most non-modern people . . . probably believed that [a] supernatural soul was the driver of . . . willed actions. When . . . secular philosophers and scientists exorcised the soul from the body, voluntary actions became just another motor output of neuronal computation – no different from those that drive non-conscious actions such as walking, blinking, chewing or forming grammatically correct sentences.

Then why do willed actions feel so different? In a 2002 paper, I proposed that free will is our experience of the cemi field acting on neurons to initiate voluntary actions. Back then, there wasn’t much evidence for EM fields influencing neural firing – but experiments by David McCormick at Yale University School of Medicine in 2010 and Christof Koch at Caltech in 2011 have demonstrated that neurons can indeed be perturbed by weak, brain-strength, EM fields. At the very least, their experiments suggest the plausibility of a wifi component of neuronal information processing, which I claim is experienced as ‘free will’.

The cemi field theory also accounts for why our non-conscious and conscious minds operate differently. One of the most striking differences between the two is that our non-conscious mind can do many things at once, but we are able to engage in only one conscious task at a time. [Try to] divide a number like 11,357 by 71 while concentrating on a game of chess. Our non-conscious mind appears to be a parallel processor, whereas our conscious mind is a serial processor that can operate only one task at a time.

The cemi field theory accounts for these two modes by first accepting that most brain information-processing – the non-conscious sort – goes solely through its neuronal ‘wires’ that don’t interact through EM fields. This allows different tasks to be allocated to different circuits. In our distant past, all neural computation likely took this parallel-processing neuronal route. . . . However, at some point in our evolutionary history, our ancestors’ skulls became packed with more and more neurons such that adjacent neurons started to interfere with each other through their EM field interactions. Mostly, the interference would have impaired function. Natural selection would then have kicked in to insulate neurons involved in these vital functions.

Occasionally, electrical interference might have been beneficial. For example, the EM field interactions might have conferred the ability to compute with complex joined-up packets of EM field information, rather than mere bits. When this happened, natural selection would have pulled in the other direction, to increase EM field sensitivity. Yet there was also a downside to this way of processing information. Remember the pebbles tossed into the pond: they interfere with one another. Different ideas dropped into the brain’s cemi field similarly interfere with one another. Our conscious cemi-field mind inevitably became a serial computer that can do only one thing at a time.
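Unquote.

The constructive vs. destructive interference point in that passage is ordinary wave physics, and it is easy to check numerically. Below is a toy sketch of mine (not McFadden’s); the neuron count, the 40 Hz frequency and the unit-amplitude pulses are all made-up numbers chosen only for illustration. It shows that a million little pulses added in phase produce a summed field roughly a million times the size of a single pulse, while the same pulses with random phases mostly cancel, leaving a residue on the order of the square root of that.

```python
import numpy as np

# Toy illustration of the pebbles-in-a-pond analogy: many small oscillating
# "pulses" summed with aligned phases reinforce each other (constructive
# interference), while the same pulses with random phases largely cancel
# (destructive interference). The neuron count, frequency, and amplitudes
# are arbitrary illustrative choices, not measured values.

rng = np.random.default_rng(0)

n_neurons = 1_000_000             # hypothetical number of firing neurons
freq_hz = 40.0                    # an arbitrary oscillation frequency
t = np.linspace(0.0, 0.1, 2_000)  # 100 ms of "time"

def summed_field(phases):
    """Sum one unit-amplitude sinusoidal pulse per neuron at the given phases."""
    # Equivalent to sum_i sin(2*pi*f*t + phase_i), computed via the identity
    # sin(x + p) = sin(x)cos(p) + cos(x)sin(p) so we never build an
    # n_neurons x len(t) array in memory.
    a = np.cos(phases).sum()      # coefficient of sin(2*pi*f*t)
    b = np.sin(phases).sum()      # coefficient of cos(2*pi*f*t)
    return a * np.sin(2 * np.pi * freq_hz * t) + b * np.cos(2 * np.pi * freq_hz * t)

sync_field = summed_field(np.zeros(n_neurons))                    # all in phase
async_field = summed_field(rng.uniform(0, 2 * np.pi, n_neurons))  # random phases

print("peak field, synchronous firing :", np.abs(sync_field).max())   # ~ n_neurons
print("peak field, asynchronous firing:", np.abs(async_field).max())  # ~ sqrt(n_neurons)
```

None of that says anything about whether the brain’s EM field is conscious, of course; it only illustrates why synchronous firing would project a much stronger signal into such a field than asynchronous firing does.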

Consciousness and Primitive Feelings

I’ve been thinking lately that all value — whether ethical, aesthetic or practical — comes down to feelings in the end. Was that the right thing to do? Is that a beautiful song? Is this expensive hammer better than the cheaper one? Only if it tends in the past, present or future to make me or you or somebody else have certain feelings.

Below is most of an interview with Mark Solms, a South African psychoanalyst and neuropsychologist, who has a new book out: The Hidden Spring: A Journey to the Source of Consciousness. The Nautilus site gave the interview the title “Consciousness Is Just A Feeling”, although that’s not what Solms says. The interviewer’s questions are in italics:

. . . You made a big discovery that overturned the prevailing theory that we only dream during REM sleep. What did you find?

It was just assumed that when your REM sleep stops, your dreams also stop. But I found that human patients with damage to the part of the brain generating REM sleep nevertheless continue to experience dreams. In retrospect, you realize what a significant methodological error we made. That’s the price we pay for not gathering subjective data. You know, the actual subjective experience of dreams is an embarrassment to science. And this is what my professors had in mind when they were saying, don’t study things like that. But you’re going to be missing something rather important about how the brain works if you leave out half of the available data.

Your interest in Freud is very unusual for a neuroscientist. You actually trained to become a psychoanalyst, and since then, you’ve edited the complete psychological works of Freud.

Yes, and my colleagues were horrified. I had been taught this was pseudoscience. One of them said to me, “You know, astronomers don’t study astrology.” It’s true that psychoanalysis had lost its bearings. Freud was a very well-trained neuroscientist and neurologist, but in successive generations that grounding of psychoanalysis in the biological sciences had been lost. So I can understand where some of the disdain for psychoanalysis came from. But to its credit, it studied the actual lived life of the mind, which was the thing that interested me, and was missing from neuropsychology. So I turned to psychoanalysis to find any kind of systematic attempt to study subjective experience and to infer what kinds of mechanisms lay behind it.

Did we get Freud wrong? Did he have scientific insights that we’ve ignored?

Very much so. I’m not going to pretend that Freud didn’t make some gigantic mistakes. That’s to be expected. He was a pioneer, taking the very first steps in trying to systematically study subjective experience. The reason he made so little progress and abandoned neuroscience was because there weren’t scientific methods by which you could study things. Even the EEG was only brought into common use after the Second World War. So there were no methods for studying in vivo what’s going on in the brain, let alone the methods we have nowadays. But the sum of his basic observations, the centrality of emotion, was how much affective feelings influence cognitive processes. That’s the essence of what psychoanalysis is all about, how our rational, logical, cognitive processes can be distorted by emotional forces.

You founded the new field of “neuropsychoanalysis.” What’s the basic premise of this approach?

The neuropsychology I was taught might as well have been neurobehaviorism. Oliver Sacks famously wrote in 1984 that neuropsychology is admirable, but it excludes the psyche, by which he meant the active living subject of the mind. That really caught my attention. So I wanted to bring the psyche back into neuropsychology. Emotion was just not studied in the neuropsychology of the 1980s. The centrality of emotion in the life of the mind and what lies behind emotion is what Freud called “drive.” Basically, his idea was that unpleasant feelings represent the failures to meet those needs and pleasant feelings represent the opposite. It’s how we come to know how we’re meeting our deepest biological needs. And that idea gives an underpinning to cognition that I think is sorely lacking in cognitive science, pure and simple.

There are huge debates about the science of consciousness. Explaining the causal connection between brain and mind is one of the most difficult problems in all of science. On the one hand, there are the neurons and synaptic connections in the brain. And then there’s the immaterial world of thinking and feeling. It seems like they exist in two entirely separate domains. How do you approach this problem?

Subjective experience—consciousness—surely is part of nature because we are embodied creatures and we are experiencing subjects. So there are two ways in which you can look on the great problem you’ve just mentioned. You can either say it’s impossibly difficult to imagine how the physical organ becomes the experiencing subject, so they must belong to two different universes and therefore, the subjective experience is incomprehensible and outside of science. But it’s very hard for me to accept a view like that. The alternative is that it must somehow be possible to bridge that divide.

The major point of contention is whether consciousness can be reduced to the laws of physics or biology. The philosopher David Chalmers has speculated that consciousness is a fundamental property of nature that’s not reducible to any laws of nature.

I accept that, except for the word “fundamental.” I argue that consciousness is a property of nature, but it’s not a fundamental property. It’s quite easy to argue that there was a big bang very long ago and long after that, there was an emergence of life. If Chalmers’ view is that consciousness is a fundamental property of the universe, it must have preceded even the emergence of life. I know there are people who believe that. But as a scientist, when you look at the weight of the evidence, it’s just so much less plausible that there was already some sort of elementary form of consciousness even at the moment of the Big Bang. That’s basically the same as the idea of God. It’s not really grappling with the problem.

You can certainly find all kinds of correlations between brain function and mental activity. We know that brain damage . . . can change someone’s personality. But it still doesn’t explain causation. As the philosopher John Searle said, “How does the brain get over the hump from electrochemistry to feeling?”

I think we have made that problem harder for ourselves by taking human consciousness as our model of what we mean by consciousness. The question sounds so much more magical. How is it possible that all of this thinking and feeling and philosophizing can be the product of brain cells? But we should start with the far more elementary rudiment of consciousness—feeling. Think about consciousness as just being something to do with existential value. Survival is good and dying is bad. That’s the basic value system of all living things. Bad feelings mean you’re doing badly—you’re hungry, you’re thirsty, you’re sleepy, you’re under threat of damage to life and limb. Good feelings mean the opposite—this is good for your survival and reproductive success.

You’re saying consciousness is essentially about feelings. It’s not about cognition or intelligence.

That’s why I’m saying the most elementary forms of consciousness give us a much better prospect of being able to solve the question you’re posing. How can it happen that a physical creature comes to have this mysterious, magical stuff called consciousness? You reduce it down to something much more biological, like basic feelings, and then you start building up the complexities. A first step in that direction is “I feel.” Then comes the question, What is the cause of this feeling? What is this feeling about? And then you have the beginnings of cognition. “I feel like this about that.” So feeling gets extended onto perception and other cognitive representations of the organism in the world.

Where are those feelings rooted in the brain?

Feeling arises in a very ancient part of the brain, in the upper brainstem in structures we share with all vertebrates. This part of the brain is over 500 million years old. The very telling fact is that damage to those structures—tiny lesions as small as the size of a match head in parts of the reticular activating system—obliterates all consciousness. That fact alone demonstrates that more complex cognitive consciousness is dependent upon the basic affective form of consciousness that’s generated in the upper brainstem.

So we place too much emphasis on the cortex, which we celebrate because it’s what makes humans smart.

Exactly. Our evolutionary pride and joy is the huge cortical expanse that only mammals have, and we humans have even more of it. That was the biggest mistake we’ve made in the history of the neuroscience of consciousness. The evidence for the cortex being the seat of consciousness is really weak. If you de-corticate a neonatal mammal—say, a rat or a mouse—it doesn’t lose consciousness. Not only does it wake up in the morning and go to sleep at night, it runs and hangs from bars, swims, eats, copulates, plays, raises its pups to maturity. All of this emotional behavior remains without any cortex.

And the same applies to human beings. Children born with no cortex, a condition called hydranencephaly—not to be confused with hydrocephaly—are exactly the same as what I’ve just described in these experimental animals. They wake up in the morning, go to sleep at night, smile when they’re happy and fuss when they’re frustrated. Of course, you can’t speak to them, because they’ve got no cortex. They can’t tell you that they’re conscious, but they show consciousness and feeling in just the same way as our pets do.

You say we really have two brains—the brainstem and the cortex.

Yes, but the cortex is incapable of generating consciousness by itself. The cortex borrows, as it were, its consciousness from the brainstem. Moreover, consciousness is not intrinsic to what the cortex does. The cortex can perform high-level, uniquely human cognitive operations such as reading with comprehension, without consciousness being necessary at all. So why does it ever become conscious? The answer is that we have to feel our way into cognition because this is where the values come from. Is this going well or badly? All choices, any decision-making, has to be grounded in a value system where one thing is better than another thing.

So what is thinking? Can we even talk about the neurochemistry of a thought?

A thought in its most basic form is about choice. If you don’t have to make a choice, then it can all happen automatically. I’m now faced with two alternatives and I need to decide which one I’m going to do. Consciousness enables you to make those choices because it contributes value. Thinking goes on unconsciously until you’re in a state of uncertainty as to what to do. Then you need feeling to feel your way through the problem. The bulk of our cognition—our day-to-day psychological life—goes on unconsciously.

How does memory figure into consciousness?

The basic building block of all cognition is the memory we have. We have sensory impressions coming in and they leave traces which we can then reactivate in the form of cognitions and reassemble in all sorts of complicated ways, including coming up with new ideas. But the basic fabric of cognition is memory traces. The cortex is this vast storehouse of representations. So when I said earlier that cognition is not intrinsically conscious, that’s just saying that memories are, for the most part, latent. You couldn’t possibly be conscious of all of those billions of bits of information you have imbibed during your lifetime. So what is conscious is drawn up from this vast storehouse of long-term memory into short-term working memory. The conscious bit is just a tiny fragment of what’s there.

You say the function of memory is to predict our future needs. And the hippocampus, which we typically regard as the brain’s memory center, is used for imagining the future as well as storing information about the past.

The only point of learning from past events is to better predict future events. That’s the whole point of memory. It’s not just a library where we file away everything that’s happened to us. And the reason why we need to keep a record of what’s happened in the past is so that we can use it as a basis for predicting the future. And yes, the hippocampus is every bit as much for imagining the future as remembering the past. You might say it’s remembering the future.

Wouldn’t a true science of consciousness, of subjective experience, explain why particular thoughts and memories pop into my brain?

Sure, and that’s exactly why I take more seriously than most neuroscientists what psychoanalysts try to do. They ask, Why this particular content for Steve at this point in his life? How does it happen that neurons in my brain generate all of this? I’m saying if you start with the most rudimentary causal mechanisms, you’re just talking about a feeling and they’re not that difficult to understand in ordinary biological terms. Then there’s all this cognitive stuff based on your whole life. How do I go about meeting my emotional needs? And there’s your brain churning out predictions and feeling its way through the problem and trying to solve it.

So this is the premise of neuropsychoanalysis. There’s one track to explain the biology of what’s happening in the brain, and another track is psychological understanding. And maybe I need a psychotherapist to help me unpack why a particular thought suddenly occurs to me.

You’ve just summed up my entire scientific life in a nutshell. I think we need both. . . .

Touching a Nerve: Our Brains, Our Selves by Patricia S. Churchland

Patricia Churchland is a well-known professor of philosophy. She is married to another well-known professor of philosophy, Paul Churchland. The Churchlands were profiled in The New Yorker in 2007 in an article called “Two Heads: A Marriage Devoted to the Mind-Body Problem”. They are both associated with a philosophical view known as “eliminative materialism”. Very briefly, it’s the idea that we are mammals, but with especially complex mammalian brains, and that understanding the brain is all we need in order to understand the mind. In fact, once we understand the brain sufficiently well, we (or scientists anyway) will be able to stop using (eliminate) common mental terms like “belief” and “desire” and “intention”, since those terms won’t correspond very well to what actually goes on in the brain.

So when I began reading Touching a Nerve, I expected to learn more about their distinctive philosophical position. Instead, Prof. Churchland describes the latest results in neuroscience and explains what scientists believe goes on in the brain when we live our daily lives, i.e. when we walk around, look at things, think about things, go to sleep, dream or suffer from illnesses like epilepsy and somnambulism. She admits that we still don’t understand a lot about the brain, but points out that neuroscience is a relatively new discipline and that it’s made a great deal of progress. I especially enjoyed her discussion of what happens in the brain that apparently allows us to be conscious in general (not asleep and not in a coma) vs. what happens when we are conscious of something in particular (like a particular sound), and her reflections on reductionism and scientism, two terms often used as pejoratives but that sound very sensible coming from her.

The closest she comes to mentioning eliminative materialism is in the following passage, when she seems to agree (contrary to my expectations given what I knew about the Churchlands) that common mental terms won’t ever wither away:

If, as seems increasingly likely, dreaming, learning, remembering, and being consciously aware are activities of the physical brain, it does not follow that they are not real. Rather, the point is that their reality depends on a neural reality… Nervous systems have many levels of organization, from molecules to the whole brain, and research on all levels contributes to our wider and deeper understanding [263].

I should also mention that the professor shares a number of stories from her childhood, growing up on a farm in Canada, that relate to the subject of the book. She also has an enjoyable style, mixing in expressions you might not expect in a book like this. For example, she says that reporting scientific discoveries “in a way that is both accurate and understandable” in the news media “takes a highly knowledgeable journalist who has the writing talent to put the hay down where the goats can get it” [256].

Here is how the book ends [266]:

Bertrand Russell, philosopher and mathematician, has the last word:

“Even if the open windows of science at first make us shiver after the cozy indoor warmth of traditional humanizing myths, in the end the fresh air brings vigor, and the great spaces have a splendor of their own.”

Rock on, Bertie.