2020 Won’t Be 2016 (or 2000)

We’re entering what’s been called, and what may well prove to be, “the longest two weeks in human history”. A neuroscientist who writes for Scientific American says we shouldn’t worry too much about what’s going to happen:

Will we be surprised again this November the way Americans were on Nov. 9, 2016 when they awoke to learn that reality TV star Dxxxx Txxxx had been elected president?

. . . Another surprise victory is unlikely to happen again if this election is looked at from the same perspective of neuroscience that I used to account for the surprising outcome in 2016. Briefly, that article explained how our brain provides two different mechanisms of decision-making; one is conscious and deliberative, and the other is automatic, driven by emotion and especially by fear.

Txxxx’s strategy does not target the neural circuitry of reason in the cerebral cortex; it provokes the limbic system. In the 2016 election, undecided voters were influenced by the brain’s fear-driven impulses—more simply, gut instinct—once they arrived inside the voting booth, even though they were unable to explain their decision to pre-election pollsters in a carefully reasoned manner.

In 2020, Txxxx continues to use the same strategy of appealing to the brain’s threat-detection circuitry and emotion-based decision process to attract votes and vilify opponents. . . .

But fear-driven appeals will likely persuade fewer voters this time, because we overcome fear in two ways: by reason and experience. Inhibitory neural pathways from the prefrontal cortex to the limbic system will enable reason to quash fear if the dangers are not grounded in fact. . . .

A psychology- and neuroscience-based perspective also illuminates Txxxx’s constant interruptions and insults during the first presidential debate, steamrolling over the moderator’s futile efforts to have a reasoned airing of facts and positions. The structure of a debate is designed to engage the deliberative reasoning in the brain’s cerebral cortex, so Txxxx annihilated the format to inflame emotion in the limbic system.

Txxxx’s dismissal of experts, be they military generals, career public servants, scientists or even his own political appointees, is necessary for him to sustain the subcortical decision-making in voters’ minds that won him election and sustains his support. . . . In his rhetoric, Txxxx does not address factual evidence; he dismisses or suppresses it even for events that are apparent to many, including global warming, foreign intervention in U.S. elections, the trivial head count at his inauguration, and even the projected path of a destructive hurricane. Instead, “alternative facts” or fabrications are substituted.

. . . Reason cannot always overcome fear, as [Post-Traumatic Stress Disorder] demonstrates; but the brain’s second mechanism of neutralizing its fear circuitry—experience—can do so. Repeated exposure to the fearful situation where the outcome is safe will rewire the brain’s subcortical circuitry. This is the basis for “extinction therapy” used to treat PTSD and phobias. For many, credibility has been eroded by Txxxx’s outlandish assertions, like suggesting injections of bleach might cure COVID-19, or enthusing over a plant toxin touted by a pillow salesman, while scientific experts in attendance grimace and bite their lips.

In the last election Txxxx was a little-known newcomer as a political figure, but that is not the case this time with either candidate. The “gut-reaction” decision-making process excels in complex situations where there is not enough factual information or time to make a reasoned decision. We follow gut instinct, for example, when selecting a dish from a menu at a new restaurant, where we have never seen or tasted the offering before. We’ve had our fill of the politics this time, no matter what position one may favor. Whether voters choose to vote for Txxxx on the basis of emotion or reason, they will be better able to articulate the reasons, or rationalizations, for their choice. This should give pollsters better data to make a more accurate prediction.

Unquote.

Pollsters did make an accurate prediction of the national vote in 2016 (Clinton won it). Most of them didn’t take into account the Electoral College, however, or anticipate the last-minute intervention by big-mouth FBI Director James Comey.

In 2000, the Electoral College result depended on an extremely close election in one state. That allowed the Republicans on the Supreme Court to get involved. There’s no reason to think that will happen again, despite the president’s hopes that it will.

Touching a Nerve: Our Brains, Our Selves by Patricia S. Churchland

Patricia Churchland is a well-known professor of philosophy. She is married to another well-known professor of philosophy, Paul Churchland. The Churchlands were profiled in The New Yorker in 2014 in an article called “Two Heads: A Marriage Devoted to the Mind-Body Problem”. They are both associated with a philosophical view known as “eliminative materialism”. Very briefly, it’s the idea that we are mammals, but with especially complex mammalian brains, and that understanding the brain is all we need in order to understand the mind. In fact, once we understand the brain sufficiently well, we (or scientists anyway) will be able to stop using (eliminate) common mental terms like “belief” and “desire” and “intention”, since those terms won’t correspond very well to what actually goes on in the brain.

So when I began reading Touching a Nerve, I expected to learn more about their distinctive philosophical position. Instead, Prof. Churchland describes the latest results in neuroscience and explains what scientists believe goes on in the brain when we live our daily lives, i.e. when we walk around, look at things, think about things, go to sleep, dream or suffer from illnesses like epilepsy and somnambulism. She admits that we still don’t understand a lot about the brain, but points out that neuroscience is a relatively new discipline and that it’s made a great deal of progress. I especially enjoyed her discussion of what happens in the brain that apparently allows us to be conscious in general (not asleep and not in a coma) vs. what happens when we are conscious of something in particular (like a particular sound), and her reflections on reductionism and scientism, two terms often used as pejoratives but that sound very sensible coming from her.

The closest she comes to mentioning eliminative materialism is in the following passage, when she seems to agree (contrary to my expectations given what I knew about the Churchlands) that common mental terms won’t ever wither away:

If, as seems increasingly likely, dreaming, learning, remembering, and being consciously aware are activities of the physical brain, it does not follow that they are not real. Rather, the point is that their reality depends on a neural reality… Nervous systems have many levels of organization, from molecules to the whole brain, and research on all levels contributes to our wider and deeper understanding [263].

I should also mention that the professor shares a number of stories from her childhood, growing up on a farm in Canada, that relate to the subject of the book. She also has an enjoyable style, mixing in expressions you might not expect in a book like this. For example, she says that reporting scientific discoveries “in a way that is both accurate and understandable” in the news media “takes a highly knowledgeable journalist who has the writing talent to put the hay down where the goats can get it” [256].

Here is how the book ends [266]:

Bertrand Russell, philosopher and mathematician, has the last word:

“Even if the open windows of science at first make us shiver after the cozy indoor warmth of traditional humanizing myths, in the end the fresh air brings vigor, and the great spaces have a splendor of their own.”

Rock on, Bertie.

The Strange Order of Things: Life, Feeling and the Making of Cultures by Antonio Damasio

Antonio Damasio is a neuroscientist with a philosophical bent. His earlier books were: 

  • Descartes’ Error: Emotion, Reason, and the Human Brain
  • The Feeling of What Happens: Body and Emotion in the Making of Consciousness
  • Looking for Spinoza: Joy, Sorrow, and the Feeling Brain
  • Self Comes to Mind: Constructing the Conscious Brain.

In The Strange Order of Things, he emphasizes the role of homeostasis in making life possible. Here’s one definition:

[Homeostasis is] a property of cells, tissues, and organisms that allows the maintenance and regulation of the stability and constancy needed to function properly. Homeostasis is a healthy state that is maintained by the constant adjustment of biochemical and physiological pathways. An example of homeostasis is the maintenance of a constant blood pressure in the human body through a series of fine adjustments in the normal range of function of the hormonal, neuromuscular and cardiovascular systems.

Damasio explains how, billions of years ago, the simplest cells began to maintain homeostasis, and thereby survive and even flourish, using methods, including primitive forms of social behavior, that are similar to methods used by complex organisms like us. He also emphasizes the role of feelings in maintaining homeostasis. He doesn’t suppose that bacteria are conscious, but points out that they do react to their surroundings and changes in their inner states. He argues that organisms only developed conscious feelings of their surroundings and inner states as nervous systems evolved. He thinks it is highly implausible that a human mind could function inside a computer, since computers lack feelings and feelings are a necessary part of human life. Furthermore, Damasio concludes that culture has developed in response to human feelings. Culture is a complex way of maintaining homeostasis.
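The blood-pressure example is essentially a negative-feedback loop, and the basic idea is easy to sketch in a few lines of code. This is just a toy model of my own, not anything from Damasio’s book; the set point, gain and noise values are arbitrary.

```python
import random

def regulate(set_point=100.0, gain=0.1, steps=50, seed=1):
    """Toy negative-feedback loop: a "blood pressure" value drifts randomly
    each step, and a correction term nudges it back toward the set point."""
    random.seed(seed)
    value = 130.0                                # start well out of range
    for _ in range(steps):
        disturbance = random.uniform(-3.0, 3.0)  # environmental noise
        correction = gain * (set_point - value)  # push back toward the set point
        value += disturbance + correction
    return value

if __name__ == "__main__":
    print(f"with feedback (gain=0.1):    {regulate():.1f}")          # settles near 100
    print(f"without feedback (gain=0.0): {regulate(gain=0.0):.1f}")  # just drifts
```

The point of the toy is only that constant small corrections, rather than any single grand adjustment, keep the value in a healthy range, which is the sense of homeostasis Damasio builds on.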

I’ll finish with something from the publisher’s website written by the British philosopher John Gray:

In The Strange Order of Things, Antonio Damasio presents a new vision of what it means to be human. For too long we have thought of ourselves as rational minds inhabiting insentient mechanical bodies. Breaking with this philosophy, Damasio shows how our minds are rooted in feeling, a creation of our nervous system with an evolutionary history going back to ancient unicellular life that enables us to shape distinctively human cultures. Working out what this implies for the arts, the sciences and the human future, Damasio has given us that rarest of things, a book that can transform how we think—and feel—about ourselves.

I can’t say the book changed how I think about myself. That’s because for some years I’ve thought about myself as a community of cells. It’s estimated that an average human body is composed of some 37 trillion cells and contains another 100 trillion microorganisms necessary for survival. Once you start thinking of yourself as a community of cells, adding homeostasis to the mix doesn’t make much difference.

For more on The Strange Order of Things, see this review for The Guardian and this article John Gray wrote for Literary Review.

The Data From All the Senses

Alan Lightman, currently Professor of the Practice of the Humanities (!) at MIT, has posted a short article about consciousness at the New Yorker’s site. Its centerpiece is a visit with Robert Desimone, an MIT neuroscientist who is trying to understand what happens in the brain when we pay attention to something.

Neuroscientists already know that different parts of the brain are activated when we look at faces as opposed to other objects. In one of Desimone’s experiments, people were shown a series of photographs of faces and houses and told to pay attention to either the faces or houses, but not both.

When the subjects were told to concentrate on the faces and to disregard the houses, the neurons in the face location fired in synchrony, like a group of people singing in unison, while the neurons in the house location fired like a group of people singing out of synch, each beginning at a random point in the score. When the subjects concentrated instead on houses, the reverse happened. Furthermore, another part of the brain, called the inferior frontal junction, a marble-size region in the frontal lobe, seemed to conduct the chorus of the synchronized neurons, firing slightly ahead of them.

 Evidently, what we perceive as “paying attention” to something originates, at the cellular level, in the synchronized firing of a group of neurons, whose rhythmic electrical activity rises above the background chatter of the vast neuronal crowd. Or, as Desimone once put it, “This synchronized chanting allows the relevant information to be ‘heard’ more efficiently by other brain regions.”
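Out of curiosity, here’s a toy signal-processing sketch of the “singing in unison” image. It’s my own illustration, not Desimone’s experiment; the neuron count and frequency are arbitrary. A hundred identical oscillations summed in phase produce a large combined signal, while the same oscillations with random phases mostly cancel out, which is roughly why a synchronized group is easier for other brain regions to “hear”.

```python
import math
import random

def population_signal(n_neurons=100, samples=200, synchronized=True, seed=0):
    """Sum the rhythmic activity of n model neurons. In-phase oscillations
    add up; randomly phased oscillations largely cancel."""
    random.seed(seed)
    freq = 0.05  # cycles per sample (arbitrary)
    phases = [0.0 if synchronized else random.uniform(0.0, 2.0 * math.pi)
              for _ in range(n_neurons)]
    return [sum(math.sin(2.0 * math.pi * freq * t + p) for p in phases)
            for t in range(samples)]

def amplitude(signal):
    return (max(signal) - min(signal)) / 2.0

if __name__ == "__main__":
    # The synchronized population comes out far "louder" than the unsynchronized one.
    print(f"synchronized:   {amplitude(population_signal(synchronized=True)):.1f}")
    print(f"unsynchronized: {amplitude(population_signal(synchronized=False)):.1f}")
```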

Something else that’s interesting in the article is what the neuroscientist says about what’s been called philosophy’s “hard problem”, i.e. understanding the nature of consciousness: 

Without hesitation, Desimone replied that the mystery of consciousness was overrated. “As we learn more about the detailed mechanisms in the brain, the question of ‘What is consciousness?’ will fade away into irrelevancy and abstraction,” he said. As Desimone sees it, consciousness is just a vague word for the mental experience of attending, which we are slowly dissecting in terms of the electrical and chemical activity of individual neurons.

Desimone compares understanding consciousness to understanding “the nature of motion” as it applies to a car. Once we understand how a car operates, there’s nothing more to say about its motion. Or, to use a physiological comparison he might have made: we will eventually understand what consciousness is and how it works, just as we now understand what digestion is and how it works.

But I think there’s something importantly different about consciousness as compared to the motion of a car or even the digestion of a sandwich. This is how David Chalmers, the philosopher who first referred to the “hard” problem (as opposed to the “easy” problems) of consciousness, put it in his 1995 article “Facing Up to the Problem of Consciousness”:

Consciousness poses the most baffling problems in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain. All sorts of mental phenomena have yielded to scientific investigation in recent years, but consciousness has stubbornly resisted….

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect…. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? 

So Desimone claims that once we understand the physical processes that occur in the brain (the mechanisms of consciousness), the philosophical problem of consciousness will evaporate. Chalmers, on the other hand, says that even if we were to understand the “physical processing” in the brain, the hard problem of consciousness would remain. According to Chalmers, “specifying a mechanism that performs the function” of consciousness, as Desimone hopes to do one day, won’t solve the mystery at all.

I agree with Desimone up to a point. I think the neuroscientists will eventually answer the philosophers’ question as Chalmers posed it in the paragraphs above. Why should such and such physical processing give rise to an inner life? Well, why should such and such physical processing give rise to digestion or respiration or, for that matter, to water boiling or leaves falling? When such and such physical, chemical or biological events take place, what happens is digestion, respiration, water boiling and leaves falling, as the case may be. If an organism’s parts are arranged a certain way, the organism will have an inner life. Or if, as might be the case one day, a machine’s parts are arranged a certain way, the machine will have an inner life. That’s how the world – the world we happen to be in – works (or will work).

I don’t believe this “why should” question concerning experience or consciousness is very interesting from a philosophical perspective. It seems to me that it’s “merely” a very difficult scientific question to which scientists haven’t yet found the answer. It certainly isn’t the hardest problem in philosophy.

To me, a more interesting question is this: What is felt experience anyway? What exactly is the deep blue or middle C that we experience? For example, when we look at an orange, the precise nature of our experience depends on several factors, including the surface of the orange, the light in our environment and our sensory apparatus. That’s why it makes no sense to say that the surface of an orange is orange in itself. When we hold an orange, the surface feels bumpy, but it presumably wouldn’t feel the same way if we were much, much smaller or much, much larger.

I’m not suggesting at all that David Chalmers has failed to recognize the real issue here. But I think that stating the issue in this way emphasizes the most puzzling aspect of consciousness. Here’s another way of putting it: if the world outside of us isn’t just as we experience it, and it’s not in our heads either (there are relatively few colors or sounds inside our brains), where the hell is it? 

These days, philosophers often compare brains to computers. The brain is the hardware and the mind is the software. But it occurred to me recently that software doesn’t do anything unless it has data to process (that’s not a great insight if you’ve ever worked with or thought about computers). This got me to wondering if we should think of our experience as data. Not the kind of data that computers process, but a special kind of data that takes a variety of forms (in Chalmers’ words, “the felt quality of redness, the experience of dark and light, the quality of depth in a visual field….the sound of a clarinet, the smell of mothballs…. bodily sensations…. mental images…. the felt quality of emotion…. a stream of conscious thought”). We might call this kind of data “sense data”. 

But, lo and behold, that’s exactly what a number of philosophers in the early 20th century, including Bertrand Russell and G. E. Moore, began calling it. (Actually, I already knew that, but it was still a pleasant surprise when I realized I’d gone from thinking about computers to thinking about a 100-year-old theory.)

Theories that refer to sense data and similar entities aren’t as popular among philosophers as they used to be, but such theories have been one of the most discussed topics in epistemology (the theory of knowledge) and the philosophy of perception for a long time. William James referred to “the data from all the senses” before Russell and Moore (although James wouldn’t have accepted Russell’s or Moore’s particular views). And sense data theories had their precursors in the 17th and 18th century writings of Rene Descartes and the British empiricists Locke, Berkeley and Hume. 

As usual, the Stanford Encyclopedia of Philosophy has a helpful article on the subject. The author, Michael Huemer, characterizes sense data in this way (using “data” in the traditional plural sense rather than as a singular collective noun):

On the most common conception, sense data (singular: “sense datum”) have three defining characteristics:

  1. Sense data are the kind of thing we are directly aware of in perception,
  2. Sense data are dependent on the mind, and
  3. Sense data have the properties that perceptually appear to us.

Many philosophers deny that sense data exist or that we’re directly aware of them. Proposition 3 above is especially controversial. It’s also often argued that if we were actually aware of sense data, we’d be cut off from the world around us. I don’t plan to discuss any of this further right now, but it’s a topic I want to get back to. It’s a really hard philosophical problem.

PS — There is a funny site called Philosopher Shaming that features anonymous and not so anonymous pictures of philosophically-inclined people owning up to their deepest and darkest philosophical secrets. I uploaded my picture two years ago: 

[photo]

A Guide to Reality, Part 13

Chapter 8 of Alex Rosenberg’s The Atheist’s Guide to Reality: Enjoying Life Without Illusions is called “The Brain Does Everything Without Thinking About Anything At All”. Without thinking about anything at all? That sounds like another of Rosenberg’s rhetorical exaggerations.

He grants that it’s perfectly natural for us to believe that our conscious minds allow us to think about this and that. Yet he claims that science tells us otherwise:

Among the seemingly unquestionable truths science makes us deny is the idea that we have any purposes at all, that we ever make plans — for today, tomorrow or next year. Science must even deny the basic notion that we ever really think about the past and the future or even that our conscious thoughts ever give any meaning to the actions that express them [165].

But despite this claim about science and the title of the chapter, Rosenberg doesn’t really “deny that we think accurately and act intelligently in the world”. It’s just (just!) that we don’t “do it in anything like the way almost everyone thinks we do”. In other words, we think but we don’t think “about”.

Before getting to his argument against “aboutness”, Rosenberg offers the observation that science is merely “common sense continually improving itself, rebuilding itself, correcting itself, until it is no longer recognizable as common sense” [167]. That seems like a correct understanding of science; it’s not as if science is a separate realm completely divorced from what’s called “common sense”. When done correctly, science is a cumulative process involving steps that are each in turn commonsensical, i.e. based on sound reasoning or information:

Science begins as common sense. Each step in the development of science is taken by common sense. The accumulation of those commonsense steps … has produced a body of science that no one any longer recognizes as common sense. But that’s what it is. The real common sense is relativity and quantum mechanics, atomic chemistry and natural selection. That’s why we should believe it in preference to what ordinary experience suggests [169].

So, if you have a mental image of the Eiffel Tower or suddenly remember that the Bastille was stormed in 1789, ordinary introspection suggests that you’re having a thought about Paris. But, according to Rosenberg, that’s a mistake. If this thought that’s supposedly about Paris consists in, or at least reflects, activity in your brain (the organ you use to think), then there must be something in your brain that represents Paris. We know, however, that any such representation isn’t a tiny picture or map of Paris. Brain cells aren’t arranged like tiny pictures.

But perhaps there’s a kind of symbol in your brain, an arrangement of neurons that your brain somehow interprets as representing Paris? Rosenberg rejects this possibility, arguing that any such interpretation would require a second set of neurons:

[The second set of neurons] can’t interpret the Paris neurons as being about Paris unless some other part of [the second set] is, separately and independently, about Paris [too]. These will be the neurons that “say” that the Paris neurons are about Paris; they will be about the Paris neurons the way the Paris neurons are about Paris [178]. 

Rosenberg argues that this type of arrangement would lead to an unacceptable infinite regress. There would have to be a third set of neurons about the second set, and so on, and so on.
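Taken literally, the regress would look something like this. (This is my own caricature in code, not anything Rosenberg provides; the class and function names are made up.) If a set of neurons is only about Paris because a further set interprets it that way, and that interpreting set must itself be about Paris, the chain of interpreters never bottoms out.

```python
class NeuronSet:
    """A set of neurons that is "about" Paris only because a further
    set of neurons interprets it that way."""
    def __init__(self, topic):
        self.topic = topic

    def interpreter(self):
        # On the picture Rosenberg is attacking, the interpreting set must
        # itself be about the same topic, so it needs an interpreter too.
        return NeuronSet(self.topic)

def ground_aboutness(neurons, depth=0):
    """Look for the level at which aboutness bottoms out; it never does."""
    return ground_aboutness(neurons.interpreter(), depth + 1)

if __name__ == "__main__":
    try:
        ground_aboutness(NeuronSet("Paris"))
    except RecursionError:
        print("No bottom level: every interpreter needed another interpreter.")
```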

I confess that I’m having trouble understanding why the regress is necessary. In Rosenberg’s notes, he references a book called Memory: From Mind to Molecules, by the neuroscientists Larry Squire and Eric Kandel, which he says explains “how the brain stores information without any aboutness”. Maybe it’s clearer there.

However, if we grant that Rosenberg’s argument is correct and one part of the brain interpreting another (symbol-like) part of the brain would require an impossible infinite regress, it still seems questionable whether he has shown that nothing in the brain can be about anything. What his argument will have shown is that one part of the brain can’t interpret some other, symbolic part of the brain. Perhaps thoughts can be “about” something in some other way.

Rosenberg next offers an interesting account of how our brains work. Briefly put, human brains work pretty much like the brains of sea slugs and rats. Scientists have discovered that all of these organisms learn by connecting neurons together. Individual neurons are relatively simple input/output devices. Link them together and they become more complex input/output devices. The key difference between our brains and those of sea slugs and rats is that ours have more neurons and more links.

When a sea slug is conditioned to respond a certain way to a particular stimulus (like one of Pavlov’s dogs),

[the training] releases proteins that opens up the channels, the synapses, between the neurons, so it is easier for molecules of calcium, potassium, sodium and chloride to move through their gaps, carrying electrical charges between the neurons. This produces short-term memory in the sea slug. Training over a longer period does the same thing, but also stimulates genes in the neurons’ nuclei to build new synapses that last for some time. The more synapses, the longer the conditioning lasts. The result is long-term memory in the sea slug [181].

The process in your brain was similar when you learned to recognize your mother’s face. Rosenberg cites an experiment in which researchers were able to temporarily disable the neurons that allowed their subject to recognize her mother. Since she still recognized her mother’s voice, she couldn’t understand why this stranger sounded just like her mother.
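As a crude caricature of that synapse-strengthening story (my own sketch with made-up numbers, not the Squire and Kandel model), picture a single connection whose weight grows with each stimulus/response pairing and fades during a rest period. More pairings leave a stronger weight that survives the rest better, which is roughly the short-term versus long-term contrast in the passage quoted above.

```python
def condition(pairings, boost=0.3):
    """Each stimulus/response pairing strengthens a synaptic weight,
    with the effect saturating as the weight approaches 1.0."""
    weight = 0.0
    for _ in range(pairings):
        weight += boost * (1.0 - weight)
    return weight

def rest(weight, steps=20, decay=0.05):
    """The weight fades a little at each step of a rest period."""
    for _ in range(steps):
        weight *= (1.0 - decay)
    return weight

if __name__ == "__main__":
    for pairings in (1, 3, 10):
        w = condition(pairings)
        print(f"{pairings:2d} pairings: weight {w:.2f} after training, "
              f"{rest(w):.2f} after a rest period")
```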

Rosenberg concludes that there is nothing in our brains that is “about” anything:

None of these sets of circuits are about anything….The small sets of specialized input/output circuits that respond to your mom’s face, as well as the large set that responds to your mom [in different ways], are no different from millions of other such sets in your brain, except in one way: they respond to a distinct electrical input with a distinct electrical output….That’s why they are not about anything. Piling up a lot of neural circuits that are not about anything at all can’t turn them into a thought about stuff out there in the world [184]. 

Of course, how the activation of neural circuits in our brains results in conscious thoughts that seem to be “about” something remains a mystery. Today, for no apparent reason, I had a brief thought about Roxy Music, the 70s rock band. Maybe something in my environment (which happened to be a parking lot) triggered that particular mental response. Or maybe there was some seemingly random electrical activity in my brain that suddenly made Roxy Music come to mind. 

I still don’t see why we should deny that my thought this afternoon was about Roxy Music, even if the neural mechanics involved were quite simple at the cellular level. If some of my neurons will lead me to answer “Roxy Music” when I’m asked what group Bryan Ferry was in, or will get me to think of Roxy Music once in a while, perhaps we should accept the fact that there are arrangements of neurons in my head that are about Roxy Music.

Philosophers use the term “intentionality” instead of “aboutness”. They’ve been trying to understand intentionality for a long time. How can one thing be “about” another thing? Rosenberg seems to agree that intentionality is mysterious. He also thinks it’s an illusion. Maybe he’s right. In the last part of chapter 8, he brings computer science into the discussion. That’s a topic that will have to wait for another time.