The Data From All the Senses

Alan Lightman, currently Professor of the Practice of the Humanities (!) at MIT, has posted a short article about consciousness at the New Yorker’s site. Its centerpiece is a visit with Robert Desimone, an MIT neuroscientist who is trying to understand what happens in the brain when we pay attention to something.

Neuroscientists already know that different parts of the brain are activated when we look at faces as opposed to other objects. In one of Desimone’s experiments, people were shown a series of photographs of faces and houses and told to pay attention to either the faces or houses, but not both.

When the subjects were told to concentrate on the faces and to disregard the houses, the neurons in the face location fired in synchrony, like a group of people singing in unison, while the neurons in the house location fired like a group of people singing out of synch, each beginning at a random point in the score. When the subjects concentrated instead on houses, the reverse happened. Furthermore, another part of the brain, called the inferior frontal junction, a marble-size region in the frontal lobe, seemed to conduct the chorus of the synchronized neurons, firing slightly ahead of them.

Evidently, what we perceive as “paying attention” to something originates, at the cellular level, in the synchronized firing of a group of neurons, whose rhythmic electrical activity rises above the background chatter of the vast neuronal crowd. Or, as Desimone once put it, “This synchronized chanting allows the relevant information to be ‘heard’ more efficiently by other brain regions.”
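The “chorus” image can be caricatured in a few lines of code. This is only a toy illustration (the neuron count, frequency and time scale are invented, not taken from Desimone’s experiments): when identical oscillating signals are phase-locked, their sum peaks at roughly the number of neurons, while randomly phased signals largely cancel each other out, so a downstream “listener” receives a much stronger signal from the synchronized group.

```python
import math
import random

random.seed(0)

N = 50       # neurons per group (invented number)
STEPS = 200  # time steps to simulate
FREQ = 0.2   # oscillation frequency in radians per step (invented)

def summed_signal(phase_locked: bool) -> float:
    """Peak amplitude of the summed output of N oscillating neurons.

    Phase-locked neurons all share the same phase; unsynchronized
    neurons each begin at a random point in the 'score'.
    """
    phases = [0.0 if phase_locked else random.uniform(0, 2 * math.pi)
              for _ in range(N)]
    peak = 0.0
    for t in range(STEPS):
        total = sum(math.sin(FREQ * t + p) for p in phases)
        peak = max(peak, abs(total))
    return peak

sync_peak = summed_signal(phase_locked=True)     # close to N
desync_peak = summed_signal(phase_locked=False)  # much smaller: phases cancel
print(sync_peak > desync_peak)
```

The synchronized chorus “rises above the background chatter” simply because in-phase signals add while out-of-phase signals cancel, which is one way to picture why synchrony helps a signal get “heard” by other brain regions.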

Something else that’s interesting in the article is what the neuroscientist says about what’s been called philosophy’s “hard problem”, i.e. understanding the nature of consciousness: 

Without hesitation, Desimone replied that the mystery of consciousness was overrated. “As we learn more about the detailed mechanisms in the brain, the question of ‘What is consciousness?’ will fade away into irrelevancy and abstraction,” he said. As Desimone sees it, consciousness is just a vague word for the mental experience of attending, which we are slowly dissecting in terms of the electrical and chemical activity of individual neurons.

Desimone compares understanding consciousness to understanding “the nature of motion” as it applies to a car. Once we understand how a car operates, there’s nothing more to say about its motion. Or, to use a physiological comparison he might have made: we will eventually understand what consciousness is and how it works, just as we now understand what digestion is and how it works.

But I think there’s something importantly different about consciousness as compared to the motion of a car or even the digestion of a sandwich. This is how David Chalmers, the philosopher who first referred to the “hard” problem (as opposed to the “easy” problems) of consciousness, put it in his 1995 article “Facing Up to the Problem of Consciousness”:

Consciousness poses the most baffling problems in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain. All sorts of mental phenomena have yielded to scientific investigation in recent years, but consciousness has stubbornly resisted….

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect…. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? 

So Desimone claims that once we understand the physical processes that occur in the brain (the mechanisms of consciousness), the philosophical problem of consciousness will evaporate. Chalmers, on the other hand, says that even if we were to understand the “physical processing” in the brain, the hard problem of consciousness would remain. According to Chalmers, “specifying a mechanism that performs the function” of consciousness, as Desimone hopes to do one day, won’t solve the mystery at all.

I agree with Desimone up to a point. I think the neuroscientists will eventually answer the philosophers’ question as Chalmers posed it in the paragraphs above. Why should such and such physical processing give rise to an inner life? Well, why should such and such physical processing give rise to digestion or respiration or, for that matter, to water boiling or leaves falling? When such and such physical, chemical or biological events take place, what happens is digestion, respiration, water boiling and leaves falling, as the case may be. If an organism’s parts are arranged a certain way, the organism will have an inner life. Or if, as might be the case one day, a machine’s parts are arranged a certain way, the machine will have an inner life. That’s how the world – the world we happen to be in – works (or will work).

I don’t believe this “why should” question concerning experience or consciousness is very interesting from a philosophical perspective. It seems to me that it’s “merely” a very difficult scientific question to which scientists haven’t yet found the answer. It certainly isn’t the hardest problem in philosophy.

To me, a more interesting question is this: What is felt experience anyway? What exactly is the deep blue or middle C that we experience? For example, when we look at an orange, the precise nature of our experience depends on several factors, including the surface of the orange, the light in our environment and our sensory apparatus. That’s why it makes no sense to say that the surface of an orange is orange in itself. When we hold an orange, the surface feels bumpy, but it presumably wouldn’t feel the same way if we were much, much smaller or much, much larger.

I’m not suggesting at all that David Chalmers has failed to recognize the real issue here. But I think that stating the issue in this way emphasizes the most puzzling aspect of consciousness. Here’s another way of putting it: if the world outside of us isn’t just as we experience it, and it’s not in our heads either (there are relatively few colors or sounds inside our brains), where the hell is it? 

These days, philosophers often compare brains to computers. The brain is the hardware and the mind is the software. But it occurred to me recently that software doesn’t do anything unless it has data to process (that’s not a great insight if you’ve ever worked with or thought about computers). This got me to wondering if we should think of our experience as data. Not the kind of data that computers process, but a special kind of data that takes a variety of forms (in Chalmers’ words, “the felt quality of redness, the experience of dark and light, the quality of depth in a visual field….the sound of a clarinet, the smell of mothballs…. bodily sensations…. mental images…. the felt quality of emotion…. a stream of conscious thought”). We might call this kind of data “sense data”. 

But, lo and behold, that’s exactly what a number of philosophers in the early 20th century, including Bertrand Russell and G. E. Moore, began calling it. (Actually, I already knew that, but it was still a pleasant surprise when I realized I’d gone from thinking about computers to thinking about a 100-year-old theory.)

Theories that refer to sense data and similar entities aren’t as popular among philosophers as they used to be, but such theories have been one of the most discussed topics in epistemology (the theory of knowledge) and the philosophy of perception for a long time. William James referred to “the data from all the senses” before Russell and Moore (although James wouldn’t have accepted Russell’s or Moore’s particular views). And sense data theories had their precursors in the 17th and 18th century writings of Rene Descartes and the British empiricists Locke, Berkeley and Hume. 

As usual, the Stanford Encyclopedia of Philosophy has a helpful article on the subject. The author, Michael Huemer, characterizes sense data in this way (using “data” in the traditional plural sense rather than as a singular collective noun):

On the most common conception, sense data (singular: “sense datum”) have three defining characteristics:

  1. Sense data are the kind of thing we are directly aware of in perception,
  2. Sense data are dependent on the mind, and
  3. Sense data have the properties that perceptually appear to us.

Many philosophers deny that sense data exist or that we’re directly aware of them. Proposition 3 above is especially controversial. It’s also often argued that if we were directly aware only of sense data, we’d be cut off from the world around us. I don’t plan to discuss any of this further right now, but it’s a topic I want to get back to. It’s a really hard philosophical problem.

PS — There is a funny site called Philosopher Shaming that features anonymous and not so anonymous pictures of philosophically-inclined people owning up to their deepest and darkest philosophical secrets. I uploaded my picture two years ago: 


A Guide to Reality, Part 13

Chapter 8 of Alex Rosenberg’s The Atheist’s Guide to Reality: Enjoying Life Without Illusions is called “The Brain Does Everything Without Thinking About Anything At All”. Without thinking about anything at all? That sounds like another of Rosenberg’s rhetorical exaggerations.

He grants that it’s perfectly natural for us to believe that our conscious minds allow us to think about this and that. Yet he claims that science tells us otherwise:

Among the seemingly unquestionable truths science makes us deny is the idea that we have any purposes at all, that we ever make plans — for today, tomorrow or next year. Science must even deny the basic notion that we ever really think about the past and the future or even that our conscious thoughts ever give any meaning to the actions that express them [165].

But despite this claim about science and the title of the chapter, Rosenberg doesn’t really “deny that we think accurately and act intelligently in the world”. It’s just (just!) that we don’t “do it in anything like the way almost everyone thinks we do”. In other words, we think but we don’t think “about”.

Before getting to his argument against “aboutness”, Rosenberg offers the observation that science is merely “common sense continually improving itself, rebuilding itself, correcting itself, until it is no longer recognizable as common sense” [167]. That seems like a correct understanding of science; it’s not as if science is a separate realm completely divorced from what’s called “common sense”. When done correctly, science is a cumulative process involving steps that are each in turn commonsensical, i.e. based on sound reasoning or information:

Science begins as common sense. Each step in the development of science is taken by common sense. The accumulation of those commonsense steps … has produced a body of science that no one any longer recognizes as common sense. But that’s what it is. The real common sense is relativity and quantum mechanics, atomic chemistry and natural selection. That’s why we should believe it in preference to what ordinary experience suggests [169].

So, if you have a mental image of the Eiffel Tower or suddenly remember that the Bastille was stormed in 1789, ordinary introspection suggests that you’re having a thought about Paris. But, according to Rosenberg, that’s a mistake. If this thought of yours that’s supposedly about Paris consists in, or at least reflects, activity in your brain (the organ you use to think), there must be something in your brain that represents Paris. We know, however, that any such representation isn’t a tiny picture or map of Paris. Brain cells aren’t arranged like tiny pictures.

But perhaps there’s a kind of symbol in your brain, an arrangement of neurons that your brain somehow interprets as representing Paris? Rosenberg rejects this possibility, arguing that any such interpretation would require a second set of neurons:

[The second set of neurons] can’t interpret the Paris neurons as being about Paris unless some other part of [the second set] is, separately and independently, about Paris [too]. These will be the neurons that “say” that the Paris neurons are about Paris; they will be about the Paris neurons the way the Paris neurons are about Paris [178]. 

Rosenberg argues that this type of arrangement would lead to an unacceptable infinite regress. There would have to be a third set of neurons about the second set, and so on, and so on.
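One way to picture Rosenberg’s worry (a caricature of my own, not anything from his book) is to try to write down the chain of interpreters his argument demands. Each new set of “neurons” exists only to say that the set below it is about Paris, but that is itself a claim about Paris, so it needs yet another set above it, and no finite number of levels ever closes the loop:

```python
def interpreters_needed(levels: int) -> list:
    """Build the chain of interpreters Rosenberg's regress demands.

    Each added level is a set of 'neurons' whose only job is to say
    that the level below it is about Paris -- which is itself a claim
    about Paris, demanding yet another level above it.
    """
    chain = ["Paris neurons"]
    for _ in range(levels):
        chain.append(f"neurons about ({chain[-1]})")
    return chain

for level in interpreters_needed(3):
    print(level)
# However many levels we add, the topmost set is still uninterpreted,
# so on Rosenberg's argument nothing ever ends up "about" Paris.
```

On this picture the regress only arises if aboutness always has to be conferred by a separate interpreter, which is precisely the premise that seems questionable.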

I confess that I’m having trouble understanding why the regress is necessary. In Rosenberg’s notes, he references a book called Memory: From Mind to Molecules, by the neuroscientists Larry Squire and Eric Kandel, which he says explains “how the brain stores information without any aboutness”. Maybe it’s clearer there.

However, even if we grant that Rosenberg’s argument is correct, and that one part of the brain interpreting another (symbol-like) part of the brain would require an impossible infinite regress, it still seems questionable whether he has shown that nothing in the brain can be about anything. At most, his argument shows that one part of the brain can’t interpret some other, symbolic part of the brain. Perhaps thoughts can be “about” something in some other way.

Rosenberg next offers an interesting account of how our brains work. Briefly put, human brains work pretty much like the brains of sea slugs and rats. Scientists have discovered that all of us organisms learn by connecting neurons together. Individual neurons are relatively simple input/output devices. Link them together and they become more complex input/output devices. The key difference between our brains and those belonging to sea slugs and rats is that ours have more neurons and more links.
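The point about linking simple devices into more complex ones can be made concrete with the oldest toy model there is: threshold units. Each “neuron” below just sums its weighted inputs and fires if the total crosses a threshold (the weights are textbook examples, not anything from Rosenberg). No single unit of this kind can compute exclusive-or, but three of them linked together can:

```python
def unit(weights, bias):
    """A single neuron as a simple input/output device:
    weighted sum plus threshold."""
    return lambda *xs: int(sum(w * x for w, x in zip(weights, xs)) + bias > 0)

# Three simple units...
or_unit   = unit([1, 1], -0.5)   # fires if either input fires
nand_unit = unit([-1, -1], 1.5)  # fires unless both inputs fire
and_unit  = unit([1, 1], -1.5)   # fires only if both inputs fire

# ...linked into a more complex input/output device: XOR,
# which no single threshold unit can compute on its own.
def xor(a, b):
    return and_unit(or_unit(a, b), nand_unit(a, b))

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Nothing in any of these units is “about” anything; each just maps a distinct input to a distinct output, which is exactly the picture Rosenberg wants us to take seriously.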

When a sea slug is conditioned to respond a certain way to a particular stimulus (like one of Pavlov’s dogs),

[the training] releases proteins that opens up the channels, the synapses, between the neurons, so it is easier for molecules of calcium, potassium, sodium and chloride to move through their gaps, carrying electrical charges between the neurons. This produces short-term memory in the sea slug. Training over a longer period does the same thing, but also stimulates genes in the neurons’ nuclei to build new synapses that last for some time. The more synapses, the longer the conditioning lasts. The result is long-term memory in the sea slug [181].
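The quoted mechanism can be caricatured in code. This is a made-up toy model (the decay rates and trial counts are invented, not from Squire and Kandel): each training trial briefly opens channels between neurons, producing a short-term memory that fades fast, while extended training builds new synapses whose effect decays far more slowly.

```python
def memory_strength(trials: int, time_after: int) -> float:
    """Toy model of sea-slug conditioning (all numbers invented).

    Short training opens existing channels: strong at first, fades fast.
    Extended training builds new synapses: the more synapses, the
    longer the conditioning lasts.
    """
    short_term = min(trials, 5) * (0.5 ** time_after)  # fades quickly
    new_synapses = max(0, trials - 5)                  # built by long training
    long_term = new_synapses * (0.95 ** time_after)    # fades slowly
    return short_term + long_term

# Brief training is forgotten quickly; extended training persists.
print(memory_strength(trials=3, time_after=20))   # essentially zero
print(memory_strength(trials=30, time_after=20))  # still substantial
```

The only point of the sketch is the qualitative one in the quotation: “the more synapses, the longer the conditioning lasts.”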

The process in your brain was similar when you learned to recognize your mother’s face. Rosenberg cites an experiment in which researchers were able to temporarily disable the neurons that allowed their subject to recognize her mother. Since she still recognized her mother’s voice, she couldn’t understand why this stranger sounded just like her mother.

Rosenberg concludes that there is nothing in our brains that is “about” anything:

None of these sets of circuits are about anything….The small sets of specialized input/output circuits that respond to your mom’s face, as well as the large set that responds to your mom [in different ways], are no different from millions of other such sets in your brain, except in one way: they respond to a distinct electrical input with a distinct electrical output….That’s why they are not about anything. Piling up a lot of neural circuits that are not about anything at all can’t turn them into a thought about stuff out there in the world [184]. 

Of course, how the activation of neural circuits in our brains results in conscious thoughts that seem to be “about” something remains a mystery. Today, for no apparent reason, I had a brief thought about Roxy Music, the 70s rock band. Maybe something in my environment (which happened to be a parking lot) triggered that particular mental response. Or maybe there was some seemingly random electrical activity in my brain that suddenly made Roxy Music come to mind. 

I still don’t see why we should deny that my thought this afternoon was about Roxy Music, even if the neural mechanics involved were quite simple at the cellular level. If some of my neurons will lead me to answer “Roxy Music” when I’m asked what group Bryan Ferry was in, or will get me to think of Roxy Music once in a while, perhaps we should accept the fact that there are arrangements of neurons in my head that are about Roxy Music.

Philosophers use the term “intentionality” instead of “aboutness”. They’ve been trying to understand intentionality for a long time. How can one thing be “about” another thing? Rosenberg seems to agree that intentionality is mysterious. He also thinks it’s an illusion. Maybe he’s right. In the last part of chapter 8, he brings computer science into the discussion. That’s a topic that will have to wait for another time.