A Guide to Reality, Part 14

It’s been more than three months since I wrote about Alex Rosenberg’s The Atheist’s Guide to Reality: Enjoying Life Without Illusions. I left off part of the way through chapter 8, “The Brain Does Everything Without Thinking About Anything At All”. Having read the book once before, I’ve found it difficult going through it again, but I’m now going to finish chapter 8.

The principal thesis of Rosenberg’s book is that since the universe is nothing more than subatomic particles, much of what we take for granted about the world is illusory. In the case of the human brain, this means that the brain does its work without anything happening in the brain being “about” anything at all.

Rosenberg asks us to consider a computer:

Neither the … electrical charges in the computer’s motherboard nor the distribution of magnetic charges in the hard drive can be about anything, right? They are just like red octagons. They get interpreted by us [as stop signs or whatever] [187].

Electrical engineers and computer programmers assign meanings to a computer’s low-level states (“on” or “off”, or 32767, or the letter “w”), but those states have no meaning in themselves. It’s only because people are able to assign meanings to the states of a computer and then interpret them that those states can be about anything, just the way a red octagonal sign with “STOP” on it only has meaning for those of us who know how to read a traffic sign.
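The point can be made concrete with a toy illustration of my own (it’s not Rosenberg’s example): the very same pattern of on/off states yields a number or a letter depending entirely on the interpretation we bring to it.

```python
# The same physical on/off states, read under two different conventions
# we humans have adopted. The bits themselves are not "about" either one.
word = 0b0111111111111111   # sixteen on/off states
print(word)                  # read as an unsigned integer: 32767

byte = 0b01110111            # eight on/off states
print(chr(byte))             # read as an ASCII character: 'w'
```

Nothing in the hardware picks one reading over the other; the convention does all the work, just as with the red octagon.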

But doesn’t that mean that the physical states of a computer can be about something? Doesn’t our interpretation of those states imply that those states are meaningful?

Rosenberg doesn’t think so. Earlier, he discussed how brain cells function as input/output devices. Now he compares the brain itself to a computer:

The brain is at least in part a computer. It’s composed of an unimaginably large number of electronic input/output circuits…The circuits transmit electrical outputs in different ways, depending on their electrical inputs and on how their parts… But that it is at least a computer is obvious from its anatomy and physiology right down to the individual neurons and their electrochemical on/off connections [188-189].

But if what’s inside a computer isn’t about anything, and your brain works like a computer, what’s inside your brain isn’t about anything either. It’s merely an enormous bunch of interconnected cells that have no intrinsic meaning. That’s Rosenberg’s conclusion.

To clarify his point, he then offers an analogy. The image in a still photograph doesn’t move. But string many photographs together, project them on a screen and you’ve got a motion picture. The motion we perceive in a movie, however, is an illusion. Creatures whose physiology worked faster than ours would simply see a succession of still pictures, not actors or objects in continuous motion. In similar fashion:

The illusion of aboutness projected by the neurons in our brain does not match any aboutness in the world. There isn’t any….There is no aboutness in reality [191].

So, despite what introspection tells us (or “screams” at us, using his term), our thoughts aren’t about anything either:

Consciousness is just another physical process. So, it has as much trouble producing aboutness as any other physical process. Introspection certainly produces the illusion of aboutness. But it’s got to be an illusion, since nothing physical can be about anything [193]. 

But doesn’t that mean The Atheist’s Guide to Reality isn’t about anything? Why bother reading it then?

Rosenberg’s answer is that his book isn’t “conveying statements”. It’s “rearranging neural circuits, removing inaccurate disinformation and replacing it with accurate information” [193]. But, we might ask, isn’t information “about” something? And isn’t the distinction between accurate and inaccurate information dependent on the idea that information can be about something in a more or less satisfactory manner?

At this point, I can’t remember why Rosenberg is so interested in convincing us that there is no real “aboutness” or what philosophers call “intentionality” in the world.

It’s certainly puzzling how our minds are able to assign meaning to and find meaning in the world. Being appreciative of science, I can accept that there is nothing in the universe but quarks, leptons and bosons when you get right down to it (or their component parts if there are any), but there are also arrangements of those things. Some of those arrangements are meaningful to us and some aren’t. The fact that scientists might, and probably will, explain our experience of aboutness in biological terms, and then in terms of chemistry, and then in terms of physics, doesn’t change the fact that Rosenberg’s book and the words I’m typing are about something.

When I started writing this post, I didn’t know if I’d work through any more chapters in The Atheist’s Guide to Reality (although if the universe is as deterministic as Rosenberg thinks – and I tend to think – that was decided some time ago). But now what I think I’m going to do is skip the next three chapters. They’re concerned with purpose (an illusion), the self (also an illusion), history (it’s blind) and the other social sciences, especially economics (they’re all myopic). Chapter 12, the final chapter, is called “Living With Scientism: Ethics, Politics, the Humanities, and Prozac as Needed”. That seems like a good place to stop.

A Guide to Reality, Part 13

Chapter 8 of Alex Rosenberg’s The Atheist’s Guide to Reality: Enjoying Life Without Illusions is called “The Brain Does Everything Without Thinking About Anything At All”. Without thinking about anything at all? That sounds like another of Rosenberg’s rhetorical exaggerations.

He grants that it’s perfectly natural for us to believe that our conscious minds allow us to think about this and that. Yet he claims that science tells us otherwise:

Among the seemingly unquestionable truths science makes us deny is the idea that we have any purposes at all, that we ever make plans — for today, tomorrow or next year. Science must even deny the basic notion that we ever really think about the past and the future or even that our conscious thoughts ever give any meaning to the actions that express them [165].

But despite this claim about science and the title of the chapter, Rosenberg doesn’t really “deny that we think accurately and act intelligently in the world”. It’s just (just!) that we don’t “do it in anything like the way almost everyone thinks we do”. In other words, we think but we don’t think “about”.

Before getting to his argument against “aboutness”, Rosenberg offers the observation that science is merely “common sense continually improving itself, rebuilding itself, correcting itself, until it is no longer recognizable as common sense” [167]. That seems like a correct understanding of science; it’s not as if science is a separate realm completely divorced from what’s called “common sense”. When done correctly, science is a cumulative process involving steps that are each in turn commonsensical, i.e. based on sound reasoning or information:

Science begins as common sense. Each step in the development of science is taken by common sense. The accumulation of those commonsense steps … has produced a body of science that no one any longer recognizes as common sense. But that’s what it is. The real common sense is relativity and quantum mechanics, atomic chemistry and natural selection. That’s why we should believe it in preference to what ordinary experience suggests [169].

So, if you have a mental image of the Eiffel Tower or suddenly remember that the Bastille was stormed in 1789, ordinary introspection suggests that you’re having a thought about Paris. But, according to Rosenberg, that’s a mistake. Assuming that this thought of yours that’s supposedly about Paris consists in or at least reflects activity in your brain (the organ you use to think), that would imply that there must be something in your brain that represents Paris. We know, however, that any such representation isn’t a tiny picture or map of Paris. Brain cells aren’t arranged like tiny pictures.

But perhaps there’s a kind of symbol in your brain, an arrangement of neurons that your brain somehow interprets as representing Paris? Rosenberg rejects this possibility, arguing that any such interpretation would require a second set of neurons:

[The second set of neurons] can’t interpret the Paris neurons as being about Paris unless some other part of [the second set] is, separately and independently, about Paris [too]. These will be the neurons that “say” that the Paris neurons are about Paris; they will be about the Paris neurons the way the Paris neurons are about Paris [178]. 

Rosenberg argues that this type of arrangement would lead to an unacceptable infinite regress. There would have to be a third set of neurons about the second set, and so on, and so on.

I confess that I’m having trouble understanding why the regress is necessary. In Rosenberg’s notes, he references a book called Memory: From Mind to Molecules, by the neuroscientists Larry Squire and Eric Kandel, which he says explains “how the brain stores information without any aboutness”. Maybe it’s clearer there.

However, if we grant that Rosenberg’s argument is correct and one part of the brain interpreting another (symbol-like) part of the brain would require an impossible infinite regress, it still seems questionable whether he has shown that nothing in the brain can be about anything. What his argument will have shown is that one part of the brain can’t interpret some other, symbolic part of the brain. Perhaps thoughts can be “about” something in some other way.

Rosenberg next offers an interesting account of how our brains work. Briefly put, human brains work pretty much like the brains of sea slugs and rats. Scientists have discovered that all of us organisms learn by connecting neurons together. Individual neurons are relatively simple input/output devices. Link them together and they become more complex input/output devices. The key difference between our brains and those belonging to sea slugs and rats is that ours have more neurons and more links.
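The link-simple-devices-into-complex-ones idea can be sketched in a few lines (my illustration, not the book’s): model each “neuron” as a unit that fires when its weighted inputs cross a threshold, then wire a few together to compute something no single unit can.

```python
# A toy model: each "neuron" is a simple input/output device, and
# linking them produces a more complex input/output device.

def neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def xor(a, b):
    # Two units feeding a third: exclusive-or, which no single
    # threshold unit can compute on its own.
    h_or  = neuron([a, b], [1, 1], 1)   # fires if either input is on
    h_and = neuron([a, b], [1, 1], 2)   # fires only if both are on
    return neuron([h_or, h_and], [1, -1], 1)  # "or, but not both"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Each unit only maps electrical inputs to electrical outputs; the description “exclusive-or” is, on Rosenberg’s view, something we impose from outside.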

When a sea slug is conditioned to respond a certain way to a particular stimulus (like one of Pavlov’s dogs),

[the training] releases proteins that opens up the channels, the synapses, between the neurons, so it is easier for molecules of calcium, potassium, sodium and chloride to move through their gaps, carrying electrical charges between the neurons. This produces short-term memory in the sea slug. Training over a longer period does the same thing, but also stimulates genes in the neurons’ nuclei to build new synapses that last for some time. The more synapses, the longer the conditioning lasts. The result is long-term memory in the sea slug [181].

The process in your brain was similar when you learned to recognize your mother’s face. Rosenberg cites an experiment in which researchers were able to temporarily disable the neurons that allowed their subject to recognize her mother. Since she still recognized her mother’s voice, she couldn’t understand why this stranger sounded just like her mother.
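The conditioning story Squire and Kandel tell can be caricatured in code (a crude simplification of mine, not theirs): each pairing of stimulus and response nudges a connection strength upward, with brief training producing a weak, short-lived connection and extended training a strong, lasting one.

```python
# A crude sketch of conditioning: repeated stimulus-response pairings
# strengthen a "synaptic" connection, with diminishing returns.

def train(strength, trials, gain=0.2):
    """Each pairing moves the connection strength a fraction closer to 1.0."""
    for _ in range(trials):
        strength += gain * (1.0 - strength)
    return strength

short_term = train(0.0, trials=3)    # brief training: weak connection
long_term  = train(0.0, trials=30)   # extended training: near-saturated

print(round(short_term, 2))
print(round(long_term, 2))
```

Nothing in the weight update is “about” the stimulus; it’s just a number changing, which is precisely the picture Rosenberg wants us to accept.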

Rosenberg concludes that there is nothing in our brains that is “about” anything:

None of these sets of circuits are about anything….The small sets of specialized input/output circuits that respond to your mom’s face, as well as the large set that responds to your mom [in different ways], are no different from millions of other such sets in your brain, except in one way: they respond to a distinct electrical input with a distinct electrical output….That’s why they are not about anything. Piling up a lot of neural circuits that are not about anything at all can’t turn them into a thought about stuff out there in the world [184]. 

Of course, how the activation of neural circuits in our brains results in conscious thoughts that seem to be “about” something remains a mystery. Today, for no apparent reason, I had a brief thought about Roxy Music, the 70s rock band. Maybe something in my environment (which happened to be a parking lot) triggered that particular mental response. Or maybe there was some seemingly random electrical activity in my brain that suddenly made Roxy Music come to mind. 

I still don’t see why we should deny that my thought this afternoon was about Roxy Music, even if the neural mechanics involved were quite simple at the cellular level. If some of my neurons will lead me to answer “Roxy Music” when I’m asked what group Bryan Ferry was in, or will get me to think of Roxy Music once in a while, perhaps we should accept the fact that there are arrangements of neurons in my head that are about Roxy Music.

Philosophers use the term “intentionality” instead of “aboutness”. They’ve been trying to understand intentionality for a long time. How can one thing be “about” another thing? Rosenberg seems to agree that intentionality is mysterious. He also thinks it’s an illusion. Maybe he’s right. In the last part of chapter 8, he brings computer science into the discussion. That’s a topic that will have to wait for another time.