Chapter 8 of Alex Rosenberg’s The Atheist’s Guide to Reality: Enjoying Life Without Illusions is called “The Brain Does Everything Without Thinking About Anything At All”. Without thinking about anything at all? That sounds like another of Rosenberg’s rhetorical exaggerations.
He grants that it’s perfectly natural for us to believe that our conscious minds allow us to think about this and that. Yet he claims that science tells us otherwise:
Among the seemingly unquestionable truths science makes us deny is the idea that we have any purposes at all, that we ever make plans — for today, tomorrow or next year. Science must even deny the basic notion that we ever really think about the past and the future or even that our conscious thoughts ever give any meaning to the actions that express them [165].
But despite this claim about science and the title of the chapter, Rosenberg doesn’t really “deny that we think accurately and act intelligently in the world”. It’s just (just!) that we don’t “do it in anything like the way almost everyone thinks we do”. In other words, we think but we don’t think “about”.
Before getting to his argument against “aboutness”, Rosenberg offers the observation that science is merely “common sense continually improving itself, rebuilding itself, correcting itself, until it is no longer recognizable as common sense” [167]. That seems like a correct understanding of science; it’s not as if science is a separate realm completely divorced from what’s called “common sense”. When done correctly, science is a cumulative process involving steps that are each in turn commonsensical, i.e. based on sound reasoning or information:
Science begins as common sense. Each step in the development of science is taken by common sense. The accumulation of those commonsense steps … has produced a body of science that no one any longer recognizes as common sense. But that’s what it is. The real common sense is relativity and quantum mechanics, atomic chemistry and natural selection. That’s why we should believe it in preference to what ordinary experience suggests [169].
So, if you have a mental image of the Eiffel Tower or suddenly remember that the Bastille was stormed in 1789, ordinary introspection suggests that you’re having a thought about Paris. But, according to Rosenberg, that’s a mistake. Assuming that this thought of yours that’s supposedly about Paris consists in or at least reflects activity in your brain (the organ you use to think), that would imply that there must be something in your brain that represents Paris. We know, however, that any such representation isn’t a tiny picture or map of Paris. Brain cells aren’t arranged like tiny pictures.
But perhaps there’s a kind of symbol in your brain, an arrangement of neurons that your brain somehow interprets as representing Paris? Rosenberg rejects this possibility, arguing that any such interpretation would require a second set of neurons:
[The second set of neurons] can’t interpret the Paris neurons as being about Paris unless some other part of [the second set] is, separately and independently, about Paris [too]. These will be the neurons that “say” that the Paris neurons are about Paris; they will be about the Paris neurons the way the Paris neurons are about Paris [178].
Rosenberg argues that this type of arrangement would lead to an unacceptable infinite regress. There would have to be a third set of neurons about the second set, and so on, and so on.
I confess that I’m having trouble understanding why the regress is necessary. In Rosenberg’s notes, he references a book called Memory: From Mind to Molecules, by the neuroscientists Larry Squire and Eric Kandel, which he says explains “how the brain stores information without any aboutness”. Maybe it’s clearer there.
However, even if we grant that Rosenberg's argument is correct and that one part of the brain interpreting another (symbol-like) part would require an impossible infinite regress, it's still questionable whether he has shown that nothing in the brain can be about anything. At most, his argument shows that one part of the brain can't interpret some other, symbolic part of the brain. Perhaps thoughts can be "about" something in some other way.
Rosenberg next offers an interesting account of how our brains work. Briefly put, human brains work pretty much like the brains of sea slugs and rats. Scientists have discovered that all of us organisms learn by connecting neurons together. Individual neurons are relatively simple input/output devices. Link them together and they become more complex input/output devices. The key difference between our brains and those of sea slugs and rats is that ours have more neurons and more links.
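Purely as an illustration (mine, not Rosenberg's or anything from the neuroscience he cites), here's a minimal Python sketch of that "linked input/output devices" picture. The units, weights, and thresholds are all invented for the sketch:

```python
# Toy illustration: each "neuron" is just a simple input/output rule,
# and linking them together yields a more complex input/output device.
# Nothing here is meant as a realistic model of actual neurons.

def neuron(weights, threshold):
    """Return a simple input/output unit: fire (1) if the weighted sum
    of its inputs reaches the threshold, otherwise stay silent (0)."""
    def fire(inputs):
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0
    return fire

# Two first-layer units respond to different features of the input...
edge_unit = neuron(weights=[1.0, -1.0], threshold=0.5)
shade_unit = neuron(weights=[0.5, 0.5], threshold=0.8)

# ...and a second-layer unit fires only when both of them fire.
combined_unit = neuron(weights=[1.0, 1.0], threshold=2.0)

stimulus = [1.5, 0.5]  # some pattern of "electrical" input
response = combined_unit([edge_unit(stimulus), shade_unit(stimulus)])
print(response)  # a distinct output for a distinct input: 1
```

Each unit just maps inputs to outputs; chaining them produces richer input/output behavior, which is the whole of Rosenberg's point about what's going on in the brain.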
When a sea slug is conditioned to respond a certain way to a particular stimulus (like one of Pavlov’s dogs),
[the training] releases proteins that open up the channels, the synapses, between the neurons, so it is easier for molecules of calcium, potassium, sodium and chloride to move through their gaps, carrying electrical charges between the neurons. This produces short-term memory in the sea slug. Training over a longer period does the same thing, but also stimulates genes in the neurons’ nuclei to build new synapses that last for some time. The more synapses, the longer the conditioning lasts. The result is long-term memory in the sea slug [181].
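That mechanism is easy to caricature in code. Here's a minimal sketch (my own gloss, not anything from Rosenberg or from Squire and Kandel) of the short-term versus long-term memory contrast described above; the class, numbers, and thresholds are invented:

```python
# Toy model of the conditioning story: each training pairing strengthens
# a "synapse", and only sufficiently long training consolidates the change
# (standing in for the new synapses built by gene activity) so it resists decay.

class Synapse:
    def __init__(self):
        self.strength = 0.0        # how readily a signal crosses the gap
        self.consolidated = False  # stands in for new, lasting synapses

    def train(self, pairings):
        """Each stimulus/response pairing boosts the strength a little;
        enough pairings trigger 'long-term' consolidation."""
        for _ in range(pairings):
            self.strength += 0.2
        if pairings >= 10:
            self.consolidated = True

    def decay(self, hours):
        """Short-term changes fade with time; consolidated ones persist."""
        if not self.consolidated:
            self.strength = max(0.0, self.strength - 0.1 * hours)

slug = Synapse()
slug.train(pairings=3)   # brief training: short-term memory
slug.decay(hours=6)
print(slug.strength)     # mostly gone

slug.train(pairings=12)  # longer training: long-term memory
slug.decay(hours=6)
print(slug.strength)     # persists
```

Nothing in this sketch is "about" the stimulus, which is exactly the moral Rosenberg wants us to draw.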
The process in your brain was similar when you learned to recognize your mother’s face. Rosenberg cites an experiment in which researchers were able to temporarily disable the neurons that allowed their subject to recognize her mother’s face. Since she still recognized her mother’s voice, she couldn’t understand why this stranger sounded just like her mother.
Rosenberg concludes that there is nothing in our brains that is “about” anything:
None of these sets of circuits are about anything…. The small sets of specialized input/output circuits that respond to your mom’s face, as well as the large set that responds to your mom [in different ways], are no different from millions of other such sets in your brain, except in one way: they respond to a distinct electrical input with a distinct electrical output…. That’s why they are not about anything. Piling up a lot of neural circuits that are not about anything at all can’t turn them into a thought about stuff out there in the world [184].
Of course, how the activation of neural circuits in our brains results in conscious thoughts that seem to be “about” something remains a mystery. Today, for no apparent reason, I had a brief thought about Roxy Music, the 70s rock band. Maybe something in my environment (which happened to be a parking lot) triggered that particular mental response. Or maybe there was some seemingly random electrical activity in my brain that suddenly made Roxy Music come to mind.
I still don’t see why we should deny that my thought this afternoon was about Roxy Music, even if the neural mechanics involved were quite simple at the cellular level. If some of my neurons will lead me to answer “Roxy Music” when I’m asked what group Bryan Ferry was in, or will get me to think of Roxy Music once in a while, perhaps we should accept the fact that there are arrangements of neurons in my head that are about Roxy Music.
Philosophers use the term “intentionality” instead of “aboutness”. They’ve been trying to understand intentionality for a long time. How can one thing be “about” another thing? Rosenberg seems to agree that intentionality is mysterious. He also thinks it’s an illusion. Maybe he’s right. In the last part of chapter 8, he brings computer science into the discussion. That’s a topic that will have to wait for another time.