Unrelated to the Republican Party’s attack on democracy (it truly is a crisis), the New Yorker has a somewhat unbelievable article about the progress being made using machines and artificial intelligence to read people’s minds. Some excerpts:
During the past few decades, the state of neuroscientific mind reading has advanced substantially. Cognitive psychologists armed with a functional magnetic resonance imaging (fMRI) machine can tell whether a person is having depressive thoughts; they can see which concepts a student has mastered by comparing his brain patterns with those of his teacher. By analyzing brain scans, a computer system can edit together crude reconstructions of movie clips you've watched. One research group has used similar technology to accurately describe the dreams of sleeping subjects. In another lab, scientists have scanned the brains of people who are reading the J. D. Salinger short story "Pretty Mouth and Green My Eyes," in which it is unclear until the end whether or not a character is having an affair. From brain scans alone, the researchers can tell which interpretation readers are leaning toward, and watch as they change their minds.
fMRI machines [haven't] advanced that much; instead, artificial intelligence has transformed how scientists read neural data.
[Ken Norman of the Princeton Neuroscience Institute explains that] researchers . . . developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense "meaning space." They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. "The space of possible thoughts that people can think is big, but it's not infinitely big," Norman said. A detailed map of the concepts in our minds might soon be within reach.
Norman invited me to watch an experiment in thought decoding. [In] a locked basement lab at P.N.I., a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest.
"We want to get the brain patterns that are associated with different subclasses of scenes," Norman said.
As the woman watched the slide show, the scanner tracked patterns of activation among her neurons. These patterns would be analyzed in terms of "voxels," areas of activation that are roughly a cubic millimetre in size. In some ways, the fMRI data was extremely coarse: each voxel represented the oxygen consumption of about a million neurons, and could be updated only every few seconds, significantly more slowly than neurons fire. But, Norman said, "it turned out that that information was in the data we were collecting; we just weren't being as smart as we possibly could about how we'd churn through that data." The breakthrough came when researchers figured out how to track patterns playing out across tens of thousands of voxels at a time, as though each were a key on a piano, and thoughts were chords.
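To make the piano-chord metaphor concrete: in this kind of multivoxel pattern analysis, each scan is treated as one long vector of voxel activations, and a classifier learns which patterns go with which stimuli. Here is a minimal sketch (my illustration, not the article's) with invented data and labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Invented data: 200 scans, each a vector of 10,000 voxel activations.
# In a real study these rows would come from preprocessed fMRI volumes.
rng = np.random.default_rng(0)
scans = rng.standard_normal((200, 10_000))  # one "chord" per row
labels = rng.integers(0, 3, size=200)       # 0=beach, 1=cave, 2=forest

# The decoder reads thousands of voxels jointly; the information lives
# in the overall pattern, not in any single voxel.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, scans, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```

With random numbers the accuracy sits at chance (about 0.33); with real scans, above-chance accuracy is the evidence that the voxel pattern carries the stimulus.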
The origins of this approach, I learned, dated back nearly seventy years, to the work of a psychologist named Charles Osgood. When he was a kid, Osgood received a copy of Roget's Thesaurus as a gift. Poring over the book, Osgood recalled, he formed a "vivid image of words as clusters of starlike points in an immense space." In his postgraduate days, when his colleagues were debating how cognition could be shaped by culture, Osgood thought back on this image. He wondered if, using the idea of "semantic space," it might be possible to map the differences among various styles of thinking.
Osgood became known not for the results of his surveys but for the method he invented to analyze them. He began by arranging his data in an imaginary space with fifty dimensions: one for fair-unfair, a second for hot-cold, a third for fragrant-foul, and so on. Any given concept, like tornado, had a rating on each dimension and, therefore, was situated in what was known as high-dimensional space. Many concepts had similar locations on multiple axes: kind-cruel and honest-dishonest, for instance. Osgood combined these dimensions. Then he looked for new similarities, and combined dimensions again, in a process called "factor analysis."
When you reduce a sauce, you meld and deepen the essential flavors. Osgood did something similar with factor analysis. Eventually, he was able to map all the concepts onto a space with just three dimensions. The first dimension was "evaluative": a blend of scales like good-bad, beautiful-ugly, and kind-cruel. The second had to do with "potency": it consolidated scales like large-small and strong-weak. The third measured how "active" or "passive" a concept was. Osgood could use these three key factors to locate any concept in an abstract space. Ideas with similar coördinates, he argued, were neighbors in meaning.
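A toy version of that reduction (the concepts, scales, and ratings below are invented, and scikit-learn's factor analysis stands in for Osgood's hand-computed one):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Invented ratings: six concepts scored 1-7 on five bipolar scales.
# Columns: good-bad, kind-cruel, strong-weak, large-small, active-passive.
ratings = np.array([
    [6, 6, 3, 2, 4],   # puppy
    [2, 1, 6, 6, 6],   # tornado
    [6, 5, 5, 4, 2],   # mother
    [1, 2, 2, 2, 5],   # mosquito
    [5, 4, 6, 6, 3],   # mountain
    [3, 3, 1, 1, 6],   # spark
])

# Correlated scales (good-bad tracks kind-cruel; strong-weak tracks
# large-small) collapse into three latent factors, echoing Osgood's
# evaluative / potency / activity axes.
fa = FactorAnalysis(n_components=3, random_state=0)
coords = fa.fit_transform(ratings)
print(coords.round(2))  # each concept as a point in a 3-D semantic space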
[Researchers at Bell Labs] used computers to analyze the words in about two thousand technical reports. The reports themselves, on topics ranging from graph theory to user-interface design, suggested the dimensions of the space; when multiple reports used similar groups of words, their dimensions could be combined. In the end, the Bell Labs researchers made a space that was more complex than Osgood's. It had a few hundred dimensions. Many of these dimensions described abstract or "latent" qualities that the words had in common: connections that wouldn't be apparent to most English speakers. The researchers called their technique "latent semantic analysis," or L.S.A.
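Under the hood, L.S.A. is a truncated singular-value decomposition of a word-by-document matrix. A minimal sketch, with made-up one-line stand-ins for the Bell Labs reports:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Made-up stand-ins for the two thousand technical reports.
docs = [
    "graph theory and shortest path algorithms",
    "shortest path algorithms for routing networks",
    "user interface design and screen layout",
    "screen layout choices in interface design",
]

# Rows of X are documents, columns are words; the SVD finds the latent
# dimensions along which documents use similar groups of words.
X = TfidfVectorizer().fit_transform(docs)
doc_vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(doc_vectors.round(2))  # the two algorithms reports cluster; so do the two interface ones
```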
In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google's algorithm turned each word into a "vector," or point, in high-dimensional space. The vectors generated by the researchers' program, word2vec, are eerily accurate: if you take the vector for "king" and subtract the vector for "man," then add the vector for "woman," the closest nearby vector is "queen." Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail.
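You can reproduce the king/queen arithmetic yourself with the gensim library and pretrained vectors; this sketch assumes gensim's downloadable "glove-wiki-gigaword-100" set (GloVe is a cousin of word2vec, and the vector arithmetic works the same way):

```python
import gensim.downloader as api

# Downloads ~130 MB of pretrained word vectors on first use.
vectors = api.load("glove-wiki-gigaword-100")

# king - man + woman: the closest remaining vector should be "queen".
result = vectors.most_similar(positive=["king", "woman"],
                              negative=["man"], topn=1)
print(result)  # e.g. [('queen', 0.77)]
```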
Other companies, including Apple and Amazon, built similar systems. Eventually, researchers realized that the "vectorization" made popular by L.S.A. and word2vec could be used to map all sorts of things. Today's facial-recognition systems have dimensions that represent the length of the nose and the curl of the lips, and faces are described using a string of coördinates in "face space." Chess A.I.s use a similar trick to "vectorize" positions on the board. The technique has become so central to the field of artificial intelligence that, in 2017, a new, hundred-and-thirty-five-million-dollar A.I. research center in Toronto was named the Vector Institute. Matthew Botvinick, a professor at Princeton whose lab was across the hall from Norman's, and who is now the head of neuroscience at DeepMind, Alphabet's A.I. subsidiary, told me that distilling relevant similarities and differences into vectors was "the secret sauce underlying all of these A.I. advances." . . .
In 2001, a scientist named Jim Haxby brought machine learning to brain imaging: he realized that voxels of neural activity could serve as dimensions in a kind of thought space. Haxby went on to work at Princeton, where he collaborated with Norman. The two scientists, together with other researchers, concluded that just a few hundred dimensions were sufficient to capture the shades of similarity and difference in most fMRI data. At the Princeton lab, the young woman watched the slide show in the scanner. With each new image (beach, cave, forest) her neurons fired in a new pattern. These patterns would be recorded as voxels, then processed by software and transformed into vectors. The images had been chosen because their vectors would end up far apart from one another: they were good landmarks for making a map. Watching the images, my mind was taking a trip through thought space, too.
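One way to read "good landmarks for making a map": a new scan can be decoded by asking which landmark vector it lies closest to. A nearest-landmark sketch with invented vectors, using cosine similarity as a stand-in for the lab's actual pipeline:

```python
import numpy as np

# Invented landmarks: the average pattern vector evoked by each image
# class, reduced (as in the article) to a few hundred dimensions.
rng = np.random.default_rng(1)
landmarks = {name: rng.standard_normal(300)
             for name in ("beach", "cave", "forest")}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def decode(scan_vector):
    """Label a scan with the landmark its vector lies closest to."""
    return max(landmarks, key=lambda name: cosine(scan_vector, landmarks[name]))

# A scan resembling the cave pattern, plus noise, decodes as "cave".
noisy_cave = landmarks["cave"] + 0.5 * rng.standard_normal(300)
print(decode(noisy_cave))  # cave
```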
The larger goal of thought decoding is to understand how our brains mirror the world. To this end, researchers have sought to watch as the same experiences affect many people's minds simultaneously. Norman told me that his Princeton colleague Uri Hasson has found movies especially useful in this regard. They "pull people's brains through thought space in synch," Norman said. "What makes Alfred Hitchcock the master of suspense is that all the people who are watching the movie are having their brains yanked in unison. It's like mind control in the literal sense." . . .
Norman described another study, by Asieh Zadbood, in which subjects were asked to narrate "Sherlock" scenes (which they had watched earlier) aloud. The audio was played to a second group, who'd never seen the show. It turned out that no matter whether someone watched a scene, described it, or heard about it, the same voxel patterns recurred. The scenes existed independently of the show, as concepts in people's minds. . . .
Recently, I asked [neuroscientist Adrian Owen] what the new thought-decoding technology means for locked-in patients [who are alive but unable to move or even blink]. [A "bare-bones protocol" is used: for example, the patient is asked to think about tennis, and when the patient does so, it means "yes," while thinking about walking around the house equals "no." Then the patient can answer yes-no questions like "Is the pain in the lower half of your body? On the left side?"] Owen said, "I have no doubt that, some point down the line, we will be able to read minds. People will be able to articulate, 'My name is Adrian, and I'm British,' and we'll be able to decode that from their brain. I don't think it's going to happen in probably less than twenty years."
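Computationally, that bare-bones protocol is just a two-class decoder with its outputs relabeled. A sketch of the mapping, assuming a classifier like the one sketched earlier has been trained to tell tennis imagery from house imagery (the classify function below is a placeholder, not a real decoder):

```python
# Placeholder for a trained two-class decoder; a real one would be a
# classifier fit on the patient's own "tennis" and "house" calibration scans.
def classify(scan):
    return "tennis" if sum(scan) > 0 else "house"  # stand-in rule only

# The protocol maps imagery classes onto answers.
ANSWER = {"tennis": "yes", "house": "no"}

def ask(question, scan):
    print(question)
    return ANSWER[classify(scan)]

print(ask("Is the pain in the lower half of your body?", [0.3, -0.1, 0.5]))
# -> prints the question, then "yes" (the scan decodes as tennis imagery)
```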
In some ways, the story of thought decoding is reminiscent of the history of our understanding of the gene. For about a hundred years after the publication of Charles Darwin's "On the Origin of Species," in 1859, the gene was an abstraction, understood only as something through which traits passed from parent to child. As late as the nineteen-fifties, biologists were still asking what, exactly, a gene was made of. When James Watson and Francis Crick finally found the double helix, in 1953, it became clear how genes took physical form. Fifty years later, we could sequence the human genome; today, we can edit it.
Thoughts have been an abstraction for far longer. But now we know what they really are: patterns of neural activation that correspond to points in meaning space. The mind, the only truly private place, has become inspectable from the outside.
Unquote.
Ludwig Wittgenstein, among others, argued strenuously that the mind isn’t private at all. We’re all very good at understanding what other people are thinking and talking about. But he’d be amazed at what neuroscientists are able to do.