I finally got around to watching Her, also known as “that movie where the guy falls in love with his computer”.

It was like being trapped in a futuristic greeting card. Which doesn’t mean it’s a bad movie. It’s an excellent movie, but not easy to watch. It’s disturbing. And also provocative.

Theodore lives in downtown Los Angeles. It’s the near future, one that is amazingly pleasant. Future L.A. is extremely clean, with lots of big, shiny buildings and terrific mass transit, but seemingly uncrowded. Theodore has a job in a beautiful office writing very personal letters for people who can’t express their feelings as well as he can.

But Theodore is lonely and depressed. He’s going through a divorce and avoiding people. One day, he hears about a new, artificially intelligent computer program, brilliantly designed to tailor itself to the customer’s needs. Theodore assigns it a female voice, after which it gives itself the name “Samantha”.

It’s easy to understand how Theodore falls in love with Samantha. It’s intuitive and funny and loving, a wonderful companion that’s constantly evolving. Besides, it does a great job handling Theodore’s email and calendar.

Complications eventually ensue, of course, but in the meantime, Theodore and Samantha get to know each other, spending lots of time expressing their deeply sensitive feelings. It’s very New Age-ish, although the two of them can’t give each other massages and can’t go beyond what amounts to really good phone sex.

Watching Her, you are immersed in a loving but cloying relationship in which one of the entities involved expresses lots of feelings but doesn’t actually have any. That’s my opinion, of course, because some people think a sufficiently complex machine with really good programming will one day become conscious and have feelings, not just express them. 

Maybe that’s true, but I still lean toward the position that in order to feel anything the way living organisms do, whether the heat of the sun or an emotion like excitement, you need to be built like a living organism. A set of programming instructions, running on a computer, even if connected to visual and auditory sensors, won’t have feelings because it can’t really feel.

Although the movie is built on the dubious premise that Samantha can always say the right thing, appropriately displaying joy, sorrow or impatience, perfectly responding to whatever Theodore says and anticipating all of his emotional needs, there is no there there. 

I don’t mean to suggest that Theodore is wrong to cherish Samantha. It’s an amazing product. But when he and it are together, he’s still alone. He’s enjoying the ultimate long-distance relationship.

10 thoughts

  1. I think people often feel this way about their iWhatevers (speaking of phone sex, which now has a new and much more disappointing meaning for me).

    Great synopsis.

    I’ve been wanting to watch this movie to see how they handle the consciousness issue. It seems pretty obvious a priori that no robot can be human, no matter how well it mimics our behavior, but the conversation about it could be interesting. Throughout the centuries we’ve struggled to define what it means to be human, and this usually meant differentiating ourselves from animals, but now we’ve got a new kid on the block.

    • Thanks. I had to keep reminding myself that Samantha wasn’t human. Especially with Scarlett Johansson’s voice, I kind of wanted her to be “real” in that way. As the program points out in the movie, it’s definitely a complex individual. The more the story progresses, the easier it is to think of it as a person, although not at all human.

  2. Definitely a good movie, I thought the direction and cinematography were remarkable in creating a tangible melancholia. I also had a similar reaction to Scarlett Johansson’s voice — I had to remind myself that she was a computer program on his phone, and not another human being merely speaking to him over the phone. I saw this tendency as indicative of why people find the Turing test so plausible as a measure of intelligence. Surely, “Samantha” could pass the Turing test, and we’d probably like to say that she exhibits intelligence. I’m skeptical of the latter point, as you seem to be, due to Searle’s Chinese Room and Ned Block’s adaptations, but I think the movie shows why people do forsake these thought experiments for the conclusion that the Turing test measures intelligence.

    On the other hand, I thought the truly interesting depiction of the future provided by the movie was that humans will become so disengaged from and uninterested in reality due to technological conveniences that our appliances will be more excited than we are about living and having experiences. Of course, to be capable of having such excitement for the future and for having experiences might be good evidence of intelligence. Then we must remind ourselves that it is a work of fiction, and that such a scenario seems unlikely.

    • I wouldn’t deny that Samantha is extremely intelligent. To deny that would be like denying that it speaks excellent English and exhibits a wonderfully charming personality. I also thought one of the most interesting features of the movie was how it and the other versions of the program develop as the movie progresses. I didn’t foresee what happens at the end (maybe all the humans asked for their money back and put the software company out of business?).

      But as you say, it’s fiction. There seems to be a contradiction between Samantha’s astounding abilities and the setting, which looks like the relatively near future (given the clothing, decor, music, etc. and those unimpressive computer games the people enjoy so much). So there must have been some kind of amazing breakthrough in artificial intelligence for something like Samantha to exist. Putting that aside, at some point in the movie I found myself admitting that we humans had become second-class citizens. The analogy isn’t perfect, but it’s as if the genie has gotten out of the bottle.

  3. I agree with rung2diotimasladder; a great synopsis!

    I’m someone who does think a machine will eventually be able to be conscious. In my view, it requires that we find the correct architecture (of which, at best, we currently have only the earliest inklings). But I found that the movie smoothly, imperceptibly, lumped consciousness in with a self-actualizing agenda. I don’t think that either consciousness or a self-actualization instinct would happen by accident, or that one necessarily implies the other.

    But on machine consciousness, I seem to be outside the group consensus here. Given that, I’m curious what Larry or any of the commenters would see as the fundamental difference between us feeling the heat of the sun and a machine detecting it with a high-resolution sensor (which admittedly doesn’t exist yet)? Or between following complex programming directives and following instincts?

    Certainly I agree that a machine couldn’t have a human’s experience unless it was very similar in form to a human (physically or virtually), but I can’t see why it wouldn’t have its own experience, just as a bat or an octopus has its own experience.

    • I agree with you that the move toward self-actualization was surprising, but I don’t think the filmmakers necessarily equated consciousness and self-actualization. Perhaps the self-actualization was “simply” a by-product of the machine’s increasing knowledge and its apparent ability to adjust its own programming? Ordinarily, I’d agree that the desire to self-actualize is separate in some way from mere knowledge of the possibility of self-actualization, but I was willing to give the movie the benefit of the doubt on that question. I suppose the question is whether expertly programmed machines can sometimes rise above their own programming, which seems possible in a sense (maybe there is a fundamental level of programming that couldn’t change but higher levels that could?). But in one sense, if a machine is programmed to rise above its programming, it’s just doing what it was programmed to do!

      Your other question is a tough one. I definitely think there’s a difference between a thermometer noting the temperature and a human being starting to perspire and thinking “wow, it’s really hot”. There’s clearly a stimulus/response phenomenon in both cases, but in the human case, there’s that conscious “what it is to feel hot” phenomenon as well. But I don’t know how to explain the difference any better than that.

      So, assuming we agree that there’s something more going on in the human case, will machines eventually have their own kind of awareness, something that is similar to (in some sense) what complex organisms like humans have?

      We may not be in any disagreement on that question. My guess is that consciousness arises given a certain kind of physical structure or architecture, as you say. So constructing a machine that has the right kind of structure (which we don’t yet understand very well) may be enough to give it consciousness too. As many have pointed out, unless we adopt the old view that people have souls and that’s why they’re conscious, future technologists should be able to make machines, even out of other materials, that are built like and therefore work like people.

      To take that idea further, maybe there will be even better ways to build consciousness into machines of the future, using a very different architecture from what organisms have. There’s no reason to rule that possibility out (unless you think consciousness has something to do with having a soul). Knowing that consciousness has been achieved in this way, however, might be very difficult. It’s even worse than the Other Minds problem. Could we trust the machine to be correct when it said it was conscious? But perhaps there is some feature of our consciousness that will be detectable from a third-person perspective, and scientists will be able to see that same feature occurring in the machines. That’s probably the best evidence we could ever have.

      What makes me skeptical about machine consciousness, however, as it was shown in the movie, for example, is that it seems to require a significant leap to go from building a machine that’s extremely smart and talented like Samantha to making one that’s conscious. The hardware/software in even the best computers we’ve got now is so different from what’s in the only entities that apparently have consciousness (like us and octopuses) that I find it difficult to believe such machines are or ever will be conscious. I don’t know what Samantha was running on in the movie, but given that it was the relatively near future, I imagine the computer architecture wasn’t all that different from what we’ve got now, even if they were using quantum computers or massively parallel processing.

      I admit, of course, that machines might one day be close enough to our architecture, or there may be some other kind of breakthrough that we can’t imagine now, but until someone can explain why we and other animals are conscious, and how programs running on computers can be conscious too, I’ll probably remain skeptical. So I’m not sure we really disagree.

    • PS — One might argue that Samantha can’t be as human-like in its behavior unless it’s conscious (it can’t respond like a human unless it feels things like a human), but I think that’s a mistake. It seems possible for a computer (a mechanical zombie) to act as if it’s conscious even though it’s not. Although I admit it would be hard to insist to Samantha that “she” isn’t conscious when “she” strongly claims that “she” is! I understood Theodore’s pain in the movie when he doubted “her” consciousness.

      • I appreciate the thoughtful response. It does sound like we mostly agree. Certainly current computers are still a ways off from exhibiting any kind of conscious experience. This isn’t necessarily due to a lack of computing power since there are now supercomputers with the processing power of a human brain (or at least the estimated processing power). What’s missing is that architecture we’ve both mentioned. In this regard, I think Michael Graziano’s attention schema theory holds a good deal of promise.

        I think you’re right about the zombie part. I used to think p-zombies were obviously impossible, and I still think the idea of a neurological one is, but I’m open to the existence of a behavioral one, and Samantha could fit the bill. The question is, what would be easier to program, an AI that pretends to be conscious, or an AI that is conscious?

        Also, I should have mentioned that I agree with your statement above that, regardless of whether or not she was conscious, Samantha had intelligence. Even the laptop I’m typing this on has some degree of intelligence.

        Thanks on the name. It’s a moniker I used on comment threads for a couple of years before the blog. It refers to us being patterns of cells, molecules, atoms, elementary particles, strings, branes, or whatever the ultimate building blocks of reality are. (Assuming reality isn’t structure all the way down.)

        • I hadn’t heard that there are supercomputers now able to match the estimated processing power of the human brain. Maybe you were referring to the simulation run on the Fujitsu K computer last year? At least I don’t remember reading about this before (so much for the performance of my human brain). Here’s part of an article on the subject from January:

          “The most accurate simulation of the human brain to date has been carried out in a Japanese supercomputer, with a single second’s worth of activity from just one per cent of the complex organ taking one of the world’s most powerful supercomputers 40 minutes to calculate.”

          “It used the open-source Neural Simulation Technology (NEST) tool to replicate a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses.
          While significant in size, the simulated network represented just one per cent of the neuronal network in the human brain.”

          “[This] achievement offers neuroscientists a glimpse of what can be achieved by using the next generation of computers – so-called exascale computing.”

          “Exascale computers are those which can carry out a quintillion floating point operations per second, which is an important milestone in computing as it is thought to be the same power as a human brain and therefore opens the door to potential real-time simulation of the organ’s activity.”

          “Currently there is no computer in existence that powerful, but Intel has said that it aims to have such a machine in operation by 2018.”

          I gather that an exascale computer would, by definition, be 1,000 times as powerful as a petascale one; since the K computer peaks at roughly 10 petaflops, an exascale machine would be roughly 100 times as powerful as it.
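
          As a back-of-the-envelope check, the article’s own figures suggest how far away real-time whole-brain simulation is. This sketch uses only the numbers quoted above, all of them approximate, and naively assumes that simulation cost scales linearly with network size:

```python
# Figures from the quoted article (all approximate).
simulated_seconds = 1.0        # one second of brain activity was simulated
wall_clock_seconds = 40 * 60   # ...and it took the K computer 40 minutes
brain_fraction = 0.01          # only 1% of the brain's network was modeled

# Slowdown relative to real time for that 1% slice:
slowdown = wall_clock_seconds / simulated_seconds   # 2400x slower than real time

# Naive linear scaling to the full brain:
speedup_needed = slowdown / brain_fraction
print(f"Real-time whole-brain simulation needs ~{speedup_needed:,.0f}x the K computer")
```

          By that naive scaling, the required speedup is on the order of 240,000x, which may be why the article hedges with “potential” real-time simulation; of course, the true scaling is unlikely to be linear, so this is only an order-of-magnitude sketch.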

          • I’ve read about those emulations, but my statement comes from Michael Graziano in his book “Consciousness and the Social Brain”, where he says that some supercomputers are already there. But they’re obviously not anywhere near there in practice if they have to emulate a physical brain.

            It actually takes far more processing power to emulate a brain than it does to just do your own processing natively. If each synapse in the brain counts as roughly a byte, then a brain has about a petabyte of memory, and functions as a massive but pokey parallel processing cluster. Of course, the brain isn’t a digital processor but an analog one, and a synapse scales smoothly in strength and is affected by the action potential of its axon neuron, so there are a lot of unknowns here.
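
            The petabyte figure can be sanity-checked against the synapse counts quoted earlier. This is only a sketch under the stated (and crude) one-byte-per-synapse assumption, since, as noted, real synapses store graded analog strengths:

```python
# Crude estimate of brain "memory" from the figures quoted earlier.
# Assumptions: the article's 10.4 trillion synapses for 1% of the brain,
# and (very roughly) one byte of storage per synapse.
synapses_in_one_percent = 10.4e12
total_synapses = synapses_in_one_percent / 0.01   # ~1.04e15 synapses

bytes_per_synapse = 1
total_bytes = total_synapses * bytes_per_synapse

PETABYTE = 1e15
print(f"~{total_bytes / PETABYTE:.2f} PB")   # on the order of one petabyte
```

            So a byte-per-synapse count does land at roughly a petabyte, consistent with the estimate above.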
