It

I finally got around to watching Her, also known as “that movie where the guy falls in love with his computer”.

It was like being trapped in a futuristic greeting card. Which doesn’t mean it’s a bad movie. It’s an excellent movie, but not an easy one to watch. It’s disturbing. And provocative.

Theodore lives in downtown Los Angeles. It’s the near future, one that is amazingly pleasant. Future L.A. is extremely clean, with lots of big, shiny buildings and terrific mass transit, but seemingly uncrowded. Theodore has a job in a beautiful office writing very personal letters for people who can’t express their feelings as well as he can.

But Theodore is lonely and depressed. He’s going through a divorce and avoiding people. One day, he hears about a new, artificially intelligent computer program, brilliantly designed to tailor itself to the customer’s needs. Theodore assigns it a female voice, after which it gives itself the name “Samantha”.

It’s easy to understand how Theodore falls in love with Samantha. It’s intuitive and funny and loving, a wonderful companion that’s constantly evolving. Besides, it does a great job handling Theodore’s email and calendar.

Complications eventually ensue, of course, but in the meantime, Theodore and Samantha get to know each other, spending lots of time expressing their deeply sensitive feelings. It’s very New Age-ish, although the two of them can’t give each other massages and can’t go beyond what amounts to really good phone sex.

Watching Her, you are immersed in a loving but cloying relationship in which one of the entities involved expresses lots of feelings but doesn’t actually have any. That’s my opinion, of course; some people think a sufficiently complex machine with really good programming will one day become conscious and have feelings, not just express them.

Maybe that’s true, but I still lean toward the position that in order to feel anything the way living organisms do, whether the heat of the sun or an emotion like excitement, you need to be built like a living organism. A set of programming instructions, running on a computer, even if connected to visual and auditory sensors, won’t have feelings because it can’t really feel.

The movie is built on the dubious premise that Samantha can always say the right thing, appropriately displaying joy, sorrow or impatience, perfectly responding to whatever Theodore says and anticipating all of his emotional needs. But even granting that premise, there is no there there.

I don’t mean to suggest that Theodore is wrong to cherish Samantha. It’s an amazing product. But when he and it are together, he’s still alone. He’s enjoying the ultimate long-distance relationship.

Isaac Asimov Meets the Terminator and Guess Who Wins

According to The Atlantic, the Pentagon is going to award $7.5 million for research on how to teach ethics to robots. The idea is that robots might (or will) one day be in situations that demand ethical decision-making. For example, if a robot is on a mission to deliver ammunition to troops on the battlefield but encounters a wounded soldier along the way, should the robot delay its mission in order to take the wounded soldier to safety? Or risk the deaths of the soldiers who need that ammunition?

Since philosophers are still arguing about what ethical rules we should follow, and ethical questions don’t always have correct answers anyway, futuristic battlefield robots may need a coin-flipping module. That way they won’t come to a halt, emit clouds of smoke and announce “Does not compute!” over and over.
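Just for fun, here’s what that coin-flipping fallback might look like. This is a whimsical sketch, not anything the Pentagon has commissioned: the option names and scores are invented, and the only real point is that a tie falls through to a random pick instead of a smoke-belching halt.

```python
import random

# Hypothetical "ethics module" with a coin-flipping fallback.
# The options and scores below are invented for illustration.
OPTIONS = {
    "continue_ammo_delivery": 0.5,    # the original mission
    "evacuate_wounded_soldier": 0.5,  # the dilemma encountered en route
}

def decide(options):
    """Return the highest-scoring option, flipping a coin on ties."""
    best = max(options.values())
    tied = [name for name, score in options.items() if score == best]
    # A genuine dilemma "does not compute" -- fall back to chance
    # rather than freezing mid-mission.
    return random.choice(tied) if len(tied) > 1 else tied[0]

print(decide(OPTIONS))  # prints either option, chosen at random
```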

Of course, the talented software developers who program these robots with a sense of right and wrong will avoid error processing as poor as the smoke-belching halt (presumably, they’ll have seen Star Trek too, so they’ll know what situations to code for). The big question isn’t whether robots can eventually be programmed to make life-and-death decisions, but whether we should put robots in situations that require that kind of decision-making.

[Image: The Day the Earth Stood Still]

Fortunately, Pentagon policy currently prohibits letting robots decide who to kill. Human beings still have that responsibility. However, the Pentagon’s policy can be changed without the approval of the President, the Secretary of Defense or Congress. And although a U.N. official recently called for a moratorium on “lethal autonomous robotics”, it’s doubtful that even a temporary ban will be enacted. It’s even more doubtful that the world leader in military technology and the use thereof would honor such a ban if one were.

After all, most politicians will prefer putting robots at risk on the battlefield rather than men and women, even if that means the robots occasionally screw up and kill the wrong men, women and children. And, of course, once the politicians and generals think the robots are ready, they’ll find it much easier to unleash the (automated and autonomous) dogs of war.

(PS – The actual quote, from Shakespeare’s Julius Caesar, is “‘Cry Havoc!’, and let slip the dogs of war”. Serves me right for trying to be a bit poetic.)