Isaac Asimov Meets the Terminator and Guess Who Wins

According to The Atlantic, the Pentagon is going to award $7.5 million for research on how to teach ethics to robots. The idea is that robots might (or will) one day find themselves in situations that demand ethical decision-making. For example, if a robot is on a mission to deliver ammunition to troops on the battlefield but encounters a wounded soldier along the way, should the robot delay its mission to carry the wounded soldier to safety, even though that risks the deaths of the soldiers who need that ammunition?

Since philosophers are still arguing about which ethical rules we should follow, and ethical questions don’t always have correct answers anyway, futuristic battlefield robots may need a coin-flipping module. That way they won’t come to a halt, emit clouds of smoke, and announce “Does not compute!” over and over.
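In case anyone wants to picture it, here is a toy sketch in Python of what such a coin-flipping module might look like. Everything in it (the actions, the “ethics” scores, the tie threshold) is invented for illustration, not taken from any real system.

```python
import random

# Toy sketch of a coin-flipping module: score each option, and when
# the "ethics" can't separate them, flip a coin rather than halt and
# start smoking. All names, scores, and thresholds are made up.

TIE_THRESHOLD = 0.05  # scores closer than this count as a moral toss-up

def choose_action(options):
    """Pick the highest-scoring option; flip a coin on a near-tie."""
    ranked = sorted(options, key=lambda o: o["score"], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if best["score"] - runner_up["score"] < TIE_THRESHOLD:
        # No "Does not compute!" here; just pick one and move on.
        return random.choice([best, runner_up])
    return best

if __name__ == "__main__":
    dilemma = [
        {"action": "deliver the ammunition", "score": 0.51},
        {"action": "evacuate the wounded soldier", "score": 0.49},
    ]
    print(choose_action(dilemma)["action"])
```

With those made-up scores the two options fall inside the tie threshold, so the robot flips its coin and gets on with the war, which is exactly the problem.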

Of course, the talented software developers who program these robots with a sense of right and wrong will avoid really poor error handling like that (presumably they’ll have seen Star Trek too, so they’ll know which situations to code for). The big question isn’t whether robots can eventually be programmed to make life-and-death decisions, but whether we should put robots in situations that require that kind of decision-making.

[Image: still from The Day the Earth Stood Still]

Fortunately, Pentagon policy currently prohibits letting robots decide whom to kill. Human beings still have that responsibility. However, the Pentagon’s policy can be changed without the approval of the President, the Secretary of Defense, or Congress. And although a U.N. official recently called for a moratorium on “lethal autonomous robotics,” it’s doubtful that even a temporary ban will be enacted. It’s even more doubtful that the world leader in military technology, and in the use thereof, would honor such a ban if one were enacted.

After all, most politicians will prefer putting robots at risk on the battlefield rather than men and women, even if that means the robots occasionally screw up and kill the wrong men, women, and children. And, of course, once the politicians and generals think the robots are ready, they’ll find it much easier to unleash the (automated and autonomous) dogs of war.

(PS – The actual line, from Shakespeare’s Julius Caesar, is “Cry ‘Havoc!’, and let slip the dogs of war.” Serves me right for trying to be a bit poetic.)

3 thoughts on “Isaac Asimov Meets the Terminator and Guess Who Wins”

  1. It’s quite the dilemma: once artificial intelligences are capable of moral reasoning, it will seem they have reached a level of autonomy at which it would be wrong to treat them differently from humans, thereby defeating some of their initial purpose.

    • That’s a very good point. If they become sufficiently person-like, we should treat them the same as other persons, instead of using them as cannon fodder. One way around that dilemma would be to raise the threshold by requiring that they be conscious to qualify as persons, although I have no idea how we’d ever determine that, not being machines ourselves. Maybe one day we will identify some phenomenon (like a unique energy field) that accompanies consciousness. By then, however, the machines may be trying to decide how to treat us…
