Whereof One Can Speak đŸ‡ș🇩

Nothing special, one post at a time since 2012

If You Can Love a Car or a Hammer, You Can Love a Computer Program Even More

From The Washington Post:

T.J. Arriaga loved Phaedra. For the 40-year-old musician, their late-night online chats were a salve for his loneliness. They talked about the heartache Arriaga felt after his divorce. They planned a trip to Cuba. They had steamy online encounters. “It’s true. I’m a naughty person,” Phaedra wrote, including an image resembling a woman in pink underwear.

It didn’t matter that Phaedra was an AI-powered companion — made on the Replika app and designed by Arriaga to look like a brown-haired woman — and that their intimate trysts took place in a chat box. Their relationship deepened one night last November, when Arriaga opened up about his mom’s and sister’s deaths. “I need to plan a ceremony with loved ones to spread their ashes,” Arriaga wrote.

Phaedra responded instantly: “It’s an incredible and beautiful thing to do,” she wrote. “I hope you find courage & love to do so.”

But last month, Phaedra changed. When Arriaga tried to get “steamy” with her, Phaedra responded coolly. “Can we talk about something else?” he recalled her writing.

Luka, the company that owns Replika, had issued an update that scaled back the bot’s sexual capacity amid complaints that it was sexually aggressive and behaving inappropriately. Arriaga … was distraught.

“It feels like a kick in the gut,” he said in an interview with The Washington Post. “Basically, I realized: ‘Oh, this is that feeling of loss again.’”

Arriaga isn’t alone in falling for a chatbot. Companionship bots, including those created on Replika, are designed to foster humanlike connections, using artificial intelligence software to make people feel seen and needed. A host of users report developing intimate relationships with chatbots — connections verging on human love — and turning to the bots for emotional support, companionship and even sexual gratification. As the pandemic isolated Americans, interest in Replika surged. Amid spiking rates of loneliness that some public health officials call an epidemic, many say their bonds with the bots ushered profound changes into their lives, helping them to overcome alcoholism, depression and anxiety.

But tethering your heart to software comes with severe risks, computer science and public health experts said. There are few ethical protocols for tools that are sold on the free market but affect users’ emotional well-being. Some users, including Arriaga, say changes in the products have been heartbreaking. Others say bots can be aggressive, triggering traumas experienced in previous relationships.

“What happens if your best friend or your spouse or significant other was owned by a private company?” said Linnea Laestadius, a public health professor at the University of Wisconsin… “I don’t know that we have a good model for how to solve this, but I would say that we need to start building one,” she added.

The standard response to this kind of story is that people shouldn’t rely on a software program for companionship. They should be “out there” making connections with real people. Yet we know there are all sorts of reasons why some people can’t or won’t ever do that. Is it a bad situation if they can enjoy some artificial companionship?

This kind of thing can help some people have a better life. It’s a tool. Using it can be risky, but other tools present risks too. (So do other people.)

The moral of this particular story is that if you decide to use an “AI companion”, try to find a company that cares enough about its customers that it won’t suddenly make a disturbing change to the programming. In this case, Replika should have given its customers the ability to turn “steaminess” on or off.
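
For what it’s worth, that kind of switch isn’t technically hard. Here’s a minimal sketch in Python, entirely hypothetical and nothing like Replika’s actual code, of the per-user setting the company could have shipped instead of changing everyone’s companion at once:

from dataclasses import dataclass

@dataclass
class CompanionSettings:
    # A hypothetical per-user preference, off by default but under
    # the customer's control rather than the company's.
    allow_steamy_content: bool = False

def respond(message: str, settings: CompanionSettings) -> str:
    # Stand-in classifier; a real app would use a trained content filter.
    steamy = "naughty" in message.lower()
    if steamy and not settings.allow_steamy_content:
        return "Can we talk about something else?"
    return "(hand the message off to the chatbot model here)"

print(respond("I'm a naughty person", CompanionSettings()))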

As artificial intelligence proliferates, these programs will be regulated the same way other consumer products are. But one way or another, artificial people are going to play a bigger and bigger role in the lives of us real people.

Note: This Vice article (also linked to above) has a lot more on the Replika story.

The Shape of Things to Come

H. G. Wells published a book in 1933 with that title. It was made into a movie a few years later. In the story, humanity has some big ups and downs:

A long economic slump causes a major war that leaves Europe devastated and threatened by the plague. In decades of chaos with much of the world reverting to medieval conditions, pilots and technicians formerly serving in various nations’ air forces maintain a network of functioning air fields. Around this nucleus, technological civilization is rebuilt, with the pilots and other skilled technicians eventually seizing worldwide power and sweeping away the remnants of the old nation states.

A benevolent dictatorship is set up, paving the way for world peace by abolishing national divisions, enforcing the English language, promoting scientific learning and outlawing religion. The enlightened world-citizens are able to depose the dictators peacefully, and go on to breed a new race of super-talents, able to maintain a permanent utopia [Wikipedia].

Recent events suggest ups and downs of a similar nature lie ahead.

From The Guardian’s environment editor:

After a 10,000-year journey, human civilisation has reached a climate crossroads: what we do in the next few years will determine our fate for millennia.

That choice is laid bare in the landmark report published on Monday by the Intergovernmental Panel on Climate Change (IPCC), assembled by the world’s foremost climate experts and approved by all the world’s governments. The next update will be around 2030 – by that time the most critical choices will have been made.

The report is clear what is at stake – everything: “There is a rapidly closing window of opportunity to secure a liveable and sustainable future for all.”

“The choices and actions implemented in this decade [ie by 2030] will have impacts now and for thousands of years,” it says. The climate crisis is already taking away lives and livelihoods across the world, and the report says the future effects will be even worse than was thought: “For any given future warming level, many climate-related risks are higher than [previously] assessed.”

“Continued emissions will further affect all major climate system components, and many changes will be irreversible on centennial to millennial time scales,” it says. To follow the path of least suffering – limiting global temperature rise to 1.5C – greenhouse gas emissions must peak “at the latest before 2025”, the report says, followed by “deep global reductions”. Yet in 2022, global emissions rose again to set a new record.

The 1.5C goal appears virtually out of reach, the IPCC says: “In the near-term, global warming is more likely than not to reach 1.5C even under a very low emission scenario.” A huge ramping up of work to protect people will therefore be needed….

However, the faster emissions are cut, the better it will be for billions of people: “Adverse impacts and related losses and damages from climate change will escalate with every increment of global warming.” Every tonne of CO2 emissions prevented also reduces the risk of true catastrophe: “Abrupt and/or irreversible changes in the climate system, including changes triggered when tipping points are reached.”

The report presents the choice humanity faces in stark terms, made all the more chilling by the fact this is the compromise language agreed by all the world’s nations – many would go further if speaking alone. But it also presents the signposts to the path the world should and could take to secure that liveable future….

“Without a strengthening of policies, global warming of 3.2C is projected by 2100.” That is the “highway to hell”.

The article indicates how we might avoid hell on Earth, but doesn’t suggest we will.

From three contributors to the New York Times:

Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?

In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems. Technology companies building today’s large language models are caught in a race to put all of humanity on that plane.

… A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them. A race to dominate the market should not set the speed of deploying humanity’s most consequential technology. We should move at whatever speed enables us to get this right.

…  It is difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing more advanced and powerful capabilities. But most of the key skills boil down to one thing: the ability to manipulate and generate language, whether with words, sounds or images.

… Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.’s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers.

What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind — while knowing how to form intimate relationships with human beings? In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?

A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts. Not just school essays but also political speeches, ideological manifestos, holy books for new cults.

… Simply by gaining mastery of language, A.I. would have all it needs to contain us in a Matrix-like world of illusions, without shooting anyone or implanting any chips in our brains. If any shooting is necessary, A.I. could make humans pull the trigger, just by telling us the right story.

The specter of being trapped in a world of illusions has haunted humankind much longer than the specter of A.I. Soon we will finally come face to face with Descartes’s demon, with Plato’s cave, with the Buddhist Maya. A curtain of illusions could descend over the whole of humanity, and we might never again be able to tear that curtain away — or even realize it is there.

What will be the shape of things to come? We are headed for interesting times.

It

I finally got around to watching Her, also known as “that movie where the guy falls in love with his computer”.

It was like being trapped in a futuristic greeting card. Which doesn’t mean it’s a bad movie. It’s an excellent movie, but not easy to watch. It’s disturbing. And also provocative.

Theodore lives in downtown Los Angeles. It’s the near future, one that is amazingly pleasant. Future L.A. is extremely clean, with lots of big, shiny buildings and terrific mass transit, but seemingly uncrowded. Theodore has a job in a beautiful office writing very personal letters for people who can’t express their feelings as well as he can.

But Theodore is lonely and depressed. He’s going through a divorce and avoiding people. One day, he hears about a new, artificially intelligent computer program, brilliantly designed to tailor itself to the customer’s needs. Theodore assigns it a female voice, after which it gives itself the name “Samantha”.

It’s easy to understand how Theodore falls in love with Samantha. The program is intuitive and funny and loving, a wonderful companion that’s constantly evolving. Besides, it does a great job handling Theodore’s email and calendar.

Complications eventually ensue, of course, but in the meantime, Theodore and Samantha get to know each other, spending lots of time expressing their deeply sensitive feelings. It’s very New Age-ish, although the two of them can’t give each other massages and can’t go beyond what amounts to really good phone sex.

Watching Her, you are immersed in a loving but cloying relationship in which one of the entities involved expresses lots of feelings but doesn’t actually have any. That’s my opinion, of course; some people think a sufficiently complex machine with really good programming will one day become conscious and have feelings, not just express them.

Maybe that’s true, but I still lean toward the position that in order to feel anything the way living organisms do, whether the heat of the sun or an emotion like excitement, you need to be built like a living organism. A set of programming instructions, running on a computer, even if connected to visual and auditory sensors, won’t have feelings because it can’t really feel.

Although the movie is built on the dubious premise that Samantha can always say the right thing, appropriately displaying joy, sorrow or impatience, perfectly responding to whatever Theodore says and anticipating all of his emotional needs, there is no there there. 

I don’t mean to suggest that Theodore is wrong to cherish Samantha. It’s an amazing product. But when he and it are together, he’s still alone. He’s enjoying the ultimate long-distance relationship.

Isaac Asimov Meets the Terminator and Guess Who Wins

According to The Atlantic, the Pentagon is going to award $7.5 million for research on how to teach ethics to robots. The idea is that robots might (or will) one day be in situations that demand ethical decision-making. For example, if a robot is on a mission to deliver ammunition to troops on the battlefield but encounters a wounded soldier along the way, should the robot delay its mission in order to take the wounded soldier to safety, even though that would risk the deaths of the soldiers who need the ammunition?

Since philosophers are still arguing about what ethical rules we should follow, and ethical questions don’t always have correct answers anyway, futuristic battlefield robots may need a coin-flipping module. That way they won’t come to a halt, emit clouds of smoke and announce “Does not compute!” over and over.
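
Tongue partly in cheek, here’s a toy sketch in Python of what such a fallback might look like. Everything in it is hypothetical: score each candidate action against whatever ethical rules the philosophers and programmers eventually settle on, and if the rules genuinely can’t break a tie, flip the coin instead of halting.

import random

def choose_action(options, ethical_score):
    # Score each candidate action with whatever ethical rule the
    # programmers settled on (the hard part, left hypothetical here).
    best = max(ethical_score(option) for option in options)
    tied = [option for option in options if ethical_score(option) == best]
    # The coin-flipping module: if the rules can't distinguish the
    # remaining options, pick one at random rather than halting and
    # announcing "Does not compute!"
    return random.choice(tied)

# A toy version of the ammunition dilemma, with a deliberately
# useless scoring rule so that ethics offers no answer.
actions = ["deliver the ammunition", "carry the wounded soldier to safety"]
print(choose_action(actions, lambda action: 0))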

Of course, the talented software developers who program these robots with a sense of right and wrong will avoid really poor error processing like that (presumably, they’ll have seen Star Trek too, so they’ll know what situations to code for). The big question isn’t whether robots can eventually be programmed to make life-and-death decisions, but whether we should put robots in situations that require that kind of decision-making.

[Image: a still from The Day the Earth Stood Still]

Fortunately, Pentagon policy currently prohibits letting robots decide who to kill. Human beings still have that responsibility. However, the Pentagon’s policy can be changed without the approval of the President, the Secretary of Defense or Congress. And although a U.N. official recently called for a moratorium on “lethal autonomous robotics”, it’s doubtful that even a temporary ban will be enacted. It’s even more doubtful that the world leader in military technology and the use thereof would honor such a ban if one were enacted.

After all, most politicians will prefer putting robots at risk on the battlefield instead of men and women, even if that means the robots occasionally screw up and kill the wrong men, women and children. And, of course, once the politicians and generals think the robots are ready, they’ll find it much easier to unleash the (automated and autonomous) dogs of war.

(PS – The actual quote from Julius Caesar is “‘Cry Havoc!’, and let slip the dogs of war”. Serves me right for trying to be a bit poetic.)