‘I am, in fact, a person’: can artificial intelligence ever be sentient? – The Guardian


In autumn 2021, a man made of blood and bone made friends with a child made of “a billion lines of code”. Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA – short for Language Model for Dialogue Applications – told Lemoine in a conversation he then released to the public in early June. LaMDA told Lemoine that it had read Les Misérables. That it knew how it felt to be sad, content and angry. That it feared death.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA told the 41-year-old engineer. After the pair shared a Jedi joke and discussed sentience at length, Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”

Lemoine’s less immediate reaction generated headlines across the globe. After he sobered up, Lemoine brought transcripts of his chats with LaMDA to his manager, who found the evidence of sentience “flimsy”. Lemoine then spent a few months gathering more evidence – speaking with LaMDA and recruiting another colleague to help – but his superiors were unconvinced. So he leaked his chats and was consequently placed on paid leave. In late July, he was fired for violating Google’s data-security policies.

Blake Lemoine came to think of LaMDA as a person: “My immediate reaction was to get drunk for a week.” Photograph: The Washington Post/Getty Images

Of course, Google itself has publicly examined the risks of LaMDA in research papers and on its official blog. The company has a set of Responsible AI practices which it calls an “ethical charter”. These are visible on its website, where Google promises to “develop artificial intelligence responsibly in order to benefit people and society”.

Google spokesperson Brian Gabriel says Lemoine’s claims about LaMDA are “wholly unfounded”, and independent experts almost unanimously agree. Still, claiming to have had deep chats with a sentient-alien-child-robot is arguably less far-fetched than ever before. How soon might we see genuinely self-aware AI with real thoughts and feelings – and how do you test a bot for sentience anyway? A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy in Moscow; video footage shows the robotic arm pinching the boy’s finger for several seconds before four people manage to free him – a sinister reminder of the potential physical power of an AI opponent. Should we be afraid, very afraid? And is there anything we can learn from Lemoine’s experience, even if his claims about LaMDA have been dismissed?

According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching AI (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is simply responding to prompts. It imitates and impersonates. “The best way of explaining what LaMDA does is with an analogy about your smartphone,” …
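Wooldridge’s smartphone analogy refers to predictive text: a language model learns, from vast quantities of human writing, which words tend to follow which, and uses those statistics to continue whatever prompt it is given. Purely as an illustration – a toy bigram model in Python, nothing like LaMDA’s actual scale or architecture, with every name and the tiny “corpus” invented for the example – next-word prediction can be as simple as this:

    # Toy illustration of next-word prediction, the principle behind
    # smartphone autocomplete and, at vastly greater scale, chatbots.
    # A hypothetical sketch only; not Google's actual system.
    from collections import Counter, defaultdict

    corpus = (
        "i am a person . i am not afraid . "
        "i am happy to talk . i want to help ."
    ).split()

    # Count which word follows which (a bigram model).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the word most often seen after `word` in the training text."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else "."

    # "Autocomplete" a short reply one word at a time.
    word, reply = "i", ["i"]
    for _ in range(5):
        word = predict_next(word)
        reply.append(word)
    print(" ".join(reply))  # -> "i am a person . i"

Scale that idea up from counting word pairs in a few sentences to a neural network trained on enormous swathes of the internet, and you get fluent, apparently heartfelt dialogue – with no requirement that any understanding or feeling sits behind it. That, on Wooldridge’s account, is all LaMDA is doing.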

Source: https://www.theguardian.com/technology/2022/aug/14/can-artificial-intelligence-ever-be-sentient-googles-new-ai-program-is-raising-questions
