A Google software engineer, Blake Lemoine, was suspended after going public with his claims of encountering 'sentient' artificial intelligence on the company's servers. Researchers say the episode is an unfortunate distraction from more pressing issues in the industry, such as whether AI can cause real-world harm and perpetuate prejudice. Lemoine's stance may make it easier for tech companies to abdicate responsibility for AI-driven decisions, a professor says. The more artificial intelligence is sold as sentient, the more willing people are to go along with AI systems that can cause real-world harm, she says. Focusing on a system's apparent sentience creates distance between AI creators and their direct responsibility for any flaws or biases in the programs.
Employees at Alphabet's Google were largely silent in internal channels besides Memegen, a person familiar with the matter said. Researchers pushed back on the notion that the AI was truly sentient, saying the evidence indicated only a highly capable system of human mimicry. 'It is mimicking perceptions or feelings from the training data it was given,' said Jana Eggers, chief executive officer of the AI startup Nara Logics. The architecture of LaMDA 'simply doesn't support some key capabilities of human-like consciousness,' one researcher said. It would not learn from its interactions with human users because 'the neural network weights of the deployed model are frozen.' The company added that its AI can follow along with prompts and leading questions, giving it the appearance of being able to riff on any topic.
The debate over sentience in robots has played out alongside its portrayal in science fiction and popular culture, giving it an easy path to the mainstream. 'Instead of discussing the harms of these companies, everyone spent the whole weekend discussing sentience,' Timnit Gebru, formerly co-lead of Google's ethical AI group, said on Twitter. The earliest chatbots of the 1960s and '70s generated headlines for their ability to hold conversations with humans. There is no evidence that human intelligence or consciousness is embedded in these systems, one expert says; LaMDA 'is just another example in this long history.' An AI system may not be toxic or have prejudicial bias and still not understand that it may be inappropriate to talk about suicide or violence in some circumstances, a researcher says. 'The research is still immature and ongoing, even as there is a rush to deployment,' Mark Riedl says.
Tech companies must employ hundreds of thousands of human moderators to ensure that hate speech, misinformation and extremist content on their platforms are properly labeled and moderated. The focus on AI sentience 'further hides' the existence, and in some cases the reportedly inhumane working conditions, of these laborers, said the University of Washington's Emily Bender. Putting the emphasis on AI sentience would have given Google leeway to blame such problems on the AI itself, Bender says. 'The discourse about sentience muddies that in bad ways,' she says. In 2015, Google's Photos service was found to be mistakenly labeling photos of a Black software developer and his friend as 'gorillas.'
In 2016, ProPublica published an investigation into COMPAS, an algorithm used by judges and by probation and parole officers to assess a criminal defendant's likelihood of reoffending. The investigation found that the algorithm systematically predicted that Black defendants were at 'higher risk' of committing further crimes, even when their records showed they did not go on to reoffend. Google's LaMDA technology is not open to outside researchers, meaning the public and other computer scientists can respond only to what they are told by Google or to the information released by Lemoine. 'It needs to be accessible by researchers outside of Google in order to advance more research in more diverse ways,' Riedl said.