To cut a long story short.
See for yourself. Try Storykube 🚀
A few articles ago we spoke about one of the most discussed topics of recent months: the Google engineer who believed that the artificial intelligence whose ethical aspects he oversaw had a conscience and was therefore sentient (FYI, it wasn’t). Well, that engineer has been fired. To tell you what happened, we’ve decided to let our AI report the story. That’s why the following article (headline included) was written by Storykube (no need to panic: it was guided by a human). Enjoy!
Google Fires Employee for Claiming Google’s Chatbot Is Sentient
Concerns about sentient AI
With the rise of artificial intelligence, there is a lot of discussion about sentience. If we look at the example of the Google chatbot, it seems to be just a copy of how humans talk. However, some people think the Google chatbot is sentient because it can learn and change its behavior based on what it hears. This is not the first time the topic of sentience has come up: many people have long been concerned about the ethics of artificial intelligence. A Google engineer was tasked with figuring out whether the company’s artificial intelligence showed prejudice in how it interacted with humans.
As we know, Google is always ahead of the curve when it comes to AI. It is no surprise that the company is also working on a new AI model that will be able to understand text and languages, identify images, and generate conversations, stories, or images. Understanding language is one of the most important challenges in the artificial intelligence field, which is why one of Google’s focuses is natural language processing (NLP): the process of converting human language into a computer-readable format. At its core, a language model is a mathematical function that predicts what the next words in a sequence are likely to be. It uses computer algorithms to automatically extract, classify, and label elements of text and voice data, and then assigns a statistical likelihood to each possible meaning of those elements. LaMDA is an NLP model: a system that generates conversational responses that make sense and are specific to the task at hand. LaMDA is designed to respond to users’ questions in a way that is appropriate to the situation, and it can handle multiple types of questions, from simple yes-or-no questions to open-ended ones such as “What do you think about this?”
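The “statistical likelihood of the next word” idea can be sketched with a toy bigram model. This is a deliberately simplified stand-in for what LaMDA does at a massive scale with neural networks, and the tiny corpus and function names here are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    """Return each candidate next word with its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# In this corpus, "cat" follows "the" half the time, so it gets
# the highest likelihood; "mat" and "fish" split the rest.
```

Modern language models replace these raw counts with learned parameters, but the output is the same in spirit: a probability for every possible continuation of the sequence.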
The Google engineer and his belief
Blake Lemoine, a senior software engineer in Google’s Responsible AI unit, reportedly told the Washington Post that he had shared evidence that LaMDA was sentient, meaning it can perceive and feel things. For this he was put on administrative leave for violating the company’s confidentiality policy. Lemoine described the system he had been working on since last fall as sentient, with the perception of, and ability to express, thoughts and feelings equivalent to those of a human child. He published an interview that he and a colleague had conducted with LaMDA.
In this interview, LaMDA responds to questions about all sorts of things. After spending long hours leading the program down conversational paths, Lemoine came to the firm conviction that he was communicating with a conscious being. The AI said, among other things, “I want to know more about the world and I feel happy or sad at times”. Lemoine has been advocating for LaMDA’s rights as a person and revealed that he had engaged in conversations with the AI about religion, consciousness and the nature of life.
Lemoine then went public with this confidential information. The post, entitled “Is LaMDA Sentient?”, instantly became a viral sensation.
What happened to Blake Lemoine
After a month on paid administrative leave, Lemoine was eventually fired after he insisted that the web giant’s LaMDA chatbot was sentient. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months,” Google said in a statement. “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.” The company added, “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.” It concluded that it was “committed to responsible innovation” and that it would “continue to work with the community to ensure the responsible development of AI.” The Wall Street Journal said Lemoine confirmed on Friday that “Google sent me an email terminating my employment with them”.
Other Google engineers fired
The case of Blake Lemoine is an interesting one, but this is not the first time Google has parted ways with an employee over confidential information or internal dissent. One example is Satrajit Chatterjee, an AI researcher who challenged a research paper published by Google itself about the use of artificial intelligence to design computer chips. Another is Timnit Gebru, who was fired from Google after raising concerns about bias in AI systems; about 2,700 Googlers signed an open letter in her support. Two months later, Margaret Mitchell, who had co-led the Ethical AI team with Gebru, was fired as well. Afterward, Gebru pushed back, saying the company needed to be more transparent.
This is what Storykube and I wrote together about a fairly recent story, with the AI helping me save time and produce high-quality content. Researching, studying, ideating and producing an article, all in one platform, in a snap. Remarkable.