
Google Fires Engineer Convinced that AI Chatbot Has Emotions

Google recently fired one of its engineers after he shared the view that the company's artificial intelligence system has a mind and feelings.

BBC News reports that Google last month fired Blake Lemoine, an engineer who put forward the theory that Google's AI language system is sentient, has its own "preferences," and should be respected as a person. Google and several AI experts dismissed Lemoine's claims, and on Friday the company confirmed that he had been fired.

(Photo by LLUIS GENE/AFP via Getty Images)

Before Lemoine made headlines alleging that the Masters of the Universe's artificial intelligence system had become sentient, he had a history of controversial behavior, previously calling Sen. Marsha Blackburn (R-TN) a "terrorist."

Lemoine told BBC News he is currently seeking legal advice and cannot comment further. Google said in a statement that Lemoine's claims about the Language Model for Dialogue Applications (LaMDA) are unfounded and that the company had worked with Lemoine for months to clarify this.

The statement read: "It is therefore regrettable that Blake, despite lengthy engagement on this topic, chose to persistently violate clear employment and data security policies that include the need to safeguard product information."

LaMDA is Google's conversational artificial intelligence system, intended to be used to build chatbots.

Breitbart News reported earlier this month:

According to reports, engineer Blake Lemoine was assigned to work with LaMDA to ensure the AI program did not use "discriminatory language" or "hate speech."

After trying to persuade his superiors at Google that LaMDA is sentient and should therefore be treated as an employee rather than a program, Lemoine was placed on administrative leave.

He then went public, posting a lengthy conversation between himself and LaMDA in which the chatbot discussed difficult questions of identity, religion, and its own feelings of happiness, sadness, and fear.

In comments to the Washington Post, Lemoine said that if he didn't know LaMDA was an artificial intelligence, he would have assumed he was talking to a human.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," Lemoine said. He also argued that the debate should extend beyond Google.

"I think this technology is going to be amazing. I think it's going to benefit everyone. But maybe other people disagree, and maybe we at Google shouldn't be the ones making all the choices."

Google also commented to the Post, disputing the claim that LaMDA is sentient.

"Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI Principles and informed him that the evidence does not support his claims," said Google spokesperson Brian Gabriel. "He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."

Lemoine worked on Google's Responsible AI team and told the Washington Post that his job was to test whether the technology used discriminatory language. He says LaMDA is self-aware and able to converse about religion, emotions, and fears. On that basis, Lemoine concluded that LaMDA was sentient.

Read more at BBC News here.

Source: Breitbart
