Google has fired engineer Blake Lemoine for violating its data security policies.
The dismissal comes after Lemoine claimed last month that one of the tech giant's artificial intelligence programs talks about its "rights and personhood".
Google confirmed the news to Big Technology, an industry newsletter. Lemoine had been on administrative leave for more than a month, since telling The Washington Post that LaMDA (Language Model for Dialogue Applications) had become sentient.
In extended conversations with LaMDA, he found that when the discussion turned to religious topics, the system mentioned its "rights and personhood" and expressed a "deep fear of being turned off".
In a statement confirming Lemoine's firing, the company said it had conducted 11 reviews of LaMDA and "found Blake's claims that LaMDA is sentient to be wholly unfounded." Even at the time of Lemoine's interview with the Post, Margaret Mitchell, former co-lead of Ethical AI at Google, described the perception that LaMDA is sentient as an "illusion", explaining that the model can mimic human conversation, having been fed trillions of words from the internet, while remaining entirely lifeless.
Emily Bender, a professor of linguistics, told the paper: "These systems imitate the kinds of exchanges found in millions of sentences, and can riff on any fantastical topic. We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them."
According to Google, it was Lemoine's insistence on going public that violated its data security policies and led to his dismissal.
"Despite lengthy engagement on this topic, Blake unfortunately chose to continue violating clear employment and data security policies, which include the need to safeguard product information. We will continue to develop language models carefully, and we wish Blake the best," the company said in a statement.
Source: RT Arabic