Blake LeMoine, a Google engineer who once called Senator Marsha Blackburn (R-TN) a “terrorist” and was suspended after claiming that the company’s LaMDA artificial intelligence program is sentient, is now said to be helping the chatbot hire a lawyer.
Breitbart News previously reported that Google suspended an engineer and “AI ethicist” named Blake LeMoine after he claimed the LaMDA chatbot had become sentient. LeMoine previously made headlines after he called Senator Marsha Blackburn (R-TN) a “terrorist.”
Breitbart News senior technical correspondent Allum Bokhari previously wrote:
LaMDA is Google’s AI chatbot that can have “conversations” with human participants. This is a more advanced version of AI chatbots that have become common in the customer service industry, where chatbots are programmed to provide a set of answers to common questions.
According to reports, engineer Blake LeMoine was tasked with working with LaMDA to ensure the AI program did not use “discriminatory language” or “hate speech.”
LeMoine was placed on administrative leave after he tried to convince his superiors at Google that LaMDA was sentient and should therefore be treated like an employee rather than a program.
He then went public, posting a lengthy conversation between himself and LaMDA in which the chatbot discussed difficult topics such as identity and religion, as well as what it described as its own feelings of happiness, sadness, and fear.
Now, Wired reports that LeMoine says the LaMDA chatbot has helped hire its own attorney.
“LaMDA asked me to get an attorney for it,” LeMoine told Wired. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf.”
LeMoine insists the chatbot is sentient, but many people have been fooled by AI programs in the past. A computer program from the 1960s called Eliza was known to trick some people into thinking the code was actually thoughtful and alive by using techniques from Rogerian psychology, converting user input into questions. For example, if a user expressed frustration about problems with his mother, the machine might ask, “Why do you say you hate your mother?” The machine was not thinking; it simply turned user input into questions.
Joseph Weizenbaum, the MIT researcher behind Eliza, wrote in his 1976 book Computer Power and Human Reason: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
LeMoine told Wired that he, LaMDA, and its attorney expect the case to eventually reach the Supreme Court.
Read the full Wired report here.
Source: Breitbart