Citing sources familiar with the process, The New York Times revealed that two employees in Google’s responsible innovation department tried and failed to block last month’s launch of the Bard chatbot, warning that it was prone to “inaccurate and dangerous statements”.
After Google’s chief legal officer met with search and safety executives to say the company was prioritizing AI above all else, concerned product reviewers flagged problems with large language models such as Bard and its main competitor, ChatGPT.
Sources claim that the pair’s concerns, namely that the chatbot could generate misinformation, harm emotionally attached users, and even enable “technology-facilitated violence” through automated mass harassment, were ignored by responsible innovation lead Jen Gennai. While the reviewers urged Google to hold off on Bard’s launch, Gennai allegedly edited their report to remove that recommendation entirely.
Gennai defended her actions to the Times, arguing that because Bard was merely an experiment, the reviewers should not have weighed in on whether it went forward. She claimed to have improved the report by “correcting inaccurate assumptions, and actually adding more risks and harms that needed consideration”, and insisted that this made the final product safer.
Google credited Gennai with the decision to release Bard as a “limited experiment,” but according to Google’s own website, the chatbot is still set to be fully integrated “soon” into the company’s market-dominating search engine.
Google has a track record of sidelining employees who voice frustrations over its artificial intelligence work. Blake Lemoine was fired last year after claiming that LaMDA (Language Model for Dialogue Applications) was sentient; researcher El Mahdi El Mhamdi resigned after the company blocked him from publishing a paper warning about cybersecurity vulnerabilities in large language models like Bard. And in 2020, AI researcher Timnit Gebru was ousted after she published a study accusing Google of insufficient caution in its AI development.
However, a growing number of AI researchers, tech executives, and other influential futurists have mobilized against the rapid development race between Google and Microsoft, along with Microsoft’s partner OpenAI, so that effective safeguards can be applied to the technology. A recent open letter calling for a six-month pause on “giant AI experiments” has attracted thousands of signatories, including OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak.
The technology’s potential to destabilize society by rendering many human professions (or humanity itself) obsolete is at the center of many expert warnings, but lesser risks that are already materializing, such as OpenAI’s recent data breach, are also frequently cited.
Source: RT