
The Gemini AI model will allow more complex searches on Google and will give life to an AI “universal agent”

(Updating)

Google reopened the doors of its Mountain View headquarters for its annual developer event, I/O, which, for the second consecutive year, put artificial intelligence (AI) front and center. The technology company revealed how it will bring the capabilities of the Gemini AI model to more of its products, from Search to photo albums, as well as to standalone features like Project Astra, which aims to be “a universal AI agent” able to identify real-world objects through the phone.


The day before the event, the company shared a video on social networks showing a model capable of recognizing what is in a scene, answering questions in the user’s natural language and carrying on a conversation. The model’s multimodal capabilities were confirmed throughout the various demonstrations the company carried out during the broadcast.

“For those who have never seen Google I/O, it’s like the Eras Tour, but without so many costume changes,” joked Sundar Pichai, CEO of Google, at the beginning of the presentation, referring to singer Taylor Swift’s tour.

He began by summarizing the developments around the Gemini AI model in recent months, before revealing that Gemini will be present in Search and in the Google Photos service, and will make it easier to find emails in Gmail. In Search, for example, users will be able to ask longer and more complex questions, accompanied by images. In a more elaborate example, the phone’s camera was pointed at a pair of sneakers bought online that needed to be returned. The model was able to find the purchase invoice in the user’s email and then a delivery form to return the order.

Or, in the Google Photos example, answering the question “what is the license plate of my car” or tracing the evolution of a child’s swimming skills over the years from photographs saved in the cloud.

For now, AI in Search, a feature called AI Overviews, will be available this week in the US and soon in more markets, with no indication about Portugal. Photos’ more refined search capabilities won’t arrive until summer, also with no indication of which markets.

[Rewatch the Google I/O 2024 presentation event in the video below]

Google’s event took place after the spring update of its rival OpenAI, which this Monday presented GPT-4o, a more advanced model capable of understanding text, audio and images. GPT-4o can interact with the user via voice. The startup behind ChatGPT announced that the model will be available for free; currently, the most recent model, GPT-4, is only available to paid subscribers.


In response to its rival’s news, Google revealed Project Astra, described as “a universal AI agent capable of being truly useful in everyday tasks.” In the pre-recorded demo, an app on the phone, powered by the Gemini model, was able to recognize and interact with objects in an office (coming up with an alliteration about pens in a cup, for instance) and even help solve a work problem written on a whiteboard.

Source: Observadora
