
Google co-founder Sergey Brin on Gemini: ‘We definitely messed up on the image generation’


AI models are still a work in progress, and Google’s Gemini is no exception, cofounder Sergey Brin said in a recent video taken at San Francisco’s AGI House. “We definitely messed up on the image generation and I think it was mostly due to not thorough testing and it definitely, for good reasons, upset a lot of people,” he said.

This was, of course, Brin speaking about Gemini’s recent gaffe in which it produced historically inaccurate pictures, including of racially diverse Nazis, prompting Google to pause the program altogether.

The Gemini meltdown prompted a $90 billion selloff in the stock of parent company Alphabet after a largely right-wing backlash over the AI model’s apparent racial bias. The overly “woke” algorithm, users claimed, kept producing historically inaccurate, non-white images for prompts such as Adolf Hitler, the pope, and medieval Viking warriors. The influential tech blogger Ben Thompson, of Stratechery, was particularly scathing, calling for the resignation of CEO Sundar Pichai over what he saw as a rotten culture exposed by Gemini’s “absurd” performance issues.

Brin, the 50-year-old Google cofounder, said he “kind of came out of retirement just because the trajectory of AI is so exciting,” although he cautioned in his appearance at AGI House that much work remains to be done.

He said the company is still not sure why its AI model “leans left in many cases,” but added that it is not intentional. Brin said that while Gemini’s error was clearly bad, the same could also happen with other large language models.

“If you deeply test any text model out there, whether it’s ours, ChatGPT, Grok, what have you, it’ll say some pretty weird things that are out there that you know definitely feel far left, for example,” he said.

Google did not immediately respond to Fortune’s request for comment.

Brin’s words follow Pichai’s stern message to staff, in an internal memo first reported by Semafor, that Gemini’s strange AI images had “shown bias” and were “completely unacceptable.” 

Brin said that Google had already made progress in changing Gemini to help avoid similar errors in the future.

“If you try it starting over this last week it should be at least 80% better, of the test cases that we’ve covered,” he said.

Still, Brin acknowledged that criticisms of AI often intersect with politics. He pointed out that while AI models should avoid spreading inaccurate information, the definition of misinformation can vary from person to person.

“There’s a lot of complicated political issues in terms of what different people consider misinformation versus not,” he said.

Despite the recent stumbles with Gemini, Brin said that he is excited about the future of AI and has even been writing some code, although “it’s not really code that you would be very impressed by,” he joked.

He said that while training costs are high, he thinks betting on AI is important because of the long-term utility and time savings it could bring to employees across industries. 

“If it saves somebody an hour of work over the course of a week, that hour is worth a lot,” he said.
