- Google made headlines with some of its inaccurate AI Overviews and AI-generated images.
- Google's VP of Search reportedly said at a meeting that it shouldn't stop taking risks because of mistakes.
- Experts weighed in on Google's strategy and the risks that could arise.
All of the Big Tech companies are racing to scale their AI capabilities and roll out new products — but Google keeps making headlines about AI mistakes that go viral.
Shortly after Google released its AI Overviews feature, which provides AI-generated summaries for some search queries at the top of the page, the internet started buzzing about the search engine recommending putting glue on pizza or eating rocks.
Earlier this year, it launched its image-generation tool on Gemini and caused a stir when the chatbot produced historically inaccurate images of historical figures. Google acknowledged the issue and paused the feature.
It sounds like Google isn't going to pump the brakes anytime soon, though, even with the high-profile flubs.
Google VP of Search Liz Reid addressed the pizza-glue and rock-eating fiasco at a recent all-hands meeting and took the opportunity to reaffirm the company's AI strategy, according to leaked audio obtained by CNBC.
"It is important that we don't hold back features just because there might be occasional problems," Reid reportedly said in the meeting.
The VP said in the meeting that Google should address the problems when they're discovered and "act with urgency," but that doesn't mean it "shouldn't take risks," CNBC reported.
Lorenzo Thione, an AI investor and managing partner at the venture-capital group Gaingels, told Business Insider that he generally thinks pushing experimental features out is the right move. But he said users need to know when and how they can rely on the results, and that product disclosures need to be different when the tool in question acts as the publisher, curator, and moderator.
Google notes that "Generative AI is experimental" in AI Overviews. It also has a safety guide for developers that states generative AI tools "can sometimes lead to unexpected outputs, such as outputs that are inaccurate, biased, or offensive."
Google is far from the only company grappling with the risks of generative AI products. Tim Cook said Apple Intelligence is bound to get some things wrong, although he doesn't expect that to happen often. And Microsoft said on Thursday that, after privacy concerns arose, it would hold off on launching an AI tool that had been slated to be available when its Copilot+ PCs ship.
But Alon Yamin, CEO of the AI platform Copyleaks, told BI that by releasing large-scale features right at the top of Google Search, the company is making those mistakes particularly visible.
The alternative is to be more gradual about releasing features and not place them at the center if they're not fully ready, he said.
Yamin said it makes sense that the company wants to release products quickly amid talk that Google is behind in the AI race. But since generative AI isn't bulletproof yet, it's important to balance the timing of a launch against innovation and accuracy, he said.
Reid previously wrote in a blog post that Google builds "quality and safety guardrails into these experiences" and tests them extensively before launching. But she said in the all-hands meeting that Google "won't always find everything," according to the CNBC report.
It's worth noting that these mistakes didn't appear to crop up frequently. A Google spokesperson said the "vast majority" of results are accurate and that the company found a policy violation on "less than one in every 7 million unique queries with AI Overviews."
But while a suggestion to glue cheese onto pizza or eat rocks may be a minor error, Yamin said more serious risks around privacy, security, and copyright could arise — and the faster a company moves, the greater those risks become.
Do you have a tip about Google? Reach out to the reporter from a non-work email at [email protected].