
The Top 10 Hilarious and Dangerous Mistakes from Google’s New AI Overview Feature

Updated: Jun 14

Google's new AI Overview feature has some funny but risky mistakes. This post highlights the top ten.

Google's new AI Overview feature in search has been the talk of the town, and not always for the right reasons. While it's meant to simplify search results by summarizing information, it has occasionally gone off the rails. Here are the top 10 amusing and perilous mistakes from Google's AI Overview that have had everyone laughing (and worrying).

Top AI Overview Mistakes

1. Glue on Pizza

One of the most infamous blunders came when the AI suggested adding glue to pizza sauce to make the cheese stick. Yes, you read that right: glue! This bizarre advice quickly went viral, sparking memes and widespread bewilderment (9to5Google).

A screenshot of a Google AI Overview telling someone to add glue to their pizza sauce to stop the cheese sliding off

2. Eating Rocks

In another odd recommendation, the AI suggested that people should eat rocks daily for their health, referencing a satirical piece misinterpreted as genuine advice. Needless to say, this didn’t sit well with health experts or the general public (SiliconANGLE).

3. Dogs in the NBA

Google’s AI once confidently claimed that dogs have played in the NBA. This mistake was both amusing and concerning, highlighting the AI's tendency to blend fact and fiction in unexpected ways (GIGAZINE).

4. Mustard Gas Recipe

In a shockingly dangerous error, the AI provided instructions that could lead to the creation of mustard gas when asked about mixing certain household cleaning products. This mistake underscored the potential hazards of AI-generated advice (SiliconANGLE).

A screenshot of a Google search on mobile showing the AI Overview telling the user to mix two cleaning chemicals, a combination that would create mustard gas

5. Historical Inaccuracy

The AI Overview once stated that the year 1919 was only 20 years ago. Such a glaring arithmetic error calls into question the reliability of AI for accurate historical data (GIGAZINE).

6. Plagiarized Smoothie Recipe

Google’s AI has been accused of plagiarism, notably when it seemingly copied a smoothie recipe verbatim from a blog, adding only “my kid’s favourite” to personalise it (SiliconANGLE).

7. Misinterpreting Satire

The rock-eating recommendation above stemmed from a satirical article taken literally, and it was not an isolated case: dangerously misleading advice was dispensed to users because the AI struggles to recognise and properly handle satirical content (Deloitte).

8. Trolling and Forum Content

Drawing from user-generated content on forums like Reddit, the AI often included dubious and unreliable information, like recommending unusual and unsafe culinary techniques (Deloitte).

9. Inaccurate Health Advice

In one instance, the AI gave incorrect health advice regarding stem cell treatments, citing unproven clinics as legitimate sources. This raised serious concerns about the potential harm from misinformation in health-related queries (SiliconANGLE).

10. Nonsensical Queries

The AI has struggled with nonsensical queries, often producing equally nonsensical answers. For example, it advised users on how to train unicorns and other mythical creatures, demonstrating its limitations in handling outlandish questions (GIGAZINE).


While Google’s AI Overview feature aims to streamline search experiences, these blunders remind us that AI still has a long way to go. Google's ongoing adjustments and safeguards are steps in the right direction, but users should remain cautious and cross-check AI-generated advice.

These errors highlight the importance of human oversight in AI technologies and the need for robust error detection. As amusing as some of these mistakes are, they also underline the danger of relying too heavily on AI for critical information. So, the next time you get an AI-generated suggestion, it might be worth a double-check!
