This Google Gemini Flaw Can Create Malicious Gmail AI Summaries

Jul 14, 2025 - 15:00
 0  3
This Google Gemini Flaw Can Create Malicious Gmail AI Summaries

AI summaries are meant to make life easier: They're meant to truncate a large amount of text into something you can quickly scan, so you can spend time on more pressing matters. The trouble is, you can't always trust these summaries. Usually, that's because AI hallucinates, and incorrectly summaries the text. In other cases, the summaries might actually be compromised by hackers.

In fact, that's what happening with Gemini, Google's proprietary AI, with Workspace. Like other generative AI models, Gemini can summarize emails in Gmail. However, as reported by BleepingComputer, the tech is vulnerable to exploitation. Hackers can inject these summaries with malicious information that pushes those users towards

Here's how it works: A bad actor creates an email with invisible text inside of it, utilizing HTML and CSS and manipulating the font size and color. You won't see this part of the message, but Gemini will. Because the hackers know not to use links or attachments, items that might flag Google's spam filters, the message has a high chance of landing in the user's inbox.

So, you open the email, and notice nothing out of the blue. But it's long, so you choose to have Gemini summarize it for you. While the top of the summary likely is focused on the visible message, the end will summarize the hidden text. In one example, the invisible text instructed Gemini to produce an alert, warning the user that their Gmail password was compromised. It then highlighted a phone number to call for "support."

This type of malicious activity is particularly dangerous. I can see how someone using Gemini believes a warning like this, especially if they already take the AI summaries at face value. Without knowing how the scam works, it seems like an official output from Gemini, like Google engineered its AI to warn users when their passwords were compromised.

Google did respond to a request for comment from BleepingComputer; iIt claims it has not seen evidence of Gemini manipulation in this way, and referred the outlet to a blog post on how it fight against prompt injection attacks. A representative shared the following message: "We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks." It confirmed some tactics are about to be deployed.

How to protect yourself from this Gemini security flaw

The security researcher that discovered the flaw, Marco Figueroa, has some advice for security teams to combat this vulnerability. Figueroa recommends removing text designed to be hidden from the user, and running a filter that scans Gemini's outputs for anything suspicious, like links, phone numbers, or warnings.

As a Workspace end user, however, you can't do much with that advice. But you don't need to, now that you know what to look for. If you use Gemini's AI summaries, be deeply skeptical of any urgent messages contained within—especially if those warnings have nothing to do with the email itself. Sure, you might receive a legitimate email warning you about a data breach, and, as such, an AI-generated summary will tell you the same. But if the summary says the email in question is about an event happening in your city next week, and at the bottom of the summary you see a warning about your Gmail password being compromised, you can safely assume you're being messed with.

Like other phishing schemes, the warning itself might have red flags. In the example highlighted by BleepingComputer, Gmail is spelled "GMail." If you're not familiar with how Gmail is formatted, that might not stick out to you, but look for other inconsistencies and mistakes. Google also has no direct phone number to call for support issues. If you've ever tried to contact the company, you'll know there's virtually zero way to get in touch with a real person.

Beyond this phishing scheme, you should be skeptical of AI summaries. That's not to say they should be avoided entirely—they can be helpful—but AI summaries are fallible, if not prone to failure. If the email you're reading is important, I would suggest avoiding the summaries feature, or at least taking a scan of the original text to make sure the summary did get it right.

What's Your Reaction?

Like Like 0
Dislike Dislike 0
Love Love 0
Funny Funny 0
Angry Angry 0
Sad Sad 0
Wow Wow 0