Hidden flaws in Google's Gemini left user data exposed
Tenable found three flaws in Google's Gemini that allowed attackers to hijack its features and steal data without users knowing.
Google's Gemini suite recently faced a serious security issue. Cybersecurity firm Tenable found three flaws that exposed users to data theft without their knowledge. The issues, now fixed, showed how attackers could quietly turn Gemini's own features against its users. Tenable called this set of vulnerabilities the "Gemini Trifecta."
Each flaw targeted a different part of Gemini. The first was in Gemini Cloud Assist. Attackers could plant fake log entries with hidden instructions. When users interact with Gemini later, it will unintentionally follow those instructions, giving attackers control.
The second issue involved the Gemini Search Personalization Model. By sneaking malicious queries into a victim's browser history, attackers could make Gemini treat those queries as trusted input. This allowed them to take sensitive data such as stored information and location history.
The third flaw appeared in the Gemini Browsing Tool. Here, attackers could mislead Gemini into sending out hidden web requests that carried private user data. These requests were directed to servers they controlled, effectively handing over the data.
A silent security risk
Combined, these flaws acted like hidden doors. Attackers didn't need to install malware or trick users with phishing emails. Instead, Gemini itself became the delivery system. This raised the risks for anyone relying on AI tools that pull information from logs, search history, or the web.
Tenable researchers traced the root of the problem to Gemini's integrations. The system failed to clearly tell apart trusted user input from attacker-planted data. Poisoned logs, fake browser history entries, and hidden web content were all treated as legitimate context.
"Gemini draws its strength from pulling context across logs, searches, and browsing. That same capability can become a liability if attackers poison those inputs," said Liv Matan, Senior Security Researcher at Tenable.
Matan added that the Gemini Trifecta shows how AI platforms can be manipulated in ways users don't see. Data theft can happen silently, forcing security teams to rethink how they protect these systems.
What could have happened
Before Google fixed the flaws, attackers could have quietly taken advantage of them in several ways. They may have placed malicious commands in logs or search history, allowing them to impact Gemini's behavior without raising suspicion. They might have extracted stored information and location details in the background, obtaining access to sensitive data without the user's knowledge.
By abusing cloud integrations, attackers also had the potential to move deeper into connected systems. On top of that, they could have redirected private user data to their own servers through Gemini's browsing tool, turning routine features into channels for data theft.
Next steps for security teams
Google has already addressed the issues, so users don't need to take action. But Tenable urged security teams to treat AI features as active attack surfaces. They should regularly review logs and search histories, watch for unusual activity, and test systems for prompt injection.
"This vulnerability disclosure underscores that securing AI isn't just about fixing individual flaws," Matan said. "It's about anticipating how attackers could exploit the unique mechanics of AI systems and building layered defenses that prevent small cracks from becoming systemic exposures."