Google Cloud says AI will drive both sides of the cyber war in 2026
Google Cloud's 2026 forecast says AI will drive both cyberattacks and defenses, with hackers automating code and phishing while defenders use the same tools to react faster.
The balance of power in cybersecurity is shifting fast as both attackers and defenders turn to artificial intelligence. Google Cloud's Cybersecurity Forecast 2026 suggests that AI will soon sit at the center of every major security move—driving both how attacks unfold and how experts fight back.
By next year, AI won't just support cybercriminals—it will run their operations. Attackers are expected to automate entire campaigns, from writing code to sending phishing emails, using systems that learn and adapt on their own. These tools can imitate humans, exploit software gaps, and rewrite their own malware in seconds, allowing hackers to strike faster and with less effort than before.
One emerging concern is prompt injection, where attackers manipulate AI into breaking its own safety rules. As more companies build AI into daily workflows, experts warn this form of deception could become one of the most damaging types of cyberattacks in 2026.
Smarter scams, same human weakness
Social engineering—tricking people instead of breaking systems—remains a top tactic. With AI, it's becoming harder to spot. Groups like ShinyHunters have already used voice cloning to pose as executives or IT staff. Experts predict that vishing, or voice-based phishing, to become so realistic that even skilled personnel may be unable to tell the difference.
AI can also pull public data to create detailed profiles, making scams personal and convincing. Because these attacks target people, not software, security tools often can't detect them. Experts warn that extra verification steps and strong approval layers will be important to counter this threat.
The age of AI agents
More companies are using AI agents to handle daily tasks—from approving payments to managing systems—but this new efficiency brings new risks. Google's report says security teams must start treating AI agents like digital employees, each needing unique access controls and monitoring.
A growing issue, known as the "Shadow Agent" problem, is also on the rise. Employees from time to time connect uncertified AI tools to company data to make their jobs easier. This results in hidden weak points that can lead to leaks or breaches. Rather than banning such tools, experts recommend giving workers safe, approved ways to use AI responsibly.
Old crimes, new platforms
Ransomware and data theft remain major threats. Attackers increasingly use file transfer systems to steal large datasets quickly. Social engineering and voice scams are still common entry points, often bypassing multi-factor authentication.
Meanwhile, cybercrime is expanding into cryptocurrency and decentralized finance (DeFi). Criminals use blockchain platforms to hide their identities, but these same systems also create permanent records investigators can follow.
State actors stay active
Government-backed hackers are stepping up their long-term operations. Russia is shifting focus to global spying campaigns, while China continues to target technology and semiconductor firms. Iran's digital operations are growing amid regional tensions, and North Korea is still stealing cryptocurrency to fund its government.
Sandra Joyce, VP of Google Threat Intelligence, summed it up, "We expect to see more ransomware and extortion. This problem is going to continue and increase in 2026." The year ahead will test whether defenders can use AI's speed and intelligence without letting it become their biggest new risk.