Embracing the AI vs AI era in cybersecurity

2026 will see more organizations deploying AI in their cybersecurity as they look not only to manage the cybersecurity skills shortage with the technology but also to detect and react faster to more sophisticated AI-powered cyberattacks.

With Gartner forecasting worldwide IT spending to total US$6.08 trillion in 2026, deploying strong security capabilities becomes imperative, especially given the growing number of threats and challenges organizations face when investing in newer technologies.

Google Cloud's Cybersecurity Forecast 2026 suggests that AI will soon sit at the centre of every major security move—driving both how attacks unfold and how experts fight back. By next year, Google Cloud predicts that AI won't just support cybercriminals—it will run their operations.

Attackers are expected to automate entire campaigns, from writing code to sending phishing emails, using systems that learn and adapt on their own. These tools can imitate humans, exploit software gaps, and rewrite their own malware in seconds, allowing hackers to strike faster and with less effort than before.

According to Luke McNamara, Deputy Chief Analyst at Google Threat Intelligence, the agentic era of AI is going to bring an increase in risk, especially from an identity security perspective.

“Once we start to get into a world where you have agents that have their own identities in your environment and they're doing things that maybe look atypical from human behavior, that creates a more difficult or challenging environment to kind of map what normal looks like. And so I think that that is going to be one challenge of the agentic era. Obviously, one of the hopes and benefits of AI in cybersecurity is we can apply that same component and do things faster. This sort of race and push towards more speed than the adversary, I think there's a lot of benefits there from the defender side,” said McNamara.

McNamara also explained that as adversary adoption of AI continues to unfold, their ability to chain together parts of the attack lifecycle and move faster increases. Once attackers gain access to a victim environment, he said, mapping the network, escalating privileges, and moving laterally can all be done in a faster, more agentic manner.

“We are now in this sort of race between can they (adversary) accomplish their objective faster or can the defenders detect and prevent that first. This is something that's going to be the big challenge going forward. It's very much what we've been seeing, both on the defender and adversary side, keeps coming back to speed and who can be faster,” added McNamara.

To help organizations deploy AI in cybersecurity, McNamara pointed to Google Cloud's Secure AI Framework, which provides a construct for thinking about what a secure AI implementation looks like.

“If you look at a lot of the principles in that, it's really extending things that should already probably exist. So, when it comes to sort of like data controls and data sovereignty, how is your data being accessed? What are the controls and restrictions on data leaving your environment? Those same sorts of principles and controls can be applied now to AI systems. It's maybe going to look a little bit different in terms of how and what specific AI systems are being used. And of course, wanting to capture what is specifically being used to try to reduce the sort of shadow AI problem. Those things are all important,” explained McNamara.

He hopes that this will provide some sort of framework and guidance for organizations as they progress further into the AI era.