GenAI and agentic AI in cybersecurity: A complex problem to unpack?

Bryce Boland, Head of Security, ASEAN at AWS, unpacks the complexities of using GenAI and agentic AI in cybersecurity.

As businesses move towards deploying AI-based cybersecurity solutions, the question now is whether they should deploy agentic AI solutions in cybersecurity or rely on GenAI capabilities instead.

Bryce Boland, Head of Security, ASEAN at AWS, unpacks the complexities of using GenAI and agentic AI in cybersecurity as AWS announces several new capabilities to boost cybersecurity for customers.

[Related: AWS banking on new security capabilities in ASEAN]

How different is using agentic AI in cybersecurity as compared to GenAI in cybersecurity?

That's a complex problem to unpack. There are a couple of models for thinking about this, and it is evolving quite quickly, so you'll see a lot of announcements in this space about how agentic solutions are designed and built.

But fundamentally, what we're talking about is tools that use AI foundation models to generate tasks for other AI models. So instead of having one layer of activity, you've got one layer generating potentially multiple layers, or a tree or a graph, of activity. And each link in that chain has to have security controls, like access management, data security and vulnerability management, to ensure that it is secure, that it is safe, and that it can't be tampered with or modified in some unauthorized way.
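The layered pattern described here can be sketched in a few lines. This is a minimal illustration with hypothetical names, not any AWS implementation: an orchestrating agent fans a request out into sub-tasks handled by other agents, and the same access-management control is applied at every link in the tree.

```python
# Hypothetical sketch: each agent has an identity with an allow-list of tasks,
# and every link in the tree of activity re-applies the authorization check.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    name: str
    allowed_tasks: set = field(default_factory=set)

def authorize(caller: AgentIdentity, task: str) -> bool:
    # Access-management control applied at every link, not just the first.
    return task in caller.allowed_tasks

def run_task(caller: AgentIdentity, task: str, depth: int = 0) -> list[str]:
    if not authorize(caller, task):
        raise PermissionError(f"{caller.name} may not run {task}")
    results = [f"{'  ' * depth}{caller.name} ran {task}"]
    # One layer of activity can generate further layers: each sub-task is
    # dispatched to another agent, and the same control applies again.
    for sub_task, sub_agent in SUBTASKS.get(task, []):
        results += run_task(sub_agent, sub_task, depth + 1)
    return results

triage = AgentIdentity("triage-agent", {"investigate-alert"})
enrich = AgentIdentity("enrichment-agent", {"lookup-indicators"})
SUBTASKS = {"investigate-alert": [("lookup-indicators", enrich)]}

for line in run_task(triage, "investigate-alert"):
    print(line)
```

A real system would add the other controls mentioned (data security, vulnerability management) at each hop; the point of the sketch is only that the number of enforcement points grows with the tree.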

And so essentially, agentic AI expands both the use of the AI models themselves, creating a significant increase in the number of tokens and the workload size, and the number of points where security controls need to be applied. This is an area of focus for us in terms of providing tooling and automation to simplify that for customers.

You'll also see we are working to create frameworks to make it easy for developers to build agentic applications. And we'll be continuing to invest in that space to make it easier for our customers to build securely with agentic AI.

Essentially, it's massively expanding the scope, but the controls that are necessary remain the same. They just need to operate at the scale of the agentic workloads.

Who watches these agentic controls? Would we still need the human in the loop to oversee these agentic models?

That's going to come down to each customer developing an agentic workflow, as everyone is going to have a different risk approach. So typically, there'll be a number of controls in place to ensure that the data is secure, to ensure that the identities of the calling agent and the called agent are known, to ensure that the content isn't tampered with in transit, and so on. And depending on the complexity and the impact of the decisions taken by an agent, every organization is going to make its own risk decisions.
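Two of the controls mentioned here, knowing the caller's identity and detecting tampering in transit, can be sketched with a message-signing scheme. This is a simplified, hypothetical illustration using an HMAC over the message body; key handling is deliberately naive, and a production deployment would use a managed key service rather than an in-process dict.

```python
# Hedged sketch: the calling agent signs its message with a shared key; the
# called agent verifies both the caller's identity and message integrity.
import hashlib
import hmac
import json

SHARED_KEYS = {"triage-agent": b"demo-key-not-for-production"}

def sign(caller: str, payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEYS[caller], body, hashlib.sha256).hexdigest()
    return {"caller": caller, "payload": payload, "tag": tag}

def verify(message: dict) -> bool:
    key = SHARED_KEYS.get(message["caller"])
    if key is None:
        return False  # unknown caller: identity check fails
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    # Constant-time comparison; any change to the payload breaks the tag.
    return hmac.compare_digest(expected, message["tag"])

msg = sign("triage-agent", {"action": "isolate-host", "host": "10.0.0.5"})
print(verify(msg))   # identity known, content intact
msg["payload"]["host"] = "10.0.0.9"
print(verify(msg))   # content modified in transit, verification fails
```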

I think it's important that that remains the case. We can't expect every agentic solution to be as important or as impactful as every other. What matters is that there are consistent controls providing visibility and action, enabling customers to make informed decisions about whether there needs to be a human in the loop at any point, or whether they don't need one at all.