Optimism in the use of AI agents for security, reveals Salesforce study
IT security leaders expect AI agents to be beneficial, yet most see significant readiness gaps in deploying proper safeguards
Half of IT security leaders in Asia Pacific are worried that their data foundation isn’t set up to get the most out of agentic AI, while 57% are not fully confident they have the appropriate guardrails to deploy AI agents. These were among the findings of Salesforce’s latest State of IT report, which is based on a global survey of over 2,000 enterprise IT security leaders, including 588 in the APAC region.
Although 100% of security leaders across the region identified at least one security concern that could be improved by agents, fewer than half are confident they have the quality data to underpin agents, or that they could deploy the technology with the right permissions, policies, and guardrails. Still, progress is being made.
According to Gavin Barfield, vice president and chief technology officer for solutions in ASEAN at Salesforce, organizations can only trust AI agents as much as they trust their data.
“When 62% of security leaders in Asia Pacific report that customers remain hesitant about AI adoption due to security and privacy concerns, it’s clear that robust data governance isn’t optional, but essential. IT teams that establish strong data governance frameworks will find themselves uniquely positioned to harness AI agents for their security operations all while ensuring data protection and compliance standards are met,” said Barfield.
Interestingly, the report also revealed that 76% of APAC organizations expect to increase security budgets over the coming year. Much of that spending is likely to go toward AI agents for security and compliance, as 82% believe AI agents offer strong compliance opportunities.
However, 82% also feel that AI agents present compliance challenges, and only 52% are fully confident they can deploy AI agents in compliance with regulations and standards. The reasons include low confidence in the accuracy and explainability of AI outputs, and 54% have yet to finalize their ethical guidelines for AI use.
As such, while the adoption of AI agents brings considerable advantages, there are still areas where IT security teams need to strengthen their foundations. Beyond shoring up their data infrastructure for the agentic era, many teams admit they have work to do to bring their overall security and compliance practices up to par: only 57% believe those practices are fully prepared for AI agent development and implementation.
In conclusion, security leaders are optimistic about the potential of AI agents to tackle security concerns. However, until they have put the necessary data foundation in place to ensure a proper deployment, AI agents may not deliver the desired outcomes and could even make things worse.