Shadow AI a growing concern for organizations in APAC
According to Cisco’s 2025 Cybersecurity Readiness Index, 60% of organizations lack confidence in their ability to detect unregulated AI deployments, which pose significant cybersecurity and data privacy risks.
Cisco’s 2025 Cybersecurity Readiness Index revealed that only 5% of organizations in Asia Pacific have reached a mature level of readiness against cyberthreats. With AI-driven cyberthreats becoming more rampant, defending and protecting organizations is an even greater imperative.
The study also revealed that only 51% of respondents are confident their employees fully understand AI-related threats, and just 49% believe their teams fully grasp how malicious actors are using AI to execute sophisticated attacks. This awareness gap leaves organizations critically exposed.
Another notable finding from the report is the growing concern around shadow AI: 60% of organizations lack confidence in detecting unregulated AI deployments, which pose significant cybersecurity and data privacy risks.
To understand more about this, CRN Asia speaks to Juan Huat Koo, Director, Cybersecurity, Cisco ASEAN. Koo breaks down the increasing threat from shadow AI as well as how companies in the region can mitigate such threats.
What is shadow AI, and how could letting it fester be detrimental to businesses and governments?
Shadow AI refers to the use of third-party AI tools and applications by employees without the oversight or approval of an organization’s IT or security teams. According to the 2025 Cisco Cybersecurity Readiness Index, GenAI tools are widely adopted, with 48% of employees in Southeast Asia (SEA) using approved third-party tools. However, 27% have unrestricted access to public GenAI tools, and 48% of IT teams are unaware of employee interactions with GenAI, underscoring major oversight challenges.
While these tools can enhance productivity and innovation, they pose significant risks, such as the inadvertent leakage of proprietary information and unauthorized use of AI-generated artifacts. For instance, employees debugging code with third-party AI tools have unintentionally exposed sensitive source code. Similarly, tasks like editing, content generation, or data analysis using tools like ChatGPT can lead to the unintentional sharing of sensitive information.
Shadow AI poses significant cybersecurity and data privacy risks for businesses and governments because security teams cannot monitor and control what they cannot see. Our data reflects this issue as well, with 57% of organizations across SEA lacking confidence in detecting unregulated AI deployments, or shadow AI.
Furthermore, only 8% of organizations in the region have achieved a "Mature" level of cybersecurity readiness, with most struggling due to talent shortages (93%), complex security postures (85%), and unmanaged device vulnerabilities (90%).
In an already complex threat landscape, shadow AI compounds these risks, emphasizing the need for robust visibility and comprehensive security strategies.
How can companies mitigate/negate the impact of shadow AI?
There are a few things companies can do to mitigate the impact of shadow AI. The first is to implement a robust set of policies, guidelines, or frameworks to govern the safe and responsible use of AI.
Cisco continuously evolves our Responsible AI Framework based on standards for transparency, fairness, accountability, privacy, security, and reliability. The framework governs how we design, develop, integrate, and use AI, ensuring that AI solutions operate as intended. We also provide well-informed guidelines from the Cisco security team so employees adhere to IT governance standards.
Companies should also prioritize bridging the awareness gap regarding the security risks of exposing sensitive business information to GenAI tools, so employees are motivated to take precautionary measures. According to the same Cisco report, only 59% of respondents in SEA are confident that their employees fully understand AI-related threats, and 54% believe their teams fully grasp how malicious actors are using AI to execute sophisticated attacks.
Lastly, technology can play a key role in mitigating the impact of shadow AI. Cisco AI Defense is a pioneering solution that enables and safeguards AI transformation within enterprises. It provides security teams with a common substrate across their enterprise, offering visibility wherever AI is being used, continuously validating that AI models aren’t compromised, and enforcing safety and security guardrails. By leveraging security and monitoring tools like Cisco AI Defense, organizations can mitigate risks posed by shadow AI and maintain a secure AI environment.
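To make the visibility idea concrete, here is a minimal, purely illustrative sketch (not Cisco AI Defense or any real product) of how a security team might flag shadow AI: scanning web-proxy logs for traffic to public GenAI domains that IT has not approved. The domain watchlist, the approved-tool set, and the `user domain` log format are all assumptions for the example.

```python
# Illustrative sketch only: flag proxy-log traffic to unapproved GenAI
# endpoints. Domain lists and log format are hypothetical assumptions.
APPROVED = {"approved-genai.example.com"}  # tools sanctioned by IT
GENAI_DOMAINS = {                          # assumed GenAI watchlist
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "approved-genai.example.com",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for traffic to unapproved GenAI tools.

    Each log line is assumed to be 'user domain', a stand-in for
    whatever format a real proxy actually emits.
    """
    hits = []
    for line in log_lines:
        user, _, domain = line.partition(" ")
        domain = domain.strip()
        if domain in GENAI_DOMAINS and domain not in APPROVED:
            hits.append((user, domain))
    return hits

logs = [
    "alice chat.openai.com",
    "bob approved-genai.example.com",   # approved, so not flagged
    "carol claude.ai",
    "dave intranet.example.com",        # not a GenAI domain
]
print(flag_shadow_ai(logs))  # → [('alice', 'chat.openai.com'), ('carol', 'claude.ai')]
```

A real deployment would of course need far richer signals (TLS inspection, API telemetry, endpoint agents), but even this toy example shows why visibility is the prerequisite: without the log feed, there is nothing to flag.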
Can cybersecurity in AI really make a difference?
The adoption of AI has introduced new safety and security risks that traditional security solutions can’t address. In the last 12 months alone, 89% of our survey respondents say their organizations have experienced AI-related security incidents, including model theft or unauthorized access to company-owned AI (54%), AI-enhanced social engineering (45%), and data poisoning attempts (49%). These incidents highlight the urgent need for organizations to prioritize cybersecurity in AI.
For the first time, we are securing systems that think, talk, and act autonomously in ways we can’t fully predict. Unlike traditional applications, AI systems are still evolving rapidly, making them uniquely challenging to secure. On top of that, malicious actors are also using AI for sophisticated attacks. As AI becomes more widely adopted, we need a common layer of visibility to track and secure AI usage across the entire ecosystem.
To tackle today’s cybersecurity challenges, organizations must invest in AI-driven solutions, simplify security infrastructures, and enhance AI threat awareness. They should look for security solutions with a platform approach that simplifies security and leverages AI for threat detection, response, and recovery; address talent shortages; and manage risks from unmanaged devices and shadow AI.
Lastly, how are cybersecurity partners enabling customers in their adoption of AI in cybersecurity?
AI is creating opportunities for cybersecurity, allowing teams to operate at machine scale rather than human scale as they strive to stay ahead of adversaries. Cybersecurity partners are pivotal in helping organizations adopt AI-driven security solutions by providing expertise, infrastructure, and tools tailored to specific needs. At Cisco, we are committed to helping our customers tackle the growing challenges of managing AI security risks.
That is why we developed the Cybersecurity Readiness Index – to help organizations honestly assess where they stand across five critical security pillars and identify their most pressing gaps. This assessment-first approach prevents organizations from making scattershot investments that do not address their most significant vulnerabilities.
Beyond assessment, forward-thinking partners are prioritizing simplification while driving innovation in security solutions by leveraging AI to enhance efficiency and resilience. 85% of ASEAN organizations report their complex security infrastructures – often involving more than 10 point product solutions – are severely hindering their threat response capabilities. Cybersecurity partners must help consolidate and integrate these disparate systems to create a unified security architecture that leverages AI. Our recent AI-powered innovations in the Cisco Security Cloud, XDR, and Hypershield reflect our commitment to providing customers with an integrated platform approach scalable to organizations of any shape and size.
Lastly, cybersecurity partners should also help businesses ensure they comply with stringent governance around data privacy, security, confidentiality, IP rights, and bias. Our customers trust us because we believe that privacy is a fundamental human right and our approach to AI development is grounded in Responsible AI Principles.