The thinning line between AI and data privacy
With AI relying heavily on data to produce the best results, debate continues over how much data AI should be given access to in order to produce the desired results. As with any AI agent, the more data it is fed, the better it learns and the more capable it becomes.
While businesses have imposed controls and regulations on what types of data AI models can train on, the reality is that many organizations still unknowingly expose data they shouldn’t when using AI.
With shadow AI and data privacy being constant challenges that organizations hope to manage better, CRN Asia reached out to several vendors for their views on where the line should be drawn between AI and data privacy.
Rachel Ler, Area Vice President of Asia at Fastly
AI adoption is accelerating across Asia, but it does not change the fundamentals of data privacy, where accountability and transparency are paramount. Organizations remain responsible for how data is collected, processed, and protected, regardless of whether that data is handled by AI systems, cloud platforms, or channel partners.
Just as importantly, organizations must ensure employees who handle personally identifiable information (PII) understand what can and cannot be shared with AI tools. This requires a clear, enforceable AI policy that defines approved use cases and explicitly prohibits the use of customer or sensitive data when interacting with AI models.
In a region with diverse data sovereignty and regulatory requirements, understanding where data is processed and who has access to it is critical. When data is used beyond its intended purpose or without transparency, trust is quickly eroded.
Lim Hsin Yin, Vice President of Sales, ASEAN at Cohesity
In 2026, the line between AI and data privacy comes down to control and accountability. AI offers powerful insights, but it can also amplify risk if data isn’t properly governed. Regulations in the AI space are tricky, as many of those involved in the process lack a true, in-depth understanding of all the areas that must be considered.
At the fundamental level, organisations must ensure AI only touches data they can classify, secure, and recover—even during an attack. With the ever-growing speed, scale and sophistication of cyber threats, protecting data isn’t just about keeping it safe; it’s about ensuring it can be restored quickly and reliably.
When AI systems operate without clear oversight, organisations are at risk not only for regulatory breaches but also reputational and financial damage. The key is responsible AI use—balancing innovation with resilient processes that maintain privacy and trust, no matter what form the next threat takes.
Remus Lim, Senior VP, Asia Pacific, Cloudera
The crux lies in whether organizations have the governance needed to balance AI and data privacy. AI can only scale to the extent that data use remains visible and accountable. When sensitive information is embedded in unstructured data or reused without clear oversight, privacy risk escalates. This becomes more pronounced with the rise of large language models (LLMs), which rely heavily on unstructured data where sensitive information often hides in seemingly innocuous text.
IDC research titled ‘Agentic Automation: Unlocking Seamless Orchestration for the Modern Enterprise’ showed that 40% of Asia Pacific organisations already use AI agents, with over 50% planning to implement them within the next year. These agents further expand the surface area of risk by retrieving information and taking actions across systems. Without clear oversight, personal or regulated data can be accessed, reused, or exposed in less predictable ways.
Organizations must also be deliberate about purpose and control - they need clarity on which data is appropriate for training, evaluation, or testing, and where it can be used safely. Sensitive or regulated data should only be handled in environments that enforce privacy and sovereignty requirements. Strong data governance is what ultimately enables AI and data privacy to work together, allowing innovation to scale responsibly.
Beni Sia, General Manager & Senior Vice President, Asia Pacific & Japan, Veeam Software
There is no line. In a world where data powers every business and we have clear privacy and governance regulations in place, organizations have a responsibility to understand where all their data – structured and unstructured – resides, what information is contained in that data and who has access to it.
In addition to ensuring they meet relevant privacy and governance regulations, being resilient gives organizations the added benefit of guaranteeing their data is trusted, which is exactly what companies need to accelerate safe and trusted AI projects. Data is the primary reason AI projects fail. As Gartner puts it, AI is ready, but your data isn’t.
Kumar Mitra, Executive Director & General Manager, Infrastructure Solutions Group, Greater Asia Pacific, Lenovo
The line between AI and data privacy should be drawn at trust and accountability. AI can only go as far as an organization can responsibly govern the data it uses. When data privacy, security, or regulatory compliance cannot be assured, that is where the line must be set.
In practice, this means:
- Organizations must embed accountability, oversight, and compliance from the outset in their AI pilots.
- Organizations should be clear about which data can be used for AI, for what purpose, and under what controls.
- Sensitive, regulated, or mission-critical data should only be used in environments where privacy, sovereignty, and security requirements can be enforced, often through hybrid or on-premises architectures.
Today, many organizations in Asia Pacific are moving faster on AI adoption than on governance. The Lenovo CIO Playbook 2026 shows that only one in three organizations has established comprehensive AI governance frameworks. Gaps in responsible AI practices, alongside concerns around data security, data quality, and transparency, are emerging as key barriers to trust and scale.
Strong data privacy frameworks give CIOs the confidence to move AI from experimentation into production, knowing that autonomy, access, and decision-making remain aligned with enterprise risk and accountability. In today’s AI-first era, privacy is not the boundary that slows AI down; it is the guardrail that allows it to move forward safely.
Marco Zhang, Solutions Engineering Director, Asia Pacific and Japan (APJ) at Saviynt
The line between AI and data privacy is unfortunately not a legal one, but a leadership one. AI does not decide to misuse data; organizations decide how much access to grant, how long they leave it open, and whether anyone is watching.
What concerns me is how casually we are handing the keys to systems we still do not fully understand. AI does not need unrestricted access to all data to be effective. Organizations must challenge the assumption that more data automatically equals better intelligence. Privacy-preserving techniques - such as data minimization, masking, and role-based access - allow AI to deliver value without overexposure. AI systems thrive on large volumes of data, but privacy is not simply a technical constraint; it is a trust contract. Institutional over-trust creates risk at scale.
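To make those techniques concrete, here is a minimal sketch of data minimization and masking applied before any text reaches a model. The field names, regex patterns, and allow-list are hypothetical illustrations for this article, not any vendor's API:

```python
import re

# Illustrative PII patterns; a production system would use a vetted
# detection library rather than two hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

# Data minimization: the AI use case gets only the fields it needs.
ALLOWED_FIELDS = {"ticket_id", "category", "description"}

def minimize(record: dict) -> dict:
    """Drop every field the AI task does not strictly require."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def mask(text: str) -> str:
    """Replace detected PII with typed placeholders before prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

record = {
    "ticket_id": "T-1042",
    "category": "billing",
    "customer_name": "Jane Tan",  # never leaves the governed boundary
    "description": "Refund for jane.tan@example.com, call +65 9123 4567",
}

safe = minimize(record)
safe["description"] = mask(safe["description"])
print(safe)
# {'ticket_id': 'T-1042', 'category': 'billing',
#  'description': 'Refund for <EMAIL>, call <PHONE>'}
```

The model still sees enough structure to be useful, but the identifying details never leave the organization's control.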
The responsible path forward is not to slow AI down, but to treat it like any other powerful actor in the enterprise. Access should be earned, tightly scoped to a specific purpose, and relinquished as soon as the task is complete. No standing privileges, no blind spots, and no assumption that “machine” automatically means “safe.”
When AI systems make decisions or recommendations, organizations must be able to answer three simple questions: what data was used, who had access to it, and why that access was needed. Ultimately, accountability defines the true boundary. If an AI system cannot be governed, audited, or explained, the line has already been crossed.
AI and privacy can coexist, but only so long as access is intentional, temporary, and continuously verified.
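One way to picture "intentional, temporary, and continuously verified" access is a broker that issues short-lived, purpose-scoped grants and logs each one. The sketch below is a hypothetical illustration; the Grant and AccessBroker names are invented for this example:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    agent: str           # who had access
    dataset: str         # what data was used
    purpose: str         # why access was needed
    expires_at: datetime

@dataclass
class AccessBroker:
    audit_log: list = field(default_factory=list)

    def request(self, agent: str, dataset: str, purpose: str,
                ttl_minutes: int = 15) -> Grant:
        """Issue a short-lived, purpose-scoped grant; no standing privileges."""
        grant = Grant(agent, dataset, purpose,
                      datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes))
        self.audit_log.append(grant)  # every grant leaves an audit trail
        return grant

    def verify(self, grant: Grant) -> bool:
        """Re-check on every use; an expired grant is simply invalid."""
        return datetime.now(timezone.utc) < grant.expires_at

broker = AccessBroker()
g = broker.request("forecast-agent", "sales_q4", "quarterly demand forecast")
assert broker.verify(g)  # valid only within its window and stated purpose
```

Because every grant records the agent, the dataset, and the purpose, the audit log can answer Zhang's three questions directly.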
Ed Keisling, Chief AI Officer, Progress Software
There is a chicken-and-egg issue with AI: while it has been trained on all publicly available knowledge on the internet, it lacks the context of personal information, such as health or financial data, or sensitive information about your business, that would drive the most meaningful outcomes.
While most of the major AI providers offer “opt-out” clauses, indicating that they will not train their models on your data or retain it, many people and organizations still lack trust in the AI vendors and a full understanding of how their data will be handled, and of whether the trade-off will be worth it.
Providing access to the data has the potential to improve the models, and potentially the outcomes for all, but it also introduces real risk that secrets or highly personal information could be exposed. The underlying issue here is choice: both organizations and individuals must continue to have the final say on what information is shared with any AI vendor, and how.
Wee Tee Hsien, Chief Executive Officer at FUJIFILM Business Innovation Singapore
The line between AI innovation and data privacy is not a fixed boundary, but a deliberate design choice. As an AI solutions provider, we believe the line should be drawn at purpose and proportionality. AI should only use data that is necessary to deliver a clearly defined outcome. When data collection or model training goes beyond the original customer intent, privacy is compromised. Responsible AI starts with disciplined data governance, not with technology capability.
The second line is drawn at control and transparency. Customers must retain control over their data, including how it is used, whether it is used to train models, and how long it is retained. Equally important is transparency. Organisations should be able to explain, in plain language, what data an AI system uses, what it learns, and what it does not do. From our experience working with customers across industries, trust is built through transparency and clarity, not technical compliance alone.
Finally, the line is drawn at accountability. AI decisions do not exist in isolation; they sit within business processes that impact people, customers, and society. Service providers have a responsibility to embed privacy by design, ensure human oversight, and continuously monitor risk as models evolve. In our view, the right balance is achieved when AI advances business outcomes without diminishing individual rights. When privacy is treated as a foundation rather than a constraint, AI adoption becomes sustainable, scalable, and trusted.
David Irecki, CTO for APJ at Boomi
The line between AI and data privacy should be drawn at accountability and intent.
AI should only use data that organizations are entitled to use, for clearly defined purposes, and within transparent governance frameworks. Privacy doesn’t disappear just because AI is involved. If anything, it becomes more important.
This is where AI activation really matters. Activating AI responsibly means giving it controlled, governed access to data, not unrestricted reach. Modern integration platforms play a critical role here by ensuring AI systems only interact with the right data, at the right time, and under the right policies.
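As a rough sketch of what such policy enforcement can look like at the integration layer (the policy table and function below are hypothetical illustrations, not a Boomi API):

```python
# Explicit allow-list: (ai_system, dataset) -> permitted purposes.
POLICIES = {
    ("support-copilot", "crm_tickets"): {"summarization"},
    ("support-copilot", "payment_records"): set(),  # never exposed to AI
}

def fetch_for_ai(ai_system: str, dataset: str, purpose: str) -> str:
    """Hand data to an AI system only when an explicit policy permits it."""
    allowed = POLICIES.get((ai_system, dataset), set())
    if purpose not in allowed:
        raise PermissionError(
            f"{ai_system} may not read {dataset} for {purpose!r}")
    return f"<records from {dataset}>"  # stand-in for a real connector call

print(fetch_for_ai("support-copilot", "crm_tickets", "summarization"))
# fetch_for_ai("support-copilot", "payment_records", "summarization")
# would raise PermissionError: the governed path is the only path.
```

The default is deny: any system-dataset pair without an explicit policy simply gets nothing.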
In regulated environments, responsibility always sits with humans, not algorithms. When privacy is built in by design, AI and data protection can reinforce each other rather than conflict.
Grant Case, Field Chief Data Officer for Asia Pacific & Japan at Dataiku
A line is a speed limit. A barrier is a launchpad.
Building privacy into the architecture from the start prevents teams from slowing down to ask, "Can we do this?" They already know. Speed comes from clarity and from following principles like Privacy by Design, not from permission-seeking.
Think about it. Driving in two-way traffic with just a painted line between cars is nerve-wracking and slows everyone down. A barrier lets more vehicles travel faster. Privacy guardrails operate the same way. Set the parameters early, and teams know exactly what is permissible.
Most organizations still treat privacy as a post-hoc problem, bolted on after the AI is built. Teams optimise against privacy compliance. Ship fast, add consent banners later.
That approach does not just create risk. It creates drag. Every use case becomes a negotiation. Deployment slows not because teams are reckless, but because no one knows where the edge really is. Worse: the regulatory line and your customers' trust line are not the same. You discover the gap only when trust breaks publicly.
The organizations building durable AI capabilities want the governed path to be the fast path. They do not ask, "Where is the line?" They ask:
*What do we build with privacy as a foundational constraint, not an afterthought?*
Speed is a by-product of decisions you no longer have to revisit.
Han Tiong Law, Regional CTO Asean and Greater China, Rimini Street
The line between AI and data privacy is drawn at trust, transparency, and control—especially when data comes from multiple sources, whether internal systems or external feeds. AI should only act on data that is fully understood, authorized, and governed across its entire lifecycle.
This requires end-to-end processes that ensure integrity from ingestion to decision-making, with clear accountability at every stage. Role-based governance is equally critical: AI agents must operate within defined boundaries, delivering insights and actions aligned to each persona’s responsibilities—whether in supply chain, finance, or compliance.
When implemented responsibly, this approach not only safeguards privacy but also drives measurable business outcomes: faster decision cycles, reduced operational risk, and improved agility to respond to market changes. At Rimini Street, we believe AI should augment human judgment, enabling innovation without compromising trust.
Guna Chellappan, General Manager for Singapore, Red Hat
AI is a driving force in technological innovation, transforming industries and reshaping how we interact with technology. Open and public AI, rooted in transparency, collaboration and accessibility, has played a critical role in accelerating innovation and democratizing access to advanced technologies. By openly sharing models, datasets and methodologies, the global AI community has been able to identify biases, improve accountability, and build more trustworthy systems.
However, this openness also brings ethical challenges, particularly around data privacy and safety. The dual-use nature of AI means that the same technologies driving progress can also be misused – whether through deepfakes, increasingly sophisticated cybersecurity threats or unintended exposure of sensitive data in public datasets. These risks highlight why transparency must be paired with responsibility.
Balancing collaboration and safety requires thoughtful approaches such as responsible sharing, selective transparency and robust safeguards. Establishing safety benchmarks, embedding protective measures and encouraging community oversight are essential steps to ensure AI systems remain fair, secure and privacy-conscious.
As AI adoption accelerates, Data Privacy Day is a timely reminder that ethical AI development is not about limiting innovation, but about guiding it responsibly. By prioritizing privacy, accountability and collaboration, the AI community can continue to advance technologies that empower society while safeguarding the data and trust of individuals.
Harm Teunis, Cybersecurity Evangelist, ESET
It’s time to act. The risk of privacy violations with real-world consequences is higher than ever. We live in the age of AI: systems that excel at collecting, combining and interpreting vast amounts of data. That is a red alert: awareness alone is no longer sufficient. Protecting data and privacy today is essential to ensuring security in the future.
Caring about your privacy means protecting yourself and those you love. Personal data can be misused long after it is collected, with lasting consequences. Data Privacy Day is a timely reminder of that reality.
Every interaction with digital technology generates data: what we buy, where we go, who we interact with, and what we believe. Our data is collected and stored at a scale most people can hardly fathom. That scale continues to grow as society becomes more dependent on digital systems. Data is often described as the new oil. But unlike oil, data is deeply personal. It does not just represent economic value. It defines who we are. And it puts power in the hands of those who have it. When individuals lose control over that data, the consequences go far beyond inconvenience.
First, cybercriminals actively exploit personal data. Stolen information is used for fraud, identity theft, extortion, and highly targeted scams that are increasingly difficult to detect.
Second, technology companies build extensive user profiles to monetize personal data, primarily through targeted advertising. While often legal, this constant tracking creates detailed behavioral insights that users rarely fully understand or consciously consent to.
Third, governments and regimes can weaponize data. Open-source and commercially available data is already being scraped and analyzed to monitor, detain, or even deport individuals, turning everyday digital traces into tools of control.
Privacy is not about having something to hide; it is about having something to protect. Strong regulation, privacy by design, and awareness of our digital rights are essential to prevent misuse of data. On Data Privacy Day, the message is clear: privacy protection today means security tomorrow.