What can organizations do better to strengthen data privacy?

CRN Asia speaks to tech vendors to get their views on what organizations can do better to strengthen their data privacy.


With issues like shadow AI becoming rampant in Asia, businesses need to ensure they have the right measures and policies in place so that employees are aware of how they are using AI tools. While it will not be easy to control what employees do with AI on their personal devices or outside of work, making them aware of the consequences of sharing too much information, be it on social media or in AI applications, is imperative.

Tech vendors in the region shared with CRN Asia some measures organizations can consider to not just strengthen data privacy but also ensure better processes are in place when it comes to using AI tools and applications for work.

Grant Case, Field Chief Data Officer for Asia Pacific & Japan at Dataiku

Treat your data environment like a chemical storage facility. Chemicals decay. They become volatile. Stored carelessly, they turn from asset to liability. Data works the same way.

One Australian firm proved the point in 2023 when attackers breached its systems, exposing 7.9 million driver's licence records. Sixty percent of that data was more than a decade old. One man's details, captured during a couch purchase fifteen years earlier, were stolen even though he had not taken out credit in ten years.

Long unused and orphaned data is a pressurised drum with no label and a failing seal.

Most organizations accumulate data because storage is cheap and deletion requires decisions. But cheap storage does not mean low risk. You must protect, explain, and defend every record retained.

Ask hard questions to expose whether your practices are sound.

If the answers make you uncomfortable, discomfort is the signal, not the problem.

Patrick Harding, Chief Product Architect at Ping Identity

This week offers an opportunity to pause and assess the rapidly evolving landscape of digital trust, as privacy really boils down to choice and trust around how personal data is being used. Data privacy is no longer a passing concern for consumers – it has become a defining factor in how they judge brands, with three-quarters now more worried about the safety of their personal data than they were five years ago, and a mere 14% trusting major organizations to handle identity data responsibly.

Whether it’s social engineering, state-sponsored impersonation or account takeover risks, AI will continue to test what we know to be true. As threats advance and AI agents increasingly act on behalf of humans, only the continuously verified should be trusted as authentic.

For businesses, the path forward is clear: trust must be earned through transparency, verification, and restraint in how personal data is collected and used. The businesses that adopt a “verify everything” approach that puts privacy at the center and builds confidence across every identity, every interaction, and every decision, will have the competitive edge.

Maurizio Garavello, SVP for Asia Pacific & Japan, Qlik

As AI becomes more autonomous, data privacy stops being a compliance checkbox and becomes a design principle. You can’t build trusted AI on opaque data or unclear ownership. Organizations need to know where data lives, who can act on it, and how decisions are governed – especially as agents begin to operate on their behalf. Privacy, governance, and transparency are what turn AI from a risk into a reliable partner. Without that trust layer, scale simply won’t happen.

Ed Keisling, Chief AI Officer, Progress Software

Employees can be the first line of defence to strengthen data privacy if they are trained on and understand the risks to the business, but also to themselves when data is shared with external AI models. The use of AI further reinforces the need for existing data policies that organisations have in place. They need to minimise the amount of data that is collected, retained, and shared.

Privacy requirements about what data is shared and how it is used must be incorporated into product and system architectures. As AI adoption expands, clear policies must be created for model training data, retention, and logging to avoid creating additional liabilities. And as with any policy, these processes must be regularly audited and tested to ensure compliance and continuous improvement.

Wee Tee Hsien, Chief Executive Officer at FUJIFILM Business Innovation Singapore

To strengthen data privacy in an AI-driven world, organisations must first shift their mindset from compliance to stewardship. Privacy should be embedded by design, not retrofitted after deployment. This starts with clarity of purpose and proportionality. Organizations should only collect and use data that is necessary to deliver a defined business outcome, with clear boundaries on how data can be reused, retained, or used for model training. When AI initiatives begin with disciplined data governance, the line between innovation and responsible use becomes far clearer.

Second, organizations need to do better on control, transparency, and capability. Customers and employees must retain meaningful control over their data, including visibility into how AI systems use it and what the systems do not do. This requires robust data inventories, clear documentation of AI models, and clear explanations that go beyond technical teams to business leaders and end users.

At the same time, organizations must invest in upskilling their workforce. We also see privacy as a shared organisational responsibility, not just a legal or IT issue. Companies that demonstrate strong responsible AI frameworks, supported by education and clear operating rules, can accelerate adoption rather than slow it.

Rachel Ler, Area Vice President of Asia at Fastly

Organizations should embed privacy by design into their AI strategies from the outset. This includes limiting data collection, improving visibility across hybrid and multi-cloud environments, and enforcing consistent security controls. Channel partners also play a vital role as trusted advisors, helping customers design secure, compliant architectures and clearly communicate data practices.

At Fastly, we believe securing and processing data at the edge and closer to users helps organisations reduce risk, meet local requirements, and deliver trusted digital experiences across Asia.

Lim Hsin Yin, Vice President of Sales, ASEAN at Cohesity

Strengthening data privacy today requires a shift from static controls to continuous cyber resilience. Organizations need adaptive architectures that monitor data across systems, applications, and IT infrastructure, ensuring it can be recovered rapidly if compromised.

Regular testing, clear ownership, and full visibility are critical. Privacy is no longer just an IT responsibility—it requires collaboration across leadership, operations, and legal teams.

As a first step, organizations will benefit from deploying proper classification tools and policies to understand what data they have and where it is located, and to map it by sensitivity and risk.

For instance, organizations can identify which data must be under strict control – this typically includes personally identifiable information (PII) and financial or medical records. It is also critical to set corporate procedures and policies related to data security, retention and disposal schedules, records management, information sharing, and privacy. As data custodians, organizations can help ensure compliance by establishing how they operate and share information with their partners, stakeholders and suppliers, to safeguard their most important and confidential data from breaches.
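To make the classification step concrete, here is a minimal sketch of tagging records by sensitivity so that retention and access policies can be applied consistently. The field names and tier labels are illustrative assumptions, not any vendor's schema.

```python
# Minimal sketch: map fields to sensitivity tiers, then classify a
# record by the strictest tier it contains. Field names and tiers are
# assumed for illustration only.

SENSITIVE_FIELDS = {
    "national_id": "restricted",    # PII: strictest controls
    "card_number": "restricted",    # financial data
    "diagnosis": "restricted",      # medical data
    "email": "confidential",
    "purchase_history": "internal",
}

TIER_ORDER = ["public", "internal", "confidential", "restricted"]

def classify_record(record: dict) -> str:
    """Return the highest sensitivity tier present in a record."""
    tier = "public"
    for field in record:
        field_tier = SENSITIVE_FIELDS.get(field, "public")
        if TIER_ORDER.index(field_tier) > TIER_ORDER.index(tier):
            tier = field_tier
    return tier

record = {"email": "a@example.com", "card_number": "4111..."}
print(classify_record(record))  # the strictest field wins: restricted
```

Once every record carries a tier, retention schedules and access controls can key off the tier rather than being decided per dataset.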

From an operational standpoint, by rehearsing for disruption, baselining risk, and embedding privacy into everyday processes, organizations can safeguard sensitive data, maintain customer trust, and stay operational even amid unpredictable cyber threats.

Remus Lim, Senior VP, Asia Pacific, Cloudera

As AI adoption accelerates, organizations must move data privacy upstream and treat it as part of the AI engineering lifecycle instead of a downstream compliance task. The first step is improving data visibility. Blind spots in data access make it difficult to prevent sensitive information from slipping into training corpora, evaluation sets, or prompt libraries, especially when teams are moving quickly to build and scale AI use cases.

Second, organizations should prioritize privacy-enhancing techniques, including the use of synthetic data, to reduce exposure to privacy risks. By generating datasets that preserve statistical patterns without exposing real personal information, synthetic data has the potential to reduce reliance on highly sensitive data while enabling model development and evaluation to move forward. Handling it responsibly involves defining whether data is used for training, evaluation, red-teaming, or system testing; generating datasets based on clear utility targets; and checking for memorization risk or the presence of overly unique or reconstructable examples.
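The two ideas above can be sketched under heavy simplification: generate synthetic rows that preserve coarse per-column statistics of a real numeric dataset, then check the output for memorization, i.e. synthetic rows that reproduce real rows. This is a toy illustration, not a production privacy technique; real synthetic-data tools use far richer models and stronger privacy tests.

```python
# Toy sketch: per-column Gaussian synthesis plus an exact-duplicate
# memorization check. Real tooling would model correlations and apply
# formal privacy guarantees; this only illustrates the workflow.
import random

def synthesize(real_rows, n):
    """Draw synthetic rows from each column's mean and stddev."""
    cols = list(zip(*real_rows))
    stats = []
    for col in cols:
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / len(col)
        stats.append((mean, var ** 0.5))
    return [tuple(random.gauss(m, s) for m, s in stats) for _ in range(n)]

def memorization_rate(real_rows, synth_rows, tol=1e-9):
    """Fraction of synthetic rows that reproduce a real row exactly."""
    hits = sum(
        any(all(abs(a - b) <= tol for a, b in zip(s, r)) for r in real_rows)
        for s in synth_rows
    )
    return hits / len(synth_rows)

real = [(34, 52000.0), (29, 61000.0), (45, 87000.0)]  # e.g. age, salary
synth = synthesize(real, 100)
print(f"memorization rate: {memorization_rate(real, synth)}")
```

The useful habit is the second function: before releasing a synthetic dataset for training or evaluation, measure how much of it is indistinguishable from the source data.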

Governance must be continuous, with privacy controls evolving alongside AI use cases, models, and regulations. The key to scaling AI with confidence lies in embedding governance, documentation, and accountability into everyday AI workflows.

Beni Sia, General Manager & Senior Vice President, Asia Pacific & Japan, Veeam Software

Organizations can strengthen data privacy by focusing on three core pillars: visibility, governance, and resilience. It starts with gaining clear visibility into what sensitive data exists, where it resides, and who has access, because without this clarity, protection is impossible. Next, enforce robust governance through least-privilege access, strong authentication, and clear retention policies to ensure data is used only for its intended purpose and retained only as long as necessary.

Finally, build resilience into the process, as privacy often fails during disruption; reliable recovery capabilities allow teams to restore systems quickly, avoid risky workarounds, and minimise exposure during a crisis. Ultimately, better visibility, stronger governance, and proven recovery readiness are the most practical ways to raise privacy standards while still enabling safe adoption of technologies like AI.

Kumar Mitra, Executive Director & General Manager, Infrastructure Solutions Group, Greater Asia Pacific, Lenovo

Strengthening data privacy starts with designing AI responsibly, not adding controls later. As AI moves from pilots into production, privacy and security must be built into infrastructure, data architecture, and operating models from day one.

First, organizations should adopt Hybrid AI approaches that provide greater control over where data lives and how it is processed. Sensitive, regulated, or mission-critical workloads often need to remain on-premises or at the edge, where data sovereignty, security, and compliance can be enforced.

Second, CIOs need to embed responsible AI principles into how AI is developed and deployed, including clear governance, transparency around data usage, and defined accountability for AI-driven decisions. These guardrails are essential for building trust as AI systems become more autonomous.

Finally, data privacy is not a one-time exercise. Governance frameworks must evolve alongside AI use cases and regulatory expectations. Organizations that continuously review controls and align technology decisions with risk and compliance objectives will be better positioned to scale AI securely and responsibly.

Marco Zhang, Solutions Engineering Director, Asia Pacific and Japan (APJ) at Saviynt

Data privacy weakens when access becomes excessive, long-lived, and poorly understood. In many organizations, identities (human, machine, and increasingly AI-driven) accumulate permissions over time that far exceed what they actually need. This buildup of access creates conditions where privacy risk grows unnoticed, especially as cloud services, automation, and AI workflows expand the number of entry points.

Privacy regulations set the baseline, not the finish line. Organizations need real-time visibility into who has access to what data, why, and for how long. Identity is the control plane for privacy, especially as AI systems and non-human identities multiply.

Standing access is one of the biggest privacy risks. The shift leaders need to make is mental before it’s technical. Privilege should be treated as temporary by default, granted with intent, and continuously reassessed. Access that no longer serves a clear business purpose should expire automatically, not wait for a quarterly review or an incident to prompt action.
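"Temporary by default" can be sketched very simply: every grant carries a stated purpose and an expiry, and lookups treat expired grants as absent, so revocation requires no review cycle. The class and method names below are illustrative assumptions, not any identity product's API.

```python
# Sketch of time-bound access: grants record intent and a TTL, and
# expired grants simply stop matching. Names are illustrative only.
import time

class AccessGrant:
    def __init__(self, identity, resource, purpose, ttl_seconds):
        self.identity = identity
        self.resource = resource
        self.purpose = purpose                     # intent recorded at grant time
        self.expires_at = time.time() + ttl_seconds

    def is_active(self):
        return time.time() < self.expires_at

class AccessStore:
    def __init__(self):
        self.grants = []

    def grant(self, identity, resource, purpose, ttl_seconds):
        self.grants.append(AccessGrant(identity, resource, purpose, ttl_seconds))

    def has_access(self, identity, resource):
        # Expired grants never match, so access lapses automatically
        # instead of waiting for a quarterly review.
        return any(
            g.identity == identity and g.resource == resource and g.is_active()
            for g in self.grants
        )

store = AccessStore()
store.grant("svc-report-bot", "customers_db", "monthly billing report", ttl_seconds=1)
print(store.has_access("svc-report-bot", "customers_db"))  # True while active
time.sleep(1.1)
print(store.has_access("svc-report-bot", "customers_db"))  # False after expiry
```

The design point is that expiry is the default path, not an exception: nothing has to notice the grant and revoke it.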

AI must be governed like a first-class citizen. AI models, service accounts, and bots should be governed with the same rigour as human users. This includes enforcing least privilege, monitoring data usage patterns, and embedding privacy controls directly into AI pipelines rather than bolting them on later.

Strong data privacy today comes from discipline. That includes knowing every identity in your environment, granting access only at the moment it’s needed, removing it automatically when it’s no longer needed, and monitoring behaviour continuously, because misuse is often accidental before it becomes malicious. When access is precise and time-bound, privacy strengthens naturally. The real win is prevention that operates quietly in the background, rather than high-profile remediation after the fact.

David Irecki, CTO for APJ at Boomi

Most organizations don’t have a data privacy problem; they have a data visibility and control problem.

Data is spread across cloud platforms, legacy systems, partners, and now AI tools, which makes privacy hard to manage without strong foundations. Strengthening privacy starts with understanding where data lives, how it moves, and who or what has access to it.

This is why modern iPaaS has become so important. By connecting fragmented systems and standardising how data flows, organisations can reduce duplication, enforce privacy policies consistently, and avoid unmanaged data exposure.

Strong data privacy isn’t just about compliance; it’s about trust. Organizations that invest in connected, governed data foundations are better positioned to use AI confidently and responsibly.

Han Tiong Law, Regional CTO, ASEAN and Greater China, Rimini Street

Organizations can strengthen data privacy by embedding it into their core operations and modernization strategies, rather than treating it as a compliance checkbox. This starts with robust data governance—knowing where data resides, who has access, and how it flows across ERP and connected systems. As enterprises extend the life of these platforms while layering in AI and analytics, privacy-by-design and human-in-the-loop controls become essential.

End-to-end processes ensure integrity from data capture to actionable insights, while role-based governance ensures that AI recommendations align with specific responsibilities. The payoff is significant: reduced regulatory risk, faster decision-making, and improved customer trust—all of which translate into tangible business outcomes like operational efficiency, cost savings, and accelerated innovation. At Rimini Street, we advocate for a trust-first approach that enables organizations to innovate securely while protecting long-term business value.