There should be a fair value exchange between data and AI, says SAS
In 2026, organizations should now look at whether they are creating a fair value exchange for data or putting a price on privacy, explains Deepak Ramanathan, Vice President, Global Technology Practice at SAS
SAS has worked for decades to keep the line between AI and data privacy intact. Even before the hype around generative AI models such as ChatGPT, SAS was advocating for better data management.
A leader in data and AI, recognized by IDC as a leader in data integration software platforms, the vendor offers software and industry-specific solutions that continue to enable organizations to transform data into trusted decisions.
But as the line between AI and data privacy thins, how can organizations make the right decisions, getting the best outcomes from their AI models without compromising their data?
According to Deepak Ramanathan, Vice President, Global Technology Practice at SAS, organizations should draw the line between AI and data privacy by asking whether they are using data in a way a customer would recognize as fair and expected, and whether they can justify it clearly.
“If a use case needs more personal data than the outcome justifies, or it relies on customers not noticing what is happening, then it is already on the wrong side of that line. What has changed is scale. AI can connect and infer meaning across datasets that were never intended to speak to each other, so ‘we collected it legally’ is no longer a sufficient argument,” Ramanathan said.
He believes the question should be whether organizations can defend the use of data as fair, necessary, and expected.
“Another practical test is control; if you cannot trace where sensitive data came from, who touched it, which models learned from it, and where outputs are being used, then you do not have privacy risk under control, even if you have policies that say otherwise. In my experience, the biggest privacy failures rarely come from one dramatic decision. They come from small governance gaps multiplied across too many systems, too many teams, and too many vendors. In the AI era, privacy becomes an operating discipline, not a legal perimeter,” he explained.
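Ramanathan's "practical test of control" implies keeping an auditable record of a dataset's history. As a minimal sketch (the dataset names, actors, and actions below are illustrative, not from SAS), an append-only lineage log can answer exactly the questions he lists: where the data came from, who touched it, and which models learned from it:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    """One step in a dataset's history: who touched it and what they did."""
    actor: str    # team or service that handled the data
    action: str   # e.g. "ingested", "transformed", "trained_model"
    target: str   # system or model that received the data
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class LineageLog:
    """Append-only record for one dataset, answering: where did it
    come from, who touched it, and which models learned from it?"""

    def __init__(self, dataset: str, source: str):
        self.dataset = dataset
        self.events = [LineageEvent("ingest-service", "ingested", source)]

    def record(self, actor: str, action: str, target: str) -> None:
        self.events.append(LineageEvent(actor, action, target))

    def models_trained_on(self) -> list[str]:
        return [e.target for e in self.events if e.action == "trained_model"]

# Hypothetical usage: trace a customer dataset into a model.
log = LineageLog("customer_emails", source="crm_export")
log.record("data-science", "transformed", "feature_store")
log.record("data-science", "trained_model", "churn_model_v2")
print(log.models_trained_on())  # -> ['churn_model_v2']
```

A real deployment would persist these events and enforce that every pipeline step writes one, but even this shape makes the control question answerable rather than a matter of policy documents.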
Strengthening data privacy
Ramanathan pointed out that most organizations can materially improve privacy in 2026 by returning to fundamentals rather than chasing every tool on the market.
“The foundation is strong, practical data governance - knowing what data you have, where it resides, who owns it, and how it moves across your systems. Privacy breaks down quickly when data quality is poor, because teams cannot confidently classify what they have, detect misuse, or respond fast when something goes wrong. If your customer records are inconsistent across systems, your privacy posture is inconsistent too. Leaders should shorten the distance between detection and remediation, insisting on operational readiness that treats privacy issues with the same urgency as fraud or uptime,” he said.
Ramanathan added that organizations must focus on reducing unnecessary exposure. This means businesses should collect only the data that is truly needed, store it for as little time as possible, and control where and how it is replicated. He believes this is where synthetic data is becoming a serious advantage as it allows teams to develop and validate models with representative datasets while materially reducing the risk of exposing real customer information.
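To make the synthetic-data idea concrete, here is a deliberately simple sketch (the records and fields are invented for illustration; this is not SAS's method). It draws each field independently from the values observed in real records, so the generated rows look representative but no row maps back to a single real customer:

```python
import random

# Hypothetical real customer records (fields are illustrative).
real = [
    {"age": 34, "region": "north", "spend": 120.0},
    {"age": 45, "region": "south", "spend": 80.0},
    {"age": 29, "region": "north", "spend": 200.0},
    {"age": 52, "region": "east",  "spend": 60.0},
]

def synthesize(records: list[dict], n: int, seed: int = 0) -> list[dict]:
    """Draw each field independently from its observed values, breaking
    the link back to any single real customer. Production-grade tools go
    much further, preserving cross-field correlations and adding noise."""
    rng = random.Random(seed)
    columns = {f: [r[f] for r in records] for f in records[0]}
    return [
        {f: rng.choice(values) for f, values in columns.items()}
        for _ in range(n)
    ]

fake = synthesize(real, n=3)
for row in fake:
    print(row)
```

Teams can then build and validate models against `fake` instead of `real`, which is the risk-reduction Ramanathan describes: the development loop never needs the actual customer records.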
“The goal is not to slow innovation; it is to make innovation safe enough to scale without gambling with trust,” he added.
Creating a fair value exchange
Interestingly, Ramanathan also pointed out that organizations should now look at whether they are creating a fair value exchange for data or putting a price on privacy.
“This is becoming one of the defining issues of 2026. We are seeing more “data for benefits” models across industries, from financial services to retail to health. They can be legitimate and even beneficial, but only if the exchange is transparent, truly optional, and not punitive for people who choose privacy. If better pricing, faster service, or access to essential products increasingly requires deeper personal disclosure, then organizations risk creating a two-tier system where privacy becomes something only some customers can afford,” he explained.
For Ramanathan, this will invite regulatory scrutiny, but more importantly, it will erode trust.
“The way forward is to be explicit about value and restraint. If you ask customers for more data, explain what they get in return in plain language, and prove you are using the data in narrow, accountable ways. Fraud prevention is a strong example because many customers will share more if it clearly reduces risk to them, but the safeguards must be visible: data minimisation, clear retention limits, and strong governance across partners,” he said.
As such, Ramanathan believes that when it comes to the line between AI and data privacy, consent should feel like a choice, not a negotiation.
“In 2026, trust will not be won by clever wording, but by demonstrable control and fair dealing,” he concluded.