How Qlik is enabling customers with trustable data

As businesses continue to be burdened by some of the investments and choices they have made with hyperscalers, Qlik’s vendor-agnostic approach and open API give it an edge, says Charlie Farah, CTO at Qlik.

With over 40,000 customers globally, Qlik focuses on converting complex data landscapes into actionable insights and driving strategic business outcomes. In the Asia Pacific region, Qlik works with a number of organizations, not only helping them make the most of their data but also helping them achieve greater cost savings in managing and using it.

In a recent media briefing, Charlie Farah (pictured above), CTO at Qlik, stated that because the data company is completely vendor agnostic and has an open API, it is capable of working with all the hyperscalers and data cloud providers.

“We want to be able to provide a solution that's going to drive outcomes for our customers. We don't care whether your data sits in AWS, if it's in Microsoft, in Google, etc. We drive a change where we want to be agnostic. We put the customer first in all our decision making and all our vision across our product to really drive outcomes and make sure our customers get value from their investments. And ultimately, what we also do then is look at it from a holistic perspective across the entire data analytics and AI sphere. So, it's not just about creating a solution for data engineers. It's not just creating a solution for the business users,” Farah said.

Looking at the whole paradigm of how to use data, Farah believes it's not just about having data architects satisfied with the technology, but about driving value from the insights they're creating.

According to Farah, businesses continue to be burdened by some of the investments and choices they've made with some of the hyperscalers.

“They get surprised by consumption bills at the end of every quarter that they haven't budgeted for, so they need to reassess programs that they've prioritized over the course of the year. They need to know how they're going to shift things around to manage and balance their budgets,” Farah explained.

Understanding data with Qlik

This is where Qlik’s Open Lakehouse comes in. Now generally available, it is a fully managed Apache Iceberg service in Qlik Talend Cloud that delivers real-time pipelines, automated Iceberg optimization, and true multi-engine access without lock-in. Farah believes this helps optimize those capabilities across organizations, resulting in an AI-ready data foundation that cuts the time and cost between data and action.

Deployed in the customer’s own cloud account with bring-your-own-compute, Qlik Open Lakehouse combines change data capture (CDC) ingestion with automatic Iceberg optimization and multi-engine access so teams can use the tools they already rely on, including Amazon Athena, Snowflake, Spark, Trino, and Amazon SageMaker for machine learning (ML).
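The CDC ingestion described above can be pictured with a minimal sketch: change events (inserts, updates, deletes) captured from a source system are applied in order to a keyed target table. The event shape and `apply_changes` helper below are illustrative assumptions, not Qlik Open Lakehouse's actual API.

```python
# Minimal, illustrative sketch of change data capture (CDC) apply logic.
# The event format and function name are assumptions for illustration;
# they are not Qlik Open Lakehouse's actual API.

def apply_changes(table: dict, events: list[dict]) -> dict:
    """Apply an ordered stream of CDC events to a keyed target table."""
    for event in events:
        key = event["key"]
        if event["op"] in ("insert", "update"):
            table[key] = event["row"]   # upsert the latest row image
        elif event["op"] == "delete":
            table.pop(key, None)        # remove the row if present
    return table

# Example: replicate three source changes into an empty target table.
events = [
    {"op": "insert", "key": 1, "row": {"id": 1, "name": "Alice"}},
    {"op": "update", "key": 1, "row": {"id": 1, "name": "Alicia"}},
    {"op": "delete", "key": 1},
]
target = apply_changes({}, events)
print(target)  # {}
```

In a real pipeline the target would be an Iceberg table rather than a dict, and ordering guarantees would come from the capture log, but the upsert/delete semantics are the same.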

“One of the things we do is try to speed up the ingestion of data, the use of data, but also the amount of data consumption across the organization. So, we can typically save an organization about 50% of their consumption or their utilization across some of those hyperscalers as well,” Farah said.

“Another solution we have is the Qlik Trust Score, which can be tailored to the data quality needs of your company and gives you visibility into the health of any dataset or data product. It is based on our Talend acquisition a couple of years ago, a very mature data quality and governance solution in the market. So, we've now integrated it across our entire portfolio. The idea here is that as you onboard data, it assigns a trust score. It looks at the validity, completeness, accuracy, and timeliness of the data as well. So, it gives an organization insight into whether this data needs to be cleaned and curated, and whether there are invalid files, rows, or tables, or anything that's missing,” Farah explained.
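The dimensions Farah lists (validity, completeness, accuracy, timeliness) lend themselves to a simple weighted combination. The weights, formula, and sample values below are a hypothetical sketch of how such a score could be computed, not the actual Qlik Trust Score calculation.

```python
# Hypothetical sketch of a weighted data-quality "trust score".
# The dimensions follow the quote, but the weights, formula, and
# sample values are illustrative assumptions, not Qlik's calculation.

def trust_score(dimensions: dict[str, float],
                weights: dict[str, float]) -> float:
    """Combine per-dimension quality ratios (0.0-1.0) into a 0-100 score."""
    total_weight = sum(weights.values())
    weighted = sum(dimensions[d] * w for d, w in weights.items())
    return round(100 * weighted / total_weight, 1)

# Each ratio would come from profiling a dataset, e.g. completeness =
# non-null cells / total cells. Values here are made up for illustration.
dims = {"validity": 0.98, "completeness": 0.85,
        "accuracy": 0.92, "timeliness": 0.70}
wts = {"validity": 2.0, "completeness": 2.0,
       "accuracy": 3.0, "timeliness": 1.0}
print(trust_score(dims, wts))  # 89.0
```

Tailoring the score to a company's needs, as Farah describes, would amount to choosing which dimensions to measure and how heavily to weight each one.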

Farah also added that the solution lets users understand where they need to invest their time, rather than pouring all that time into analytics products or data solutions only to discover the data is incomplete and lose that trust.

“Once you lose that trust with the business user, it's hard to drag them back, and they start using their intuition, how things have always been done, and they don't trust the systems that you're trying to put in place. So, we feel this is quite critical in building that sort of capability, that trust, that awareness across an organization to really have the bigger impact with the investments that organizations are making. And this is not the future. This is stuff that Qlik has been doing for 30 years,” he said.

Echoing Farah’s comments is Prof Khor-Ping Quek from the National University of Singapore. In the projects the university is involved in, he said, 70% to 80% of the effort organizations put into any AI project goes into cleaning up or finding the data.

“When we talk about data quality, tools like Qlik probably help to improve that a lot and keep the data at the source rather than moving it somewhere and duplicating it. Any duplication will introduce error. So, data provenance is important: whether you have a record of where the data is from, its origin, whether the data has been transformed, and so on, and whether the data conforms to regulation, the PDPA, GDPR, etc. These are all considerations that any data scientist getting data for an AI project will have to pay attention to,” said Prof Quek.
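The provenance record Prof Quek describes can be sketched as a small data structure: where a dataset came from and what transformations have touched it. The field names and example values below are assumptions for illustration, not a product schema or a lineage standard.

```python
# Illustrative sketch of a data-provenance record: a dataset's origin
# plus an ordered trail of transformations applied to it. Field names
# are assumptions for illustration, not a product schema or standard.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    dataset: str
    origin: str                                   # source system of record
    transformations: list[str] = field(default_factory=list)

    def record(self, step: str) -> None:
        """Append a transformation step to the lineage trail."""
        self.transformations.append(step)

# Example: track a dataset as it is cleaned and anonymised.
rec = ProvenanceRecord("customers", origin="crm.prod.orders_db")
rec.record("dropped rows with null email")
rec.record("masked personal identifiers for PDPA/GDPR compliance")
print(rec.transformations)
```

Keeping such a trail alongside the data is what lets a data scientist answer, per Prof Quek, where the data is from, whether it has been transformed, and whether it conforms to regulation.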

Qlik in APAC

In the APAC market, Maurizio Garavello, SVP APAC at Qlik, pointed out that the region is no different to any other market in general. Garavello explained that there is incredible pressure on any business leader to do things more efficiently, to do things faster, and to be closer to their customers.

“Those are generally the three main reasons why customers are looking at AI, or big enterprises are looking at AI - efficiency, cost reduction, doing things that you cannot do without AI. There are things that you just can’t do without AI. For example, knowing who your customer is, what problems they face, or whether this customer is having the same problem. You need to be able to connect all the dots to find a solution that can accelerate the business,” he said.

As the market accelerates, Garavello believes there is gigantic pressure on all companies to do more with AI. And that, he feels, can only be done with data that is trustable.

“It's a world where every company out there wants to talk about agentic AI, and 50% are investing in proofs of concept and so on. But the agentic AI doesn't know what to trust, right? And that's where we come in. The power of AI sits on the data. You need to be able to trust the data, as the data is everywhere. That's why we are advocating that the trust score of the data is the most important part of an agentic AI world. You need to know where the data is coming from,” added Garavello.

While many refer to data as the new oil, Garavello pointed out that while this makes sense, the reality is that the oil still needs to be extracted from the ground and refined before it can be used.

“The same applies to data as well. It's sitting everywhere, but you cannot do anything unless you have a system to query the real data, put a trust score on your data to see if it's trustable or not, and then consume your data through AI with ease,” he concluded.