Confluent simplifies data streaming with Tableflow and Apache Flink
Announced at Confluent Current in Bangalore, India, Tableflow simplifies the integration between operational data and analytical systems, while new capabilities in Apache Flink streamline and simplify the development of real-time AI applications.
For most organizations, data streaming is seen as a more effective and efficient way to move their data. The inherent challenge, however, is that streaming is often perceived to be expensive.
According to Shaun Clowes, Chief Product Officer at Confluent, the evolution of data streaming over the past decade has actually made it far more cost-effective. In a conversation with CRN Asia during the Current conference in Bangalore, Clowes noted that there are now different streaming offerings that allow data to be moved very inexpensively.
“If the data is high volume, but it's not particularly latency sensitive, you can move it very inexpensively. If the data is high volume, but needs low latency, you can also move it inexpensively. It's a different type of cost. Different Kafka offerings let businesses choose the right offering for their specific type of data but still use it and move it all using just one approach, which really opens up a whole bunch of opportunities,” said Clowes.
Clowes added that far more data can be moved with streaming today than in previous iterations of the technology, and at a lower cost.
Tableflow and Apache Flink
At Current 2025, the data streaming pioneer made two major announcements. First, Confluent announced significant advancements in Tableflow, giving users a faster and less complicated way of accessing data from their data lakes and warehouses.
According to Clowes, Tableflow simplifies the integration between operational data and analytical systems. It continuously updates tables used for analytics and AI with the exact same data from business applications connected to Confluent Cloud.
Developer teams remain challenged in getting Kafka data into a data lake; the process is difficult to manage and can drive up costs. Beyond simplifying that process, Tableflow enables teams to take the work they are already doing and have the same real-time, reliable, reusable data appear in their data warehouse with virtually no extra effort.
“Everyone is super excited about Tableflow because it takes all of the work that they do to move data using streaming, to process data in its streaming form, to govern it, to basically get all the real time data and have it be immediately available in their data warehouse with no additional work at all. It literally appears as an Iceberg or Delta table that they can just immediately use. They will get more of their data more easily accessible, and they can bring it to bear on all sorts of interesting problems. And obviously, the one that everybody is thinking about right now is AI,” explained Clowes.
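Because Tableflow materializes topics as tables in open formats, the result can be queried with standard lakehouse tooling. As a rough sketch of what that looks like from the analytics side (the catalog endpoint, credentials, and table name below are illustrative placeholders, not real Confluent values), a PyIceberg client could read such a table directly:

```python
# Sketch: reading a Tableflow-materialized Kafka topic as an Apache
# Iceberg table with PyIceberg. The endpoint, token, and table name
# are placeholders for illustration only.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "tableflow",                                        # local alias for this config
    **{
        "type": "rest",                                 # an Iceberg REST catalog
        "uri": "https://<tableflow-endpoint>/iceberg",  # placeholder endpoint
        "token": "<api-token>",                         # placeholder credential
    },
)

# Load the table that mirrors a Kafka topic (placeholder namespace/name).
orders = catalog.load_table("my_cluster.orders")

# Query it like any other Iceberg table: scan, then convert to pandas.
df = orders.scan(limit=100).to_pandas()
print(df.head())
```

The point of the quote is that no pipeline sits between the stream and this query; the table is kept in sync continuously by Tableflow itself.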
This is where Confluent’s second announcement comes in: new capabilities in Confluent Cloud for Apache Flink that streamline and simplify the development of real-time AI applications. Among them, Flink Native Inference cuts workflow complexity by enabling teams to run any open-source AI model directly in Confluent Cloud.
“Customers are really excited about Flink because of its ability to take complicated transformations, joins, processing, and put it directly into Flink statements and have it just work. Customers do joins, aggregations, roll-ups, processing of data in the way they might normally do it in a data warehouse, but do it trivially using Flink. It just works, and the output is still real time. And in many cases, they're actually less expensive to run as a query than if they put the same query in a data warehouse. So, customers are actually saving costs,” said Clowes.
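The transformations Clowes describes map onto ordinary Flink SQL. The following is a minimal open-source PyFlink sketch of such a continuous roll-up; the table schema and the datagen source are invented for illustration, and in Confluent Cloud the same SQL would run as a managed Flink statement over Kafka topics rather than as a local job:

```python
# Minimal PyFlink sketch of a continuous aggregation. The schema and
# datagen source are illustrative stand-ins for a Kafka-backed table.
from pyflink.table import EnvironmentSettings, TableEnvironment

env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# A mock unbounded source standing in for a real stream of orders.
env.execute_sql("""
    CREATE TABLE orders (
        product_id INT,
        amount     DOUBLE
    ) WITH (
        'connector' = 'datagen',
        'fields.product_id.min' = '1',
        'fields.product_id.max' = '5'
    )
""")

# A continuous roll-up: per-product counts and revenue, updated as
# each new event arrives. The output is itself a real-time stream.
result = env.sql_query("""
    SELECT product_id, COUNT(*) AS order_cnt, SUM(amount) AS revenue
    FROM orders
    GROUP BY product_id
""")
result.execute().print()  # prints a continuously updating changelog
```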
Streaming data for AI
Clowes pointed out that one of the reasons businesses have yet to move beyond proof-of-concepts in AI is that they couldn't safely deploy these experiences at scale: they couldn't get the data to the AI.
“The hard part is not building the agents. It's having the data be reliable and reusable. It's about getting the right data to the AI at the right time. But once you do that, you can build these things very easily. And if you use this type of architecture, you can deploy them into production safely at scale,” added Clowes.
On concerns around data privacy and security, Clowes said Confluent has invested heavily in governance capabilities that give organizations visibility into what is in their streams.
“Customers know what's going on in real time and then apply different rules to how that data is treated. So, for example, you can even encrypt data end-to-end, all the way from the moment it's first produced to the final consumer, so that nowhere is the data ever stored or moved in an unencrypted fashion. We've also got a whole bunch of capabilities in the platform that enable you to understand what the data is, understand who has access to it, understand how it's being used, and then encrypt it so that you can be sure that the data is only used for the specific use cases that make sense for that specific data. There's no individual who's seeing information they shouldn't see,” explained Clowes.
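Confluent offers this as a managed, governance-driven capability. Purely as a generic illustration of the end-to-end pattern Clowes describes, and not Confluent's actual API, a producer can encrypt a sensitive field before it ever leaves the application, so only a final consumer holding the key sees plaintext:

```python
# Generic sketch of end-to-end field encryption over Kafka: the broker
# and any intermediate storage only ever see ciphertext. This is a
# hand-rolled illustration, not Confluent's managed encryption feature.
import json
from confluent_kafka import Producer, Consumer
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetched from a KMS
cipher = Fernet(key)

# Producer side: encrypt the sensitive field before producing.
producer = Producer({"bootstrap.servers": "localhost:9092"})
record = {"order_id": 42,
          "card_number": cipher.encrypt(b"4111-1111-1111-1111").decode()}
producer.produce("orders", value=json.dumps(record).encode())
producer.flush()

# Consumer side: only a consumer holding the key can read the field.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "payments",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    payload = json.loads(msg.value())
    card = cipher.decrypt(payload["card_number"].encode())
    print(payload["order_id"], card.decode())
consumer.close()
```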
At the end of the day, Confluent aims to make it easier for businesses to get to all of their data with different types of streaming.
“We want to make Flink even more powerful so that you can build more powerful AI or general processing applications more quickly than we could before, even on top of real-time data. And with Tableflow and future evolution, we want to make it even easier to query and work with that data, as well as have AI work with their data. We want data to be practically useful for all different use cases,” concluded Clowes.