Elastic is seeing a shift from answers to action with generative AI

“While a lot of initial generative AI work has been about asking questions to models and summarizing the data, the agentic push is seeing more organizations move towards how LLMs can get actions done,” explains Ajay Nair, General Manager of Platform at Elastic.


As organizations increase their generative AI capabilities, one of the common challenges they face is getting the best out of their AI. This includes ensuring that the outcomes they are getting from their AI, especially LLMs, are relevant to their business context.

This is an area Elastic is hoping to help businesses solve with its offerings. In a conversation with CRN Asia, Ajay Nair, General Manager of Platform at Elastic, explained that search is the engine that powers AI, and that the vendor sees generative AI as a fundamental transformation in how information is put to work for value for its customers.

“We believe search is the heart of AI and we excel at being exactly that. Generative AI has also led to this big push within companies of redefining what speed means. The pace to consume, process, and act on information has radically shifted, and technology has to keep up with it. So one of the biggest things we see customers trying to figure out today is having that flexibility and openness in their system and architecture, so that they have not just model choice, but also the ability to connect those models with their relevant data,” he explained.

According to Nair, one of the things the vendor has always talked about is its flexible and open approach to building, from its open source roots to the partnerships it has with all the different cloud providers.

Nair also believes that there is now a big shift from answers to action with generative AI. He explained that while a lot of initial generative AI work has been about asking questions to models and summarizing the data, the agentic push is seeing more organizations move towards how LLMs can get actions done.

“This requires you to have strong guardrails to reason about what your LLMs are doing, as well as giving them a predictable and deterministic way to go and act. So, I think this is where innovations like Elastic Agent Builder allow organizations to assemble those workflows in a deterministic way using our querying language. Elastic Workflows allows you to connect software and other service systems together to act on your behalf. When you take that whole package together, you are making sure that AI is not only using relevant data to act for you, but doing it in a way that gives you the flexibility to keep innovating and push hard to act on that data,” he explained.
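The distinction Nair draws, letting an LLM propose an action but executing it only through a fixed, auditable path, can be illustrated with a small dispatch table. This is a generic sketch of the pattern, with hypothetical function names, and is not Elastic Agent Builder's actual interface:

```python
# Generic sketch of a deterministic action layer for an LLM agent
# (illustrative only, not Elastic Agent Builder's interface): the model
# may only request actions from a fixed allowlist, and each action is
# ordinary, auditable code rather than model-generated behavior.

def create_ticket(summary: str) -> str:
    return f"ticket created: {summary}"

def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"

# The allowlist is the guardrail: anything not registered here cannot run.
ALLOWED_ACTIONS = {
    "create_ticket": create_ticket,
    "lookup_order": lookup_order,
}

def run_action(request: dict) -> str:
    """Execute an LLM-proposed action only if it is on the allowlist."""
    name = request.get("action")
    if name not in ALLOWED_ACTIONS:
        raise ValueError(f"action not permitted: {name}")
    return ALLOWED_ACTIONS[name](**request.get("args", {}))

# The LLM's structured output is treated as a request, never as code:
print(run_action({"action": "lookup_order", "args": {"order_id": "A-17"}}))
# → order A-17: shipped
```

Because every permitted action is plain code behind a fixed name, the workflow stays deterministic and observable even when the request originates from a model.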

The AI journey

For Nair, this has always been Elastic's core philosophy, and AI just gives the vendor an opportunity to push harder on that. What makes it more interesting is that Elastic’s customers are also on their own maturity journeys and engage the vendor as a trusted partner when dealing with these situations.

“The amount of unstructured data that most enterprises have in their companies is staggering today. It starts with a very simplistic problem of saying, how do I even access, connect, and use that data in a consistent format? Is there a pane of glass that allows me to kind of ingest the data and look at it? So, I think when we talk to customers, many of them have their operational data sitting in Elastic, and then they're looking for connectivity with some of their other software systems like Workday or Salesforce and otherwise, and saying, how do I act on that?”

Nair added that it then becomes a question of whether the relevance of that data actually matches what users are looking for. This is where he believes reasoning by the LLM is needed to ensure that hallucination is not happening.

“So, observability of what LLMs are doing on their behalf becomes another big pillar for customers. Because sometimes there's a lot of repeating what others are doing. For example, they see a GPT and they replicate a GPT, but it may not make sense in your business context as to what your workflow needs to be. So a lot of conversations we have initially with customers are about asking, where is the place that you're putting information to work, and how do you see that workflow actually having an impact? Elastic then shows where LLMs can actually help.”

Apart from that, Nair also pointed out that customers are starting to realize how much they have to internalize and understand in their own business workflows. Specifically, a lot of teams have built their own approaches to working with data and acting on it, or manual processes have taken the place of where automation could have filled in. Now AI is pushing that boundary a lot further for them, making them realize that some of these processes are not required, and even that some of these validations are not required.

“We specifically see this, for example, in customer service and response. You're used to this engagement model where a ticket has to come in, a human has to look at it, look something up from a deterministic, authoritative source, and then give a prescriptive answer. Then ask the question: how much of that actually requires human intervention? How much of that is actually deterministic? And you're seeing organizations push towards far higher automation on that. By using LLMs in our own customer support workflow, we improved mean time to first customer response by 23% and reduced assisted support volume by 7%,” said Nair.

Given this, Nair believes the maturity curve varies among customers. For some, there are basic problems, such as knowing what data they need to put to work and how to understand it. Others have access to the data, but the challenge is knowing what actions to put in place.

Knowing what works best

Nair highlighted that there are basically three areas organizations need to look at in their AI journey. First, there is trust. Organizations need a data platform that can ensure the LLM is giving out trusted information, which means data must be accessible broadly and securely, with high relevance.

“I truly believe Elastic is differentiated and market-leading on that capability compared to any other provider out there. Our latest innovations, including Better Binary Quantization, which compresses vector data by up to 32x while maintaining recall, let customers scale AI search workloads at a fraction of the cost. Search has been our bread and butter for a long time, and that's a winning game over there,” he said.

The second area is flexibility and openness. Nair shared that a lot of providers out there have simple answers if they’re picking a certain model or a certain class of data. At the same time, it is too early in the AI innovation cycle to be locked into one particular area. Hence, Nair believes the more flexibility organizations have about model choice and data that can be connected, the more they will be able to innovate.

“So, when you're making the choice, pick an open ecosystem. This should be one that allows you to pick the choices that you're going for,” Nair said.

The third area is all about acting on data. As many data systems stop at retrieval, organizations need to be able to connect to other systems, or to have a deterministic workflow to go with that. Nair explained that this becomes extremely critical in the new workflow.

“You need action on your data that's going on over there. Elastic differentiates itself in all of those three. You've got capabilities like agent builder workflows, inference service, and our general relevance engine. And I think that's what makes us a preferred choice for most customers looking to build these kinds of applications,” he explained.

Data Sovereignty

When asked how organizations ensure that the AI is not revealing sensitive data, Nair believes that this is where the choice of an organization’s context engineering platform or data platform makes such a huge difference.

“This is an unchanged problem. Even today, if you're a human acting on a system, you want to make sure the human only has access to information and data that they're privileged to access, and not the rest. So, if you take a system like Elastic, we give organizations deep privacy controls on the data. This includes what can be accessed, and making sure that the APIs and systems that access it are following those particular rules. It also includes enabling organizations to filter out data using either programmatic or non-programmatic methods,” said Nair.
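The principle Nair describes can be sketched as a pre-retrieval filter: before any document is handed to an LLM as context, it is checked against the caller's privileges. The snippet below is a minimal illustration of that idea; the function and field names are hypothetical and do not represent Elastic's API:

```python
# Minimal sketch of privilege-aware context filtering (hypothetical names,
# not Elastic's API): only documents the user is entitled to read are
# eligible to become LLM context, so the model never sees the rest.

def allowed_context(documents, user_roles):
    """Keep only documents whose required role is held by the user."""
    return [
        doc for doc in documents
        if doc.get("required_role") in user_roles
    ]

docs = [
    {"id": 1, "body": "Public product FAQ", "required_role": "employee"},
    {"id": 2, "body": "Payroll export", "required_role": "hr"},
    {"id": 3, "body": "Support runbook", "required_role": "employee"},
]

# An employee asking a question never has the payroll document in scope:
context = allowed_context(docs, user_roles={"employee"})
print([d["id"] for d in context])  # → [1, 3]
```

Enforcing the filter before retrieval results reach the model, rather than asking the model to withhold sensitive content, is what keeps the AI operating within the safe, bounded context Nair refers to.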

Put simply, Nair pointed out that this is something organizations have to be a lot more conscious about when they put AI to work. This is because AI will take whatever data it is given and act on it, and it is on organizations to make sure it is operating within a safe, bounded context.

“So the biggest action you can take is to work with a trusted data platform, one that operates where your data already lives and has experience organizing and enforcing data sovereignty at scale. This is why Elastic is available in 50-plus regions around the world, so customers can store data where they see fit, with compliance coverage across federal authorities throughout the world,” he concluded.