Nutanix focused on driving agentic AI, enabling opportunities for neoclouds and service providers

In his keynote at the .Next summit, Rajiv Ramaswami said that customers choose Nutanix not just for the value the vendor delivers, but for the simplicity it brings to their infrastructure, the reduction in total cost of ownership, and the flexibility, control and performance it provides.

With over 5,000 attendees at the Nutanix .Next summit in Chicago, Rajiv Ramaswami, CEO of Nutanix, reassured partners and customers in his keynote address that the vendor remains committed to them, especially amid the increasing challenges and demands they face in the industry today.

“We are committed to you all, our customers. And you choose us because of the value we deliver to you and the simplicity that we bring to your infrastructure, the total cost of ownership that we can help reduce as well as the flexibility, control and the performance that we deliver for you. We continue to support you with our world class support and maintaining an industry leading NPS of 90 plus,” Ramaswami said.

The vendor made several big announcements at the summit, following the Nutanix Agentic AI Solution announcement made at NVIDIA GTC a few weeks earlier. The full-stack platform, currently in early access, is designed to help enterprises build and operate AI applications on the Nutanix Cloud Platform (NCP).

At .Next, Nutanix unveiled new capabilities for the NCP solution. Specifically, these capabilities are designed to help organizations operate reliably as AI workloads expand, cloud environments grow more complex, and hardware supply constraints drive the need for more flexible infrastructure platforms.

What’s interesting is that these capabilities will also empower neoclouds to deliver secure, scalable AI services to organizations. With increasing concern over data security and growing demand for data sovereignty, neocloud providers are evolving into full AI service platforms while providing businesses with enterprise-grade security, performance and control over their data for AI.

As such, Nutanix will enable neoclouds to deliver a broader catalog of AI services including GPU-as-a-service, Kubernetes-as-a-service, and an enterprise-ready AI platform service powered by Nutanix Agentic AI.

Ramaswami also shared that Nutanix is accelerating its partnerships with service providers through the new Service Provider Central with NCP, which will be available in the second half of 2026. The solution aims to provide a clear path for disenfranchised VMware Cloud Service Provider partners to continue offering profitable services to their customers.

Service Provider Central will offer a single pane of glass through Nutanix Central enabling service providers to run multiple tenants with automated workflows on shared Nutanix infrastructure without compromising control or compliance.

“What Service Provider Central does is control resource sharing such as managing your operations or managing multiple tenants. For example, even if you're an enterprise, you will want your HR agents to be kept separated from your finance agents. If you're a service provider, every tenant, every customer needs to be isolated from the other. So, we now offer a full suite of multi-tenancy capabilities by allowing you to access and share common infrastructure with the security and the compliance and guardrails that you need to make this all real. And using this, you can also have a service catalog that allows you to deliver the services that you need,” Ramaswami said.

Partnership with NVIDIA and AMD

Ramaswami also highlighted Nutanix’s continued partnerships with both NVIDIA and AMD. With the AI factory delivered by server vendors, built on GPUs from NVIDIA and AMD, at the bottom of the stack and AI applications and models at the top, there is a critical layer of software in between that organizations would otherwise need to stitch together from open source components.

“You put together a number of these different components. You try to put it all together. And maybe if you're a huge company with a lot of resources, you can get that to go. But it's very hard for a normal company to do all of this. And that's where Nutanix can help. We announced our Agents Stack at NVIDIA GTC recently. And there's four elements of the stack. The first element is the AI services and the Kubernetes platform that it all runs on. The second is the underlying infrastructure that you've already come to know and love. This is our bread and butter. The third is data. Handling the data that's needed for all of these applications. And the fourth is the ability to operate all of this in a multi-tenant world,” Ramaswami explained.

On the very top layer, Ramaswami pointed out that developers need a rich set of services to develop and build their applications. Nutanix is unveiling a catalog of these services, all based on best-of-breed open source components that have been curated to make it easy for developers to stand them up.

“They include the developer tools that you need. They include the vector databases that your developers need. It includes all the ML operations components that you need and more. And with this, you can develop faster, with more agility, and run this anywhere, be it on-premises or in the public cloud. All of this will be available this summer,” he said.

The second big component of the agentic stack is running these applications. For AI applications, Ramaswami noted, performance becomes critical, and achieving bare metal-like performance on virtualized systems is key.

“At Nutanix, we've done a lot of work to make this work. We have optimized our AHV and Nutanix Flow for these GPU-based AI applications. AHV is now topology-aware. We know how to place your AI workloads onto the appropriate GPUs to maximize both utilization and performance. The same kind of work that we did with the hypervisor for compute-centric applications, we're now doing for GPU-centric applications. With Flow, we can offload a lot of the Flow functions to dedicated data processing units, freeing up the main CPU again for your compute tasks,” he said.
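The topology-aware placement Ramaswami describes can be illustrated with a minimal sketch. This is a toy model, not Nutanix's actual AHV scheduler: given GPUs grouped by NUMA node (the data structure and field names here are assumptions for illustration), prefer placing a multi-GPU workload on the least-utilized GPUs that share a single node, falling back to spanning nodes only when no single node can fit it.

```python
from collections import defaultdict

def place_workload(gpus, gpus_needed):
    """Pick GPUs for a workload, preferring GPUs on the same NUMA node.

    `gpus` is a list of dicts like {"id": 0, "numa_node": 0, "utilization": 0.2}.
    Illustrative only -- not Nutanix's actual scheduler.
    """
    by_node = defaultdict(list)
    for gpu in gpus:
        by_node[gpu["numa_node"]].append(gpu)
    # Candidate selections: for each node with enough GPUs, take its
    # least-loaded GPUs.
    candidates = [
        sorted(group, key=lambda g: g["utilization"])[:gpus_needed]
        for group in by_node.values()
        if len(group) >= gpus_needed
    ]
    if candidates:
        # Choose the node whose selected GPUs are least utilized overall.
        best = min(candidates, key=lambda sel: sum(g["utilization"] for g in sel))
        return [g["id"] for g in best]
    # Fall back to spanning nodes if no single node can fit the workload.
    ranked = sorted(gpus, key=lambda g: g["utilization"])
    return [g["id"] for g in ranked[:gpus_needed]]

gpus = [
    {"id": 0, "numa_node": 0, "utilization": 0.9},
    {"id": 1, "numa_node": 0, "utilization": 0.1},
    {"id": 2, "numa_node": 1, "utilization": 0.2},
    {"id": 3, "numa_node": 1, "utilization": 0.3},
]
print(place_workload(gpus, 2))  # → [2, 3]: both on node 1, lightly loaded
```

The point of the sketch is the preference order: co-location on one node first (to avoid cross-node traffic), utilization second.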

As data is the third component of the stack, the data foundation needs to keep up with the demands of AI. A critical part of this is working efficiently with GPUs.

“Some of the key functions that we do is to offload the key value cache, which is integral to a lot of these inferencing applications. It frees up expensive GPU footprint and offloads a lot of the cache into cheaper storage. We deliver low latency, high throughput streaming data from storage into GPU memory. As the data comes in, we also transform raw data into intelligence. We process and extract and vectorize the data so that your AI applications can consume it. And all of this is NVIDIA certified,” he said.
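The KV-cache offload Ramaswami describes can be sketched at a conceptual level. The class below is a toy two-tier cache, an assumption-laden illustration rather than Nutanix's implementation: a small "GPU" tier spills its least-recently-used entries to a larger, cheaper "storage" tier instead of discarding them, and promotes them back on reuse.

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier key-value cache: a small 'GPU' tier spills its
    least-recently-used entries to a larger 'storage' tier.
    Illustrative of the offload idea only, not Nutanix's product."""

    def __init__(self, gpu_capacity):
        self.gpu_capacity = gpu_capacity
        self.gpu = OrderedDict()   # scarce, fast tier (LRU order)
        self.storage = {}          # cheap, slower tier

    def put(self, token_id, kv_tensor):
        self.gpu[token_id] = kv_tensor
        self.gpu.move_to_end(token_id)
        while len(self.gpu) > self.gpu_capacity:
            # Offload the least-recently-used entry instead of discarding it.
            evicted_id, evicted_kv = self.gpu.popitem(last=False)
            self.storage[evicted_id] = evicted_kv

    def get(self, token_id):
        if token_id in self.gpu:
            self.gpu.move_to_end(token_id)
            return self.gpu[token_id]
        if token_id in self.storage:
            # Promote back to the fast tier on reuse.
            kv = self.storage.pop(token_id)
            self.put(token_id, kv)
            return kv
        return None

cache = TieredKVCache(gpu_capacity=2)
for t in range(4):
    cache.put(t, f"kv{t}")
print(sorted(cache.gpu))      # → [2, 3]: still resident in the fast tier
print(sorted(cache.storage))  # → [0, 1]: offloaded to cheaper storage
```

The trade-off it illustrates is the one from the keynote: expensive GPU memory holds only the hot working set, while the bulk of the cache lives on cheaper storage.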

NKP Metal

Nutanix also extended the operating model for the Nutanix Kubernetes Platform (NKP) with the introduction of NKP Metal. It supports a dual-native architecture in which containers and virtual machines operate as first-class infrastructure under a unified operating model, including for AI and other performance-intensive workloads that often run directly on bare-metal infrastructure.

Ramaswami explained that the underlying foundation for AI applications is cloud-native: Kubernetes and containers. One of the things organizations are having to deal with today is that as they move into Kubernetes, they are also creating silos.

“You have different tools, different teams to manage all of these silos, and it creates a whole set of problems over time. What we aim to do at Nutanix is to give you one experience, one team and allowing you the flexibility and freedom to match containers and VMs together in the same platform. We've always given you choice in terms of where you run your container workloads and VM workloads. You can run them on-prem, you can run them in the public cloud, both containers on VMs or containers directly on top of any native public cloud subsets. Today, we are extending this with NKP Metal,” he said.

Currently available in early access, NKP Metal gives organizations a consistent experience for consuming storage, whether on bare metal, in the public cloud, or on traditional VM-based architectures.

“It's also about how you do unified lifecycle management across all of these different capabilities. And this is again a critical add-on for all of you. For example, people running new apps at the edge where you might need bare metal performance and you only have a handful of new container-based applications. This is a great solution. So what we are doing here with all of this is uniquely giving you that single platform, that single experience across the world of containers and the world of virtual machines,” he added.

Opportunities for partners

In a media briefing, Lee Caswell, SVP of Product and Solutions Marketing at Nutanix, shared that partners will all have the opportunity to bring Nutanix capabilities to their customers and provide more services. This includes services around migration capabilities as customers look to de-risk existing Broadcom environments.

Apart from that, with many service providers disenfranchised by pricing or access changes under Broadcom's licensing restrictions, Caswell shared that these partners are looking to Nutanix for a full service provider console to manage and share resources across multiple users.

“This is actually an interesting technical problem around providing not just resources, but also network restrictions so that we're controlling noisy neighbors and nosy neighbors. So, with that capability, we now have the option or the opportunity to help service providers now offer new services based on Nutanix to their customers in ways that were more openly partner-friendly. The Service Provider Central Validation Program is a way for us to work together with service providers and show that we have joint go-to-market activities,” Caswell said.

On the increasing importance of data sovereignty, Caswell explained that geopolitical issues and concerns are driving both nations and economic entities to look at how to control data and operations in a sovereign manner.

“We do a very good job with the data. Our data services can then restrict where data is snapshotted, replicated, restored in DR terms. And we've got the same access control capabilities, the same micro-segmentation and security policies. Follow the data, if you will, to authorize locations. But data is just one part of the puzzle,” he said.
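The "follow the data to authorized locations" idea Caswell describes reduces, at its simplest, to a placement policy: data operations are permitted only between locations inside the sovereign boundary. The sketch below is a hypothetical policy check with made-up region names, not a Nutanix data-services API.

```python
def authorize_replication(source_region, target_region, allowed_regions):
    """Permit snapshot replication or restore only between authorized
    sovereign locations. Toy policy check, not a Nutanix API."""
    return source_region in allowed_regions and target_region in allowed_regions

# Hypothetical sovereign boundary covering two EU regions.
sovereign = {"eu-de", "eu-fr"}
print(authorize_replication("eu-de", "eu-fr", sovereign))   # → True
print(authorize_replication("eu-de", "us-east", sovereign)) # → False
```

In practice such a rule would sit alongside the access-control and micro-segmentation policies Caswell mentions, so the same boundary governs both the data and the traffic around it.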

With partners, Caswell said there's an opportunity to build sovereign offerings that now include operations capabilities contained within sovereign nations, and to think about offering sovereign support as well. This includes the capabilities now being offered to neoclouds.

“So, depending on the compliance regulations for a sovereign cloud, we can provide the elements that are essential for building a sovereign cloud and partner with our partners to provide that as concerns over accessing data and accessing an ongoing cloud remain high,” he concluded.