Cisco’s validated designs and integration capabilities enable steady AI deployment in Asia
Jeremy Foster, SVP and GM of Cisco Compute, talks about how Cisco is supporting customers in the region, especially by providing them with infrastructure that is capable of integrating with their systems.
Cisco’s AI infrastructure received record-breaking orders in its fiscal 2025 year. Chuck Robbins, CEO of Cisco, shared that the vendor received over US$800 million in AI infrastructure orders during the final quarter of its fiscal year.
In the Asia Pacific region, the networking and security vendor continues to experience exponential growth, having recently celebrated its 30th anniversary in Singapore. With organizations now focused on AI deployment, Cisco is also making sure it is capable of supporting customers in their AI journey with the right infrastructure.
Jeremy Foster, SVP and GM of Cisco Compute, who is responsible for driving innovation, market share and sustainable growth for Cisco’s multi-billion-dollar compute and SaaS infrastructure portfolio, was in town for Cisco Connect, and CRN Asia had the opportunity to speak with him about how Cisco is supporting customers in their AI journey, especially when it comes to providing the infrastructure that enables it.
A lot of companies are looking at cloud repatriation right now. Does that have an impact on how they work on their AI journey?
Absolutely. Cloud repatriation is an interesting term, because what I see is that it's not so much repatriation of existing applications; it's doing what people do in IT all the time, which is optimizing or balancing what's going on.
When cloud came out, everybody started investing in cloud, and those dollars went way up. Now what we're seeing is people right-sizing cloud, because they come in and realize they have this huge spend with a cloud provider, and at the end of the year they’ve got a 30-million-dollar credit or whatever, and that CIO feels they must have overspent. That ultimately leads people to think about what the cost would be to do things on-prem.
That's particularly true with AI, because data is super important and data has gravity, and a lot of that data may be on-prem. The second piece is that the equipment itself is a factor of 10 more expensive than a traditional server. Once you start getting into some of these AI servers, the ability to run them yourself versus just running those apps in the cloud can give you a better ROI over time.
How does Cisco advise customers on this?
We'll work with the customer through their use case, and every application is different: some AI applications may produce a lot of data, a lot of graphics and those types of things, which might increase your costs in the cloud. So we'll work with customers on that. Typically, if we're working with them, to be honest, it's primarily on the infrastructure side.
But if I think about the conversations I have with people across the industry, AI will be a trend where even if you start in the cloud, when you truly want to scale it out, it's going to be more cost-effective to run it on-prem. That's the hypothesis I'd say we're working under.
When you speak to customers, and also to your partners, does the cost conversation drive their plans?
I think there are probably two different levels of cost. There are customers evaluating the project and the outcome they want to drive, and then saying, hey, we're going to put dollars into this particular project. That's a corporate budget conversation. And effectively, most likely, borrowing from somewhere else in the budget to shift it to this AI initiative is how I typically see that happening.
On the cost of the actual equipment, it's a very competitive market. Whether it's compute or networking, we have to work with customers and make sure we're putting our best position forward on an individual basis.
I think that's also why it's important to build solutions because when you can package things together, then you end up delivering more value and working with a customer every step along the way. That certainly is the approach that we like to take.
One of Cisco’s strengths is its validated designs for its products. How important are these?
So, we've done validated designs for many years. We've had validated designs for everything from VMware in 2009 to, before that, just networking architecture. It's really part of our heritage. Even back when we just had an Ethernet switch, you needed to be able to plug it into a bunch of different things and it needed to work. That's where validated designs started off.
Over the many years since we brought out UCS, what my team has dealt with is that UCS runs a lot of different pieces of software, which brings a different dynamic. What we're trying to do is take the hardware pieces, the networking pieces, the computing pieces and the software pieces, whether that's VMware, OpenShift or NVIDIA, and package them together into a solution that we can support.
And so the validated design work that our team builds is a tremendously large undertaking in terms of the hours we put into it, because from a customer viewpoint, they're getting a validated, certified design that's been tested, so we can reduce their time to value and reduce the risk they feel they're taking if they find problems on their own.
But from a business perspective for us, we have to do validated designs with a lot of different partners to take that open ecosystem approach. It's a major investment for our team. And I think customers see the value in it and we can see that in the customers that are actually buying the solution support on top of the solutions.
With agentic AI coming into the picture as well, is that going to make it more complicated for businesses to decide on their infrastructure investments?
I think it adds another layer of complexity to the requirements of the infrastructure, potentially long term, particularly as things scale. You have security implications of agents talking to agents, generating data that needs to be tracked and understood, even though you just had a conversation between two computers, not two people. And at the speed they will work, there are a lot of implications around that.
I do think it's very early for agentic solutions for customers. They're really focused on how they drive a customer support type of outcome, or how they drive a supply chain type of outcome where they can better predict X, Y and Z within their supply chain. It's not common to find enterprise customers yet who are fully deploying eight different agents that are talking to each other, driving some macro type of outcome. I do think we're going to get there over the next two years, because this is going to happen fast.
As Cisco works with a lot of storage providers, how important is the integration?
Customers don't want vendor lock-in. And if you look at the other choices in the space, many of them participate in some form of networking, compute and storage. So it's very important that Cisco works with, in this case, since you mentioned storage, several different storage providers, so customers can buy best in breed. And when you look at things like AI, there may be certain providers that have better solutions than others.
And there's also the traditional IT space, where we need to make sure we continue to serve our customers and build great technology that makes customers' lives a lot easier, as we have been doing for many years. And that will change and evolve.
Each one of those storage players may enter one of those markets or move to the other as well. You're seeing that transition right now, with a bunch of people who grew up in storage and AI, and other folks who grew up in enterprise storage, heading to AI. And we want to allow customers to make that choice based on what they're operationally most comfortable with.
But a key focus area for us right now is making sure that we're developing even tighter integrations with folks like that. So, in the space of storage, we want to make sure we have the ability to put our compute and our network together with them for solutions that are AI-based or traditional-compute-based, and allow customers to have that best-of-breed approach. And I think it's even more important to do now than it was before.
Lastly, what do you see the future looking like, not just for Cisco, but for the ecosystem as a whole?
I think the overall ecosystem is going to evolve where you have solutions that are highly customized for the top end of the market. And then you're going to see a different class of infrastructure that will be able to service the enterprise inferencing needs and make it easy for them to consume as well.
So, I think you'll see a bit of a split in what the infrastructure looks like, depending on what type of customer you're talking to, even though they may be trying to deliver the same thing at different scales, which is interesting, right? We have not really seen that before. I think ease of consumption will definitely be key, especially with the complexities in the technologies.