There is no one-size-fits-all for AI, says AMD’s Alexey Navolokin
“AMD is meeting rising AI infrastructure demand by delivering the right tool for the right job through an open, end-to-end approach to AI at scale,” explains Alexey Navolokin, General Manager, APAC at AMD.
AMD’s recent financial results for the fourth quarter and full year of 2025 showed strong growth, with the tech giant continuing to expand its capabilities in the AI era. At CES 2026, AMD also unveiled several innovations, with Dr Lisa Su, AMD chair and CEO, focused on making AI available everywhere for everyone.
For the full year 2025, AMD reported record revenue of US$34.6 billion, gross margin of 50%, operating income of US$3.7 billion and net income of US$4.3 billion. Dr Su stated that 2025 was a defining year for AMD, with record revenue and earnings driven by strong execution and broad-based demand for high-performance and AI platforms.
“We are entering 2026 with strong momentum across our business, led by accelerating adoption of our high-performance EPYC and Ryzen CPUs and the rapid scaling of our data center AI franchise,” said Dr Su.
In APAC, AMD continues to witness strong growth as well. The chip company has already made several major infrastructure investments in specific markets across the region, including plans for new R&D hubs in Taiwan. In terms of partnerships in the region, AMD also announced an expanded collaboration with Tata Consultancy Services to co-develop a rack-scale AI infrastructure design based on the AMD “Helios” platform in support of India’s national AI initiatives.
To understand more about AMD’s plans and growth in APAC, CRN Asia caught up with Alexey Navolokin, General Manager, APAC at AMD. Navolokin discusses some of the announcements made at CES earlier this year, as well as why AMD is confident it can meet all customer and partner needs in the region.
In her keynote, Lisa Su, AMD Chair and CEO, focused on the message of AI everywhere, for everyone. What does that look like for businesses in the ASEAN region, especially in their AI journey?
For businesses across ASEAN, “AI everywhere, for everyone” is about making AI accessible and practical no matter where a company is on its AI journey. The region’s diversity means organisations are starting from very different points, and success depends on flexible solutions that let them adopt AI at their own pace.
AMD enables this with the industry’s broadest AI portfolio, spanning data centres, cloud, edge, and PCs, combined with an open ecosystem that gives customers choice and lowers barriers to entry. This allows large enterprises to scale advanced AI deployments while giving startups and smaller businesses the ability to start small, experiment, and grow without massive upfront investment. From AI-powered productivity on PCs, to edge AI for manufacturing, healthcare, and smart cities, to large-scale data centre workloads, AMD’s scalable and energy-efficient platforms make AI usable in real-world environments.
By working closely with regional partners, system integrators, and cloud providers, AMD ensures local availability, optimised solutions, and strong ecosystem support – helping businesses across ASEAN turn AI from an abstract ambition into something they can actually deploy, use, and benefit from.
AMD Helios dominated headlines at CES. What can businesses in the region look forward to most from it?
As a reference design, the upcoming Helios gives OEMs and hyperscalers a proven blueprint they can rapidly customise and deploy, significantly reducing time to market for large-scale AI and HPC systems. The platform will integrate AMD Instinct MI455X GPUs, AMD EPYC “Venice” CPUs, and AMD Pensando “Vulcano” NICs – delivering up to 3 AI exaflops of performance in a single rack. It is engineered for maximum bandwidth and energy efficiency to support trillion-parameter model training, while scaling seamlessly for distributed inference. For businesses in the region, this translates to access to open, standards-based AI infrastructure that delivers leadership performance and a clear, efficient path to scaling the most demanding AI workloads as requirements grow.
How will the refreshed Ryzen AI chips enable organisations to have an edge in their AI journey?
With AMD Ryzen AI 400 Series processors, Copilot+ PCs deliver leadership CPU performance alongside a robust, dedicated NPU delivering up to 60 TOPS, enabling faster multitasking, AI-enhanced productivity, next-generation creativity and immersive graphics – all while maintaining extended battery life for true mobility.
For business environments, Ryzen AI PRO 400 Series takes this a step further. Backed by AMD PRO Technologies, these processors deliver enterprise-grade security, manageability and reliability, while enabling on-device AI acceleration for Copilot and other intelligent applications. That means smarter collaboration, faster workflows and consistent performance, with IT controls that enterprises require.
Platforms like Ryzen AI Max+ 392 and 388 expand what’s possible with on-device AI, supporting models up to 128 billion parameters with up to 128GB of unified memory. This enables advanced local inference and sophisticated content creation experiences in thin-and-light notebooks.
Together, these platforms enable AI where the work happens, scaling from mainstream enterprise productivity to advanced local AI development, giving organisations greater control over performance, cost and data, while preparing them for the next generation of AI-driven workflows.
Lastly, with demand for AI infrastructure in the region soaring, how is AMD ensuring it can meet all customer and partner needs?
AMD is meeting rising AI infrastructure demand by delivering the right tool for the right job through an open, end-to-end approach to AI at scale. There is no one-size-fits-all for AI, and AMD’s strength lies in its broad, purpose-built portfolio spanning CPUs, GPUs, adaptive computing, networking, software and rack-scale systems – enabling customers to optimize performance, efficiency and total cost of ownership for every workload.
As compute demand grows exponentially, this breadth enables a scalable AI fabric that extends from cloud and data centres to client PCs and the edge, ensuring AI can run wherever it is needed to support always-on, high-volume workloads. AMD’s open ecosystem is central to this strategy, with deep commitment to open-source software, open standards and industry collaboration through platforms like AMD ROCm, giving customers and partners the flexibility to build customizable, future-ready AI solutions.
This openness is reinforced by a strong and expanding global channel partner ecosystem, ensuring customers can access solutions, deploy them and get the support they need. At the same time, AMD continues to execute consistently on an aggressive product roadmap, giving customers the confidence to scale long term as AI moves into the agentic era.