The evolution of the FPGA
Four decades ago, the first commercial field-programmable gate array (FPGA) was introduced. The XC2064 by Xilinx revolutionized chip design and hardware. Since then, the FPGA has continued to evolve, with Xilinx now part of AMD.
In conjunction with the 40th anniversary of the first commercially available FPGA, CRN Asia speaks to Steven Fong, Corporate Vice President, APJ Embedded Services at AMD, to understand more about how FPGAs are driving AI acceleration today.
How has AMD’s FPGA portfolio evolved over the past four decades, especially to meet the changing demands of modern workloads in AI and embedded systems?
Since the invention of the FPGA in 1985, AMD has evolved its portfolio by integrating FPGA programmable logic with embedded Arm processors, NPUs, DSPs, a programmable network on chip, and other hard IP, creating powerful adaptive SoCs. Our latest Versal AI Edge Series Gen 2 adaptive SoCs deliver full pipeline acceleration—sensor input, AI inference, and actuation—on a single chip, optimized for edge AI workloads.
Our portfolio has also grown with the inclusion of our x86 Embedded processors and the launch of AMD Embedded+, a new architectural solution that combines AMD Ryzen Embedded processors with Versal adaptive SoCs on a single integrated board. The result is scalable, power-efficient solutions that accelerate time-to-market for original design manufacturer (ODM) partners.
What is the current focus for AMD’s Adaptive and Embedded Computing Group?
Our core focus is AI at the edge. We believe edge computing is at an inflection point, and our adaptive SoCs enable ultra-low latency, power-efficient, and real-time AI across embedded applications—from robotics to automotive to industrial automation.
What differentiates AMD in the AI edge space?
AMD adaptive SoCs deliver breakthrough AI performance per watt, scalability, reliability, security, and long lifecycles for the most demanding real-time, AI-driven embedded systems on a single heterogeneous platform.
AMD also uniquely offers continuity across platforms. Developers can train models on Ryzen AI PCs and seamlessly deploy them on embedded devices powered by the same NPU architecture. Our adaptive SoCs integrate FPGA fabric, Arm CPUs, NPUs, and video engines—delivering highly efficient, heterogeneous compute tailored for real-time edge AI.
What are the biggest challenges developers face when building edge AI solutions?
One of the biggest challenges in building edge AI solutions is meeting the strict latency requirements that many of these applications demand. In use cases like autonomous driving or industrial automation, there’s simply no time to send data to the cloud and wait for a response. Decisions must be made instantly, and even a few milliseconds of delay can have serious implications for performance and safety.
Another key challenge is data privacy and security. In fields like healthcare or smart cities, data must be processed locally to ensure it remains protected. Transmitting sensitive information to the cloud can introduce vulnerabilities and regulatory concerns, so many edge solutions must be designed with on-device processing in mind.
Finally, personalization is becoming increasingly important. Edge devices often need to adapt to specific user preferences or environmental conditions. That means not only performing inference on-device, but in some cases supporting lightweight training or model updates at the edge.
To address all these challenges, AMD’s adaptive SoCs are designed to deliver real-time responsiveness, secure local compute, and scalable AI performance—all in one flexible platform.
How do adaptive SoCs differ from traditional GPUs and CPUs in AI applications?
While CPUs and GPUs are excellent for general-purpose and parallel processing, adaptive SoCs are optimized for end-to-end acceleration—sensor fusion, preprocessing, inference, and actuation—all in one device. This makes them ideal for embedded, latency-sensitive, and power-constrained environments.
Can you share examples of AMD customers in the APAC region and how they’re leveraging FPGA and adaptive SoC technologies in their operations?
In February 2024, JR Kyushu deployed an AI-powered track inspection system using the AMD Kria K26 adaptive system-on-module (SOM). The system replaces traditional manual inspection with a rail cart equipped with cameras and the Kria SOM, enabling automated detection of loose bolts and cracks at speeds of up to 20 km/h. This solution delivers significant improvements in inspection speed, accuracy, and operational cost efficiency.
In the automotive domain, AMD has collaborated with several leading companies in Asia. In November 2022, Aisin adopted the AMD Zynq UltraScale+ MPSoC platform for its next-generation automated parking assist system. The platform’s key strengths include low-latency AI vision processing and support for over-the-air (OTA) updates.
Then in March 2024, Sony Semiconductor Solutions (SSS) integrated the Zynq UltraScale+ MPSoC and the Artix-7 FPGA into a reference design for automotive LiDAR systems. This solution significantly enhances autonomous driving safety through precise object detection, high-speed data processing, and robust reliability.
Finally, also in 2024, Subaru adopted the Versal AI Edge Series Gen 2 adaptive SoC for its EyeSight driver-assistance system, supporting the implementation of intelligent driver-assist features.
These collaborations demonstrate how AMD’s adaptive computing solutions are helping top-tier companies in Asia solve real-world challenges and accelerate innovation.
How does AMD see the future of FPGAs and adaptive SoCs?
We’re doubling down on integration—bringing together RF, DSPs, AI engines, HBM, and more into single devices. With chiplet-based architectures, advanced packaging (e.g., CoWoS), and 3D stacking, we’re building the next generation of scalable, high-performance adaptive platforms. Edge AI, automotive, robotics, 6G, and space systems will continue to be major growth areas.