Alibaba Cloud focused on Next Gen AI innovations and global data center expansion

“To underscore our long-term commitment to advancing AI, we will progress with our RMB 380 billion investment plan in AI and cloud infrastructure over the next three years,” says Eddie Wu, Chairman and CEO of Alibaba Cloud Intelligence.

Alibaba Cloud has unveiled its latest full-stack AI innovations at the Apsara Conference 2025 in China. The Chinese cloud company continues to exert strong influence across Asia, with many businesses leveraging Alibaba Cloud’s AI products such as the Qwen family of large language models.

Since Qwen’s initial launch in 2023, Alibaba Cloud has open-sourced over 300 AI models built on its two foundation models: the large language model Qwen and the visual generation model Wan. Alibaba’s AI models have recorded over 600 million downloads and spawned more than 170,000 derivative models, making the series one of the most widely adopted open-source AI families globally. Notably, over 1 million enterprises and individuals have used Qwen on Model Studio, Alibaba’s AI development platform.

In his keynote address, Wu said:

“In the future, large AI models will be deeply integrated into a wide range of devices, functioning like operating systems. These models will be equipped with persistent memory, seamless cloud-edge coordination, and the ability to continuously evolve.”

“We remain committed to open-sourcing Qwen and shaping it into the ‘operating system of the AI era,’ empowering developers around the world to build transformative AI applications,” Wu said.

Wu also mentioned that Alibaba Cloud is strategically positioned as a full-stack AI service provider, dedicated to delivering robust computing with maximised efficiency for training and deploying large AI models on the cloud.

Increasing data center capacity

Given the increased focus on next-gen AI innovations, Alibaba Cloud also plans to expand its data center presence and capacity to support the growing need for compute. The cloud vendor will launch its first data centers in Brazil, France, and the Netherlands, with additional data centers planned in Mexico, Japan, South Korea, Malaysia, and Dubai in the coming year.

The strategic expansion will also see the establishment of new regional service centers in Indonesia and Germany to provide round-the-clock, multi-language customer support. Alibaba Cloud currently operates 91 availability zones across 29 regions globally.

“AI is revolutionising not only technology, but also the very foundation of how enterprises deliver business value and drive growth. Our strategic expansion of global infrastructure is designed to cater for the accelerating demand from forward-thinking customers. Alibaba Cloud stands at the forefront of AI innovation, co-evolving with our customers with full stack AI and cloud solutions that support businesses anytime and anywhere. We are here to help partners and customers to design, launch, and scale groundbreaking AI agents and applications, fueling the next generation of digital innovation and unlocking unprecedented value in the global marketplace,” said Dr. Feifei Li, President of International Business and SVP of Alibaba Cloud Intelligence Group.

Focus on next-gen AI

At the summit, Alibaba unveiled Qwen3-Max, its largest LLM to date with over 1 trillion parameters. Offering both Instruct (non-thinking) and Thinking modes, the model achieves impressive performance across a wide range of benchmarks, especially in code generation and agentic capabilities. In Instruct mode, it scores 69.6 on SWE-Bench, an authoritative benchmark for evaluating LLMs on real-world software issues, on par with some leading closed-source models. It also records remarkable performance on Tau2-Bench, a benchmark for conversational agents, showing exceptional proficiency in tool use, a foundational capability for building intelligent, action-oriented agents.

A series of Qwen3 models covering visual language and multimodal processing was also unveiled at the conference. Qwen3-VL, the most capable vision-language model in the Qwen family to date, uses a Mixture-of-Experts (MoE) architecture that enables flexible deployment from edge devices to high-performance cloud environments. It functions as a visual agent capable of operating both computer and mobile interfaces.

Another model is Qwen3-Omni, a natively end-to-end, multilingual omni-model capable of processing text, image, audio, and video inputs while delivering real-time, streaming responses in both text and natural speech. Powered by a novel Thinker–Talker MoE architecture and pre-trained on 20 million hours of audio data, Qwen3-Omni delivers exceptional performance in understanding audio input (up to 30 minutes long) and in video-based conversation, all without compromising its strong capabilities in text and image processing.

Alibaba also unveiled Fun, a family of speech LLMs equipped with advanced multilingual speech recognition and synthesis capabilities. The series includes Fun-ASR, an end-to-end automatic speech recognition (ASR) model optimized for real-world enterprise deployment, and Fun-CosyVoice, a high-quality, expressive speech synthesis model designed to generate natural-sounding spoken output in multiple languages.

On the infrastructure side, Alibaba Cloud also announced a comprehensive suite of infrastructure upgrades designed to support the emerging agentic AI landscape, including enhancements to its storage, networking, security, container, and database offerings.

Alibaba Cloud’s Platform for AI (PAI) also introduced synergistic optimizations to carry large-model development into the agentic AI era. Its novel MoE training acceleration improves Qwen-series training performance by over 300%, while the upgraded DiT training engine cuts the Wan series’ single-sample training time by 28.1%. Enhanced inference delivers 71% higher throughput (TPS), 70.6% lower time-per-output-token (TPOT) latency, and 97.6% faster infrastructure scaling.

PAI will also integrate the full suite of the NVIDIA Physical AI software stack, marking a milestone collaboration in the Physical AI domain. The initiative provides developers with a comprehensive, cloud-native platform to accelerate advances in humanoid robotics and other Physical AI solutions, underscoring Alibaba Cloud’s commitment to driving innovation in the field.