Huawei’s cloud unit now supports DeepSeek models

Huawei Cloud has partnered with SiliconFlow, an AI infrastructure startup, to make DeepSeek models available to its users.

DeepSeek is now available to end users on Huawei Cloud. The Chinese AI chatbot took the world by storm on its recent launch and caused a frenzy among tech vendors in the US.

Developed in China, DeepSeek’s R1 model is provided under an open source license, which makes it free to use. It was also reportedly far cheaper to develop than OpenAI’s ChatGPT and other LLMs. DeepSeek’s AI assistant for mobile devices became one of the most downloaded apps shortly after its release.

According to a report by South China Morning Post (SCMP), Huawei’s cloud unit has now teamed up with a Beijing-based AI infrastructure startup to make the AI models available to its end users as well.

SiliconFlow will host the DeepSeek models on its platform. Users will have access to DeepSeek’s large language model V3 and reasoning model R1 through Huawei Cloud’s Ascend cloud service. SCMP also reported that the models will be offered at a discounted rate on the platform.

Both Huawei Cloud and SiliconFlow released statements confirming that the models are available and that their performance matches that of DeepSeek models running on premium GPUs elsewhere in the world.

Given the tech sanctions imposed on Huawei by the US, the Ascend cloud currently runs on Huawei’s home-grown Ascend hardware for compute resources, including self-developed server clusters, AI modules and accelerator cards. Neither company stated which chips the Ascend cloud service will use.

Apart from Huawei, Tencent, the Chinese social media and video gaming giant, has also adopted DeepSeek’s reasoning model R1 on its cloud-computing platform. According to Tencent Cloud, users can deploy the R1 model and start using it within minutes.

The biggest takeaway from the availability of DeepSeek’s models on Chinese cloud platforms is the cost to users. Released under an open source license, the models were developed at a fraction of what US tech companies have spent building their own.

Recognizing DeepSeek’s potential, US tech companies such as AWS have already offered the models on their platforms. In a LinkedIn post, AWS CEO Matt Garman stated that DeepSeek will be available on both Amazon Bedrock and Amazon SageMaker.

“We’ve always been focused on making it easy to get started with emerging and popular models right away, and we’re giving customers a lot of ways to test our DeepSeek AI,” said Garman.

Apart from AWS, IBM has also acknowledged DeepSeek’s capabilities, with IBM CEO Arvind Krishna calling the development a promising move for businesses.

“For too long, the AI race has been a game of scale where bigger models meant better outcomes. But there is no law of physics that dictates AI models must remain big and expansive. The cost of training and inference is just another technological challenge to be solved,” said Krishna in a LinkedIn post.

Krishna also noted that steep cost reductions have played out in earlier emerging technologies, and that AI will most likely follow the same path.

“In the early days of computing, storage and processing power were prohibitively expensive. Yet, through technological advancements and economies of scale, these costs plummeted. AI will follow the same path,” he said.

Meanwhile, Alibaba has released an upgraded version of its Qwen AI model, which it says outperforms DeepSeek’s V3. The model is also reported to outperform the latest AI models from OpenAI and Meta Platforms.