
Huawei Unveils Flex:ai to Significantly Boost AI Chip Processing Efficiency

TLDRs:

  • Huawei launches Flex:ai, an open-source tool it says improves AI chip utilization by roughly 30%.
  • The software manages GPU, NPU, and accelerator workloads using Kubernetes orchestration.
  • Researchers from three Chinese universities contributed to developing Flex:ai’s core framework.
  • Flex:ai aims to overcome China’s restricted access to advanced semiconductor technology.

Huawei has announced Flex:ai, a new open-source software platform designed to enhance the efficiency of AI chips.

The tool aims to raise processor utilization by optimizing how workloads are allocated across GPUs, NPUs, and other accelerators, promising significant performance gains for AI training and inference.

Open-source software targets AI chips

Flex:ai allows AI processors to divide a single physical card into multiple virtual computing units, enabling more flexible and efficient use of hardware.
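As a rough illustration of the concept only, the sketch below models slicing one physical card into equal virtual units that separate jobs could claim. Flex:ai’s real partitioning interface has not been published, so every name and number here is hypothetical.

```python
from dataclasses import dataclass

# Toy model of card virtualization: one physical accelerator is split into
# equal-sized virtual units (names and sizing are hypothetical, for
# illustration only).

@dataclass
class VirtualUnit:
    card_id: str
    index: int
    memory_gb: float
    compute_fraction: float

def partition_card(card_id: str, total_memory_gb: float, slices: int) -> list[VirtualUnit]:
    """Split one physical card into `slices` equal virtual computing units."""
    return [
        VirtualUnit(
            card_id=card_id,
            index=i,
            memory_gb=total_memory_gb / slices,
            compute_fraction=1.0 / slices,
        )
        for i in range(slices)
    ]

# Example: a 64 GB card carved into four units, each assignable to a
# separate training or inference job.
for unit in partition_card("npu-0", total_memory_gb=64, slices=4):
    print(unit)
```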

Huawei claims the software can improve processor utilization by roughly 30%, although this figure has not been independently verified. The company plans to release the platform through its ModelEngine developer community, inviting developers and researchers to explore and build on the technology.

The move reflects a broader trend among Chinese tech companies to develop software-driven solutions that maximize performance despite limited access to the latest advanced chips due to trade restrictions and export controls imposed by the U.S.

Collaboration with top Chinese universities

Huawei worked closely with researchers from Shanghai Jiao Tong University, Xi’an Jiaotong University, and Xiamen University to develop Flex:ai.

The collaboration highlights the growing intersection of corporate and academic expertise in AI software and hardware innovation within China. By pooling resources and knowledge, Huawei and its academic partners have crafted a system that aims to rival similar orchestration platforms developed abroad.

For comparison, Nvidia acquired Run:ai in 2024, a company that provides similar AI workload management software. Huawei’s Flex:ai seeks to offer a domestic alternative to such tools, ensuring that Chinese developers have access to advanced AI chip orchestration without relying on foreign solutions.

Virtual computing units enhance utilization

One of Flex:ai’s core innovations is its ability to partition a single processing card into multiple virtual units, allowing several AI tasks to share a chip that would otherwise sit partly idle.

This approach enables researchers and companies to run multiple experiments simultaneously on a single GPU or NPU, significantly reducing hardware waste and improving throughput.

Flex:ai uses Kubernetes to manage resource allocation dynamically, distributing workloads efficiently across all available hardware. This approach aligns with global trends in cloud and AI infrastructure, where software orchestration is increasingly critical to maximize chip utilization.
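To make the orchestration step concrete, here is a minimal sketch using the Kubernetes Python client to submit a pod that requests one slice of a virtualized accelerator. The extended-resource name example.com/vnpu and the container image are placeholders; Flex:ai’s actual resource names, device plugins, and scheduler are not detailed in the announcement.

```python
from kubernetes import client, config

# Hypothetical example: schedule a training pod onto one virtual accelerator
# slice via a Kubernetes extended resource. "example.com/vnpu" is a placeholder
# resource name, not Flex:ai's real interface.

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="training-job-a"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.com/trainer:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    # Request one virtual slice; a device plugin would advertise
                    # how many slices each physical card exposes to the cluster.
                    limits={"example.com/vnpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```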

Flex:ai responds to chip access limits

The launch of Flex:ai comes amid ongoing challenges for Chinese companies in obtaining advanced chips. Recent reports show Huawei’s Ascend AI chips still incorporate parts from foreign suppliers, including TSMC, Samsung, and SK Hynix, despite U.S. export restrictions.

By focusing on software optimization, Huawei can extract more performance from existing hardware while mitigating supply chain limitations.

The company is also ramping up AI chip production. In 2025, Huawei aims to double output of its flagship Ascend 910C chips, with plans to produce up to 1.6 million dies by 2026. Flex:ai’s software capabilities complement this expansion, helping ensure the growing supply of chips is used efficiently across Chinese tech firms such as Alibaba and DeepSeek.

The post Huawei Unveils Flex:ai to Significantly Boost AI Chip Processing Efficiency appeared first on CoinCentral.
