PANews reported on September 29th that the DeepSeek-V3.2-Exp model was officially released and open-sourced today. The model introduces a sparse attention architecture that reduces computing-resource consumption and improves inference efficiency. The model is now available on Huawei Cloud's MaaS platform. Huawei Cloud continues to deploy DeepSeek-V3.2-Exp using a large-scale expert-parallelism (EP) solution, leveraging the sparse attention structure to implement a context-parallelization strategy well suited to long sequences while balancing model latency and throughput.
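The core idea behind sparse attention is that each query attends to only a small subset of keys instead of all of them, cutting the score/softmax work from O(L²) toward O(L·k) for sequence length L. The following NumPy sketch illustrates one simple variant (per-query top-k key selection); it is a hedged simplification for intuition only, not DeepSeek's actual implementation, and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k):
    """Each query attends only to its k highest-scoring keys.

    Illustrative simplification: scores are computed densely here,
    but only the top-k entries per row survive the softmax, which
    is the effect a sparse-attention kernel exploits to save work.
    """
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    # Indices of the k largest scores in each row.
    idx = np.argpartition(scores, -k, axis=1)[:, -k:]
    # Mask everything else to -inf so it softmaxes to zero weight.
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx,
                      np.take_along_axis(scores, idx, axis=1), axis=1)
    return softmax(masked, axis=1) @ V
```

With k equal to the sequence length this reduces to ordinary dense attention; smaller k trades a little accuracy for much less attention compute, which is the latency/throughput balance the deployment described above targets.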
Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact service@support.mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.