
Together AI Enhances Fine-Tuning Platform with Larger Models and Hugging Face Integration



Lawrence Jengar
Sep 10, 2025 19:13

Together AI unveils major upgrades to its Fine-Tuning Platform, including support for 100B+ parameter models, extended context lengths, and improved integration with Hugging Face Hub.





Together AI has announced significant upgrades to its Fine-Tuning Platform, aiming to streamline model customization for AI developers. The latest enhancements include the ability to train models with over 100 billion parameters, extended context lengths, and tighter integration with the Hugging Face Hub, according to Together AI.

Expanding Model Capacity

The platform now supports a range of new large models, such as DeepSeek-R1, Qwen3-235B, and Llama 4 Maverick. These open-weight models can handle complex tasks, in some cases rivaling proprietary models. Engineering optimizations on the platform make training at this scale efficient, reducing both cost and training time.
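For developers, starting a run against one of these large models is a short script. Below is a minimal sketch assuming the Together Python SDK; the model identifier, file name, and hyperparameters are illustrative placeholders, not values from the announcement.

```python
# Minimal sketch of launching a fine-tuning job on a 100B+ parameter model
# with the Together Python SDK (pip install together). The model name,
# training file, and hyperparameters below are illustrative placeholders.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

# Upload a JSONL dataset, then start the job.
train_file = client.files.upload(file="train.jsonl")

job = client.fine_tuning.create(
    training_file=train_file.id,
    model="Qwen/Qwen3-235B-A22B",  # placeholder identifier for a 235B model
    n_epochs=1,
    lora=True,  # parameter-efficient tuning keeps cost manageable at this scale
)
print(job.id, job.status)
```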

Longer Context Lengths

Responding to the growing need for long-context processing, Together AI has overhauled its training systems to support longer context lengths. Developers can now train with contexts of up to 131k tokens on certain models, letting the platform handle longer and more complex inputs.
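A practical corollary is checking that training examples actually fit the window before submitting a job. The sketch below interprets "131k" as 131,072 tokens (128 × 1024) and uses a Hugging Face tokenizer as a stand-in; both are assumptions, so match the tokenizer to your target model.

```python
# Sketch: pre-flight check that an example fits a 131k-token context window.
# 131k is assumed to mean 131,072 (128 * 1024); the tokenizer checkpoint is
# a stand-in -- use the one matching the model you plan to fine-tune.
from transformers import AutoTokenizer

MAX_CONTEXT = 131_072

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-235B-A22B")

def fits_in_context(text: str) -> bool:
    """True if `text` tokenizes to at most MAX_CONTEXT tokens."""
    return len(tokenizer(text).input_ids) <= MAX_CONTEXT

with open("long_example.txt") as f:
    print(fits_in_context(f.read()))
```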

Integration with Hugging Face Hub

The integration with the Hugging Face Hub lets developers fine-tune a wide array of models hosted on the Hub. Users can start from a pre-adapted model and customize it further for specific tasks. Outputs from training runs can also be saved directly to a repository on the Hub, simplifying model management.
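In code, that round trip could look like the sketch below. Passing a Hub path as the base model and the `hf_output_repo_name` keyword are hypothetical illustrations of the described workflow, not confirmed parameter names from Together's API.

```python
# Hypothetical sketch of the Hub round trip described above: start from a
# pre-adapted checkpoint hosted on the Hugging Face Hub and save the result
# back to a Hub repository. Passing a Hub path as `model` and the keyword
# `hf_output_repo_name` are assumed for illustration, not confirmed API names.
from together import Together

client = Together()

job = client.fine_tuning.create(
    training_file="file-abc123",               # placeholder uploaded-file ID
    model="some-org/pre-adapted-model",        # hypothetical Hub checkpoint
    hf_output_repo_name="my-org/my-finetune",  # hypothetical output repo
)
```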

Advanced Training Objectives

Together AI has also expanded its support for Preference Optimization with new training objectives, such as length-normalized DPO and SimPO, offering more flexibility when training on preference data. The platform now also supports a maximum batch-size setting, streamlining training across different models and modes.
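For reference, the two new objectives differ from vanilla DPO mainly in how they treat response length. The PyTorch sketch below follows the published formulations (length-normalized DPO averages each log-ratio over the response length; SimPO is additionally reference-free and adds a target margin gamma); Together's internal implementation may differ in its details.

```python
# Sketch of the two preference objectives, per their published formulations.
# Inputs are summed log-probabilities of chosen/rejected responses under the
# policy (and reference model, for DPO) plus response lengths in tokens.
import torch.nn.functional as F

def ln_dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
                len_chosen, len_rejected, beta=0.1):
    # Length-normalized DPO: each policy/reference log-ratio is averaged
    # over the response length before the usual Bradley-Terry sigmoid.
    logits = beta * ((pi_chosen - ref_chosen) / len_chosen
                     - (pi_rejected - ref_rejected) / len_rejected)
    return -F.logsigmoid(logits).mean()

def simpo_loss(pi_chosen, pi_rejected, len_chosen, len_rejected,
               beta=2.0, gamma=0.5):
    # SimPO: reference-free; length-averaged log-probs with a margin gamma.
    logits = beta * (pi_chosen / len_chosen
                     - pi_rejected / len_rejected) - gamma
    return -F.logsigmoid(logits).mean()
```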

These enhancements are part of Together AI’s commitment to providing cutting-edge tools for AI researchers and engineers. With these new features, the Fine-Tuning Platform is positioned to support even the most demanding AI development tasks, making it a cornerstone for innovation in machine learning.

Image source: Shutterstock


Source: https://blockchain.news/news/together-ai-enhances-fine-tuning-platform-larger-models
