
TorchForge RL Pipelines Now Operable on Together AI’s Cloud



Jessie A Ellis
Dec 04, 2025 17:54

Together AI introduces TorchForge RL pipelines on its cloud platform, enhancing distributed training and sandboxed environments with a BlackJack training demo.

TorchForge reinforcement learning (RL) pipelines are now seamlessly operable on Together AI’s Instant Clusters, offering robust support for distributed training, tool execution, and sandboxed environments, as demonstrated by an open-source BlackJack training demo, according to together.ai.

The AI Native Cloud: Foundation for Next-Gen RL

In the rapidly evolving field of reinforcement learning, building flexible and scalable systems requires compute, frameworks, and tooling that work efficiently together. Modern RL pipelines have moved beyond basic training loops and now rely heavily on distributed rollouts, high-throughput inference, and coordinated use of CPU and GPU resources.

The PyTorch stack, including TorchForge and Monarch, now runs distributed training on Together Instant Clusters. These clusters provide:

  • Low-latency GPU communication: Utilizing InfiniBand/NVLink topologies for efficient RDMA-based data transfers and distributed actor messaging.
  • Consistent cluster bring-up: Preconfigured with drivers, NCCL, CUDA, and the GPU operator, enabling PyTorch distributed jobs to run without manual setup.
  • Heterogeneous RL workload scheduling: GPU-optimized nodes for policy replicas and trainers, alongside CPU-optimized nodes for environment and tool execution.
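Because the clusters come preconfigured with drivers, NCCL, and CUDA, a distributed PyTorch job typically only has to consume the rendezvous variables that a launcher such as torchrun sets. As a minimal, torch-free sketch (the environment variable names are the standard torchrun ones; the helper function itself is illustrative):

```python
import os

def rendezvous_info(default_world_size: int = 1) -> dict:
    """Read the rendezvous variables a launcher such as torchrun sets.

    On a preconfigured cluster, torch.distributed.init_process_group("nccl")
    would consume these same variables; this helper just makes them visible.
    """
    return {
        "rank": int(os.environ.get("RANK", 0)),
        "world_size": int(os.environ.get("WORLD_SIZE", default_world_size)),
        "master_addr": os.environ.get("MASTER_ADDR", "localhost"),
        "master_port": int(os.environ.get("MASTER_PORT", 29500)),
    }
```

With the GPU operator and NCCL already in place, the same job script runs unchanged whether it lands on one node or many.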

Together AI’s clusters are well suited to RL frameworks that require a blend of GPU-bound model computation and CPU-bound environment workloads.

Advanced Tool Integration and Demonstration

A significant portion of RL workloads involves executing tools, running code, or interacting with sandboxed environments. Together AI’s platform natively supports these requirements through:

  • Together CodeSandbox: MicroVM environments tailored for tool-use, coding tasks, and simulations.
  • Together Code Interpreter: Fast, isolated Python execution suitable for unit-test-based reward functions or code-evaluation tasks.

Both CodeSandbox and Code Interpreter integrate with OpenEnv and TorchForge environment services, allowing rollout workers to utilize these tools during training.
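A unit-test-based reward function of the kind described above can be sketched in a few lines. In a real pipeline the candidate code would run inside an isolated interpreter session rather than the trainer process; the plain `exec()` below is for illustration only, and the function name is hypothetical:

```python
def unit_test_reward(candidate_code: str, test_cases: list) -> float:
    """Score a model-generated snippet by the fraction of unit tests it passes.

    test_cases is a list of (expression, expected_value) pairs evaluated
    against the namespace the candidate code defines. A real system would
    execute the candidate in a sandboxed interpreter, not in-process.
    """
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)  # define the candidate's functions
    except Exception:
        return 0.0  # code that does not even parse/run earns no reward
    passed = 0
    for expr, expected in test_cases:
        try:
            if eval(expr, namespace) == expected:
                passed += 1
        except Exception:
            pass  # a failing test contributes zero
    return passed / len(test_cases) if test_cases else 0.0
```

The scalar it returns is exactly the kind of signal a rollout worker can hand back to the trainer as a reward.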

BlackJack Training Demo

Together AI has released a demonstration of a TorchForge RL pipeline running on its Instant Clusters and interacting with an OpenEnv environment hosted on Together CodeSandbox. The demo, adapted from a Meta reference implementation, trains a Qwen 1.5B model to play BlackJack using GRPO. The pipeline integrates a vLLM policy server, the BlackJack environment, a reference model, an off-policy replay buffer, and a TorchTitan trainer, connected through Monarch’s actor mesh and using TorchStore for weight synchronization.
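The core idea of GRPO is that each rollout’s reward is normalized against the other rollouts sampled for the same prompt, removing the need for a learned critic. A minimal sketch of that group-relative advantage computation (the function name is ours, not from the demo code):

```python
from statistics import mean, pstdev

def grpo_advantages(group_rewards: list, eps: float = 1e-8) -> list:
    """Group-relative advantages in the style of GRPO.

    Rewards for a group of rollouts sampled from the same prompt are
    standardized within the group: above-average hands get positive
    advantage, below-average hands negative.
    """
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]
```

In the BlackJack setting, a group would be several hands played from the same starting state, with win/lose/push rewards standardized before the policy update.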

The OpenEnv GRPO BlackJack repository includes Kubernetes manifests and setup scripts. Deployment and training initiation are streamlined with simple kubectl commands, allowing experimentation with model configurations and GRPO hyperparameter adjustments.

Additionally, a standalone integration wraps Together’s Code Interpreter as an OpenEnv environment, enabling RL agents to interact with the Interpreter like any other environment. This integration allows RL pipelines to be applied to diverse tasks such as coding and mathematical reasoning.
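The wrapper pattern is simple: expose the interpreter behind the same reset/step interface a rollout worker uses for any other environment. The sketch below is hypothetical (the actual OpenEnv integration's API may differ); `run_code` stands in for a call to a remote interpreter session:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class InterpreterEnv:
    """Illustrative sketch of wrapping a code interpreter as an RL environment.

    run_code stands in for a remote interpreter call; the reward rule here
    is a placeholder, not the integration's actual scoring logic.
    """
    run_code: Callable[[str], str]
    history: list = field(default_factory=list)

    def reset(self) -> str:
        self.history.clear()
        return ""  # initial observation: an empty transcript

    def step(self, action: str):
        """Execute the agent's code `action`; return (observation, reward, done)."""
        output = self.run_code(action)
        self.history.append((action, output))
        reward = 0.0 if output.startswith("Error") else 1.0  # placeholder reward
        return output, reward, False
```

Because the agent only sees observations and rewards, the same training loop that plays BlackJack can, in principle, drive coding or math-reasoning tasks through this interface.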

The demonstrations show that sophisticated, multi-component RL training can be run on the Together AI Cloud with relative ease, setting the stage for a flexible, open, and scalable RL framework in the PyTorch ecosystem.

Image source: Shutterstock

Source: https://blockchain.news/news/torchforge-rl-pipelines-operable-together-ai-cloud
