Nvidia (NVDA) stock gained slightly as the company announced a multiyear collaboration with Thinking Machines Lab to deploy next-generation AI infrastructure, signaling strong growth prospects for both enterprise and research applications.
The partnership commits to deploying at least one gigawatt of Nvidia’s Vera Rubin systems, marking one of the largest AI-focused infrastructure initiatives outside of previous OpenAI projects. The rollout is expected to begin early next year, providing substantial computational power for Thinking Machines Lab’s frontier AI models and platforms.
The collaboration focuses on designing and implementing high-performance training and serving systems optimized for Nvidia’s architectures. Beyond simply supplying GPUs, the deal emphasizes modular rack-scale AI platforms capable of delivering massive compute throughput.
Thinking Machines Lab, known for its work on frontier AI models, will leverage these resources to enhance the development of open and enterprise-accessible AI systems. The initiative aims to broaden access to cutting-edge AI across research institutions, businesses, and the scientific community.
Nvidia has also made a substantial financial investment to support the company’s long-term expansion, although specific terms were not disclosed. Analysts view the commitment as a strategic move to secure Nvidia’s position at the center of next-generation AI infrastructure.
A one-gigawatt AI system represents a significant engineering achievement. For context, this is roughly one-tenth of the scale OpenAI outlined in its planned multi-gigawatt AI infrastructure buildout with Nvidia. Unlike traditional GPU clusters, the Vera Rubin platform integrates GPUs, Vera CPUs, and high-speed NVLink interconnects at the rack level, effectively turning each rack into a single accelerator.
This scale necessitates innovative power solutions, including the adoption of 800-volt direct current (VDC) distribution, which improves energy efficiency and cuts materials costs by reducing the amount of copper needed for busbars. Experts say such deployments mark a turning point in data center design, blending modular architecture with extreme performance requirements.
Nvidia’s rapid cadence of hardware releases adds pressure on AI labs to stay current. Rubin is set for 2026, followed by Rubin Ultra in 2027. The Rubin NVL144 system is expected to deliver more than three times the 8-bit floating point (FP8) training performance of the Blackwell Ultra B300 NVL72 platform that precedes it in 2025.
To support these frequent upgrades, Nvidia’s MGX rack architecture provides modular server guidelines, allowing over 50 partners to integrate GPUs, CPUs, and interconnects seamlessly. This ecosystem strategy ensures that labs can adopt new technology without major disruptions, while maintaining Nvidia’s hardware at the core of next-generation AI systems.
For investors, the partnership signals Nvidia’s continued dominance in AI infrastructure and its ability to capture long-term growth opportunities. The stock’s modest gain following the announcement reflects cautious optimism as markets digest the implications of large-scale AI deployments.
For the research community and enterprises, the deal means broader access to frontier AI capabilities without requiring massive in-house hardware investments. By combining modular infrastructure, high-speed interconnects, and scalable compute power, Nvidia and Thinking Machines Lab are setting the stage for a new era of AI experimentation, enterprise solutions, and scientific discovery.
The post Nvidia (NVDA) Stock Moves Higher with Multiyear Thinking Machines Lab Deal appeared first on CoinCentral.