
Google pushes Gemini 3 ahead of GPT‑5 using its own TPU chips, forcing OpenAI into internal code‑red mode

2025/12/09 04:50

Google just triggered a hard reset in the AI hardware war after its TPU chips pushed Gemini 3 past GPT‑5 in independent tests, landing a blow on OpenAI and Nvidia at the same time.

Gemini 3 ran mostly on Google’s tensor processing units, not Nvidia GPUs. After the results landed, Sam Altman told staff to redirect focus back to fixing ChatGPT and its core models.

The move followed what OpenAI called a “code red” moment last week. At the same time, analysts said Google is planning to more than double TPU production by 2028, as demand for in‑house AI chips keeps rising.

Google scales chips and pushes into outside sales

Google now plans to move beyond using TPUs only inside its own cloud. One recent deal alone sent 1 million TPUs to Anthropic, a move valued in the tens of billions of dollars. That single contract shook Nvidia investors.

The concern is simple. If Google sells more TPUs to outside firms, Nvidia faces a direct loss of data‑center demand.

Chip analysts at SemiAnalysis now rank TPUs as “neck and neck with Nvidia” for both training and running advanced AI systems. Morgan Stanley says every 500,000 TPUs sold to outside buyers could generate up to $13 billion in revenue for Google.

The bank also expects TSMC to produce 3.2 million TPUs next year, rising to 5 million in 2027 and 7 million in 2028. Analysts said growth in 2027 now looks stronger than earlier forecasts.
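To put those forecasts in rough context, here is a back‑of‑envelope sketch, not from the article, that combines Morgan Stanley's figure of up to $13 billion per 500,000 externally sold TPUs with the production estimates above. It assumes, purely for illustration, that every produced unit were sold outside Google at that same implied rate, which the bank does not claim, and the year labels follow the article's timeline.

```python
# Back-of-envelope sketch (illustrative only): scale Morgan Stanley's figure of
# up to $13B per 500,000 externally sold TPUs to the forecast production volumes,
# assuming (hypothetically) every produced unit were sold externally at that rate.
REVENUE_PER_500K = 13e9                         # up to $13B per 500,000 TPUs
revenue_per_tpu = REVENUE_PER_500K / 500_000    # implied ~$26,000 per TPU

# Forecast TSMC production volumes cited in the article ("next year" = 2026).
forecast_units = {2026: 3_200_000, 2027: 5_000_000, 2028: 7_000_000}

for year, units in forecast_units.items():
    ceiling = units * revenue_per_tpu / 1e9
    print(f"{year}: {units:,} TPUs -> up to ${ceiling:.0f}B if all were sold externally")
```

Even as a crude upper bound, the numbers land in the same order of magnitude as the multi‑year revenue potential analysts describe later in the piece.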

Google builds its processors mainly with Broadcom, with added support from MediaTek. The company says its edge comes from full vertical control over hardware, software, and AI models within one system. Koray Kavukcuoglu, Google’s AI architect and DeepMind CTO, said, “The most important thing is that full stack approach. I think we have a unique approach there.”

He also said Google’s data from billions of users gives it deep insight into how Gemini works across products like Search and AI Overviews.

Nvidia shares fell last month after The Information reported that Meta had held talks with Google about buying TPUs. Meta declined to comment. Analysts now say Google could strike similar supply deals with OpenAI, Elon Musk’s xAI, or Safe Superintelligence, with potential added revenue topping $100 billion over several years.

Nvidia defends while the TPU story cuts deeper

Nvidia pushed back after the selloff. The company said it remains “a generation ahead of the industry” and “the only platform that runs every AI model.” It also said, “We continue to supply to Google.” Nvidia added that its systems offer “greater performance, versatility, and fungibility” than TPUs, which it says target specific frameworks.

At the same time, developers are gaining tools that ease the switch away from Nvidia’s CUDA software. AI coding tools now help rewrite workloads for TPU systems faster than before, removing one of Nvidia’s strongest lock‑in defenses.
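The article does not name specific tools, but the kind of portability they lean on looks roughly like the sketch below, which uses JAX as one example framework: the same Python code compiles through XLA to whatever accelerator backend is present, GPU or TPU, with no hand‑written CUDA kernels to port. The function name and shapes are hypothetical.

```python
# Minimal portability sketch (illustrative, not from the article): the same JAX
# code is compiled via XLA for the local backend, whether Nvidia GPUs or Google
# TPUs, so there are no CUDA-specific kernels to rewrite by hand.
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for whichever accelerator is available
def attention_scores(q, k):
    # Scaled dot-product scores, a typical inner loop in transformer workloads
    return jax.nn.softmax(q @ k.T / jnp.sqrt(q.shape[-1]), axis=-1)

print("Running on:", jax.devices())   # lists GPU, TPU, or CPU devices
q = jnp.ones((4, 64))
k = jnp.ones((4, 64))
print(attention_scores(q, k).shape)   # (4, 4) on any backend
```

Real migrations involve more than framework choice, from custom kernels to performance tuning, which is where the AI‑assisted rewriting the article describes would come in.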

The TPU story began long before today’s AI boom. In 2013, Jeff Dean, Google’s chief scientist, gave an internal talk after a breakthrough in deep neural networks for speech systems. Jonathan Ross, then a Google hardware engineer, recalled the moment. “The first slide was good news, machine learning finally works. Slide two said bad news, we can’t afford it.” Dean calculated that if hundreds of millions of users spoke to Google for three minutes a day, data‑center capacity would need to double at a cost of tens of billions of dollars.

Ross began building the first TPU as a side project in 2013 while seated near the speech team. “We built that first chip with about 15 people,” he said in December 2023. Ross now runs AI chip firm Groq.

In 2016, AlphaGo defeated world Go champion Lee Sedol, and that historic match became a major AI milestone. Since then, TPUs have powered Google’s Search, ads, and YouTube systems for years.

Google used to update its TPUs every two years, but it moved to an annual release cycle in 2023.

A Google spokesperson said demand is rising on both fronts. “Google Cloud is seeing growing demand for both our custom TPUs and Nvidia GPUs. We will continue supporting both,” the company said.


