The post OpenAI plans mega AI city on foundations of custom chip advancements appeared on BitcoinEthereumNews.com.

OpenAI plans mega AI city on foundations of custom chip advancements

OpenAI is sketching out what looks like the blueprint for a machine-built civilization, one powered by its own chips, its own infrastructure, and enough electricity to light two New York Cities.

The company’s massive AI city rests on a vision that is very nearly impossible to pull off: designing and producing billions of custom chips in partnership with Broadcom to support what CEO Sam Altman calls the “computing spine” of the future.

Sam told the Wall Street Journal that delivering the artificial-intelligence services people demand will require at least one AI-specific chip per user, a mind-bending projection that runs into the billions.

Ali Farhadi, head of the Allen Institute for AI, backed that scale, saying if AI replaces human labor at the rate promised, “the world will need as many AI chips as it has conventional ones.” For OpenAI, this is about control: over costs, over power consumption, and over the long-term survival of its models as demand explodes.

OpenAI links Broadcom, Nvidia, and memory giants for next-gen compute

Nvidia, of course, still dominates the AI training space with roughly 70% market share, which is why OpenAI continues to rely on its GPUs for model training.

But OpenAI is now splitting the pipeline: training stays on Nvidia, while inference (the process of delivering answers to users) moves to Broadcom’s custom silicon. This two-track design could cut expenses and power usage at a scale where every percentage point matters.

Jordan Nanos, a semiconductor researcher at SemiAnalysis, said Broadcom is helping OpenAI “remix the typical AI-chip recipe.” These chips won’t be generic. They’re being engineered specifically for OpenAI’s models, which rely on high-bandwidth memory, supplied by Samsung and SK Hynix, two firms the company recently partnered with.

That type of memory allows faster data movement between processors, critical for systems like OpenAI’s Pulse, an AI agent that scans the web daily to brief users. Pulse consumes so much computing power that Sam said it’s limited to those who pay $200 a month for the Pro tier.

This dependency on high-bandwidth memory ties directly to how OpenAI’s models operate. Early neural networks were “dense,” activating large sections of their systems for every query. Newer ones use “sparsity,” activating only specific expert sections.

Instead of using 25% of the model to answer a question, modern systems trigger a fraction of a percent. That difference slashes power draw and speeds up response times. When a chip is built around that sparse logic, efficiency skyrockets, and Broadcom is the one making that hardware possible.
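The savings the article describes can be sketched with a toy calculation. This is an illustration only: the expert counts and top-k value below are hypothetical round numbers, not OpenAI’s actual architecture, which has not been disclosed.

```python
# Toy illustration of dense vs. sparse ("mixture-of-experts") activation.
# NUM_EXPERTS and TOP_K are hypothetical; real model architectures differ.

NUM_EXPERTS = 256  # total expert sub-networks in the model (assumed)
TOP_K = 2          # experts activated per query in a sparse model (assumed)

def active_fraction(experts_used: int, total_experts: int) -> float:
    """Fraction of expert parameters touched when answering one query."""
    return experts_used / total_experts

dense = active_fraction(NUM_EXPERTS, NUM_EXPERTS)  # dense net: every expert fires
sparse = active_fraction(TOP_K, NUM_EXPERTS)       # sparse net: only top-k fire

print(f"dense activation:  {dense:.1%}")   # 100.0%
print(f"sparse activation: {sparse:.3%}")  # 0.781%
```

With these toy numbers, a sparse model touches well under one percent of its experts per query, matching the “fraction of a percent” the article describes; hardware built around that routing pattern can skip the idle experts entirely.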

OpenAI’s gigawatt-scale AI supercomputers redefine infrastructure

Sam has said that OpenAI’s current compute footprint is around 2 gigawatts, spread across global data centers. The Broadcom partnership aims to build up to 10 gigawatts by 2030, forming the physical base for what insiders are calling “AI cities”: dense campuses of servers, storage, and custom interconnects tied together by Broadcom’s Tomahawk Ultra networking chips.

That’s only part of the wave. Over the past three weeks, OpenAI has added 16 gigawatts in fresh capacity deals with AMD and Nvidia, bringing the total to levels that could require nearly $1 trillion in investment.
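Taking the article’s figures at face value, a rough back-of-envelope sum shows the scale involved. This is a sketch only: the capacity targets and the roughly $1 trillion figure come from the article, and the implied cost per gigawatt is simple division, not a reported number.

```python
# Back-of-envelope sum of the capacity figures cited in the article.
# All inputs are the article's numbers; the cost-per-GW line is just division.

broadcom_target_gw = 10  # Broadcom partnership target by 2030
new_deals_gw = 16        # recent AMD and Nvidia capacity deals

total_new_gw = broadcom_target_gw + new_deals_gw  # announced build-out
est_total_cost = 1_000_000_000_000                # ~ $1 trillion, per the article

print(f"announced new capacity: {total_new_gw} GW")
print(f"implied cost per GW: ~${est_total_cost / total_new_gw / 1e9:.0f}B")
```

Even before counting the existing 2-gigawatt footprint, the announced deals alone imply tens of billions of dollars per gigawatt of capacity.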

xAI’s Memphis Colossus already reached 1.21 gigawatts this fall. Meta’s Hyperion facility in Louisiana is approved for 2.3 gigawatts, with Mark Zuckerberg targeting 5 gigawatts. The AI energy race is officially global.

Sam described this transformation as “the biggest joint industrial project in history,” saying even these deals are “a drop in the bucket compared to where we need to go.” Part of his goal is to diversify suppliers.

The Stargate campus in Abilene, Texas, being built by Oracle, will focus on AI training, mostly on Nvidia chips. AMD hardware will handle inference workloads, while Broadcom’s custom silicon fills the efficiency gap.

As Nanos put it, “OpenAI is looking quite far into the future, and trying to make sure they have access to enough supply of chips.”


Source: https://www.cryptopolitan.com/openai-custom-chip-breakthrough/

