The post OpenAI plans mega AI city on foundations of custom chip advancements appeared on BitcoinEthereumNews.com.

OpenAI plans mega AI city on foundations of custom chip advancements

2025/10/19 20:45
4 min read

OpenAI is sketching out what looks like the blueprint for a machine-built civilization, one powered by its own chips, its own infrastructure, and enough electricity to light two New York Cities.

The company’s massive AI city rests on a goal that is very nearly impossible to pull off: designing and producing billions of custom chips in partnership with Broadcom to support what CEO Sam Altman calls the “computing spine” of the future.

Sam told the Wall Street Journal that delivering the artificial-intelligence services people demand will require at least one AI-specific chip per user, a mind-bending projection that runs into the billions.

Ali Farhadi, head of the Allen Institute for AI, backed that scale, saying if AI replaces human labor at the rate promised, “the world will need as many AI chips as it has conventional ones.” For OpenAI, this is about control: over costs, over power consumption, and over the long-term survival of its models as demand explodes.

OpenAI links Broadcom, Nvidia, and memory giants for next-gen compute

Nvidia, of course, still dominates the AI training space with roughly 70% market share, which is why OpenAI has to continue using its GPUs for model training.

But OpenAI is now splitting the pipeline: training happens on Nvidia, inference (the process of delivering answers to users) moves to Broadcom’s custom silicon. This two-track design could cut expenses and power usage at a scale where every percentage point matters.

Jordan Nanos, a semiconductor researcher at SemiAnalysis, said Broadcom is helping OpenAI “remix the typical AI-chip recipe.” These chips won’t be generic. They’re being engineered specifically for OpenAI’s models, which rely on high-bandwidth memory, supplied by Samsung and SK Hynix, two firms the company recently partnered with.

That type of memory allows faster data movement between processors, critical for systems like OpenAI’s Pulse, an AI agent that scans the web daily to brief users. Pulse consumes so much computing power that Sam said it’s limited to those who pay $200 a month for the Pro tier.
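To see why memory speed matters so much for a service like Pulse, note that generating each token of a response requires streaming the model’s active weights out of memory, so memory bandwidth, not raw compute, often caps how fast answers come back. A rough sketch of that bound, using illustrative figures that are assumptions and not disclosed OpenAI specs:

```python
def tokens_per_second(active_params_billion, bytes_per_param, hbm_bandwidth_tb_s):
    """Rough upper bound on decode speed for a memory-bound model:
    each generated token requires reading the active weights once from HBM."""
    bytes_per_token = active_params_billion * 1e9 * bytes_per_param
    return hbm_bandwidth_tb_s * 1e12 / bytes_per_token

# Assumed, illustrative figures: a model with 70B active parameters
# stored in 8-bit precision, served from a 3 TB/s HBM stack.
rate = tokens_per_second(active_params_billion=70, bytes_per_param=1,
                         hbm_bandwidth_tb_s=3.0)
print(f"~{rate:.0f} tokens/s per chip")
```

Doubling the memory bandwidth in this model roughly doubles response speed, which is why the Samsung and SK Hynix partnerships sit at the center of the chip design.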

This dependency on high-bandwidth memory ties directly to how OpenAI’s models operate. Early neural networks were “dense,” activating large sections of their systems for every query. Newer ones use “sparsity,” activating only the specific expert sections a query needs.

Instead of using 25% of the model to answer a question, modern systems trigger a fraction of a percent. That difference slashes power draw and speeds up response times. When a chip is built around that sparse logic, efficiency skyrockets, and Broadcom is the one making that hardware possible.
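That dense-versus-sparse distinction is the standard mixture-of-experts pattern: a small gating network scores a pool of experts and only the top few actually run. The toy sketch below (hypothetical names and random weights, purely illustrative) shows the idea; a production model routes per token across thousands of experts, but the ratio of active to idle compute is the same story.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def sparse_moe(x, experts, gate_w, top_k=2):
    """Route input x to only the top_k highest-scoring experts.

    A dense model would run every expert on every query; a sparse
    (mixture-of-experts) model activates only a small fraction,
    cutting compute and memory traffic per answer.
    """
    scores = x @ gate_w                       # gating network: one score per expert
    chosen = np.argsort(scores)[-top_k:]      # indices of the top_k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                  # softmax over the chosen experts only
    # Only the chosen experts execute; the rest stay idle this query.
    return sum(w * experts[i](x) for w, i in zip(weights, chosen)), chosen

# 16 tiny "experts", each just a random linear layer for illustration.
experts = [lambda v, W=rng.normal(size=(dim, dim)): v @ W for _ in range(16)]
gate_w = rng.normal(size=(dim, len(experts)))
x = rng.normal(size=dim)

out, active = sparse_moe(x, experts, gate_w, top_k=2)
print(f"active experts: {len(active)} of {len(experts)}")  # 2 of 16 = 12.5%
```

Hardware co-designed around this routing pattern can keep idle experts out of the fast memory path entirely, which is the efficiency gap Broadcom’s custom silicon is meant to exploit.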

OpenAI’s gigawatt-scale AI supercomputers redefine infrastructure

Sam has said that OpenAI’s current compute footprint is around 2 gigawatts, spread across global data centers. The Broadcom partnership aims to build up to 10 gigawatts by 2030, forming the physical base for what insiders are calling AI cities, dense campuses of servers, storage, and custom interconnects tied together by Broadcom’s Tomahawk Ultra networking chips.

That’s only part of the wave. Over the past three weeks, OpenAI has added 16 gigawatts in fresh capacity deals with AMD and Nvidia, bringing the total to levels that could require nearly $1 trillion in investment.
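A quick back-of-envelope check on those figures, using only the numbers reported in this article (rounded, and not official project accounting):

```python
# Back-of-envelope from the article's own figures -- rounded, not official:
broadcom_gw = 10      # Broadcom build-out target by 2030
new_deals_gw = 16     # recent AMD + Nvidia capacity deals
total_gw = broadcom_gw + new_deals_gw

est_cost_usd = 1e12   # the "nearly $1 trillion" reported investment

per_gw = est_cost_usd / total_gw
print(f"{total_gw} GW -> ~${per_gw / 1e9:.0f}B per gigawatt")
```

On those assumptions the build-out implies tens of billions of dollars per gigawatt of capacity, which puts the scale of the quoted deals in context.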

xAI’s Memphis Colossus already reached 1.21 gigawatts this fall. Meta’s Hyperion facility in Louisiana is approved for 2.3 gigawatts, with Mark Zuckerberg targeting 5 gigawatts. The AI energy race is officially global.

Sam described this transformation as “the biggest joint industrial project in history,” saying even these deals are “a drop in the bucket compared to where we need to go.” Part of his goal is to diversify suppliers.

The Stargate campus in Abilene, Texas, being built by Oracle, will focus on AI training, mostly on Nvidia chips. AMD hardware will handle inference workloads, while Broadcom’s custom silicon fills the efficiency gap.

As Nanos put it, “OpenAI is looking quite far into the future, and trying to make sure they have access to enough supply of chips.”


Source: https://www.cryptopolitan.com/openai-custom-chip-breakthrough/

