
Five architects of the AI economy explain where the wheels are coming off

2026/05/07 13:45
10 min read


Earlier this week, five people who touch every layer of the AI supply chain sat down at the Milken Global Conference in Beverly Hills, where they talked with this editor about everything from chip shortages to orbital data centers to the possibility that the whole architecture that undergirds the tech is wrong. On stage with Bitcoin World: Christophe Fouquet, CEO of ASML, the Dutch company that holds a monopoly on the extreme ultraviolet lithography machines without which modern chips would not exist; Francis deSouza, COO of Google Cloud, who is overseeing one of the biggest infrastructure bets in corporate history; Qasar Younis, co-founder and CEO of Applied Intuition, a $15 billion physical AI company that started in simulation and has since moved into defense; Dimitry Shevelenko, the chief business officer of Perplexity, the AI-native search-to-agents company; and Eve Bodnia, a quantum physicist who left academia to challenge the foundational architecture most of the AI industry takes for granted at her startup, Logical Intelligence. Here is what the five had to say.

The bottlenecks are real

The AI boom is running into hard physical limits, and the constraints begin further down the stack than many may realize. Fouquet was the first to say it, describing a “huge acceleration of chips manufacturing,” while expressing his “strong belief” that despite all that effort, “for the next two, three, maybe five years, the market will be supply limited,” meaning the hyperscalers — Google, Microsoft, Amazon, Meta — are not going to get all the chips they are paying for, full stop.

DeSouza highlighted how big — and how fast growing — an issue this is, reminding the audience that Google Cloud’s revenue crossed $20 billion last quarter, growing 63%, while its backlog — the committed but not yet delivered revenue — nearly doubled in a single quarter, from $250 billion to $460 billion. “The demand is real,” he said with impressive calm.

For Younis, the constraint comes primarily from elsewhere. Applied Intuition builds autonomy systems for cars, trucks, drones, mining equipment and defense vehicles, and his bottleneck is not silicon — it is the data that one can only gather by sending machines into the real world and watching what happens. “You have to find it from the real world,” he said, and no amount of synthetic simulation fully closes that gap. “There will be a long time before you can fully train models that run on the physical world synthetically.”

The energy problem is also real

If chips are the first bottleneck, energy is the one looming behind it. DeSouza confirmed that Google is exploring data centers in space as a serious response to energy constraints. “You get access to more abundant energy,” he noted. Of course, even in orbit, it is not simple. DeSouza observed that space is a vacuum, which eliminates convection and leaves radiation as the only way to shed heat into the surrounding environment, a much slower and harder-to-engineer process than the air and liquid cooling systems that data centers rely on today. But the company is still treating it as a legitimate path.
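To get a feel for why radiation-only cooling is such a constraint, a back-of-the-envelope Stefan-Boltzmann estimate shows how much radiator surface a given heat load would need. The numbers below are illustrative assumptions, not Google's figures:

```python
# Rough estimate of the radiator area needed to reject heat in vacuum,
# using the Stefan-Boltzmann law: P = epsilon * sigma * A * T^4.
# All figures here are illustrative assumptions, not data-center specs.

SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9       # assumed value for a coated radiator surface
RADIATOR_TEMP = 300.0  # K, radiator running near room temperature

def radiator_area_m2(heat_watts: float,
                     temp_k: float = RADIATOR_TEMP,
                     emissivity: float = EMISSIVITY) -> float:
    """Area needed to radiate `heat_watts` to deep space (ignoring
    solar and Earth heat loads, which make the real problem harder)."""
    return heat_watts / (emissivity * SIGMA * temp_k ** 4)

if __name__ == "__main__":
    for megawatts in (1, 10, 100):
        area = radiator_area_m2(megawatts * 1e6)
        print(f"{megawatts:>3} MW -> ~{area:,.0f} m^2 of radiator")
```

Under these assumptions, a single megawatt of waste heat needs on the order of a few thousand square meters of radiator, which is why radiative cooling is so much harder to engineer than the forced-air and liquid loops used on the ground.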

The deeper argument DeSouza made, somewhat unsurprisingly, was about efficiency through integration. Google’s strategy of co-engineering its full AI stack — from custom TPU chips through to models and agents — pays dividends in watts per flop that a company buying off-the-shelf components simply cannot replicate, he suggested. “Running Gemini on TPUs is much more energy efficient than any other configuration,” because chip designers know what is coming in the model before it ships, he said. In a world where energy availability is becoming a massive constraint on how far this tech can go, that kind of vertical integration is a major competitive advantage.

Fouquet echoed the point later in the discussion. “Nothing can be priceless,” he said. The industry is in a strange moment right now, investing extraordinary amounts of capital, driven by strategic necessity. But more compute means more energy, and more energy has a price.

A different kind of intelligence

While the rest of the industry debates scale, architecture, and inference efficiency within the large language model paradigm, Bodnia is building something very different. Her company, Logical Intelligence, is built on so-called energy-based models (EBMs), a class of AI that does not predict the next token in a sequence but instead attempts to understand the rules underlying data, in a way she argues is closer to how the human brain actually works.

“Language is a user interface between my brain and yours,” she said. “The reasoning itself is not attached to any language.” Her largest model runs to 200 million parameters — compared to the hundreds of billions in leading LLMs — and she claims it runs thousands of times faster. More importantly, it is designed to update its knowledge as data changes, rather than requiring retraining from scratch. For chip design, robotics and other domains where a system needs to grasp physical rules rather than linguistic patterns, she argues EBMs are the more natural fit. “When you drive a car, you are not searching for patterns in any language. You look around you, understand the rules about the world around you, and make a decision.” It is an interesting argument and one that is likely to attract more attention in the coming months, given the AI field is beginning to ask whether scale alone is sufficient.
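The core idea Bodnia describes, scoring whole configurations by how consistent they are with learned rules rather than predicting the next token, can be pictured with a toy example. This is a pedagogical sketch of the general energy-based-model concept, not Logical Intelligence's system:

```python
# Toy illustration of the energy-based modeling idea: the model assigns
# an energy (a "badness" score) to whole configurations and prefers
# low-energy ones, and it can update its parameters online as new data
# arrives. Pedagogical sketch only, not Logical Intelligence's system.

def energy(w: float, x: float, y: float) -> float:
    """Energy of candidate answer y for input x under the rule y ~ w*x.
    Low energy means consistent with the learned rule."""
    return (y - w * x) ** 2

def infer(w: float, x: float, candidates: list[float]) -> float:
    """Inference = pick the candidate with the lowest energy."""
    return min(candidates, key=lambda y: energy(w, x, y))

def update(w: float, x: float, y_observed: float, lr: float = 0.1) -> float:
    """Online update: nudge the parameter to lower the energy of an
    observed pair, instead of retraining from scratch."""
    grad = 2 * (w * x - y_observed) * x   # dE/dw
    return w - lr * grad

if __name__ == "__main__":
    w = 0.0                      # the model starts knowing nothing
    for _ in range(50):          # stream of observations of the rule y = 2x
        w = update(w, 3.0, 6.0)
    print(infer(w, 4.0, [5.0, 8.0, 11.0]))  # picks 8.0, consistent with y = 2x
```

The contrast with next-token prediction is the shape of the computation: inference searches for the lowest-energy configuration, and learning adjusts the energy landscape incrementally, which is what allows knowledge to update as data changes.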

Agents, guardrails, and trust

Shevelenko spent much of the conversation explaining how Perplexity has evolved from a search product into something it now calls a “digital worker.” Perplexity Computer, its newest offering, is designed not as a tool a knowledge worker uses, but as a staff that a knowledge worker directs. “Every day you wake up and you have a hundred staff on your team,” he said of the opportunity. “What are you going to do to make the most of it?”

It is a compelling pitch; it also raises obvious questions about control, so I put them to him. His answer was granularity. Enterprise administrators can specify not just which connectors and tools an agent can access, but whether those permissions are read-only or read-write, a distinction that matters enormously when agents are acting inside corporate systems. When Comet, Perplexity’s computer-use agent, takes actions on a user’s behalf, it presents a plan and asks for approval first. Some users find the friction annoying, Shevelenko said, but he considers it essential, particularly after joining the board of Lazard, where he said he has found himself unexpectedly sympathetic to the conservative instincts of a CISO protecting a 180-year-old brand built entirely on client trust. “Granularity is the bedrock of good security hygiene,” he said.
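The read-only versus read-write distinction Shevelenko describes can be pictured as a small permission check that runs before every agent action. The connector names and policy shape here are hypothetical, not Perplexity's actual API:

```python
# Hypothetical sketch of connector-level agent permissions: an admin
# grants each connector "read" or "read-write" scope, and every agent
# action is checked against that policy before it runs. The connector
# names and policy shape are illustrative, not a real product API.

from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    # connector name -> granted scope ("read" or "read-write")
    grants: dict[str, str] = field(default_factory=dict)

    def allows(self, connector: str, action: str) -> bool:
        """`action` is "read" or "write". Writes need read-write scope,
        and anything not explicitly granted is denied."""
        scope = self.grants.get(connector)
        if scope is None:
            return False          # default deny: no grant, no access
        if action == "read":
            return True           # both scopes permit reading
        return scope == "read-write"

if __name__ == "__main__":
    policy = AgentPolicy(grants={"crm": "read", "calendar": "read-write"})
    print(policy.allows("crm", "read"))        # granted: CRM is readable
    print(policy.allows("crm", "write"))       # denied: CRM is read-only
    print(policy.allows("calendar", "write"))  # granted: read-write scope
    print(policy.allows("email", "read"))      # denied: no grant at all
```

Default-deny plus per-connector scopes is the "granularity" in question: an agent can summarize a CRM record without ever being able to modify it.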

Sovereignty, not just safety

Younis offered what may have been the panel’s most geopolitically charged observation: physical AI and national sovereignty are entangled in ways that purely digital AI never was. The internet initially spread as American technology and faced pushback only at the application layer — the Ubers and DoorDashes — when offline consequences became visible. Physical AI is different. Autonomous vehicles, defense drones, mining equipment, agricultural machines — these manifest in the real world in ways governments cannot ignore, raising questions about safety, data collection, and who ultimately controls systems that operate inside a nation’s borders.

“Almost consistently, every country is saying: we don’t want this intelligence in a physical form in our borders, controlled by another country.” Fewer nations, he told the crowd, can currently field a robotaxi than possess nuclear weapons.

Fouquet framed it a little differently. China’s AI progress is real — DeepSeek’s release earlier this year sent something close to a panic through parts of the industry — but that progress is constrained below the model layer. Without access to EUV lithography, Chinese chipmakers cannot manufacture the most advanced semiconductors, and models built on older hardware operate at a compounding disadvantage no matter how good the software gets. “Today, in the United States, you have the data, you have the computing access, you have the chips, you have the talent. China does a very good job on the top of the stack, but is lacking some elements below,” Fouquet said.

The generation question

Near the end of the panel, someone in the audience asked the obvious uncomfortable question: is all of this going to impact the next generation’s capacity for critical thinking? The answers were, perhaps unsurprisingly, optimistic, though not naively so.

DeSouza pointed to the scale of problems that more powerful tools might finally let humanity address: neurological diseases whose biological mechanisms we do not yet understand, greenhouse gas removal, and grid infrastructure that has been deferred for decades. “This should unleash us to the next level of creativity,” he said.

Shevelenko made a more pragmatic point: the entry-level job may be disappearing, but the ability to launch something independently has never been more accessible. “[For] anybody who has Perplexity Computer . . . the constraint is your own curiosity and agency.”

Younis drew the sharpest distinction between knowledge work and physical labor. He pointed to the fact that the average American farmer is 58 years old and that labor shortages in mining, long-haul trucking, and agriculture are chronic and growing — not because wages are too low, but because people do not want those jobs. In those domains, physical AI is not displacing willing workers. It is filling a void that already exists and looks only to deepen from here.

Conclusion

The Milken panel laid bare the AI industry’s most pressing tensions: supply constraints that will last years, energy limits that are pushing companies to consider orbital data centers, and a growing recognition that the current dominant architecture — massive language models — may not be the only path forward. Meanwhile, the geopolitical stakes are rising, as physical AI forces governments to confront sovereignty questions that digital platforms never did. For readers, the takeaway is clear: the AI boom is real, but it is colliding with hard physical and structural realities that no amount of software optimization alone can solve.

FAQs

Q1: Why are AI chips still in short supply despite massive investment?
ASML CEO Christophe Fouquet stated that even with accelerated manufacturing, the market will likely remain supply-limited for the next two to five years, meaning hyperscalers like Google and Microsoft will not receive all the chips they have ordered.

Q2: What are energy-based models and how do they differ from LLMs?
Energy-based models (EBMs), developed by Logical Intelligence, do not predict the next token in a sequence. Instead, they attempt to understand the underlying rules of data, similar to human reasoning. They are smaller, faster, and can update knowledge without full retraining.

Q3: How is physical AI different from digital AI in terms of national security?
Applied Intuition’s Qasar Younis noted that physical AI systems like autonomous vehicles and defense drones operate within national borders, raising sovereignty concerns that digital AI never did. Fewer nations can field a robotaxi than possess nuclear weapons.

This post Five architects of the AI economy explain where the wheels are coming off first appeared on BitcoinWorld.

