
The AI Bubble and Big Tech’s Bid to Dominate the Economy

Whether we like it or not, and despite tales of its powers being greatly exaggerated, the AI genie is out of the box. What does that mean, and what can we do about it?

In another twist of abysmal AI politics, OpenAI CEO Sam Altman just admitted that we are in an AI bubble, and AGI is losing relevance. You may find this baffling or hilarious, or you may be wondering where that leaves the AI influencer types.

But despite the absurdity, AI and the associated narrative have gotten way too important to dismiss. Connecting the dots to make sense of it all calls for long-standing experience in AI, engineering, business and beyond. In other words, for people like Georg Zoeller: a seasoned software and business engineer with frontier-technology experience in the gaming industry and at Facebook.

Zoeller has been using AI in his work since the 2010s, to the point where AI is now at the core of what he does. He is the VP of Technology of NOVI Health, a Singapore-based healthcare startup, as well as the Co-Founder of the Centre for AI Leadership and the AI Literacy & Transformation Institute.

In a wide-ranging conversation with Zoeller, we addressed everything from AI first principles to its fatal flaws and its place in capitalism. Today, we discuss regulatory capture, copyright, the limits of the attention economy, the new AI religion, the builder’s conundrum, how the AI-powered transformation of software engineering is a glimpse into the future of work, AI literacy and how to navigate the brave new world.

https://www.youtube.com/watch?v=U2okYcZ-ATs&embedable=true

OpenAI’s regulatory capture

Picking up where we left off in part 1 of the conversation, Zoeller argues that OpenAI will never deliver the ROI promised to investors, because their competition arrived at the same point spending 100X (or more) less. The underwhelming release of GPT-5 seems to support this thesis, prompting Altman to backpedal. But in a way, Zoeller goes on to add, that does not matter.

Zoeller points out that the embrace between the US government and the AI bros is so tight that it even includes feeding the social media history of every single person who crosses the border into Grok and getting an assessment.

It’s an entirely non-transparent process and the technology is not explainable. That would be a rights violation in every normal country, but we are no longer operating at a normal scale. Either way, whether we like it or not, the technology is already transforming our economy and society.

Moving fast and breaking copyright

Unregulated use of AI has broken copyright, Zoeller argues. There are numerous reports of AI-generated copies of original content flooding marketplaces like Amazon. Creators put a lot of time and effort into their work, only to have it stolen before their eyes – same content, different words.

Zoeller does not have a rosy view of copyright. He believes copyright was created to incentivize creators to continue to create, not because the world liked artists, but because the world liked publishers. Publishers lobbied hard, and they had a lot of money and power. Now copyright is broken, not just for books, but for the Internet at large.

Many of these models are trained on Web 2.0 platforms such as Stack Overflow. Places like this, where every software engineer would go to find answers and share knowledge freely, were a Web 2.0 phenomenon that no longer works in the world of AI, Zoeller thinks.

He also thinks that with the incentive system broken, we can no longer expect that you just need to make the right app and people will come. Attention is the underlying primitive for everything, more valuable than money, because it represents opportunity. We have failed to regulate the attention economy, and have surrendered full control of all attention to a handful of platforms that will benefit massively from AI.

The limits of the attention economy

Facebook isn’t giving a Manhattan Project’s worth of AI away for free because they love open source, Zoeller opines. They’re doing it because accelerating content creation means ever more content on the supply side, which means creators need a platform, and they have to pay to get through the door to customers.

The public marketplace of the digital economy is mediated by a handful of companies. The cost of user acquisition for SMEs is often hundreds of dollars per user. This is a silent tax on every item sold, because advertising is baked into the cost of every product. AI is now commoditizing knowledge-economy digital products, just like the industrial revolution commoditized physical goods.
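To make the “silent tax” concrete, here is a minimal back-of-the-envelope sketch. All numbers are illustrative assumptions, not figures from the conversation:

```python
# Back-of-the-envelope sketch of customer acquisition cost (CAC) as a "silent tax".
# All numbers below are illustrative assumptions, not figures from the article.

cac = 200.0              # assumed cost to acquire one customer, in dollars
orders_per_customer = 5  # assumed number of orders over that customer's lifetime
price_per_order = 80.0   # assumed sticker price of one order, in dollars

# The acquisition cost has to be recovered across the customer's orders,
# so a slice of every sale goes to the platforms selling the attention.
tax_per_order = cac / orders_per_customer
tax_share = tax_per_order / price_per_order

print(f"Per-order acquisition cost: ${tax_per_order:.2f}")   # $40.00
print(f"Share of each sale going to user acquisition: {tax_share:.0%}")  # 50%
```

Under these assumed numbers, half of every sale goes to acquiring the customer in the first place, which is the point: the tax is invisible on the receipt but baked into the price.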

The AI religion

Governments, Zoeller points out, are careful not to regulate the technology for fear of missing out on the magical growth it might deliver. But we’ve reached a dead end, because the only outlet left for growth is automating, i.e. removing, jobs. That redistributes GDP rather than growing it.

Big tech companies, Zoeller relates, always talk about the best things that may happen. That forces regulators to weigh present negative effects against untold future riches: “No one has the guts to walk away from growth in a very growth constrained world, because we are just at that point where capitalism is kinda running out of growth”.

https://pod.co/orchestrate-all-the-things-podcast-connecting-the-dots-with-george-anadiotis/breaking-the-ai-bubble-big-tech-plus-ai-equals-economy-takeover?embedable=true

The techno-optimist scenarios claim that fueling AI will enable the world to take on its biggest issues, such as climate change, and create abundance for everyone. Zoeller calls this out on both technical and social grounds.

On the technical front, Zoeller points out that LLMs are hitting a wall. All the content in the world, every book that’s ever been written, is probably already in the training corpora of frontier LLMs. But there’s no scenario where that turns into superhuman intelligence.

The builder’s conundrum

Having worked with AI and having spent years in the gaming industry, Zoeller uses gaming as an example of the conundrum people are faced with: hype and under-appreciation exist at the same time.

Zoeller thinks the gaming industry is toast. Not because the technology can replace all creativity, but because it wipes out a large number of jobs in that industry.

3D modeling is no longer going to be a job for most people, because the technology will be able to do it. AI went from not being able to draw hands in images to producing videos that will soon be close to indistinguishable from reality.

It’s only a matter of time until good storytellers get their hands on it and integrate it into their process. Zoeller thinks an AI native game studio is possible today. But the technology is moving so fast that in six months or a year it would be completely different.

AI is transforming software engineering

Zoeller also shared his experience working in a critical domain such as healthcare. The unreliability of LLM-based AI renders it unfit for such applications. And yet, people have to work with it. It’s initially scary and it will make you feel almost demotivated, as Zoeller puts it.

For example, software engineering becomes something different. Human coders are relegated to being code reviewers, and reviewing code written by a machine that oscillates between making stupid errors and writing brilliant code is tedious.

Zoeller acknowledges that for some scenarios AI coding is so fast that coding by hand no longer makes sense. The problem, however, is that AI coding brings a radically different aspect to software engineering, and not necessarily a good one: it introduces nondeterminism into the abstraction layers of software engineering.

AI is transforming software engineering, and not necessarily for the best. Source: MIT News

Software engineering has evolved from working in low-level assembly languages to progressively higher levels of abstraction. There are many layers of abstraction now, meaning lots of complexity as well. But these layers are all deterministic, therefore they can be tested.

One set of inputs produces another set of outputs in a reliable way. That’s a precondition for testing, which is a big, hugely important part of the software engineering discipline.
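A minimal sketch of why determinism matters for testing: the deterministic function below can be pinned down with an exact assertion, while the LLM-backed variant (a hypothetical stand-in, not a real API) can return different wording for the same input, so the same style of test stops being reliable.

```python
# Minimal sketch: deterministic code is testable; nondeterministic output is much harder to pin down.

def slugify(title: str) -> str:
    """Deterministic: the same input always yields the same output."""
    return "-".join(title.lower().split())

def test_slugify():
    # This assertion passes on every single run.
    assert slugify("Hello World") == "hello-world"

def llm_summarize(text: str) -> str:
    """Hypothetical stand-in for an LLM call; sampling makes its output vary between runs."""
    raise NotImplementedError("placeholder for a model call")

def test_llm_summarize():
    # An exact-match assertion like this is brittle: the model may phrase a
    # correct summary differently on every invocation, so testing has to fall
    # back to fuzzy checks, evals, or human review.
    assert llm_summarize("Long article text...") == "A fixed expected summary"
```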

:::tip

Join the Orchestrate all the Things Newsletter

Stories about how Technology, Data, AI and Media flow into each other shaping our lives.

Analysis, Essays, Interviews and News. Mid-to-long form, 1-3 times per month.

Subscribe here 👉 https://linkeddataorchestration.com/orchestrate-all-the-things/newsletter/

:::

AI Literacy

Either way, even for people who feel that it’s worthwhile pushing back, Zoeller’s message is clear: you still need to learn the technology. You can’t be credible unless you have hands-on exposure.

So then the question is, what’s the best way to do that? In other words, what is the best path towards AI Literacy? AI Literacy is an EU AI Act requirement as of 2025, meaning that every organization that uses AI in one way or another must ensure AI Literacy for its people.

Zoeller is involved in AI Literacy programs as well, and he believes most educational institutions are not fit for purpose. The frontier nature of the technology is incompatible with their internal bureaucracy. Plus, Zoeller notes, the academic peer review system is deeply problematic at this point – sidestepped by arXiv and hijacked by PR and AI slop.

There are six pillars of AI Literacy, with ‘Create’ showing a significant, positive effect on all others

To be up to date and useful, AI education requires educators who use and experiment with the technology and also keep up with the latest research. This, of course, is a full-time job, and most of us can’t do that.

A typical example is so-called prompt engineering. Most of the educational material available is focused exclusively on ChatGPT, which is already a problem. But it gets worse, because most of it is also outdated and wrong by now. The models have changed, so the early techniques are no longer useful.

What you have to teach people, Zoeller emphasizes, are the fundamentals that help connect how this technology works with their own experience: things such as the first principles of AI, how transformers work, what types of use cases lend themselves to AI, and what the pitfalls are.
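As one example of the kind of fundamentals Zoeller means, here is a textbook-style sketch of scaled dot-product attention, the core operation inside transformers. This is a generic illustration, not code from the interview:

```python
# Minimal sketch of scaled dot-product attention, the core operation in transformers.
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # weighted mix of the value vectors

# Toy example: 3 tokens, 4-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(attention(Q, K, V).shape)  # (3, 4): one contextualized vector per token
```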

Zoeller is focused on AI Literacy for executives and line managers. The reason, he says, is that people have to go through a trough of disillusionment to realize what AI can mean for their organization. He finds it doesn’t help to expose people with no agency to that knowledge. What does help is thoughtful change management and compliance policies.

Big Tech plus AI equals economy takeover

Whether you agree with Zoeller’s takes or not, there’s one thing most of us will agree on: it’s all moving at superhuman speed, and nobody is really able to keep up anymore. Zoeller refers to this as “a singularity”, although not the kind people usually think of in the context of AGI.

Zoeller thinks nobody is capable of predicting where this is going, because Big Tech companies are intentionally accelerating technology to keep the rest of the economy vulnerable to them. The rest of the economy is either going into the wrong adventure or paralyzed, as he puts it.

AI is not your average disruptive innovation

A typical argument used against AI skepticism is that this is the usual cycle of technological disruption: in due time, the dust will settle, we’ll figure it all out, and new jobs will be created that we can’t even imagine at this time.

Zoeller, who also happens to teach a university class on this, disagrees. What’s different this time is speed, he says. Unlike the infrastructure for previous technologies such as electricity or automobiles, which took a long time to deploy physically, the infrastructure for AI is digital and already there – APIs, apps, open source, GPUs, the cloud.
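One way to see how little deployment friction there is: wiring a model into an existing product is often just an HTTPS call over infrastructure that already exists. The endpoint, token and payload below are placeholders, not a specific vendor’s API:

```python
# Sketch: adding AI to an existing product is often just an HTTP call over existing infrastructure.
# The endpoint, token and response shape below are placeholders, not a real vendor API.
import json
import urllib.request

def ask_model(prompt: str) -> str:
    req = urllib.request.Request(
        "https://api.example-inference-provider.com/v1/generate",  # hypothetical endpoint
        data=json.dumps({"prompt": prompt, "max_tokens": 256}).encode(),
        headers={"Authorization": "Bearer <API_KEY>", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]  # assumed response shape

# No new factories, grids or roads required: the cloud, the APIs and the app stores already exist.
```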

As each technology layers on top of the previous one, disruption accelerates. Source: PROS

People, he argues, are not very good at adopting new technologies – they have to be pushed. It’s often not pretty, but time makes all the difference in mitigating the impact on society. Time is something we don’t have this time around.

But that’s not the only thing that’s different about AI, he adds. To illustrate the point, Zoeller turns again to software. He refers to software as “a machine that automates a specific task”. AI represents the ability to print that machine, and the machine is no longer created by a human.

The brave new AI world

Software engineering is among the first professions to be disrupted, because there’s abundant data that can be used for training, and the output is testable. Ironically, software engineers are now on the receiving end of automation they helped bring about. Extrapolating this, Zoeller thinks we’re likely to see a future in which people first help train their replacements, then are relegated to quality assurance roles.

Where does that leave us? Zoeller is not optimistic. In fact, he spells it out clearly: we are going through another industrial revolution, but this one has new properties that make it extremely dangerous for society. Not because rogue AI may take over, but because it may lead to a full system collapse – or rather, we may add, accelerate the path we’re already on.

