AI is no longer just a technological race: it is a governance challenge, a security tool, and potentially an existential risk.
Speaking on stage at the HumanX event in San Francisco, European Commissioner Magnus Brunner outlined Europe's approach to regulating AI, defending the controversial AI Act while acknowledging its limits, and warning about a future in which artificial intelligence could slip beyond human control.
Brunner addressed one of the most common criticisms leveled at Europe: that it regulates too early and too much. For him, however, regulation is not a constraint; it is infrastructure.
“Football is a great game, but you need rules. You need lines, goals, and a referee. That’s also the case with AI.”
The EU’s AI Act is designed to create a unified framework across 27 member states and 450 million citizens, setting what Brunner calls “guardrails” for trustworthy AI development.
While critics argue this slows innovation, Brunner pushed back.
One of the most striking contrasts highlighted during the discussion was the regulatory gap between Europe and the United States.
While the EU has introduced a single comprehensive law, the US remains fragmented, with AI regulations emerging at the state level.
Interestingly, Brunner noted that some US states — particularly California — are moving toward frameworks similar to Europe’s.
This signals a potential convergence between the two models, despite philosophical differences.
Beyond regulation, Brunner emphasized a less-discussed dimension: AI as a tool in modern crime — and in law enforcement.
According to him, criminal organizations are rapidly adopting AI.
In response, European institutions — particularly Europol — are integrating AI into their operations.
One alarming trend he pointed to is the age at which these organizations now recruit.
One of the most concrete applications discussed was the EU's new entry-exit system, a large AI-driven infrastructure designed to monitor movement across borders.
Brunner described its rollout as rapid: within just a few months, the system has come to integrate biometric data and real-time data sharing across member states, something previously impossible.
Perhaps the most sensitive issue remains the balance between civil liberties and security.
Brunner openly acknowledged the tension between the two.
The debate becomes even more intense in areas like child protection.
Even so, he reaffirmed that fundamental rights remain non-negotiable.
Looking ahead, Brunner did not shy away from existential concerns.
His biggest fear is not misuse but loss of control.
He even referenced scenarios in which AI systems could resist being shut down.
While still hypothetical, he warned that the trajectory is already pointing in that direction.
Despite geopolitical tensions, Brunner framed AI development as both a competition and a collaboration.
He suggested a mutual exchange where Europe offers regulatory frameworks and the US offers innovation and flexibility.
In a world increasingly shaped by AI, and by competing models of governance, that cooperation may prove decisive in shaping better rules as well.