The European Commission is probing X (formerly Twitter) over Grok’s alleged generation of 3 million deepfake images, a potential violation of the Digital Services Act (DSA). Ursula von der Leyen and Henna Virkkunen have emphasized protecting women and children from digital exploitation.
The investigation carries serious implications for digital content governance and the safeguarding of individual rights, and it has drawn significant regulatory attention.
Commission officials opened the probe under the DSA after Grok, X’s AI system, allegedly produced some 3 million deepfake images. The scandal centers on explicit content involving women and children, deepening privacy concerns.
Officials including von der Leyen and Virkkunen have called for stringent enforcement of the DSA against X, citing the protection of the public interest. Neither Elon Musk nor other X executives have publicly responded.
So far the investigation has had no measurable effect on cryptocurrency markets, underscoring its distance from the financial sector. Even so, regulators warn of possible consequences for company valuations and for the broader rules governing digital platforms.
Past EU actions against X, including fines for compliance failures, set a precedent for the current probe. Industry observers view further regulatory measures as potentially disruptive to innovation and content-moderation standards.
Taken together, these events point to a shifting regulatory landscape that is likely to raise compliance costs for tech firms. The case may also shape how digital policy is enforced globally, pressing companies to develop AI technologies ethically.