
What Italy’s DeepSeek decision tells us about why businesses still don’t trust AI

2026/02/15 22:10
6 min read

Italy’s DeepSeek ruling reflects a wider AI pattern 

Italy’s decision to close its antitrust probe into DeepSeek after the company agreed to improve warnings about AI “hallucinations” has been widely framed as a pragmatic regulatory outcome. Since then, DeepSeek has revealed plans to launch a country-specific Italian version of its chatbot, with the main change being more prominent disclosures about hallucinations. Within 120 days, DeepSeek must also report back to regulators on its technical efforts to reduce hallucination rates.

On paper, this looks like progress. Italy’s AI law was billed as the first comprehensive framework of its kind, and the intervention shows regulators are serious about enforcing it. Italy’s antitrust authority has stepped in, hallucination disclosure has been agreed, and DeepSeek has committed to technical improvements.

But the ruling also exposes a deeper, unresolved issue that goes far beyond this one company. While DeepSeek has been asked to prove it is trying to reduce hallucination rates, disclosure is being framed as more important than structural change. This signals regulatory comfort with warnings and caveats, even when the underlying accuracy problem remains. Disclosure does not create trust or increase productivity – it merely makes the problem more visible.

Transparency is becoming a substitute for safety 

Across jurisdictions, regulators are increasingly encouraging generative AI companies to explain hallucination risks to users. It’s understandable why regulators are reaching this conclusion: if AI systems can generate false or misleading information, users need to be warned.

While the intention addresses a real concern, all a warning does is shift responsibility downstream, onto the person using the AI.

This creates a nonsensical dynamic: AI providers acknowledge their systems can be wrong, regulators accept warnings as mitigation, and consumers and enterprises are left with tools officially labelled as unreliable. Yet the pressure remains to exploit AI to drive productivity, efficiency, and growth; this is especially problematic in high-stakes, regulated environments.

Why enterprises still don’t trust AI at scale 

The majority of businesses experimenting with AI are not trying to build chatbots for casual use. They are looking to deploy AI in areas like decision-support, claims-handling, legal analysis, compliance workflows, and customer communications. In these contexts, “this output might be wrong” is not a tolerable risk position. 

Organisations need to be able to answer basic questions about AI behaviour: 

  • Why did the system produce this output? 
  • What data did it rely on? 
  • What rules or constraints were applied, and what happens when it is uncertain? 

Businesses need AI that can show how it is working and prove its outputs are correct. If the only safeguard is a warning banner, those questions remain unanswered.
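
To make that concrete, here is a minimal sketch of what an auditable output record could look like. The schema and field names are hypothetical, not drawn from any particular vendor; the point is simply that each audit question becomes a mandatory field rather than an afterthought.

```python
from dataclasses import dataclass

# Hypothetical record an enterprise could attach to every AI output; the
# field names are invented for illustration and map onto the audit
# questions above.
@dataclass
class AuditableOutput:
    answer: str                  # the output itself
    reasoning_trace: list[str]   # why the system produced this output
    sources: list[str]           # what data it relied on (document IDs, URLs)
    rules_applied: list[str]     # which constraints were checked before release
    confidence: float            # the system's own uncertainty estimate, 0 to 1
    escalate_below: float = 0.8  # threshold for deferring to a human

    def requires_human_review(self) -> bool:
        # What happens when the system is uncertain: the answer is not
        # released automatically but routed to a person instead.
        return self.confidence < self.escalate_below
```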

Without those answers, many organisations hit what can be described as an ‘AI trust ceiling’: a point where pilots stall, use cases stop expanding, and return on investment plateaus because outputs can’t be confidently relied on, audited, or defended.

This is why AI regulation must prioritise improving accuracy rather than mandating disclosure. A study by the Massachusetts Institute of Technology (MIT) found that 95% of organisations that have integrated AI into their operations have seen zero return. The technology that was supposed to be the economy’s saving grace may be stalling productivity rather than aiding it.

The trust ceiling is not just a regulatory problem 

It’s tempting for AI companies to frame the trust ceiling as a side effect of regulation – something caused by cautious regulators or complex compliance requirements – but that’s not the case. The trust ceiling exists because of how most AI systems are built.

Mistakes are built into large language models because of the engineering that underpins them. While they’ve improved dramatically over the past year, they are still probabilistic systems, meaning they are always predicting the next word rather than checking whether something is true. They’re optimised to sound convincing, not to guarantee correctness or to explain how an answer was reached. 
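
A toy sketch makes the point. The “model” below is a stand-in with invented probabilities, but the loop is structurally what every LLM does: score candidate tokens, sample one, repeat. Nothing in it checks truth.

```python
import random

# Stand-in for a language model: given the text so far, assign a probability
# to each candidate next token. The values here are invented for illustration.
def toy_model(context: str) -> dict[str, float]:
    return {"Paris": 0.55, "Lyon": 0.25, "Berlin": 0.20}

def generate(prompt: str, steps: int = 1) -> str:
    text = prompt
    for _ in range(steps):
        dist = toy_model(text)
        tokens, weights = zip(*dist.items())
        # Sampling, not verifying: the most *plausible* token usually wins,
        # but an implausible one is emitted some fraction of the time.
        text += " " + random.choices(tokens, weights=weights)[0]
    return text

print(generate("The capital of France is"))
# Most runs print "...Paris", but roughly one run in five emits "Lyon" or
# "Berlin" -- a hallucination produced by design, not by a bug.
```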

Warnings acknowledge this limitation rather than addressing it. They normalise the idea that hallucinations are an unavoidable feature of AI, rather than a design constraint that can be managed. 

That is why transparency alone will not help businesses get their AI chatbots out of the pilot phase and into everyday workflows. It simply makes the limits more explicit, meaning that workers will need to check every single output manually.

DeepSeek’s technical commitments are encouraging – but incomplete 

DeepSeek’s commitment to lowering hallucination rates through technical fixes is a positive step. Acknowledging that hallucinations are a global challenge and investing in mitigation is much better than ignoring the problem. 

However, even the Italian regulator noted that hallucinations “cannot be entirely eliminated.” That statement reads as the end of the conversation, but it should be the start of a more nuanced one about how hallucinations can be structurally constrained to increase reliability.

Designing systems that can say when they are uncertain, defer decisions, or be audited after the fact is transformative. This is achievable through reasoning models that combine probabilistic and deterministic approaches, such as neurosymbolic AI. 
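
As a rough sketch of that hybrid pattern (the claim format, threshold, and rule below are all hypothetical): a probabilistic model proposes, and a deterministic layer disposes – enforcing hard rules, and deferring to a human when confidence is low.

```python
MAX_PAYOUT = 10_000    # a hard, deterministic business rule
MIN_CONFIDENCE = 0.8   # below this, the system defers rather than decides

def model_propose(claim_text: str) -> dict:
    # Stand-in for a probabilistic model call: returns a structured proposal
    # plus a self-reported confidence score. Values invented for illustration.
    return {"payout": 12_500, "confidence": 0.62}

def decide(claim_text: str) -> str:
    proposal = model_propose(claim_text)
    # Deterministic checks: each one passes or fails outright, no sampling.
    if proposal["payout"] > MAX_PAYOUT:
        return "REJECTED by rule: payout exceeds the hard limit"
    if proposal["confidence"] < MIN_CONFIDENCE:
        return "DEFERRED: model is uncertain, routing to a human reviewer"
    return f"APPROVED: payout {proposal['payout']}"

print(decide("Storm damage to roof, contractor quote attached"))
# -> "REJECTED by rule: payout exceeds the hard limit"
```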

Some regulators and AI companies assume this will slow innovation, but in reality it will propel it. Building AI systems that are fit for everyday use beyond demos and low-risk experimentation is the key to unlocking growth.

Why disclosure-first regulation is limiting AI’s potential 

The DeepSeek case highlights a broader regulatory challenge. Disclosure is one of the few levers regulators can pull quickly, especially when dealing with fast-moving technologies. But disclosure is a blunt instrument.  

It treats all use cases as equal and assumes users can absorb and manage risk themselves. For enterprises operating under regimes like the EU AI Act, the FCA’s Consumer Duty, or sector-specific compliance rules, that assumption breaks down. These organisations cannot simply warn end users and move on. They remain accountable for outcomes, so many will choose not to deploy AI at all.

Moving beyond the trust ceiling 

If AI is to move from experimentation to infrastructure, the industry needs to shift its focus. Instead of asking whether users have been warned, we should be asking whether systems are designed to be constrained, explainable, and auditable by default. 

That means prioritising architectures that combine probabilistic models with deterministic checks, provenance tracking, and explicit reasoning steps. It means treating explainability as a core requirement, not an add-on. Most importantly, it means recognising that trust is not built through disclaimers, but through systems that can consistently justify their behaviour. 

What the DeepSeek case should really signal 

Italy’s handling of the DeepSeek probe is not a failure of regulation. It is a signal that we are reaching the limits of what transparency-only approaches can achieve. Warnings may reduce legal exposure in the short term, but they do not raise the trust ceiling for businesses trying to deploy AI responsibly. 

If we want AI to deliver on its economic and societal promises, we need to move past the idea that informing users of risk is enough. The next phase of AI adoption will be defined not by who discloses the most, but by who designs systems that can be trusted – no warning required.
