
What Italy’s DeepSeek decision tells us about why businesses still don’t trust AI

2026/02/15 22:10
6 min read
For feedback or concerns regarding this content, please contact us at crypto.news@mexc.com

Italy’s DeepSeek ruling reflects a wider AI pattern 

Italy’s decision to close its antitrust probe into DeepSeek after the company agreed to improve warnings about AI “hallucinations” has been widely framed as a pragmatic regulatory outcome. Since then, DeepSeek has revealed plans to launch a country-specific Italian version of its chatbot, with the major change being more pronounced disclosures about hallucinations. In 120 days, DeepSeek must also report back to regulators on technical efforts to reduce hallucination rates. 

On paper, this looks like progress. Italy’s AI law was billed as the first comprehensive framework of its kind, and the intervention shows regulators are serious about enforcing it. Italy’s antitrust authority has stepped in, hallucination disclosures have been agreed, and DeepSeek has committed to technical improvements.

But the ruling also exposes a deeper, unresolved issue that goes far beyond this one company. While DeepSeek has been asked to show it is trying to reduce hallucination rates, disclosure is being framed as more important than structural change. This signals regulatory comfort with warnings and caveats, even when the underlying accuracy problem remains. Disclosure does not create trust or increase productivity; it merely makes the problem more visible.

Transparency is becoming a substitute for safety 

Across jurisdictions, regulators are increasingly encouraging generative AI companies to explain hallucination risks to users. It’s understandable how regulators are reaching this conclusion. If AI systems can generate false or misleading information, users need to be warned. 

While this intention addresses a real concern, all a warning does is shift responsibility downstream, onto the person using the AI.  

This creates a nonsensical dynamic: AI providers acknowledge their systems can be wrong, regulators accept warnings as mitigation, and consumers and enterprises are left with tools officially labelled as unreliable. Yet the pressure remains to exploit AI to drive productivity, efficiency, and growth; this is especially problematic in high-stakes, regulated environments.

Why enterprises still don’t trust AI at scale 

The majority of businesses experimenting with AI are not trying to build chatbots for casual use. They are looking to deploy AI in areas like decision-support, claims-handling, legal analysis, compliance workflows, and customer communications. In these contexts, “this output might be wrong” is not a tolerable risk position. 

Organisations need to be able to answer basic questions about AI behaviour: 

  • Why did the system produce this output? 
  • What data did it rely on? 
  • What rules or constraints were applied, and what happens when it is uncertain? 

Businesses need AI that can show how it’s working, and prove its output is correct. If the only safeguard is a warning banner, their questions remain unaddressed. 
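
As a sketch of what answering those three questions could look like in practice, consider a minimal, hypothetical audit record attached to every AI output. All field names, values, and the confidence threshold below are illustrative assumptions, not drawn from any specific product:

```python
from dataclasses import dataclass

# Hypothetical sketch: one way an enterprise system could make each
# AI answer accountable to the questions above (why this output,
# what data, what rules, what happens under uncertainty).
@dataclass
class AuditedAnswer:
    output: str
    sources: list          # what data the answer relied on
    rules_applied: list    # constraints checked before release
    confidence: float      # the system's own uncertainty estimate

    def is_releasable(self, threshold: float = 0.8) -> bool:
        """Release only grounded, rule-checked, confident answers;
        otherwise the workflow should defer to a human."""
        return bool(self.sources) and bool(self.rules_applied) \
            and self.confidence >= threshold

answer = AuditedAnswer(
    output="Claim approved under policy section 4.2",
    sources=["policy_doc_v3", "claim_record_8841"],
    rules_applied=["coverage_limit_check"],
    confidence=0.65,
)
print(answer.is_releasable())  # False: below threshold, so defer
```

The point of the sketch is not the specific fields but the discipline: an answer with no recorded sources, no applied rules, or low confidence is simply not released.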

As a result of not having this, many organisations hit what can be described as an ‘AI trust ceiling’: a point where pilots stall, use cases stop expanding, and return on investment plateaus because outputs can’t be confidently relied on, audited, or defended.  

This is why AI regulations must prioritise increasing accuracy rather than disclosure. A study by the Massachusetts Institute of Technology (MIT) found that 95% of organisations that have integrated AI into their operations have seen zero return. This means the technology that was supposed to be our economy’s saving grace is potentially stalling productivity rather than aiding it.  

The trust ceiling is not just a regulatory problem 

It’s tempting for AI companies to frame the trust ceiling as a side effect of regulation – something caused by cautious regulators or complex compliance requirements – but that’s not the case. The trust ceiling exists because of how most AI systems are built.

Mistakes are built into large language models by the engineering that underpins them. While they’ve improved dramatically over the past year, they are still probabilistic systems: they predict the most likely next word rather than check whether a statement is true. They’re optimised to sound convincing, not to guarantee correctness or to explain how an answer was reached.
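
To make that point concrete, here is a toy next-token sampler. It is not a real LLM, and the probability table is invented for the example; but it shows the failure mode in miniature: the model faithfully reproduces the statistics it was given, and in doing so emits a fluent falsehood a fifth of the time.

```python
import random

# Toy bigram "model" (invented numbers): it only knows which words
# tend to follow which, not whether the completed sentence is true.
bigram_probs = {
    "capital of": {"France": 0.5, "Italy": 0.3, "Atlantis": 0.2},
}

def next_token(context, rng):
    """Sample the next token from the learned distribution."""
    dist = bigram_probs[context]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [next_token("capital of", rng) for _ in range(1000)]
# "Atlantis" is sampled roughly 200 times in 1000 draws:
# fluent, statistically faithful, and false.
print(samples.count("Atlantis"))
```

Nothing in the sampling step consults the world; it consults only the distribution. That is the structural reason a warning banner cannot fix hallucination.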

Warnings acknowledge this limitation rather than addressing it. They normalise the idea that hallucinations are an unavoidable feature of AI, rather than a design constraint that can be managed. 

That is why transparency alone will not help businesses get their AI chatbots out of the pilot phase and into everyday workflows. It simply makes the limits more explicit, meaning workers will need to check every single output manually.

DeepSeek’s technical commitments are encouraging – but incomplete 

DeepSeek’s commitment to lowering hallucination rates through technical fixes is a positive step. Acknowledging that hallucinations are a global challenge and investing in mitigation is much better than ignoring the problem. 

However, even the Italian regulator noted that hallucinations “cannot be entirely eliminated.” The statement reads as the end of the conversation, but it needs to be the start of a more nuanced one about how we structurally constrain hallucinations to increase reliability.  

Designing systems that can say when they are uncertain, defer decisions, or be audited after the fact is transformative. This is achievable through reasoning models that combine probabilistic and deterministic approaches, such as neurosymbolic AI. 
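
A minimal sketch of that hybrid pattern might look like the following. The `model` function is a stand-in for a probabilistic model (not a real LLM API), and the deterministic rule is deliberately trivial; the shape to notice is that the system has a third option beyond right and wrong: deferral.

```python
def model(question):
    """Stand-in for a probabilistic model: returns (answer, confidence).
    Hard-coded here purely for illustration."""
    return ("42", 0.55)

def deterministic_check(question, answer):
    """Symbolic layer: verify the answer against a hard rule.
    Toy rule: the answer must be a number."""
    return answer.isdigit()

def answer_or_defer(question, min_confidence=0.7):
    """Combine the probabilistic proposal with deterministic
    verification; defer when either safeguard fails."""
    answer, confidence = model(question)
    if not deterministic_check(question, answer):
        return ("DEFER", "failed rule check")
    if confidence < min_confidence:
        return ("DEFER", "model uncertain")
    return ("ANSWER", answer)

print(answer_or_defer("What is 6 * 7?"))  # ('DEFER', 'model uncertain')
```

In a real deployment the rule layer would encode policy constraints, schema checks, or provenance requirements, but the contract is the same: no answer leaves the system without passing a check it can later be audited against.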

Regulators and AI companies may worry this will slow innovation, but in reality it will propel it. Building AI systems that are fit for everyday use beyond demos and low-risk experimentation is the key to unlocking growth.

Why disclosure-first regulation is limiting AI’s potential 

The DeepSeek case highlights a broader regulatory challenge. Disclosure is one of the few levers regulators can pull quickly, especially when dealing with fast-moving technologies. But disclosure is a blunt instrument.  

It treats all use cases as equal and assumes users can absorb and manage risk themselves. For enterprises operating under regimes like the EU AI Act, the FCA’s Consumer Duty, or sector-specific compliance rules, that assumption breaks down. These organisations cannot simply warn end users and move on. They remain accountable for outcomes, so many will choose to not deploy AI at all. 

Moving beyond the trust ceiling 

If AI is to move from experimentation to infrastructure, the industry needs to shift its focus. Instead of asking whether users have been warned, we should be asking whether systems are designed to be constrained, explainable, and auditable by default. 

That means prioritising architectures that combine probabilistic models with deterministic checks, provenance tracking, and explicit reasoning steps. It means treating explainability as a core requirement, not an add-on. Most importantly, it means recognising that trust is not built through disclaimers, but through systems that can consistently justify their behaviour. 

What the DeepSeek case should really signal 

Italy’s handling of the DeepSeek probe is not a failure of regulation. It is a signal that we are reaching the limits of what transparency-only approaches can achieve. Warnings may reduce legal exposure in the short term, but they do not raise the trust ceiling for businesses trying to deploy AI responsibly. 

If we want AI to deliver on its economic and societal promises, we need to move past the idea that informing users of risk is enough. The next phase of AI adoption will be defined not by who discloses the most, but by who designs systems that can be trusted with no warning required.  

