
Machines can’t separate truth from noise

2025/10/30 20:53

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

We marvel at how intelligent the latest AI models have become — until they confidently present us with complete nonsense. The irony is hard to miss: as AI systems grow more powerful, their ability to distinguish fact from fiction isn’t necessarily improving. In some ways, it’s getting worse.

Summary

  • AI reflects our information flaws. Models like GPT-5 struggle because training data is polluted with viral, engagement-driven content that prioritizes sensation over accuracy.
  • Truth isn’t zero-sum. Many “truths” coexist, but current platforms centralize information flow, creating echo chambers and bias that feed both humans and AI.
  • Decentralized attribution fixes the cycle. Reputation- and identity-linked systems, powered by crypto primitives, can reward accuracy, filter noise, and train AI on verifiable, trustworthy data.

Consider OpenAI’s own findings: its o3 reasoning model hallucinated answers about 33% of the time in benchmark tests, according to the company’s own paper. Its smaller sibling, o4-mini, went off the rails nearly half the time. The newest model, GPT-5, was supposed to fix this, and OpenAI indeed claims it hallucinates far less (~9%). Yet many experienced users find GPT-5 dumber in practice: slower, more hesitant, and still often wrong (further evidence that benchmarks only get us so far).

Nillion CTO John Woods was explicit in his frustration, saying ChatGPT went from ‘essential to garbage’ after GPT-5’s release. The reality, though, is that ever more advanced models will keep getting worse at telling truth from noise. All of them, not just GPT.

Why would a more advanced AI feel less reliable than its predecessors? One reason is that these systems are only as good as their training data, and the data we’re giving AI is fundamentally flawed. Today, this data largely comes from an information paradigm where engagement trumps accuracy while centralized gatekeepers amplify noise over signal to maximize profits. It’s thus naive to expect truthful AI without first fixing the data problem.

AI mirrors our collective information poisoning

High-quality training data is disappearing faster than we create it. There’s a recursive degradation loop at work: AI primarily digests web-based data; the web is becoming increasingly polluted with misleading, unverifiable AI slop; synthetic data trains the next generation of models to be even more disconnected from reality. 
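
To see how fast such a loop can degrade a corpus, consider a toy simulation (my own illustration, not a finding from any cited study). Each generation of a stand-in “model” is fit only to synthetic data sampled from the previous generation, and an engagement-style filter keeps only the most typical outputs:

```python
# Toy sketch of the recursive degradation loop (illustrative only).
# The "model" is just a Gaussian fit; the "engagement filter" keeps only
# the most typical synthetic outputs, mimicking virality-driven selection.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # generation 0: real, diverse data

for generation in range(1, 7):
    mu, sigma = data.mean(), data.std()               # "train" on the current corpus
    synthetic = rng.normal(mu, sigma, size=10_000)    # the model floods the web
    data = synthetic[np.abs(synthetic - mu) < sigma]  # only "safe", typical content survives
    print(f"gen {generation}: std = {data.std():.3f}")

# The spread of the data roughly halves each generation (~1.0 -> ~0.54 -> ~0.29 ...):
# each model grows more confident about an ever-narrower slice of reality.
```

The filter is the crucial ingredient here: virality-driven selection, not synthesis alone, is what accelerates the collapse in this toy.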

The problem goes deeper than bad training sets; it lies in the fundamental architecture of how we organize and verify information online. Over 65% of the world’s population spends hours on social media platforms designed to maximize engagement. We’re thus exposed, at unprecedented scale, to algorithms that inadvertently reward misinformation.

False stories trigger stronger emotional responses, so they spread faster than corrections. The most viral content, which is precisely the content most likely to be ingested by AI training pipelines, is therefore systematically biased toward sensation over accuracy.

Platforms also profit from attention, not truth. Data creators are rewarded for virality, not veracity. AI companies optimize for user satisfaction and engagement, not factual accuracy. And ‘success’ for chatbots is keeping users hooked with plausible-sounding responses.

Ultimately, AI’s data and trust crisis is an extension of the ongoing poisoning of our collective human consciousness. We’re feeding AI what we consume ourselves. AI systems can’t tell truth from noise because we ourselves can’t.

Truth, after all, is consensus. Whoever controls the flow of information controls the narratives we collectively accept as ‘truth’ once they’ve been repeated enough times. Right now, a handful of massive corporations hold the reins to truth, not individuals. That can change. It must.

Truthful AI’s emergence is a positive-sum game

How do we fix this? How do we realign our information ecosystem — and by extension, AI — toward truth? It starts with reimagining how truth is created and maintained in the first place.

In the status quo, we often treat truth as a zero-sum game decided by whoever has the loudest voice or the highest authority. Information is siloed and tightly controlled; each platform or institution pushes its own version of reality. An AI (or a person) stuck in one of these silos ends up with a narrow, biased worldview. That’s how we get echo chambers, and that’s how both humans and AI wind up misled.

But many truths in life are not binary, zero-sum propositions. In fact, most meaningful truths are positive-sum: they can coexist and complement each other. What’s the “best” restaurant in New York? There’s no single correct answer, and that’s the beauty of it: the truth depends on your taste, your budget, your mood. That my favorite song is a jazz classic doesn’t make your favorite pop anthem any less “true” for you. One person’s gain in understanding doesn’t have to mean another’s loss. Our perspectives can differ without nullifying each other.

This is why verifiable attribution and reputation primitives are so critical. Truth can’t just be about the content of a claim — it has to be about who is making it, what their incentives are, and how their past record holds up. If every assertion online carried with it a clear chain of authorship and a living reputation score, we could sift through noise without ceding control to centralized moderators. A bad-faith actor trying to spread disinformation would find their reputation degraded with every false claim. A thoughtful contributor with a long track record of accuracy would see their reputation — and influence — rise.
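
As a minimal sketch of what such a “living reputation score” could look like (my own hypothetical mechanics, not any specific protocol), imagine every adjudicated claim nudging an identity-linked score toward its track record:

```python
# Hypothetical reputation mechanics: an exponentially weighted track record
# of accuracy, tied to a decentralized identifier (DID). Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Author:
    did: str                    # decentralized identifier for the author
    reputation: float = 0.5     # start neutral: no track record yet
    history: list = field(default_factory=list)

def record_adjudicated_claim(author: Author, was_accurate: bool, alpha: float = 0.1) -> None:
    """Move reputation toward 1.0 for accurate claims, toward 0.0 for false ones."""
    outcome = 1.0 if was_accurate else 0.0
    author.reputation = (1 - alpha) * author.reputation + alpha * outcome
    author.history.append(was_accurate)

careful = Author(did="did:example:alice")
spammer = Author(did="did:example:bot42")
for _ in range(20):
    record_adjudicated_claim(careful, was_accurate=True)
    record_adjudicated_claim(spammer, was_accurate=False)
print(careful.reputation, spammer.reputation)  # ~0.94 vs ~0.06
```

The exponential weighting means recent behavior matters most, so a once-reliable account that turns to spreading falsehoods loses influence quickly.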

Crypto gives us the building blocks to make this work: decentralized identifiers, token-curated registries, staking mechanisms, and incentive structures that turn accuracy into an economic good. Imagine a knowledge graph where every statement is tied to a verifiable identity, every perspective carries a reputation score, and every truth claim can be challenged, staked against, and adjudicated in an open system. In that world, truth isn’t handed down from a single platform — it emerges organically from a network of attributed, reputation-weighted voices.
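
Concretely, the atoms of such a knowledge graph might look like the following; the schema, names, and fields here are purely illustrative, not those of any real project:

```python
# Hypothetical data model for an attributed knowledge graph. Every claim is
# tied to a decentralized identifier (DID), backed by a stake, and open to
# challenge; dispute resolution (oracle, jury, market) is out of scope here.
from dataclasses import dataclass, field

@dataclass
class Challenge:
    challenger_did: str   # identity of the party disputing the claim
    counter_stake: float  # tokens staked against the claim
    evidence_uri: str     # pointer to supporting evidence

@dataclass
class Claim:
    claim_id: str
    author_did: str       # who is asserting this
    statement: str        # the content of the assertion
    stake: float          # tokens the author puts behind it
    challenges: list[Challenge] = field(default_factory=list)

    def challenge(self, challenger_did: str, counter_stake: float, evidence_uri: str) -> None:
        # Anyone can contest the claim by staking against it.
        self.challenges.append(Challenge(challenger_did, counter_stake, evidence_uri))

claim = Claim("c1", "did:example:alice", "Restaurant X uses locally sourced produce", stake=50.0)
claim.challenge("did:example:bob", counter_stake=25.0, evidence_uri="ipfs://example-evidence")
```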

Such a system flips the incentive landscape. Instead of content creators chasing virality at the expense of accuracy, they’d be staking their reputations — and often literal tokens — on the validity of their contributions. Instead of AI training on anonymous slop, it would be trained on attributed, reputation-weighted data where truth and trustworthiness are baked into the fabric of the information itself.

Now consider AI in this context. A model trained on such a reputation-aware graph would consume a much cleaner signal. It wouldn’t just parrot the most viral claim; it would learn to factor in attribution and credibility. Over time, agents themselves could participate in this system — staking on their outputs, building their own reputations, and competing not just on eloquence but on trustworthiness.
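
On the training side, the simplest form that cleaner signal could take is reputation-weighted sampling. The sketch below is an assumption-laden toy (a hand-built corpus with per-document reputation scores), not any lab’s actual pipeline:

```python
# Sketch: reputation-weighted sampling of training data (illustrative only).
# High-reputation sources dominate the batches a model sees; anonymous,
# untrusted content is rarely drawn.
import random

corpus = [
    {"text": "peer-reviewed summary, attributed author", "reputation": 0.95},
    {"text": "long-standing expert blog",                "reputation": 0.70},
    {"text": "anonymous viral post",                     "reputation": 0.05},
]

def sample_batch(corpus: list[dict], k: int) -> list[dict]:
    # Draw documents with probability proportional to author reputation.
    weights = [doc["reputation"] for doc in corpus]
    return random.choices(corpus, weights=weights, k=k)

random.seed(7)
batch = sample_batch(corpus, k=10)
print(sum(doc["reputation"] < 0.1 for doc in batch), "low-reputation docs in batch")
```

Note that low-reputation documents are not censored outright; they are simply drawn in proportion to the trust their authors have earned.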

That’s how we break the cycle of poisoned information and build AI that reflects a positive-sum, decentralized vision of truth. Without verifiable attribution and decentralized reputation, we’ll always be stuck outsourcing “truth” to centralized platforms, and we’ll always be vulnerable to manipulation. 

With them, we can finally move beyond zero-sum authority and toward a system where truth emerges dynamically, resiliently, and — most importantly — together.

Billy Luedtke

Billy Luedtke has been building at the frontier of blockchain since Bitcoin in 2012 and Ethereum in 2014. He helped launch EY’s blockchain consulting practice and spent over five years at ConsenSys shaping the Ethereum ecosystem through roles in R&D, Developer Relations, token engineering, and decentralized identity. Billy also helped pioneer self-sovereign identity as Enterprise Lead at uPort, Co-Chair of the EEA’s Digital Identity Working Group, and a founding member of the Decentralized Identity Foundation. Today, he is the founder of Intuition, the native chain for Information Finance, transforming identities, claims, and reputation into verifiable, monetizable data for the next internet.

Source: https://crypto.news/ais-blind-spot-machines-cant-separate-truth-from-noise/
