
Machines can’t separate truth from noise

2025/10/30 20:53

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

We marvel at how intelligent the latest AI models have become — until they confidently present us with complete nonsense. The irony is hard to miss: as AI systems grow more powerful, their ability to distinguish fact from fiction isn’t necessarily improving. In some ways, it’s getting worse.

Summary

  • AI reflects our information flaws. Models like GPT-5 struggle because training data is polluted with viral, engagement-driven content that prioritizes sensation over accuracy.
  • Truth is no longer zero-sum. Many “truths” coexist, but current platforms centralize information flow, creating echo chambers and bias that feed both humans and AI.
  • Decentralized attribution fixes the cycle. Reputation- and identity-linked systems, powered by crypto primitives, can reward accuracy, filter noise, and train AI on verifiable, trustworthy data.

Consider OpenAI’s own findings: its o3 reasoning model hallucinated answers about 33% of the time in benchmark tests, according to the company’s own system card. Its smaller sibling, o4-mini, went off the rails nearly half the time. The newest model, GPT-5, was supposed to fix this, and OpenAI indeed claims it hallucinates far less (~9%). Yet many experienced users find GPT-5 dumber in practice: slower, more hesitant, and still often wrong. That is also evidence that benchmarks only get us so far.

Nillion CTO John Woods was explicit in his frustration when he said ChatGPT went from ‘essential to garbage’ after GPT-5’s release:

“Incredible how ChatGPT Plus went from essential to garbage with the release GPT-5. Most queries routed to tiny incapable models, a 32K context window and dogshit usage limits, and they still get…”

Yet the reality is that more advanced models will keep getting worse at telling truth from noise. All of them, not just GPT.

Why would a more advanced AI feel less reliable than its predecessors? One reason is that these systems are only as good as their training data, and the data we’re giving AI is fundamentally flawed. Today, this data largely comes from an information paradigm where engagement trumps accuracy while centralized gatekeepers amplify noise over signal to maximize profits. It’s thus naive to expect truthful AI without first fixing the data problem.

AI mirrors our collective information poisoning

High-quality training data is disappearing faster than we create it. There’s a recursive degradation loop at work: AI primarily digests web-based data; the web is becoming increasingly polluted with misleading, unverifiable AI slop; synthetic data trains the next generation of models to be even more disconnected from reality. 
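To make the loop concrete, here is a toy simulation; the generation count, the growth rate of the synthetic share, and the fidelity penalty are illustrative assumptions, not measured values.

```python
# Toy model of the recursive degradation loop described above.
# All parameters are illustrative assumptions, not measurements.

def simulate_corpus_decay(generations: int = 5,
                          synthetic_share_growth: float = 0.15,
                          synthetic_fidelity: float = 0.8) -> None:
    """Each generation, a larger share of the web corpus is AI-generated,
    and synthetic text preserves only a fraction of its source's fidelity."""
    fidelity = 1.0          # fraction of corpus signal that reflects reality
    synthetic_share = 0.0   # fraction of the corpus that is AI-generated

    for gen in range(1, generations + 1):
        synthetic_share = min(1.0, synthetic_share + synthetic_share_growth)
        # Human-written share keeps its fidelity; the synthetic share degrades it.
        fidelity = ((1 - synthetic_share) * fidelity
                    + synthetic_share * fidelity * synthetic_fidelity)
        print(f"gen {gen}: synthetic share {synthetic_share:.0%}, "
              f"corpus fidelity {fidelity:.2f}")

simulate_corpus_decay()
```

Under these assumptions, fidelity never recovers; every generation inherits a slightly noisier corpus than the last.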

The problem goes deeper than bad training sets; it lies in the fundamental architecture of how we organize and verify information online. Over 65% of the world’s population spends hours on social media platforms designed to maximize engagement. We’re thus exposed, at an unprecedented scale, to algorithms that inadvertently reward misinformation.

False stories trigger stronger emotional responses, so they spread faster than corrective claims. The most viral content, which is precisely the content most likely to be ingested by AI training pipelines, is therefore systematically biased toward sensation over accuracy.

Platforms also profit from attention, not truth. Data creators are rewarded for virality, not veracity. AI companies optimize for user satisfaction and engagement, not factual accuracy. And ‘success’ for chatbots is keeping users hooked with plausible-sounding responses.

That said, AI’s data and trust crisis is really an extension of the ongoing poisoning of our collective human consciousness. We’re feeding AI what we’re consuming ourselves. AI systems can’t tell truth from noise because we ourselves can’t.

Truth is consensus after all. Whoever controls the information flow also controls the narratives we collectively perceive as ‘truth’ after they’re repeated enough times. And right now, a bunch of massive corporations hold the reins to truth, not us as individuals. That can change. It must. 

Truthful AI’s emergence is a positive-sum game

How do we fix this? How do we realign our information ecosystem — and by extension, AI — toward truth? It starts with reimagining how truth is created and maintained in the first place.

In the status quo, we often treat truth as a zero-sum game decided by whoever has the loudest voice or the highest authority. Information is siloed and tightly controlled; each platform or institution pushes its own version of reality. An AI (or a person) stuck in one of these silos ends up with a narrow, biased worldview. That’s how we get echo chambers, and that’s how both humans and AI wind up misled.

But many truths in life are not binary, zero-sum propositions. In fact, most meaningful truths are positive-sum: they can coexist and complement each other. What’s the “best” restaurant in New York? There’s no single correct answer, and that’s the beauty of it: the truth depends on your taste, your budget, your mood. That my favorite song is a jazz classic doesn’t make your favorite pop anthem any less “true” for you. One person’s gain in understanding doesn’t have to mean another’s loss. Our perspectives can differ without nullifying each other.

This is why verifiable attribution and reputation primitives are so critical. Truth can’t just be about the content of a claim — it has to be about who is making it, what their incentives are, and how their past record holds up. If every assertion online carried with it a clear chain of authorship and a living reputation score, we could sift through noise without ceding control to centralized moderators. A bad-faith actor trying to spread disinformation would find their reputation degraded with every false claim. A thoughtful contributor with a long track record of accuracy would see their reputation — and influence — rise.
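As a minimal sketch of how a living reputation score could behave (the update rule, starting value, and learning rate below are hypothetical, not drawn from any deployed system), each identity’s score drifts toward 1 with every claim adjudicated accurate and toward 0 with every claim adjudicated false:

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    """An author with a verifiable ID and a reputation built from past claims."""
    did: str                 # decentralized identifier, e.g. a did:... string
    reputation: float = 0.5  # start neutral; stays within [0, 1]
    history: list = field(default_factory=list)

    def record_adjudication(self, claim: str, accurate: bool,
                            learning_rate: float = 0.1) -> None:
        """Nudge reputation toward 1 for accurate claims, toward 0 for false ones."""
        target = 1.0 if accurate else 0.0
        self.reputation += learning_rate * (target - self.reputation)
        self.history.append((claim, accurate))

alice = Identity(did="did:example:alice")
alice.record_adjudication("ETH moved to proof-of-stake in 2022", accurate=True)
alice.record_adjudication("Bitcoin's supply cap is 42 million", accurate=False)
print(f"{alice.did}: reputation {alice.reputation:.2f}")
```

The exact update rule matters less than the property it illustrates: a long record of accuracy compounds into influence, while each false claim erodes it.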

Crypto gives us the building blocks to make this work: decentralized identifiers, token-curated registries, staking mechanisms, and incentive structures that turn accuracy into an economic good. Imagine a knowledge graph where every statement is tied to a verifiable identity, every perspective carries a reputation score, and every truth claim can be challenged, staked against, and adjudicated in an open system. In that world, truth isn’t handed down from a single platform — it emerges organically from a network of attributed, reputationally-weighted voices.
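Here is a hypothetical sketch of such a registry in miniature; the class names, the naive stake-weighted resolution, and the token amounts are illustrative assumptions, not a description of any real protocol:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    author_did: str             # verifiable identity of the claimant
    statement: str
    stake_for: float = 0.0      # tokens staked asserting the claim is true
    stake_against: float = 0.0  # tokens staked challenging it

class ClaimRegistry:
    """Illustrative open registry: attributed claims that can be staked on,
    challenged, and adjudicated."""

    def __init__(self):
        self.claims = {}
        self._next_id = 0

    def assert_claim(self, author_did: str, statement: str, stake: float) -> int:
        claim_id = self._next_id
        self.claims[claim_id] = Claim(author_did, statement, stake_for=stake)
        self._next_id += 1
        return claim_id

    def challenge(self, claim_id: int, stake: float) -> None:
        self.claims[claim_id].stake_against += stake

    def adjudicate(self, claim_id: int) -> bool:
        """Naive resolution: the side with more stake wins. A real system would
        add juries, oracles, or slashing rather than raw stake weight."""
        claim = self.claims[claim_id]
        return claim.stake_for >= claim.stake_against

registry = ClaimRegistry()
cid = registry.assert_claim("did:example:alice",
                            "GPT-5 was released in 2025", stake=10.0)
registry.challenge(cid, stake=3.0)
print("claim upheld:", registry.adjudicate(cid))
```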

Such a system flips the incentive landscape. Instead of content creators chasing virality at the expense of accuracy, they’d be staking their reputations — and often literal tokens — on the validity of their contributions. Instead of AI training on anonymous slop, it would be trained on attributed, reputation-weighted data where truth and trustworthiness are baked into the fabric of the information itself.

Now consider AI in this context. A model trained on such a reputation-aware graph would consume a much cleaner signal. It wouldn’t just parrot the most viral claim; it would learn to factor in attribution and credibility. Over time, agents themselves could participate in this system — staking on their outputs, building their own reputations, and competing not just on eloquence but on trustworthiness.
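One simple way to consume that cleaner signal, assuming each training example carries its author’s reputation score, is to sample training data in proportion to credibility; the weighting scheme below is a hypothetical illustration, not an established training recipe:

```python
import random

def reputation_weighted_sample(corpus, k=3):
    """Sample training examples in proportion to author reputation,
    so high-credibility sources dominate the training signal."""
    texts = [text for text, _ in corpus]
    weights = [reputation for _, reputation in corpus]
    return random.choices(texts, weights=weights, k=k)

# Each example pairs text with its author's reputation score in [0, 1].
corpus = [
    ("Attributed statement, adjudicated true", 0.9),
    ("Unattributed viral claim", 0.2),
    ("Well-sourced analysis", 0.8),
]
print(reputation_weighted_sample(corpus))
```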

That’s how we break the cycle of poisoned information and build AI that reflects a positive-sum, decentralized vision of truth. Without verifiable attribution and decentralized reputation, we’ll always be stuck outsourcing “truth” to centralized platforms, and we’ll always be vulnerable to manipulation. 

With them, we can finally move beyond zero-sum authority and toward a system where truth emerges dynamically, resiliently, and — most importantly — together.

Billy Luedtke

Billy Luedtke has been building at the frontier of blockchain since Bitcoin in 2012 and Ethereum in 2014. He helped launch EY’s blockchain consulting practice and spent over five years at ConsenSys shaping the Ethereum ecosystem through roles in R&D, Developer Relations, token engineering, and decentralized identity. Billy also helped pioneer self-sovereign identity as Enterprise Lead at uPort, Co-Chair of the EEA’s Digital Identity Working Group, and a founding member of the Decentralized Identity Foundation. Today, he is the founder of Intuition, the native chain for Information Finance, transforming identities, claims, and reputation into verifiable, monetizable data for the next internet.

Source: https://crypto.news/ais-blind-spot-machines-cant-separate-truth-from-noise/

