
Machines can’t separate truth from noise

2025/10/30 20:53

Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news’ editorial.

We marvel at how intelligent the latest AI models have become — until they confidently present us with complete nonsense. The irony is hard to miss: as AI systems grow more powerful, their ability to distinguish fact from fiction isn’t necessarily improving. In some ways, it’s getting worse.

Summary

  • AI reflects our information flaws. Models like GPT-5 struggle because training data is polluted with viral, engagement-driven content that prioritizes sensation over accuracy.
  • Truth isn’t zero-sum. Many “truths” coexist, but current platforms centralize information flow, creating echo chambers and bias that feed both humans and AI.
  • Decentralized attribution fixes the cycle. Reputation- and identity-linked systems, powered by crypto primitives, can reward accuracy, filter noise, and train AI on verifiable, trustworthy data.

Consider OpenAI’s own findings: its o3 reasoning model hallucinated answers about 33% of the time in benchmark tests, according to the company’s own paper. The smaller o4-mini went off the rails nearly half the time. The newest model, GPT-5, was supposed to fix this, and OpenAI indeed claims it hallucinates far less (~9%). Yet many experienced users find GPT-5 dumber in practice: slower, more hesitant, and still often wrong (further evidence that benchmarks only get us so far).

Nillion CTO John Woods’s frustration was explicit when he said ChatGPT went from ‘essential to garbage’ after GPT-5’s release: “Incredible how ChatGPT Plus went from essential to garbage with the release GPT-5. Most queries routed to tiny incapable models, a 32K context window and dogshit usage limits, and they still get…” The reality, though, is that ever more advanced models will keep getting worse at telling truth from noise. All of them, not just GPT.

Why would a more advanced AI feel less reliable than its predecessors? One reason is that these systems are only as good as their training data, and the data we’re giving AI is fundamentally flawed. Today, this data largely comes from an information paradigm where engagement trumps accuracy while centralized gatekeepers amplify noise over signal to maximize profits. It’s thus naive to expect truthful AI without first fixing the data problem.

AI mirrors our collective information poisoning

High-quality training data is disappearing faster than we create it. There’s a recursive degradation loop at work: AI primarily digests web-based data; the web is becoming increasingly polluted with misleading, unverifiable AI slop; synthetic data trains the next generation of models to be even more disconnected from reality. 
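To make the loop concrete, here is a toy simulation (an illustrative sketch, not a claim about any real training pipeline): each “generation” fits a simple model to its corpus, then emits synthetic data that becomes the next corpus. With no anchor to fresh ground truth, the fitted statistics drift and the distribution’s tails wash out.

```python
import random
import statistics

random.seed(0)

def train(corpus):
    """'Training' here is just fitting a mean and a standard deviation."""
    return statistics.mean(corpus), statistics.stdev(corpus)

# Generation 0 trains on real data drawn from the ground truth N(0, 1).
corpus = [random.gauss(0.0, 1.0) for _ in range(500)]

for generation in range(1, 11):
    mu, sigma = train(corpus)
    # The next web corpus is the previous model's synthetic output:
    # no fresh real data ever re-enters the loop.
    corpus = [random.gauss(mu, sigma) for _ in range(500)]
    print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
```

Each pass re-estimates the world from its own output, so estimation errors compound instead of cancelling; re-anchoring on verified real data is what stabilizes the loop.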

The problem goes beyond bad training sets; it lies in the fundamental architecture of how we organize and verify information online. Over 65% of the world’s population spends hours on social media platforms designed to maximize engagement. We’re thus exposed, at an unprecedented scale, to algorithms that inadvertently reward misinformation.

False stories trigger stronger emotional responses, so they spread faster than the corrections that follow them. The most viral content, which is precisely the content most likely to be ingested by AI training pipelines, is thus systematically biased toward sensation over accuracy.

Platforms also profit from attention, not truth. Data creators are rewarded for virality, not veracity. AI companies optimize for user satisfaction and engagement, not factual accuracy. And ‘success’ for chatbots is keeping users hooked with plausible-sounding responses.

Ultimately, AI’s data and trust crisis is an extension of the ongoing poisoning of our collective human consciousness. We’re feeding AI what we’re consuming ourselves. AI systems can’t tell truth from noise because we ourselves can’t.

Truth is consensus, after all. Whoever controls the flow of information also controls the narratives we collectively perceive as ‘truth’ once they’re repeated enough times. And right now, a handful of massive corporations hold the reins to truth, not us as individuals. That can change. It must.

Truthful AI’s emergence is a positive-sum game

How do we fix this? How do we realign our information ecosystem — and by extension, AI — toward truth? It starts with reimagining how truth is created and maintained in the first place.

In the status quo, we often treat truth as a zero-sum game decided by whoever has the loudest voice or the highest authority. Information is siloed and tightly controlled; each platform or institution pushes its own version of reality. An AI (or a person) stuck in one of these silos ends up with a narrow, biased worldview. That’s how we get echo chambers, and that’s how both humans and AI wind up misled.

But many truths in life are not binary, zero-sum propositions. In fact, most meaningful truths are positive-sum: they can coexist and complement each other. What’s the “best” restaurant in New York? There’s no single correct answer, and that’s the beauty of it: the truth depends on your taste, your budget, your mood. That my favorite song is a jazz classic doesn’t make your favorite pop anthem any less “true” for you. One person’s gain in understanding doesn’t have to mean another’s loss. Our perspectives can differ without nullifying each other.

This is why verifiable attribution and reputation primitives are so critical. Truth can’t just be about the content of a claim — it has to be about who is making it, what their incentives are, and how their past record holds up. If every assertion online carried with it a clear chain of authorship and a living reputation score, we could sift through noise without ceding control to centralized moderators. A bad-faith actor trying to spread disinformation would find their reputation degraded with every false claim. A thoughtful contributor with a long track record of accuracy would see their reputation — and influence — rise.
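As a minimal sketch of what such primitives could look like in data-structure terms (the schema, scores, and update rule below are illustrative assumptions, not any specific protocol), consider claims bound to a decentralized identifier whose reputation moves with each adjudicated outcome:

```python
from dataclasses import dataclass

@dataclass
class Author:
    did: str                 # decentralized identifier, e.g. "did:example:alice"
    reputation: float = 1.0  # living score that tracks the author's record

@dataclass
class Claim:
    author: Author
    statement: str
    resolved: bool = False

def resolve(claim: Claim, was_accurate: bool, weight: float = 0.1) -> None:
    """Adjudicated claims feed back into the author's reputation.
    The multiplicative update rule is an assumption for illustration."""
    claim.resolved = True
    if was_accurate:
        claim.author.reputation *= (1 + weight)
    else:
        claim.author.reputation *= (1 - weight)

alice = Author(did="did:example:alice")
c = Claim(author=alice, statement="Protocol X shipped v2 on mainnet")
resolve(c, was_accurate=False)
print(f"{alice.did}: reputation {alice.reputation:.2f}")  # degraded by the false claim
```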

Crypto gives us the building blocks to make this work: decentralized identifiers, token-curated registries, staking mechanisms, and incentive structures that turn accuracy into an economic good. Imagine a knowledge graph where every statement is tied to a verifiable identity, every perspective carries a reputation score, and every truth claim can be challenged, staked against, and adjudicated in an open system. In that world, truth isn’t handed down from a single platform; it emerges organically from a network of attributed, reputation-weighted voices. A hedged sketch of how staking and challenges might adjudicate a single claim appears below.
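In this sketch, the stake sizes and the winners-split-the-pot payout rule are invented for illustration; real designs would add bonding periods, arbitration, and slashing rules.

```python
class StakedClaim:
    def __init__(self, author: str, statement: str, stake: float):
        self.author = author
        self.statement = statement
        self.stakes = {author: stake}           # backers stake FOR the claim
        self.challenges: dict[str, float] = {}  # challengers stake AGAINST it

    def challenge(self, challenger: str, stake: float) -> None:
        self.challenges[challenger] = self.challenges.get(challenger, 0.0) + stake

    def adjudicate(self, claim_holds: bool) -> dict[str, float]:
        """Winners split the losers' pot pro rata; returns payouts.
        Assumes the winning side is non-empty (the author always stakes)."""
        winners, losers = (
            (self.stakes, self.challenges) if claim_holds
            else (self.challenges, self.stakes)
        )
        pot = sum(losers.values())
        total = sum(winners.values())
        return {who: stake + pot * (stake / total) for who, stake in winners.items()}

claim = StakedClaim("alice", "Bridge Y was exploited on 2025-01-01", stake=100.0)
claim.challenge("bob", 50.0)
print(claim.adjudicate(claim_holds=True))  # {'alice': 150.0}
```

Accuracy becomes an economic good here in a literal sense: a false claim costs its backer the stake, while a verified one earns the challengers’ forfeited tokens.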

Such a system flips the incentive landscape. Instead of content creators chasing virality at the expense of accuracy, they’d be staking their reputations — and often literal tokens — on the validity of their contributions. Instead of AI training on anonymous slop, it would be trained on attributed, reputation-weighted data where truth and trustworthiness are baked into the fabric of the information itself.

Now consider AI in this context. A model trained on such a reputation-aware graph would consume a much cleaner signal. It wouldn’t just parrot the most viral claim; it would learn to factor in attribution and credibility. Over time, agents themselves could participate in this system — staking on their outputs, building their own reputations, and competing not just on eloquence but on trustworthiness.
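In training terms, this could be as simple as attaching a reputation-derived weight to every example before it enters the loss; the mapping and field names below are assumptions for illustration, not any production pipeline:

```python
# Weighting training examples by author reputation so the model
# consumes a cleaner signal. Corpus entries and numbers are invented.
corpus = [
    {"text": "audited, attributed analysis", "author_reputation": 2.4},
    {"text": "anonymous viral hot take",     "author_reputation": 0.2},
]

def sample_weight(example: dict) -> float:
    # Simple monotone mapping from reputation to loss weight;
    # a real pipeline might cap, normalize, or learn this mapping.
    r = example["author_reputation"]
    return r / (1 + r)

for ex in corpus:
    print(f"{ex['text']!r} -> weight {sample_weight(ex):.2f}")
```

The effect is that viral-but-unattributed content contributes little gradient, while claims with a verifiable track record dominate what the model internalizes.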

That’s how we break the cycle of poisoned information and build AI that reflects a positive-sum, decentralized vision of truth. Without verifiable attribution and decentralized reputation, we’ll always be stuck outsourcing “truth” to centralized platforms, and we’ll always be vulnerable to manipulation. 

With them, we can finally move beyond zero-sum authority and toward a system where truth emerges dynamically, resiliently, and — most importantly — together.

Billy Luedtke

Billy Luedtke has been building at the frontier of blockchain since Bitcoin in 2012 and Ethereum in 2014. He helped launch EY’s blockchain consulting practice and spent over five years at ConsenSys shaping the Ethereum ecosystem through roles in R&D, Developer Relations, token engineering, and decentralized identity. Billy also helped pioneer self-sovereign identity as Enterprise Lead at uPort, Co-Chair of the EEA’s Digital Identity Working Group, and a founding member of the Decentralized Identity Foundation. Today, he is the founder of Intuition, the native chain for Information Finance, transforming identities, claims, and reputation into verifiable, monetizable data for the next internet.

Source: https://crypto.news/ais-blind-spot-machines-cant-separate-truth-from-noise/

