Squeezing The Juice Of LLM Neural Layers Promotes Greater Honesty And Could Be An AI Hallucination Antidote

A clever technique could help reduce AI hallucinations and increase AI factuality.

In today’s column, I examine some exciting research that could demonstrably improve how generative AI and large language models (LLMs) operate. The nascent new approach is only starting to be tried out. Time will tell whether the method will be of lasting value.

The gist is this. Most of the prevailing AI models tend to be structured internally on a pass-it-along basis. A result flows from one component to the next. When a response is shown to you, the result is typically only whatever the last component came up with. Everything else that took place during the processing is no longer considered. Only the final result is what comes out of the generative process.

A clever research study suggests that we might be able to overcome some of the issues of AI going awry, such as disconcertingly producing AI hallucinations or confabulations, by retooling the pass-it-along propensity. Suppose that upon reaching the final stage of generating the response, an additional mechanism revisited the processing that had occurred at each earlier stage. This additional mechanism might be able to see the forest for the trees. In other words, a computational and mathematical analysis of the processing at each stage could be applied at the very end to determine what the final result really ought to be.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

When Humans Are Problem-Solving

Before I leap into the AI side of things, I’d like to share with you a general analogy that highlights how humans working together might sometimes try to solve a problem. This brisk analogy will be helpful when I discuss the arcane AI mechanisms within LLMs.

Assume you had a group of ten people who were going to try to solve a simple arithmetic problem. We will line up the ten people in a sequence and have each work separately on solving the problem. They all have the same problem handed to them.

The first person in line will then tell the second person in line the answer that they, the first person, came up with. The second person will then tell the third person in line an answer of their own, which might or might not match the first person’s answer. That’s because the second person gets to decide whether to adopt the answer from the first person or to override it and come up with their own different answer.

This continues in the same manner, repeatedly, proceeding from one person to the next. Since we have ten people, it means that the first person tells the second person an answer, the second person tells the third person an answer, the third person tells the fourth person an answer, and so on.

When a person in line receives a proposed answer from the person who preceded them, the receiving person can decide what to do with it. This handed-over answer can be used by the receiving person, or they might discard it. There is no guarantee that the handed-over answer is correct. It might be wrong. It might be right.
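
To make the chain concrete, here is a minimal sketch of the pass-it-along process in Python. The Person class, its solve and trusts methods, and the coin-flip trust rule are hypothetical stand-ins invented purely for illustration; nothing like them appears in the research being discussed.

```python
import random

class Person:
    """A hypothetical problem-solver who sometimes errs (illustrative only)."""
    def __init__(self, skill):
        self.skill = skill  # probability of computing the correct answer

    def solve(self, problem):
        a, b = problem
        correct = a + b
        # With probability (1 - skill), return a wrong answer.
        return correct if random.random() < self.skill else correct + random.randint(1, 9)

    def trusts(self, answer):
        return random.random() < 0.5  # coin-flip trust, purely illustrative

def pass_it_along(problem, people):
    handed_over = None  # the first person receives no prior answer
    for person in people:
        if handed_over is None or not person.trusts(handed_over):
            handed_over = person.solve(problem)  # override with their own answer
        # Otherwise the incoming answer is passed along unchanged.
    return handed_over  # an observer at the end sees only this final value

print(pass_it_along((640, 8), [Person(0.9) for _ in range(10)]))
```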

The Final Result Of The Problem Solving

Imagine that you were standing at the very end of this line of people and could not readily overhear the person-to-person relay of proposed answers. The tenth person finally turns to you and tells you that the answer is (let’s say) the number 642.

Can you believe this answer?

You only know what the last person tells you. Did this tenth person consider the answer provided by the ninth person? Did the ninth person consider the answer provided by the eighth person? Etc. Maybe the tenth person just concocted or derived an answer on their own and opted to completely ignore the answer from the ninth person.

Likewise, maybe each person in the sequence utterly ignored the preceding answer given to them. That seems like a darned shame. It could be that along the way an answer of, say, 648 was calculated, and suppose that 648 is the correct answer. All you know, though, is what the tenth person told you, namely that the alleged answer is 642.

Visibility And Combination

Contemplate for a moment the nature of the process that I just described.

It would sure be nice if we could somehow incorporate all ten answers into devising the final answer, rather than relying solely on whatever the tenth person says. Here’s what we will do. When the tenth person comes up with their answer, we will ask each of the other nine to tell us what their answers were.

We could then combine the ten answers in a manner that we hope will yield a better answer than the sole answer coming from the tenth person. Consider an example. Pretend that we discover that an answer of 648 came from the first through the seventh person, and only the eighth, ninth, and tenth person came up with 642. We might decide that the majority wins: since seven of the ten said the answer is 648, we will use that as the answer and set aside the answer of 642 (which only three people provided).

There are lots of ways that we could combine the respective answers. Maybe some of the people are more reliable than the others; thus, we will give their answers a greater weighting. And so on. Numerous means of combining the ten answers can be conceived of.
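
As a concrete illustration of two such combining rules, here is a minimal sketch in Python of a simple majority vote and a reliability-weighted vote. The particular weights are made-up assumptions for illustration only.

```python
from collections import Counter

def majority_answer(answers):
    """Return the answer proposed most often ("majority wins")."""
    answer, _ = Counter(answers).most_common(1)[0]
    return answer

def weighted_answer(answers, weights):
    """Give more reliable people a greater say via per-person weights."""
    totals = Counter()
    for answer, weight in zip(answers, weights):
        totals[answer] += weight
    answer, _ = totals.most_common(1)[0]
    return answer

# Seven people said 648, three said 642 (the example above).
answers = [648] * 7 + [642] * 3
print(majority_answer(answers))  # -> 648

# If the last three people were judged far more reliable, a weighted
# combination could instead favor 642 (weights are illustrative).
weights = [1, 1, 1, 1, 1, 1, 1, 4, 4, 4]
print(weighted_answer(answers, weights))  # -> 642 (12 points vs. 7)
```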

Contemporary Generative AI

Shifting gears, I’d like to dive into the nature of generative AI and LLMs.

AI developers craft an LLM by scanning text that exists throughout the Internet. The AI pattern-matches on the scanned text. As a result of scanning millions upon millions of stories, narratives, poems, and the like, the AI becomes mathematically and computationally able to appear fluent in human natural languages such as English. The AI is essentially mirroring how humans write.

Within the AI is an artificial neural network (ANN). It is a large-scale data structure that contains numeric values. The ANN does the bulk of the work when it comes to representing the pattern matching of the written materials that were scanned.

As an aside, please be aware that an ANN is not the same as a true neural network (NN), the kind that exists in your brain. Your brain uses a complex and intricate web of interconnected biochemical living neurons. Some cheekily refer to the human brain as wetware (a wordplay on the fact that computers have hardware and software).

The ANN is simplistic in comparison and only an inspired imitation of some aspects of how the human brain works. An ANN is entirely computational and mathematical. I mention this to emphasize that, though many in the media tend to equate ANNs with real NNs, it is not an apt comparison. For more details on ANNs and how they function, see my discussion at the link here.

Layers Within The ANN

A large-scale artificial neural network is divided into layers, each layer consisting of many artificial neurons.

An AI developer decides how many artificial neurons are to be assigned to each layer. Likewise, an AI developer decides how many layers the entire ANN will consist of. In the early days of LLMs, ANNs had anywhere from a handful of layers to perhaps two dozen all told. Contemporary generative AI uses a lot more layers. For example, GPT-3, the model that underpinned the original ChatGPT, has 96 layers.

Let’s consider how the layers operate with each other. This will be described at a 30,000-foot level, offering a simplified notion of how the inner workings actually occur.

Suppose you have entered a prompt into an LLM. The prompt is essentially fed into the first layer of the artificial neural network. In this first layer, the most rudimentary or lowest-level processing of a prompt will take place. The first layer will produce a single result and pass that result along to the second layer.

The second layer doesn’t have any visibility into what occurred inside the first layer. All the second layer receives is an output from the first layer. The second layer then does its respective processing. Upon completion of the processing, the second layer passes along a result to the third layer. The third layer doesn’t have visibility into what took place in the second layer. The third layer only has an output fed into it from the second layer.

And on this goes, continuing this same activity until the last layer is reached. The last layer produces a result that then becomes the final response that you will see displayed to you. You have no clue as to what happened during the in-between layers. The only aspect you are made aware of is the result that comes out of the last layer.
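
Expressed as code, the flow looks something like the following minimal sketch. The layer functions here are placeholders; real transformer layers apply attention and feed-forward computations to vector representations rather than to a single number.

```python
def run_layers(prompt_representation, layers):
    """Pass-it-along: each layer sees only the previous layer's output."""
    result = prompt_representation
    for layer in layers:
        result = layer(result)  # earlier intermediate results are discarded
    return result  # only the last layer's output reaches the user

# Toy usage with stand-in "layers" that each transform a number.
print(run_layers(1.0, [lambda x: x * 2, lambda x: x + 3, lambda x: x * x]))  # 25.0
```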

Rethinking The Pass-It-Along Approach

Aha, by now you probably are connecting the dots. We can connect my earlier analogy to this mechanical dilemma of the LLM. The layers are playing a game of pass-it-along. This approach might not be the best game in town.

Rather than solely relying on the last layer to produce a final response, it could be quite useful to incorporate the other answers that were generated along the way. There are a multitude of ways that we could do this. The overarching theme is that once the AI has reached the final layer during its processing, we should include a means of involving the other prior layer answers in some sensible way.

A research study identified this novelty and performed experiments to see if it was effective. The study is entitled “SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models” by Jianyi Zhang, Da-Cheng Juan, Cyrus Rashtchian, Chun-Sung Ferng, Heinrich Jiang, Yiran Chen, arXiv, August 19, 2025, and made these salient points (excerpts), with an illustrative sketch following the list:

  • “Large language models (LLMs) have demonstrated remarkable capabilities, but their outputs can sometimes be unreliable or factually incorrect. The issue of hallucinations undermines the reliability and trustworthiness of LLMs in practical applications.”
  • “To address this, we introduce Self Logits Evolution Decoding (SLED), a novel decoding framework that enhances the truthfulness of LLMs without relying on external knowledge bases or requiring further fine-tuning.”
  • “From an optimization perspective, our SLED framework leverages the latent knowledge embedded within the LLM by contrasting the output logits from the final layer with those from early layers. It then utilizes an approximate gradient approach to enable latent knowledge to guide the self-refinement of outputs, thereby effectively improving factual accuracy.”
  • “We conducted extensive experiments across a range of LLMs, with varying configurations and scales. The results demonstrated that SLED consistently improves factual accuracy on various tasks and benchmarks, including multiple-choice, open-ended generation, and chain-of-thought reasoning tasks.”
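
To give a rough feel for the contrast-the-logits idea, here is a loose sketch in Python. To be clear, this is not the paper’s actual algorithm (SLED evolves the logits via an approximate gradient approach); it merely illustrates the notion of blending the final layer’s token distribution with the early layers’ distributions. The toy logits and the blending weight alpha are invented for illustration.

```python
import numpy as np

def softmax(logits):
    """Convert raw logits into a probability distribution over tokens."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def blend_with_early_layers(final_logits, early_logits_list, alpha=0.1):
    """Nudge the final distribution toward the average early-layer view."""
    early_probs = np.mean([softmax(l) for l in early_logits_list], axis=0)
    return (1 - alpha) * softmax(final_logits) + alpha * early_probs

# Toy vocabulary of three tokens: early layers lean toward token 0,
# while the final layer leans toward token 2.
early = [np.array([2.0, 0.5, 0.1]), np.array([1.8, 0.4, 0.3])]
final = np.array([0.2, 0.1, 1.5])
print(blend_with_early_layers(final, early, alpha=0.5))
```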

An Overlay Versus Outright Surgery

The beauty of this kind of approach is that you don’t necessarily have to do deep code-modifying surgery on the various layers and structure of the artificial neural network. No need to gut the code or data structures. The usual arrangements can be kept as is. By and large, you add a new piece at the end of the process, doing so in a less intrusive manner.
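
As a flavor of why no surgery is required, here is a minimal sketch assuming the Hugging Face transformers library and the small GPT-2 checkpoint: standard tooling already exposes every layer’s output, so a decoding-time overlay can read those intermediate results without altering the network itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One hidden-state tensor per layer (plus the embedding output); ordinary
# decoding uses only the last of these to produce the response.
print(len(outputs.hidden_states))       # 13 for GPT-2 small (12 layers + embeddings)
print(outputs.hidden_states[-1].shape)  # final layer: (1, sequence_length, 768)
```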

Some final thoughts for now.

There’s a well-known adage that two heads are better than one. In a roundabout way, this approach embodies that adage: bringing together the early-layer logits with the final-layer logits leverages the many proposed outputs into a hoped-for cohesive whole. A reasonable belief is that the final answer will stabilize around the factual values that are encoded in the early layers (assuming we do the combining thoughtfully). The final answer is a blended result.

It’s an intriguing way to deal with the prevailing concerns that LLMs often veer from true facts and produce false or made-up results.

I am reminded of a famous quote by Jeff Bezos regarding expanding our horizons when it comes to being innovative: “The only way to escape the box is to invent your way out.” Whether this pioneering means of escaping the prevailing way of designing the internals of LLMs will get us beyond the existing limitations of AI is an open matter. Meanwhile, let’s keep those ideas flowing and continue to be creatively inventive.

Welcome to thinking outside the box when it comes to architecting AI.

Source: https://www.forbes.com/sites/lanceeliot/2025/11/17/squeezing-the-juice-of-llm-neural-layers-promotes-greater-honesty-and-could-be-an-ai-hallucination-antidote/
