
AI Models Might Be Able to Predict What You’ll Buy Better Than You Can

In brief

  • A new study shows LLMs can mimic human purchase intent by mapping free-text answers to Likert ratings through semantic similarity.
  • Method achieved 90% of human test–retest reliability on 9,300 real survey responses.
  • The study raises questions about bias, generalization, and how far “synthetic consumers” can stand in for real people.

Forget focus groups: A new study found that large language models can forecast whether you want to buy something with striking accuracy, dramatically outperforming traditional marketing tools.

Researchers at the University of Mannheim and ETH Zürich have found that large language models can replicate human purchase intent—the “How likely are you to buy this?” metric beloved by marketers—by transforming free-form text into structured survey data.

In a paper published last week, the team introduced a method called “Semantic Similarity Rating,” which converts the model’s open-ended responses into numerical Likert ratings on the five-point scale used in traditional consumer research.

Rather than asking a model to pick a number between one and five, the researchers had it respond naturally—“I’d definitely buy this,” or “Maybe if it were on sale”—and then measured how semantically close those statements were to canonical answers like “I would definitely buy this” or “I would not buy this.”

Each answer was mapped in embedding space to the nearest reference statement, effectively turning LLM text into statistical ratings. “We show that optimizing for semantic similarity rather than numeric labels yields purchase-intent distributions that closely match human survey data,” the authors wrote. “LLM-generated responses achieved 90% of the reliability of repeated human surveys while preserving natural variation in attitudes.”
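To make the mechanics concrete, here is a minimal sketch of how such a semantic-similarity mapping could work. The embedding library, model name, and anchor wordings below are illustrative assumptions, not the paper’s published implementation:

```python
# Hypothetical sketch of a Semantic Similarity Rating (SSR)-style mapping,
# using the sentence-transformers library (an assumption; the paper does not
# specify tooling). Anchor statements stand in for the 1-5 Likert scale.
import numpy as np
from sentence_transformers import SentenceTransformer

LIKERT_ANCHORS = [
    "I would definitely not buy this.",   # 1
    "I would probably not buy this.",     # 2
    "I might or might not buy this.",     # 3
    "I would probably buy this.",         # 4
    "I would definitely buy this.",       # 5
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence embedder works
anchor_vecs = model.encode(LIKERT_ANCHORS, normalize_embeddings=True)

def ssr_rating(free_text_answer: str) -> int:
    """Map a free-text purchase-intent answer to the nearest Likert anchor."""
    vec = model.encode([free_text_answer], normalize_embeddings=True)[0]
    sims = anchor_vecs @ vec          # cosine similarity (vectors normalized)
    return int(np.argmax(sims)) + 1   # Likert ratings are 1-indexed

print(ssr_rating("Maybe if it were on sale."))   # expected: a middling rating
print(ssr_rating("I'd definitely buy this."))    # expected: a top rating
```

The key design choice is that the model never sees the numeric scale at all; the number comes entirely from where its free-form answer lands relative to the anchor statements in embedding space.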

In tests across 9,300 real human survey responses about personal-care products, the SSR method produced synthetic respondents whose Likert distributions nearly mirrored the originals. In other words: when asked to “think like consumers,” the models did.
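For illustration only, comparing a synthetic respondent pool’s Likert distribution against a human one could look something like the toy example below. The metric and data here are hypothetical; the paper’s exact evaluation procedure is not detailed in this article:

```python
# Toy comparison of two Likert rating distributions (illustrative data only).
import numpy as np

def likert_distribution(ratings, scale=5):
    counts = np.bincount(ratings, minlength=scale + 1)[1:]  # drop unused 0 bin
    return counts / counts.sum()

human     = likert_distribution([5, 4, 4, 3, 5, 2, 4, 3, 5, 4])  # toy data
synthetic = likert_distribution([5, 4, 3, 3, 5, 2, 4, 4, 5, 4])  # toy data

# Total variation distance: 0 means identical distributions, 1 means disjoint.
tvd = 0.5 * np.abs(human - synthetic).sum()
print(f"Total variation distance: {tvd:.3f}")
```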

Why it matters

The finding could reshape how companies conduct product testing and market research. Consumer surveys are notoriously expensive, slow, and vulnerable to bias. Synthetic respondents—if they behave like real ones—could let companies screen thousands of products or messages for a fraction of the cost.

It also validates a deeper claim: that the geometry of an LLM’s semantic space encodes not just language understanding but attitudinal reasoning. By comparing answers in embedding space rather than treating them as literal text, the study demonstrates that model semantics can stand in for human judgment with surprising fidelity.

At the same time, it raises familiar ethical and methodological risks. The researchers tested only one product category, leaving open whether the same approach would hold for financial decisions or politically charged topics. And synthetic “consumers” could easily become synthetic targets: the same modeling techniques could help optimize political persuasion, advertising, or behavioral nudges.

As the authors put it, “market-driven optimization pressures can systematically erode alignment”—a phrase that resonates far beyond marketing.

A note of skepticism

The authors acknowledge that their test domain—personal-care products—is narrow and may not generalize to high-stakes or emotionally charged purchases. The SSR mapping also depends on carefully chosen reference statements: small wording changes can skew results. Moreover, the study relies on human survey data as “ground truth,” even though such data is notoriously noisy and culturally biased.

Critics point out that embedding-based similarity assumes that language vectors map neatly onto human attitudes, an assumption that may fail when context or irony enters the mix. The paper’s own reliability numbers—90% of human test-retest consistency—sound impressive but still leave room for significant drift. In short, the method works on average, but it’s not yet clear whether those averages capture real human diversity or simply reflect the model’s training priors.

The bigger picture

Academic interest in “synthetic consumer modeling” has surged in 2025 as companies experiment with AI-based focus groups and predictive polling. Similar work from MIT and the University of Cambridge has shown that LLMs can mimic demographic and psychometric segments with moderate reliability, but no prior effort had demonstrated so close a statistical match to real purchase-intent data.

For now, the SSR method remains a research prototype, but it hints at a future where LLMs might not just answer questions, but represent the public itself.

Whether that’s an advance or a hallucination in the making is still up for debate.

Source: https://decrypt.co/343838/ai-models-might-be-able-to-predict-what-youll-buy-better-than-you-can
