AI Models Might Be Able to Predict What You’ll Buy Better Than You Can

In brief

  • A new study shows LLMs can mimic human purchase intent by mapping free-text answers to Likert ratings through semantic similarity.
  • The method achieved 90% of human test–retest reliability on 9,300 real survey responses.
  • The study raises questions about bias, generalization, and how far “synthetic consumers” can stand in for real people.

Forget focus groups: A new study found that large language models can forecast whether you want to buy something with striking accuracy, dramatically outperforming traditional marketing tools.

Researchers at the University of Mannheim and ETH Zürich have found that large language models can replicate human purchase intent—the “How likely are you to buy this?” metric beloved by marketers—by transforming free-form text into structured survey data.

In a paper published last week, the team introduced a method called “Semantic Similarity Rating” (SSR), which converts the model’s open-ended responses into numerical Likert ratings on the five-point scale used in traditional consumer research.

Rather than asking a model to pick a number between one and five, the researchers had it respond naturally—“I’d definitely buy this,” or “Maybe if it were on sale”—and then measured how semantically close those statements were to canonical answers like “I would definitely buy this” or “I would not buy this.”

Each answer was mapped in embedding space to the nearest reference statement, effectively turning LLM text into statistical ratings. “We show that optimizing for semantic similarity rather than numeric labels yields purchase-intent distributions that closely match human survey data,” the authors wrote. “LLM-generated responses achieved 90% of the reliability of repeated human surveys while preserving natural variation in attitudes.”
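In code, the core of that mapping is simple to sketch. Below is a minimal illustration in Python, assuming the sentence-transformers library for embeddings; the model name and the five anchor statements are placeholders, not necessarily the ones used in the paper:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative five-point anchors; the paper's exact wording may differ.
REFERENCES = [
    "I would definitely not buy this.",   # 1
    "I would probably not buy this.",     # 2
    "I might or might not buy this.",     # 3
    "I would probably buy this.",         # 4
    "I would definitely buy this.",       # 5
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model
ref_vecs = model.encode(REFERENCES, normalize_embeddings=True)

def semantic_similarity_rating(answer: str) -> int:
    """Map a free-text answer to the Likert point whose reference
    statement is nearest in embedding space (cosine similarity)."""
    vec = model.encode([answer], normalize_embeddings=True)[0]
    sims = ref_vecs @ vec  # cosine similarity: vectors are unit-normalized
    return int(np.argmax(sims)) + 1  # Likert points are numbered 1-5

print(semantic_similarity_rating("Maybe if it were on sale."))  # likely mid-scale
print(semantic_similarity_rating("I'd definitely buy this."))   # likely 5
```

A natural variant, closer in spirit to the “natural variation in attitudes” the authors describe, would softmax the similarities into a probability distribution over the five points rather than taking a hard argmax.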

In tests across 9,300 real human survey responses about personal-care products, the SSR method produced synthetic respondents whose Likert distributions nearly mirrored the originals. In other words: when asked to “think like consumers,” the models did.
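How close is “nearly mirrored”? The paper’s exact statistic isn’t quoted here, but one simple, hypothetical way to compare a synthetic Likert distribution against a human one is total variation distance, sketched below with made-up response counts:

```python
import numpy as np

def likert_distribution(ratings: np.ndarray, points: int = 5) -> np.ndarray:
    """Normalized histogram over Likert points 1..points."""
    counts = np.bincount(ratings, minlength=points + 1)[1:points + 1]
    return counts / counts.sum()

# Hypothetical response counts, for illustration only.
human = np.repeat([1, 2, 3, 4, 5], [120, 310, 240, 450, 180])
synthetic = np.repeat([1, 2, 3, 4, 5], [110, 330, 250, 430, 180])

p, q = likert_distribution(human), likert_distribution(synthetic)

# Total variation distance: 0 = identical distributions, 1 = disjoint.
tv = 0.5 * np.abs(p - q).sum()
print(f"Total variation distance: {tv:.3f}")
```

The 90% figure is a different comparison: it measures how well the LLM responses agree with human ones relative to how well two repeated human surveys agree with each other, the usual ceiling for any survey method.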

Why it matters

The finding could reshape how companies conduct product testing and market research. Consumer surveys are notoriously expensive, slow, and vulnerable to bias. Synthetic respondents—if they behave like real ones—could let companies screen thousands of products or messages for a fraction of the cost.

It also validates a deeper claim: that the geometry of an LLM’s semantic space encodes not just language understanding but attitudinal reasoning. By comparing answers in embedding space rather than treating them as literal text, the study demonstrates that model semantics can stand in for human judgment with surprising fidelity.

At the same time, it raises familiar ethical and methodological risks. The researchers tested only one product category, leaving open whether the same approach would hold for financial decisions or politically charged topics. And synthetic “consumers” could easily become synthetic targets: the same modeling techniques could help optimize political persuasion, advertising, or behavioral nudges.

As the authors put it, “market-driven optimization pressures can systematically erode alignment”—a phrase that resonates far beyond marketing.

A note of skepticism

The authors acknowledge that their test domain—personal-care products—is narrow and may not generalize to high-stakes or emotionally charged purchases. The SSR mapping also depends on carefully chosen reference statements: small wording changes can skew results. Moreover, the study relies on human survey data as “ground truth,” even though such data is notoriously noisy and culturally biased.

Critics point out that embedding-based similarity assumes language vectors map neatly onto human attitudes, an assumption that may fail when context or irony enters the mix. The paper’s headline reliability figure, 90% of human test–retest consistency, sounds impressive but still leaves room for significant drift. In short, the method works on average, but it’s not yet clear whether those averages capture real human diversity or simply reflect the model’s training priors.

The bigger picture

Academic interest in “synthetic consumer modeling” has surged in 2025 as companies experiment with AI-based focus groups and predictive polling. Similar work by MIT and the University of Cambridge has shown that LLMs can mimic demographic and psychometric segments with moderate reliability, but no prior work had demonstrated a close statistical match to real purchase-intent data.

For now, the SSR method remains a research prototype, but it hints at a future where LLMs might not just answer questions but represent the public itself.

Whether that’s an advance or a hallucination in the making is still up for debate.


Source: https://decrypt.co/343838/ai-models-might-be-able-to-predict-what-youll-buy-better-than-you-can

