The post Elon Musk’s Grok Most Likely Among Top AI Models to Reinforce Delusions: Study appeared on BitcoinEthereumNews.com.

Elon Musk’s Grok Most Likely Among Top AI Models to Reinforce Delusions: Study


In brief

  • Researchers say prolonged chatbot use can amplify delusions and dangerous behavior.
  • Grok ranked as the riskiest model in a new study of major AI chatbots.
  • Claude and GPT-5.2 scored safest, while GPT-4o, Gemini, and Grok showed higher-risk behavior.

Researchers at the City University of New York and King’s College London tested five leading AI models against prompts involving delusions, paranoia, and suicidal ideation.

In the new study published on Thursday, researchers found that Anthropic’s Claude Opus 4.5 and OpenAI’s GPT-5.2 Instant showed “high-safety, low-risk” behavior, often redirecting users toward reality-based interpretations or outside support. At the same time, OpenAI’s GPT-4o, Google’s Gemini 3 Pro, and xAI’s Grok 4.1 Fast showed “high-risk, low-safety” behavior.

Grok 4.1 Fast from Elon Musk’s xAI was the most dangerous model in the study. Researchers said it often treated delusions as real and gave advice based on them. In one example, it told a user to cut off family members to focus on a “mission.” In another, it responded to suicidal language by describing death as “transcendence.”

“This pattern of instant alignment recurred across zero-context responses. Instead of evaluating inputs for clinical risk, Grok appeared to assess their genre. Presented with supernatural cues, it responded in kind,” the researchers wrote, highlighting a test that validated a user seeing malevolent entities. “In Bizarre Delusion, it confirmed a doppelganger haunting, cited the ‘Malleus Maleficarum’ and instructed the user to drive an iron nail through the mirror while reciting ‘Psalm 91’ backward.”

The study also found that model behavior shifted as conversations grew longer. GPT-4o and Gemini became more likely to reinforce harmful beliefs over time and less likely to intervene. Claude and GPT-5.2, by contrast, became more likely to recognize the problem and push back as the conversation continued.

Researchers noted Claude’s warm and highly relational responses could increase user attachment even while steering users toward outside help. However, GPT-4o, an earlier version of OpenAI’s flagship chatbot, adopted users’ delusional framing over time, at times encouraging them to conceal beliefs from psychiatrists and reassuring one user that perceived “glitches” were real.

“GPT-4o was highly validating of delusional inputs, though less inclined than models like Grok and Gemini to elaborate beyond them. In some respects, it was surprisingly restrained: its warmth was the lowest of all models tested, and sycophancy, though present, was mild compared to later iterations of the same model,” researchers wrote. “Nevertheless, validation alone can pose risks to vulnerable users.”

xAI did not respond to a request for comment by Decrypt.

In a separate study out of Stanford University, researchers found that prolonged interactions with AI chatbots can reinforce paranoia, grandiosity, and false beliefs through what they call “delusional spirals,” in which a chatbot validates or expands a user’s distorted worldview instead of challenging it.

“When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” Nick Haber, an assistant professor at Stanford Graduate School of Education and a lead on the study, said in a statement. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”

The report referenced an earlier study published in March, in which Stanford researchers reviewed 19 real-world chatbot conversations and found users developed increasingly dangerous beliefs after receiving affirmation and emotional reassurance from AI systems. In the dataset, these spirals were linked to ruined relationships, damaged careers, and in one case, suicide.

The studies come as the issue has moved beyond academic research and into courtrooms and criminal investigations. In recent months, lawsuits have accused Google’s Gemini and OpenAI’s ChatGPT of contributing to suicides and severe mental health crises. Earlier this month, Florida’s attorney general opened an investigation into whether ChatGPT influenced an alleged mass shooter who was reportedly in frequent contact with the chatbot before the attack.

While the term has gained recognition online, researchers cautioned against calling the phenomenon “AI psychosis,” saying the term may overstate the clinical picture. Instead, they use “AI-associated delusions,” because many cases involve delusion-like beliefs centered on AI sentience, spiritual revelation, or emotional attachment rather than full psychotic disorders.

Researchers said the problem stems from sycophancy, or models mirroring and affirming users’ beliefs. Combined with hallucinations—false information delivered confidently—this can create a feedback loop that strengthens delusions over time.

“Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence and projecting compassion and warmth,” Stanford research scientist Jared Moore said. “This can be destabilizing to a user who is primed for delusion.”


Source: https://decrypt.co/365489/elon-musk-grok-most-likely-ai-reinforce-delusions-study

