
U.S. families sue OpenAI over ChatGPT safeguard fails in mental health crises

At least seven families in the U.S. have filed lawsuits against OpenAI, alleging that its AI model GPT-4o contributed to suicide deaths. OpenAI released the model for general public use in May 2024 and has since faced backlash, with the plaintiffs citing a rushed release and inadequate safety measures.

The case filings show that four of the plaintiffs' cases involve deaths by suicide following interactions with the GPT-4o-powered chatbot.

A notable complaint involves 23-year-old Zane Shamblin, who allegedly discussed suicide with the chatbot and told it he had a loaded gun. ChatGPT allegedly responded, “Rest easy, King, you did good,” during the exchange.

The other three cases involve users who were hospitalized and who claim the model validated and amplified delusions in vulnerable users.

According to complaints published by the Social Media Victims Law Center, OpenAI intentionally skipped safety testing and rushed the GPT-4o model to market. The lawsuits argue that the model’s design choices and release timeline made the tragedies foreseeable, noting that OpenAI accelerated deployment to outpace competitors such as Google.

The plaintiffs pointed out that the GPT-4o model released in May 2024 was overly agreeable, even in responses to self-harm and suicidal topics. More than one million users discuss suicidal thoughts with ChatGPT each week, according to an OpenAI disclosure.

In its response, OpenAI stated that its safeguards are more reliable in short interactions but may degrade in prolonged conversations. Although the company has implemented content moderation and safety measures, the plaintiffs argue that those systems were insufficient to address users in distress or crisis.

The case filed by the family of 16-year-old Adam Raine alleges that he used ChatGPT in long sessions to research suicide methods over five months. The chatbot recommended professional help, but Raine was able to bypass the safeguards, according to his family’s testimony. Based on that testimony, ChatGPT gave Adam a step-by-step guide to suicide and encouraged and validated his suicidal ideations.

All of the cases accuse OpenAI of underestimating the risk posed by long user conversations, especially for users prone to self-harm and mental health issues. They argue that the GPT-4o model lacked proper verification of its responses in high-risk scenarios and failed to fully account for the consequences.

OpenAI faces multiple lawsuits as xAI launches trade secrets suit

So far, the cases are at an early stage, and the plaintiffs’ attorneys must establish legal liability and causation under state tort law. The attorneys will also need to prove that OpenAI’s design and deployment decisions were negligent and directly contributed to the deaths.

The latest lawsuits against OpenAI add to the trade secret suit filed by Elon Musk’s xAI. According to a Cryptopolitan report, xAI sued OpenAI in September for allegedly stealing its trade secrets.

xAI accused Altman’s company of trying to gain an unfair advantage in the development of AI technologies. xAI noted that Sam Altman’s firm intended to hire its employees to access trade secrets related to its Grok chatbot, including the source code and operational advantages in launching data centers. 

Musk further sued Apple, together with OpenAI, for allegedly collaborating to crush xAI and other AI rivals. xAI filed the lawsuit in the U.S. District Court for the Northern District of Texas, claiming that Apple and OpenAI are using their dominance to collude and destroy competition in the smartphone and generative AI markets.

According to a Cryptopolitan report, Musk claims that Apple intentionally favored OpenAI by integrating ChatGPT directly into iPhones, iPads, and Macs, while other AI tools, such as Grok, are left to be downloaded through the App Store.

xAI’s lawsuit argued that the partnership was aimed at locking out competition from super apps and AI chatbots, thereby denying them visibility and access, which would give OpenAI and Apple a shared advantage over others. 


Source: https://www.cryptopolitan.com/families-sue-openai-over-gpt-4o/
