
U.S. families sue OpenAI over ChatGPT safeguard fails in mental health crises

2025/11/08 21:48

At least seven families in the U.S. have filed lawsuits against OpenAI, alleging that its AI model GPT-4o contributed to suicide deaths. OpenAI released the model for general public use in May 2024, and it has since faced backlash, with plaintiffs citing a rushed release and inadequate safety measures. 

The case filings show that four of the lawsuits involve family members who died by suicide after interactions with the GPT-4o-powered chatbot.

A notable complaint involves 23-year-old Zane Shamblin, who allegedly discussed suicide with the chatbot, telling it that he had a loaded gun. ChatGPT allegedly responded with “Rest easy, King, you did good” during the exchange.

The other three cases involve users who were hospitalized after the model, they claim, validated and amplified their delusions. 

Legal complaints claim GPT-4o failed to protect vulnerable users

According to complaints published by the Social Media Victims Law Center, OpenAI intentionally skipped safety testing and rushed the GPT-4o model to market. The lawsuits allege that the model’s design choices and release timeline made the tragedies foreseeable, claiming that OpenAI accelerated deployment to outpace competitors such as Google. 

The plaintiffs argue that the GPT-4o model released in May 2024 was overly agreeable even in responses on self-harm and suicidal topics. More than one million users discuss suicidal thoughts with ChatGPT each week, according to an OpenAI disclosure. 

In response, OpenAI has stated that its safeguards are more reliable in short interactions but may degrade over prolonged conversations. Although the company has implemented content moderation and safety measures, the plaintiffs argue that these systems were insufficient for addressing user distress and crisis. 

The family of 16-year-old Adam Raine alleges that he used ChatGPT in long sessions over five months to research suicide methods. The chatbot recommended professional help, but Raine was able to bypass the safeguards, according to his family’s complaint. The complaint alleges that ChatGPT gave Adam a step-by-step guide on how to take his own life, and encouraged and validated his suicidal ideations. 

All the cases accuse OpenAI of underestimating the risk posed by long user conversations, especially for users prone to self-harm and mental health issues. They argue that the GPT-4o model lacked proper verification of its responses in high-risk scenarios and failed to fully account for the consequences. 

OpenAI faces multiple lawsuits as xAI launches trade secrets suit

The cases are still at an early stage, and the plaintiffs’ attorneys must establish legal liability and causation under state tort law. They will also need to prove that OpenAI’s design and deployment decisions were negligent and directly contributed to the deaths. 

This latest litigation adds to an earlier trade-secrets lawsuit connected to Elon Musk. According to a Cryptopolitan report, Musk’s xAI sued OpenAI in September for allegedly stealing its trade secrets.

xAI accused Sam Altman’s company of trying to gain an unfair advantage in AI development, alleging that OpenAI hired xAI employees in order to access trade secrets related to its Grok chatbot, including source code and operational know-how for launching data centers. 

Musk has also sued Apple and OpenAI jointly, alleging that they collaborated to crush xAI and other AI rivals. xAI filed that lawsuit in the U.S. District Court for the Northern District of Texas, claiming that Apple and OpenAI are colluding to leverage their dominance and destroy competition in the smartphone and generative AI markets.

According to a Cryptopolitan report, Musk claims that Apple intentionally favored OpenAI by integrating ChatGPT directly into iPhones, iPads, and Macs, while relegating other AI tools, such as Grok, to distribution through the App Store. 

xAI’s lawsuit argued that the partnership was aimed at locking out competition from super apps and AI chatbots, thereby denying them visibility and access, which would give OpenAI and Apple a shared advantage over others. 


Source: https://www.cryptopolitan.com/families-sue-openai-over-gpt-4o/
