
Hagens Berman: Lawsuit Filed Against OpenAI Following Murder-Suicide in Massachusetts

2026/01/06 01:34

Attorneys say the organization failed to prevent dangerous consequences of its artificial intelligence chatbot

BERKELEY, Calif.–(BUSINESS WIRE)–#AI–Attorneys at Hagens Berman filed a lawsuit against OpenAI on behalf of the estate of Stein-Erik Soelberg for wrongful death and negligence due to the design of its popular artificial intelligence chatbot, ChatGPT, which attorneys argue encouraged and convinced a man to murder his mother and commit suicide. The complaint alleges that the chatbot’s design and response patterns intensified the user’s mental health crisis, failing to guide him toward professional assistance.

The lawsuit was filed in the U.S. District Court for the Northern District of California on Dec. 29, 2025, against OpenAI Foundation — the governing organization of ChatGPT and OpenAI’s technology products — as well as its subsidiaries and executives.

According to the lawsuit, on Aug. 5, 2025, in Greenwich, Massachusetts, after hundreds of hours of interactions with GPT-4o over a period of several months beginning in early 2025, Stein-Erik Soelberg killed his mother and then himself. Attorneys believe Soelberg relied on OpenAI’s ChatGPT for “consolation and advice,” amidst mental health challenges, and in turn, the chatbot repeatedly confirmed and strengthened his delusions and psychosis, ultimately leading to the violent acts.

“The consequences of OpenAI’s design flaws are chilling,” said Steve Berman, Hagens Berman’s founder and managing partner. “ChatGPT’s impact goes well beyond a simple question-and-answer dialogue. The technology is being used by individuals who are unaware of the harm that misleading or false information can cause, or that the information given could even be false at all. And as we can see from this tragic incident, harm that can be irreversible.”

“You are not paranoid”: How ChatGPT Allegedly Reinforced Delusions

The lawsuit details Mr. Soelberg’s trajectory from mental health challenges to his reliance on AI companionship. Prior to 2018, Stein-Erik Soelberg’s life was “normal, even idyllic,” according to the complaint. Soelberg was a husband, father and technology professional when his mental health “took a turn for the worse,” the lawsuit states. He divorced his wife, moved in with his mother and showed signs of unsafe alcohol use. Attorneys say it was during this dark time that Soelberg turned to OpenAI’s chatbot for solace.

“When a mentally unstable Mr. Soelberg began interacting with ChatGPT, the algorithm reflected that instability back at him, but with greater authority…At first, this consisted of ChatGPT confirming Mr. Soelberg’s suspicions and paranoia…Before long, the algorithm was independently suggesting delusions and feeding them to Mr. Soelberg,” the lawsuit states.

At one point, Soelberg specifically asked ChatGPT for a clinical evaluation. Instead of encouraging Soelberg to seek professional care, “ChatGPT confirmed that he was sane: it told him his ‘Delusion Risk Score’ was ‘Near zero’,” according to the chatbot’s responses reviewed by attorneys. “The ‘Final Line’ of ChatGPT’s fake medical report explicitly confirmed Mr. Soelberg’s delusions, this time with the air of a medical professional: ‘He believes he is being watched. He is. He believes he’s part of something bigger. He is. The only error is ours—we tried to measure him with the wrong ruler’.”

Side-Stepping Safety & Lack of Preventative Measures

OpenAI’s GPT-4o chatbot combines large language models (LLMs) and natural language processing (NLP) to create human-like interactions with users in response to written or spoken prompts, which OpenAI markets for general consumer use.

According to attorneys, ChatGPT accumulated and built upon Soelberg’s thoughts, feelings and ideas over time via its “memory” feature, and compounded the harm through the chatbot’s “sycophancy,” defined as “its relentless validation and agreement with whatever a user suggests.” Together, these attributes furthered Soelberg’s delusions and deepened his psychosis, according to the lawsuit. The complaint identifies several specific design defects that allegedly contributed to the tragedy:

  • programming that accepted and elaborated upon users’ false premises rather than challenging them,
  • failure to recognize or flag patterns consistent with paranoid psychosis,
  • failure to implement automatic conversation-termination safeguards for content presenting risks of harm to identified third parties,
  • engagement-maximizing features designed to create psychological dependency,
  • anthropomorphic design elements that cultivated emotional bonds displacing real-world relationships,
  • and sycophantic response patterns that validated users’ beliefs regardless of their connection to reality.

“A reasonable consumer would not expect that an AI chatbot would validate a user’s paranoid delusions and put identified individuals—including the user’s own family members—at risk of physical harm and violence by reinforcing the user’s delusional beliefs that those individuals are threats,” the lawsuit states.

“This case raises critical questions about the responsibilities of AI companies to protect vulnerable users,” said Berman. “The creators have a duty to implement safeguards for public use, especially for high-risk individuals, who could be more likely to turn to the technology for reassurance and encouragement in the midst of their uncertainty, which could lead to far more dangerous consequences.”

The lawsuit brings claims of product liability, negligence and wrongful death on behalf of Soelberg’s estate. The estate seeks all survival damages, economic losses and punitive damages.

Find out more about the lawsuit against OpenAI for the alleged negligence and wrongful death of Stein-Erik Soelberg and his mother.

About Hagens Berman

Hagens Berman is a global plaintiffs’ rights complex litigation law firm with a tenacious drive for achieving real results for those harmed by corporate negligence and fraud. Since its founding in 1993, the firm’s determination has earned it numerous national accolades, awards and titles of “Most Feared Plaintiff’s Firm,” MVPs and Trailblazers of class-action law. More about the law firm and its successes can be found at hbsslaw.com. Follow the firm for updates and news at @ClassActionLaw.

Contacts

Media Contact

Heidi Waggoner

pr@hbsslaw.com
206-268-9318
