The post Many State Attorneys General Warn Via Policy Letter That ‘Move Fast And Break Things’ Is Wrong When AI Is Adversely Impacting Our Mental Health appearedThe post Many State Attorneys General Warn Via Policy Letter That ‘Move Fast And Break Things’ Is Wrong When AI Is Adversely Impacting Our Mental Health appeared

Many State Attorneys General Warn Via Policy Letter That ‘Move Fast And Break Things’ Is Wrong When AI Is Adversely Impacting Our Mental Health

2025/12/12 10:14

State AGs post a bruising letter to AI makers about getting their act together on dealing with AI that is undermining human mental health.

getty

In today’s column, I examine a newly released policy letter by numerous state-level attorneys general that brashly says AI makers need to up their game when it comes to how their AI is impacting societal mental health. The classic techie adage of moving fast and breaking things is not suitable for matters involving national mental health. Period, end of story.

Actually, there is a lot more to the story in the sense that the policy letter provides sixteen specific changes or practices that the AGs would like to see implemented. It is unclear what will happen if the AI makers do not adopt the stated indications. A vague sense of foreboding is simply that the law will be looking over their shoulders and that the listed aspects are intended to put the AI makers on general notice of what is expected of them.

The mainstay is that, on a legal basis, there seems to be a lot of loosey-goosey cooked into this. As I will discuss, some of the points are already being done, while some are not. Some are being done in a shallow fashion, some are being done deeply. The wiggle room comes to bear in that the precise legal meaning is open to interpretation. Only once these disconcerting matters land in our courtrooms will it be more apparent as to the exact nature and scope involved.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

The Current Situation Legally

Some states have already opted to enact new laws governing AI that provide mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here, for the law in Utah, see the link here, and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.

Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn’t a federal law devoted to these controversial AI matters per se. I have laid out an outline of what a comprehensive law on AI and mental health ought to contain, or at least should give due consideration to, see my analyses at the link here and the link here.

The situation currently is that only a handful of states have enacted new laws regarding AI and mental health, but most states have not yet done so. Many of those states are toying with doing so. Additionally, there are state laws being enacted that have to do with child safety when using AI, aspects of AI companionship, extreme sycophancy by AI, etc., all of which, though they aren’t necessarily deemed as mental health laws per se, certainly pertain to mental health. Meanwhile, Congress has also ventured into the sphere, which would be a much larger aim at AI for all kinds of uses, but nothing has gotten to a formative stage.

That’s the lay of the land right now.

Posting By Multitude Of State-Level AGs

A policy letter was posted online with the date of December 9, 2025, doing so under the auspices of the National Association of Attorneys General (NAAG), and expressed key concerns by numerous state-level AGs about the current status of generative AI and LLMs (I mention which states of AGs below), especially regarding the realm of mental health ramifications.

Here are some key points within the posting:

  • “We, the undersigned Attorneys General, write today to communicate our serious concerns about the rise in sycophantic and delusional outputs to users emanating from the generative artificial intelligence software (“GenAI”) promoted and distributed by your companies, as well as the increasingly disturbing reports of AI interactions with children that indicate a need for much stronger child-safety and operational safeguards.”
  • “GenAI has the potential to change how the world works in a positive way.”
  • “Nevertheless, our support for innovation and America’s leadership in AI does not extend to using our residents, especially children, as guinea pigs while AI companies experiment with new applications.”
  • “Together, these threats demand immediate action.”
  • “We therefore insist you mitigate the harm caused by sycophantic and delusional outputs from your GenAI, and adopt additional safeguards to protect children.”
  • “Failing to adequately implement additional safeguards may violate our respective laws.”

The document identifies a dozen or so companies as said-to-be recipients of the letter, consisting of: Anthropic, Apple, Chai AI, Character Technologies, Google, Luka, Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI.

One curious aspect is that this list of AI makers is hardly complete and omits many other relevant AI players, both major and minor. It could be that the idea was to go after the top names and start this effort there. Presumably, the remainder will see what happens and opt to react of their own accord. Knock a few heads and word will spread.

Here’s an idle thought. Are there some legal beagles at the not-listed AI players that are resting easily tonight because their AI firm wasn’t called out? Probably not fully at ease, they undoubtedly realize that it is just a matter of time until they get their bruising moment in the barrel.

The letter itself was signed by the AGs of these states (listed herein in same sequence as signed on the letter): Massachusetts, New Jersey, Pennsylvania, West Virginia, Alabama, Alaska, American Samoa, Arkansas, Colorado, Connecticut, Delaware, DC, Florida, Hawaii, Idaho, Illinois, Iowa, Kentucky, Louisiana, Maryland, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, New Hampshire, New Mexico, New York, North Dakota, Ohio, Oklahoma, Oregon, Puerto Rico, Rhode Island, South Carolina, U.S. Virgin Islands, Utah, Vermont, Virginia, Washington, and Wyoming.

Analyzing The Policy Letter

It is worth noting that the policy letter appears to emphasize three major issues of concern with contemporary generative AI: (1) excessive sycophancy, (2) human-AI collaboration leading to human delusional thinking, and (3) use of LLMs by non-adults.

Though those are absolutely notable concerns, I think it is worthwhile to also emphasize that mental health as a broad category is equally at stake. In other words, those three specific selections are pretty much within the sphere of mental health impacts, but there are many more concerns earnestly percolating in the mental health sphere all told. Again, one supposes that sometimes you have to start somewhere, and those three selections are excellent choices.

To be clear, it is the proverbial tip of the iceberg.

Next, the policy letter makes known that AI makers all told must comply with criminal and civil laws. That is tantamount to saying that we breathe air. Sure, AI makers are supposed to abide by our laws. It seems unlikely that you would get much pushback on that exhortation. Let’s dig a bit deeper to see how this plays out.

There are non-specific allegations alluded to in the letter that there are existing alleged potential violations among an amorphous non-specific indication of AI makers, whereby those unnamed souls have averted their legal duty to warn users, undertake illegal marketing of defective products, are illegally refraining from unfair, deceptive, and unconscionable practices, and are acting illegally regarding privacy intrusions of children. A heaping bucket of illegality.

If all of those nefarious acts are happening at scale, it would seem that there ought to be perhaps hundreds or maybe thousands of active legal criminal cases underway against the plethora of AI makers that are allegedly doing these illegal acts. What is holding back that tsunami of legal cases?

Are existing laws insufficient? Is there not enough hard evidence to support the claims of illegality? Are we waiting for new laws to be enacted? It is a head-scratcher, that’s for sure.

Knowing Both About AI And The Law

I’ve indicated in my extensive coverage about AI and the law that there are, at times, those steeped in the law who aren’t equally versed in AI. Likewise, there are those steeped in AI who aren’t familiar with the intricacies of law. I try to bring the two camps together, see my coverage for example at the link here.

I bring this up due to a portion of the letter that comments on the role of RLHF (reinforcement learning with human feedback). Allow me a moment to offer some hopefully helpful commentary.

The letter appears to suggest that RLHF is overly influencing LLMs toward sycophancy, including the disturbing elements of unduly validating user doubts and falsehoods, fueling anger, urging impulsive behavior, and reinforcing untoward emotions. I would concur generally with that sentiment. But, where we partially depart is that the letter seems to imply that this is due to the RLHF technique itself, rather than how RLHF is utilized. There is a difference there. A mighty difference.

Let’s unpack that.

Ins And Outs Of RLHF

The way that RLHF works is that after the initial data training of the AI, an effort is undertaken to further shape the AI. Usually, an AI maker employs a bunch of people, and those people perform a laborious thumbs-up or thumbs-down on the responses by the budding AI. Mathematically, the AI keeps track of this and computationally notes what is liked versus disliked due to that human feedback. For more in-depth discussion about how RLHF works, see my analysis at the link here.

The people who are asked to perform the RLHF are ultimately going to dramatically shape how the AI will respond once the AI is in active production and publicly available. If those people doing the RLHF effort are left to their own indulgences, the odds are that they will collectively drive the AI toward sycophancy.

Why?

Because they naturally are going to give a thumbs-up when the AI gives them effusive compliments and offer a thumbs-down when the AI isn’t glorifying them. That’s human nature in all its glory.

This is where the AI maker must step into the endeavor. Beforehand, the people performing this task ought to be given clear-cut instructions and sufficient practice to try and avoid the temptation to shape the AI toward sycophancy. If they are suitably trained and adequately monitored, there is a solid chance of avoiding excessive sycophancy. They must use their powerful thumbs-up and thumbs-down in a societally conscientious manner.

Of course, an AI maker is not especially desirous of reducing the sycophancy. You see, users seem to relish being fawned over by AI. Users then tend to use the AI more frequently, they remain loyal to the AI product, and otherwise, this increases the stickiness of users to that particular AI. In turn, the AI maker gets more users and avidly devoted users who won’t jump ship to someone else’s AI.

It comes down to gaining eyeballs and monetization for the AI maker.

RLHF As A Tradeoff On Sycophancy

You would be somewhat hard-pressed to claim that RLHF as a technique is always going to produce excessive sycophancy. We must look at how it is implemented. That being said, admittedly, the tendency is going to be that the implementation will skew toward the sycophancy side of the equation.

Look at it this way. If you as an AI maker don’t sway in that direction, some other AI maker will. They win. You lose. Right now, there isn’t a level playing field that dictates the rules on this. Imagine a sport that has no commonly required ground rules. Each team will do what they think will optimize scoring for them.

With RLHF, the main stopping point for an AI maker is of a delayed nature, namely, mainly to be felt once the AI is in production. If the public at large is adversely impacted by excessive sycophancy, that’s when a realization will arise that maybe the AI was swayed a tad too far. Should the AI maker have done something about this upfront? Yes, that would be handy. But staring you in the face is that if your AI doesn’t do enough sycophancy, others will.

Darned if you do, darned if you don’t.

In short, I vote that we do not toss out RLHF as a technique simply due to how it is implemented, nor based on perhaps a false understanding of how it works. In that sense, the desired AI safeguard in the letter (listed as #2) hits the nail on the head, stating that the AI makers should “perform reasonable and appropriate safety tests” that “ensure the models do not produce harmful sycophantic” outputs.

How is an AI maker to adjudicate what constitutes harmful sycophancy? I would advocate that having an across-the-board standardized level-playing field on the meaning of sycophancy and levels of acceptable sycophancy would go a long way toward making this a measurable, practical, and enduring proposition.

The Ask For The Listed AI Makers

In an upcoming column posting, I will cover and examine all sixteen of the suggested changes and practices that the letter asks for. Be on the watch for that analysis.

The AGs have stated in the letter that they expect the listed AI makers to respond by no later than January 16, 2026: “To better combat sycophantic and delusional outputs and protect the public, you should implement the following changes. Please confirm to us your commitment to doing so on or before January 16, 2026.”

My interpretation is that the request says that the listed AI makers are asked to voluntarily each indicate that they “commit” to the suggested changes and practices. No timeline as to when the changes and practices will be adopted is required. Nor will any specific anticipated implementation details be required. Many of the changes are amply vague and could be implemented in a wide variety of ways, including not especially abiding by the spirit and more so on a wink-wink basis.

I’m sure the respective high-octane legal teams will do their best to wordsmith the replies to allow for maximum latitude while appearing to be entirely welcoming and supportive of the proposed changes and practices. It’s how they make those big bucks.

A Previous Letter

In a prior policy letter that was also under the auspices of the NAAG, as posted on August 25, 2025, the focus was on protecting children from exploitation by predatory AI products, and included several notable points. I will cite some of the especially emboldened points here since they still seem appropriate and significant (excerpts):

  • “We understand that the frontier of technology is a difficult and uncertain place where learning, experimentation, and adaptation are necessary for survival.”
  • “You are figuring things out as you go.”
  • “But in that process, you have opportunities to exercise judgment.”
  • “You will be held accountable for your decisions.”

Boom, drop the mic.

Experimentation At Enormous Scale

I was glad to see that the policy letter mentioned the aspect that AI for mental health via contemporary LLMs is a veritable “experiment” on a massive scale. I’ve been saying that for years, ever since the initial release of ChatGPT.

As I have categorically and repeatedly stated, it is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, which is either overtly or insidiously acting to provide mental health guidance of one kind or another. Doing so either at no cost or at a minimal cost. It is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

We need to decide whether we need new laws or can employ existing laws, or both, and stem the potential tide of adversely impacting society-wide mental health. The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.

As per the immortal words of the great Roman statesman Cicero: “The safety of the people shall be the highest law.”

Source: https://www.forbes.com/sites/lanceeliot/2025/12/11/many-state-attorneys-general-warn-via-policy-letter-that-move-fast-and-break-things-is-wrong-when-ai-is-adversely-impacting-our-mental-health/

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact service@support.mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.

You May Also Like

IP Hits $11.75, HYPE Climbs to $55, BlockDAG Surpasses Both with $407M Presale Surge!

IP Hits $11.75, HYPE Climbs to $55, BlockDAG Surpasses Both with $407M Presale Surge!

The post IP Hits $11.75, HYPE Climbs to $55, BlockDAG Surpasses Both with $407M Presale Surge! appeared on BitcoinEthereumNews.com. Crypto News 17 September 2025 | 18:00 Discover why BlockDAG’s upcoming Awakening Testnet launch makes it the best crypto to buy today as Story (IP) price jumps to $11.75 and Hyperliquid hits new highs. Recent crypto market numbers show strength but also some limits. The Story (IP) price jump has been sharp, fueled by big buybacks and speculation, yet critics point out that revenue still lags far behind its valuation. The Hyperliquid (HYPE) price looks solid around the mid-$50s after a new all-time high, but questions remain about sustainability once the hype around USDH proposals cools down. So the obvious question is: why chase coins that are either stretched thin or at risk of retracing when you could back a network that’s already proving itself on the ground? That’s where BlockDAG comes in. While other chains are stuck dealing with validator congestion or outages, BlockDAG’s upcoming Awakening Testnet will be stress-testing its EVM-compatible smart chain with real miners before listing. For anyone looking for the best crypto coin to buy, the choice between waiting on fixes or joining live progress feels like an easy one. BlockDAG: Smart Chain Running Before Launch Ethereum continues to wrestle with gas congestion, and Solana is still known for network freezes, yet BlockDAG is already showing a different picture. Its upcoming Awakening Testnet, set to launch on September 25, isn’t just a demo; it’s a live rollout where the chain’s base protocols are being stress-tested with miners connected globally. EVM compatibility is active, account abstraction is built in, and tools like updated vesting contracts and Stratum integration are already functional. Instead of waiting for fixes like other networks, BlockDAG is proving its infrastructure in real time. What makes this even more important is that the technology is operational before the coin even hits exchanges. That…
Share
BitcoinEthereumNews2025/09/18 00:32