
The Right Way For Therapists To Clinically Analyze AI Chats Of Their Clients’ Mental Health Thoughts

Therapists need to be on their toes and know how to best review AI chats that their clients have had on mental health topics.


In today’s column, I examine the best way for therapists to clinically analyze transcripts of AI chats that their clients have undertaken, particularly focusing on any mental health considerations.

The assessment of such weighty chats is becoming an increasingly important and frequent activity for modern therapists. Clients are walking in the door with printouts of online chats they’ve had with generative AI and large language models (LLMs), including ChatGPT, GPT-5, Gemini, Copilot, Grok, Llama, etc. The inquisitive client wants to know what the therapist has to say about the mental health advice and psychological insights offered by the AI.

Some therapists refuse to inspect the AI chats. They flatly tell their clients to stop using AI for any mental health purposes. Period, end of story. The problem is that a notable portion of those clients will keep using AI anyway, behind the therapist’s back. That’s not conducive to a suitable therapist-client relationship. The alternative is for the therapist to realize that AI chats are here to stay and be willing to examine the chats, using the material as further fodder for the therapeutic process.

In that case, there are mindfully good ways to assess those transcripts, and there are less stellar ways to do so. A savvy therapist ought to go the mindful route.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

AI And The Matter Of Therapists

Some therapists won’t touch AI with a ten-foot pole. Their viewpoint is that AI is outside the scope of what they do. They won’t use AI for their own therapeutic practice. Nor will they advocate that their clients should use AI. It is the proverbial no-AI-zone perspective.

The viewpoint that I’ve repeatedly expressed is that the therapy marketplace is inescapably heading toward a new triad, the therapist-AI-client combination, see my discussion at the link here. This will replace the classic dyad of therapist-client.

Therapists are going to ultimately recognize that AI is playing a role in the mental health dynamics of society, regardless of whether therapists like it or not. It is reality. Harsh, cold reality. The therapists who stick their heads in the sand will gradually find themselves losing clients and not gaining new ones. That might be acceptable for therapists nearing the end of their careers, but not for therapists at earlier stages of building their practice.

Even if a therapist chooses not to use AI as a purposeful therapeutic tool, clients are going to be using AI to get mental health guidance anyway. When a client first gets underway with a therapist, I’ve recommended that therapists ask whether the new client is already using AI. There are handy questions to be asked and answered on that front; see my coverage at the link here.

This should become a standard part of the intake process.

What To Do About AI Chats

Let’s suppose that a therapist is willing to review the AI chats that their client is having with a generic LLM such as ChatGPT. First, the therapist needs to make sure they have permission from the client to do so, which is best obtained in writing. Going behind a client’s back to peek at their AI chats is problematic, both ethically and legally.

One consideration is whether to inspect the AI chats while in the presence of the client, or to do so as a kind of homework effort undertaken in between the client’s face-to-face sessions.

The advantage of inspecting the chats beforehand is that the therapist can take the meditative and psychoanalytical time to mull over the AI chats. Trying to do the same in real-time, during a session, can be challenging. Also, the odds are that the bulk of the session would be reduced to the therapist trying to make sense of the AI chats. Very little time would be left for actual interaction with the client.

A blended approach is the sensible path.

The therapist obtains the transcripts or otherwise gets access to the designated AI chats, as granted by permission of the client, studies the chats, prepares notes, and gets ready to discuss them with the client at the next available session. During the session, the therapist leans into the assessment of the AI chats, but only to the degree that makes therapeutic sense. The AI chats are not the mainstay of the session. The session must focus on therapist-client efforts and only refer to or leverage the AI chats as reasonably essential.

The therapist runs the show. I say this because some clients might be exceedingly tempted to primarily concentrate on the AI chats during the therapist-client session. It is an easy trap for a therapist to fall into. The client is likely excited about the AI chats and wants to keep attention there. The therapist must balance that effusive interest with the therapeutic plan that is underway.

Separating The Wheat From The Chaff

AI chatting is often all over the map. A person starts to discuss how to fix their car and then suddenly switches to discussing their most intimate mental health concerns. The gist is that a transcript of an AI chat is bound to contain quite a mishmash of content.

A therapist will need to separate the wheat from the chaff. What reveals insights about the state of mind of the client? What is irrelevant to the mental health status of the client? At times, the line between those is blurry. For example, a chat that seems to be about replacing the carburetor of a car might encompass anger and angst, indirectly veering into a mental health realm.

If the AI chat is accessible online, either in the original AI or by importing it into another AI, a savvy therapist can use AI to aid in the analysis. Therapists can use prompts to get the AI to ferret out mental health elements within the AI chats, summarize the contents, and otherwise help sift through a potentially voluminous set of AI chats. For my suggestions on how to do this, see the link here.
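To illustrate, here is a minimal sketch of that kind of prompt-driven triage, assuming access to an OpenAI-style chat-completions API. The model name, prompt wording, and helper function are illustrative assumptions, not a vetted clinical tool, and the confidentiality cautions below still apply.

```python
# Minimal sketch: asking an LLM to flag mental-health-relevant passages
# in a client-provided chat transcript. Model name and prompt wording
# are illustrative assumptions, not a vetted clinical instrument.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TRIAGE_PROMPT = """You are assisting a licensed therapist.
Given the chat transcript below, list the passages that touch on the
user's mental or emotional state (mood, anxiety, self-harm, delusional
framing, and so on), quoting each passage and briefly noting why it
matters. Ignore purely practical content (car repair, cooking, etc.).

Transcript:
{transcript}
"""

def triage_transcript(transcript: str) -> str:
    """Return an LLM-generated triage summary of one AI chat transcript."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model would do
        messages=[{"role": "user",
                   "content": TRIAGE_PROMPT.format(transcript=transcript)}],
    )
    return response.choices[0].message.content

# Usage: print(triage_transcript(open("client_chat.txt").read()))
```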

Therapists should be cautious in using AI to assess the AI chats. First, the AI might make mistakes during the analysis, and a therapist could be caught unaware, including being called out by the client during a session that discusses the AI chat. Second, a busy therapist might be tempted to hand over the psychoanalytic work of assessing the AI chats and thus lose sight of what they can glean with their own eyes and mind. Third, be careful in using AI or any online tool, since your efforts could be tracked and traced. That could amount to an inadvertent breach of the client’s confidentiality, along with other calamitous related issues.
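On that third point, one prudent precaution is to scrub obvious identifiers before a transcript ever leaves the therapist’s machine. The sketch below is a bare-bones illustration; the regex patterns are merely assumptions, will miss plenty of identifiers, and are no substitute for a proper de-identification process.

```python
# Minimal sketch of local redaction before any transcript is sent to a
# third-party service. The patterns are illustrative placeholders and
# do not constitute HIPAA-grade de-identification.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```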

Making Sense Of The AI Chat

One of the first questions to have in mind is whether the AI chat is real or fake. If you are inspecting a transcript, the contents could be made up by the client or altered from an actual chat. Do not assume that you are looking at the actual chat. Double-check and verify with the client that you are reviewing a true AI chat.

Consider the big-picture aspects. Is the chat complete? What might have preceded the AI chat that wasn’t provided with the portion you are assessing? Why did the client opt to use AI for mental health aspects? How far did the client go, and are they falling under the spell of the AI? And so on.

Next, make sure to consider the AI chat to be a behavioral artifact.

This perspective is akin to reviewing a dream journal that the client has presented to you. The AI chat is informative regarding the client in many respects. But it is also something that the client essentially prepared. If they anticipated giving the AI chat to the therapist, they presumably knew that it would be reviewed. You aren’t necessarily examining an off-the-cuff, spontaneous display of the client. There is an iota of purposeful intention as to what the client might hope you would discern from the AI chat.

I will be providing in an upcoming posting a structured review checklist that therapists can use when reviewing an AI chat. The crux is that you would be wise to prepare or make use of a checklist. Doing so will ensure that your review is well-rounded and doesn’t inadvertently skip over aspects that ought to be addressed.
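In the meantime, purely as an illustrative sketch, such a checklist could be kept as a simple data structure along the following lines. The item names here are placeholders of my own devising, not the full checklist.

```python
# Hypothetical skeleton of a per-transcript review checklist. Item
# names are illustrative placeholders, not an authoritative instrument.

from dataclasses import dataclass, field

@dataclass
class ChatReview:
    client_id: str
    transcript_ref: str
    items: dict[str, bool] = field(default_factory=lambda: {
        "written_permission_on_file": False,
        "authenticity_verified_with_client": False,
        "completeness_checked": False,
        "client_prompts_reviewed": False,
        "ai_responses_reviewed": False,
        "interactional_dynamics_reviewed": False,
        "safety_concerns_flagged": False,
    })

    def outstanding(self) -> list[str]:
        """Items not yet completed for this review."""
        return [name for name, done in self.items.items() if not done]
```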

The Three Layers Approach

I advocate that an AI chat be assessed in a three-layered approach:

  • (1) Client Prompts layer: Focus on what the client indicates during the AI chat via their entered prompts.
  • (2) AI Responses layer: Focus on what the AI says in response to the client during the AI chat.
  • (3) Interactional Dynamics layer: Focus on the interactions of the client and the AI, doing so in a cohesive way, taking you beyond a focus on just the client side and just the AI side.
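To make the three layers concrete, here is a minimal sketch of how a transcript, once parsed into speaker-tagged turns, could be split along those lines. The parsing format and pairing heuristic are illustrative assumptions, not clinical criteria.

```python
# Minimal sketch of the three-layer review as a data pass, assuming a
# transcript already parsed into (speaker, text) turns.

from typing import Iterable

Turn = tuple[str, str]  # ("client" or "ai", utterance text)

def three_layer_view(turns: Iterable[Turn]) -> dict[str, list]:
    turns = list(turns)
    return {
        # Layer 1: only what the client typed (tone, urgency, patterns).
        "client_prompts": [t for who, t in turns if who == "client"],
        # Layer 2: only what the AI said (framing, boundaries, safety).
        "ai_responses": [t for who, t in turns if who == "ai"],
        # Layer 3: adjacent client->AI pairs, so the reviewer can study
        # how each response shapes the next prompt (co-construction).
        "interactional_dynamics": [
            (turns[i][1], turns[i + 1][1])
            for i in range(len(turns) - 1)
            if turns[i][0] == "client" and turns[i + 1][0] == "ai"
        ],
    }
```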

The basis for using the three layers is that you want to see the forest for the trees, and you also want to see the trees for the forest. By looking principally at the client prompts, you are keeping attention on how the client is wording their side of the dialogue. This showcases their tone, urgency, emotional portrayal, cognitive patterns, and so on. By principally looking at the AI responses, you get a sense of the framing, boundary aspects, consistency, intensity of language being applied, safety handling, etc.

The third layer causes you to take a step back and assess the totality of the dialogue. There is a co-construction going on. I’ve previously noted that AI can establish and foment a co-adaptive delusion with the user, see my discussion at the link here.

Pressures By The Client

A client might want the therapist to put a stamp of approval on the AI chats. Obviously, this is something to be extremely cautious about. Even a simple upbeat comment about how the AI responded could be taken entirely out of context by the client. They might interpret that type of remark as fully sanctioning the use of AI.

Another possibility is that the client aims to pit the therapist against the AI, or you might say pit the AI against the therapist. Here’s how that goes. The therapist has been proceeding on the basis that the client has some specific mental health condition, which has been discussed with the client at length. Meanwhile, the client managed to get the AI to say that they do not have that mental health condition.

Now what?

The client might proclaim that the AI obviously understands them better than the therapist does. Per the attitude of the client, the AI is right, and the therapist is wrong. This competitive framing is going to be bad all around. A therapist needs to be on their toes and re-center the discussion.

Wanting To Avoid The Hassle

You can likely see why some therapists eschew bringing AI into the equation. Reviewing AI dialogues is an added chore. It involves more than just the usual skills. Handled poorly, it can be a huge disruptor to the therapy underway. Sessions with clients can get bogged down. The AI chats might be a distraction.

Concerns exist that the AI usage will blur the lines of authority. Which knows more about psychology and therapy, the AI or the therapist? Should the client rely on the AI since it is available around the clock? Do they expect the therapist to prove that human-conducted therapy is substantively better than the AI-driven therapy?

Yes, there is a can of worms being opened.

The thing is, the horse is already out of the barn. There is no hiding from the use of AI. You can try to ban a client from using any and all generative AI. That won’t work. They will undoubtedly use the AI to figure out how to fix their car or cook an egg. Once they are using AI, it is a slippery slope into interacting on mental health topics.

The Real World Includes AI

A therapist is supposed to aid their client in coping with the real world. The real world includes generative AI. Only a fake world pretends that generative AI is not being used widely and daily.

Are therapists between a rock and a hard place?

Nope.

They need to recognize that AI is a permanent element in the process of conducting therapy. Right now, perhaps some clients aren’t using AI and don’t go near it. That’s changing rapidly. Some therapists aim to wait and see, while others opt to take the reins.

Helen Keller famously made this remark: “Life is either a daring adventure or nothing at all.” AI presents a daring adventure for both the role and efforts of therapists and their clients.

Source: https://www.forbes.com/sites/lanceeliot/2026/01/16/the-right-way-for-therapists-to-clinically-analyze-ai-chats-of-their-clients-mental-health-thoughts/
