Hey Team,
A lot to unpack today. More teenagers are turning to AI chatbots for emotional support and even therapy, but is that actually safe when we know AI still hallucinates and gets critical things wrong? With that in mind, we’re launching a new segment highlighting recent AI mistakes so we can all stay alert to what these systems get right and wrong. On the enterprise side, OpenAI just released a study showing massive growth, but the real question is whether that momentum is sustainable as competition intensifies. Plus, we’ve got our Jobs Corner and more key news and trends. Let’s dive in and stay curious.
I’ve been building an AEO checker app by vibe coding with Lovable, benchmarking it against Cursor and Gemini 3. One core feature emails users a report with their site analysis. For that I’m using Resend, which lets you send from your own domain once you add a few DNS records, making it well suited to fast MVP testing and deployment.
I ran into DNS issues and asked ChatGPT for help. It confidently told me I didn’t need MX records. That advice was completely wrong. After ChatGPT doubled and tripled down on it, I added the MX records anyway, and everything worked instantly.
Just another real-world reminder:
LLMs still make confident, time-wasting errors that can distort your understanding of systems.
Trust your own thinking, verify with docs, and always cross-check when something doesn’t feel right.
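For anyone hitting the same wall: sending from your own domain through a service like Resend generally means adding an MX record plus SPF and DKIM TXT records on a sending subdomain. Here is a sketch of what those zone entries look like; the hostnames and values below are illustrative placeholders, and the exact ones come from your Resend dashboard:

```
; Illustrative DNS records for a Resend sending subdomain.
; Hostnames and values are placeholders; copy the exact entries
; shown in the Resend dashboard for your own domain.
send.example.com.              IN MX   10 feedback-smtp.us-east-1.amazonses.com.
send.example.com.              IN TXT  "v=spf1 include:amazonses.com ~all"
resend._domainkey.example.com. IN TXT  "p=MIGfMA0GCSqGSIb3..."   ; DKIM public key (truncated)
```

The MX record in the first line is exactly the one ChatGPT insisted I didn’t need; in my setup it handles bounce and complaint feedback, and domain verification failed until it was in place.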
OpenAI released its State of Enterprise AI report with a few key stats. It arrives as OpenAI’s models seem to be losing their shine amid stiff competition from Anthropic and Google’s Gemini. Enterprise AI adoption is still early, but the winners are being defined today.
Enterprise AI is delivering measurable time savings and moving into deeper technical work, but most companies are still only scratching the surface. Altman knows the urgency and has instructed employees to improve ChatGPT by making better use of user signals. OpenAI is set to release a new model this week, and it plans to ship another in January with better images, improved speed, and a better personality, after which it will end its internal ‘code red’.
A new UK study of 11,000+ youths shows that 1 in 4 teenagers (13–17) and nearly 40% of those affected by youth violence now turn to AI chatbots like ChatGPT for mental health support, driven by long NHS waiting lists, privacy concerns, and 24/7 access.
Usage is twice as high among Black teens, and victims and perpetrators of violence are significantly more likely to rely on AI than their peers. Teens describe chatbots as non-judgmental, always available, and safer than adults, especially when fearing school or police involvement. But experts warn this creates serious risks, citing lawsuits linked to suicide cases and weak regulation, with youth leaders stressing that “children need a human, not a bot.” The findings expose a growing gap between mental health demand and real-world access, which AI is rapidly filling by default.
Major AI players, including Anthropic, OpenAI, Google, and Microsoft, are set to form a new open-source standards group called the Agentic Artificial Intelligence Foundation, organized by the Linux Foundation, to standardize how AI agents connect to enterprise software.
The goal is to make AI agents interoperable across apps, much as banks once standardized electronic payments, by aligning on tools like Anthropic’s Model Context Protocol (MCP), OpenAI’s Agents.md, and Block’s Goose local AI agent. MCP is already gaining real traction across products like ChatGPT and Google Workspace, enabling agents to connect tools like Slack to automate workplace tasks. However, CIOs warn that security risks like prompt-injection attacks remain unresolved, especially as companies plug agents into sensitive systems like PagerDuty and internal financial platforms. Despite the risks and the tech industry’s long history of open-standards disputes, the formation of this group signals a major push toward shared infrastructure for enterprise AI automation.
🛟Is ChatGPT safe for Mental Support? was originally published in Coinmonks on Medium.


