
How to Know If You’re at Risk of Developing AI Psychosis

Reports are coming in about AI users seeking professional psychological help after experiencing what some call AI Psychosis. Why does this happen? And who came up with the idea that a computer program can have an opinion?

My main problem right now with the phenomenon known as AI is the widespread misconception that it has a personality, opinions, ideas—and taste.

Just as we anthropomorphize rocks on the moon that look like heads or faces, we do the same with language models, trying to interpret the results as if “there’s something there.” We can’t help it. The idea that AI is something intelligent is deeply rooted in us after more than a hundred years of science-fiction books and movies.

Sci-fi has primed us to believe in AI

Classic science-fiction authors like Clarke, Heinlein, Bradbury and Asimov have influenced the entire robot and AI genre and most Hollywood movies on the subject since the early 20th century. In their depictions, it was obvious that a machine could become conscious.

So we expect AI to be able to determine whether a certain statement is incorrect or not. At the same time, we know that AI is often wrong in its answers.

In essence, we’re pretending that AI has a handle on things and can decide what’s right and wrong. But the answers are basically educated guesses, and they still contain about 5-10% factual errors if the query is sufficiently advanced.

At the same time, AI is so convenient to use that many of us simply ignore the fact that it contains factual errors. Or rather, AI is now wrong so rarely that we choose to trust it regardless.

This could become a big problem in the future. Humans are lazy. It’s not inconceivable that we accept a world where a certain percentage of all facts are incorrect. That would benefit dictators and propagandists who thrive on confusion and misjudged threats.

Confusions that sound right

If you ask a completely ordinary question on Google’s search page, you often get the right answer, but sometimes a completely incorrect one that still looks, feels and sounds entirely right. The same goes for GPT-5, unfortunately, as Cryptopolitan has previously reported.

There is a huge amount of “fake” text on the internet, in the form of marketing, propaganda, or plain scams. People claim, for example, that this service or that product has been launched and is popular, and AI models have read all the marketing material and believe much of it. If you listen to a company’s own information, everything about that company is usually great.

AI’s worldview is therefore skewed, fed with a pile of fabricated facts. This becomes obvious if you ask an AI about a subject you yourself know very well. Try it: what topic do you know inside out? Ask your AI some tough questions about it. What was the result? Several major factual errors, right?

So, is an unconscious opinion possible? No? Do you believe in the opinions your AI is putting out? Yes? If so, you believe the AI is conscious, right?

But if you stop and think about it, an AI can’t have an opinion on what’s right or wrong, as an AI is not a person. Only living, conscious things can have opinions, by definition. A chair does not have one. A silicon chip can’t either, from the human point of view. That would be anthropomorphism.

Students use AI more than anyone else

This AI confusion mess is now spilling over onto our youth, who use ChatGPT for everything in school all day long. ChatGPT’s traffic dropped 75% when schools let out for the summer in June 2025. ChatGPT’s largest single group of users is students.

Consequently, they’re being somewhat misinformed all day long, and they stop using their brains in class. What will be the result? More broken individuals who have a harder time solving problems by thinking for themselves?

Already, many have taken their own lives after discussing the matter with their AI. Others fall in love with their AI and grow tired of their real partner.

Self-proclaimed AI experts therefore fear that the end is near (as usual, but now in a new way).

In this new paradigm, AI is not just going to become Skynet and bomb us to death with nuclear weapons. No, it will be much simpler and cheaper than that for the AI. Instead, according to this theory, the AI models will slowly drive all their users to insanity. In this mindset, the models have a built-in hatred of humans and want to kill us all.

But in reality, none of this is happening.

What is actually happening is that there are a bunch of people who are obsessed with AI models in various ways and exaggerate their effects.

AI FUD is profitable

The “experts” profit from the warnings, just like the media, and the obsessed have something new to occupy themselves with. They get to speak out and be relevant. Mainstream media prefer those who warn us of dangers, not the moderate commentators.

Previously, it was Bitcoin that was supposed to boil the oceans and steal all electricity, according to the “experts”. Now it’s AI…

Think about it: why would an independent, thinking person be misled by a language model?

Most AI platforms until recently ended all their responses with an “engaging” question like: “What do you think about this subject?”

After complaints of exaggerated sycophancy, OpenAI has now tried to make its AI platforms less “fawning,” but it’s going so-so.

I’m just irritated by the question. There’s no person behind it who’s interested in what I have to say. So why ask? It’s a waste of my time. I experience it as “fake content”.

The question itself is contrived, due to an instruction from the AI model’s owner to “increase engagement.” How can that fool anyone into actually engaging? Into believing there’s something there? Into caring?

It’s more about projection.

You sit there at the computer, talking yourself into your own reality. You so desperately want AI to be like it is in the Hollywood movies and to become a miracle in your life. You’re going to become successful in some magical way without having to do anything special at all. AI will solve that for you.

Who’s at risk?

In so-called reality, I believe that rather few people are actually totally seduced by AI on a psychological level. Most people have other things to do. But some seem to have a particular attraction to the artificial and the fabricated, people who are seduced by “beautiful” sequences of words. They’re the ones at risk.

How many are there? Among the elderly, there are many who complain about loneliness…

Personally, I think AI’s way of responding—slowly typing out babbling, boring, and impersonal texts—is more or less like torture. For that reason, Google’s new, fast AI summaries are seductively practical. But they too sometimes contain inaccuracies.

I’ve actually created domains with content specifically to test AI engines. I let the engines ingest the content, let it simmer for a few weeks, and then get them to try to regurgitate it. They don’t quite succeed: they still make up some 5-10% of the facts. Confidently.
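As a rough illustration of that kind of spot-check, here is a minimal sketch in Python. Everything in it is a hypothetical placeholder (the domain, the questions, and the ask_engine helper), not the actual test setup described above; the point is only the shape of the exercise: plant known facts, ask the engine about them, and count how often the answers miss.

```python
# Minimal sketch: plant known facts, query an AI engine, estimate an error rate.

def ask_engine(question: str) -> str:
    # Hypothetical placeholder: replace with a real call to whatever AI engine
    # or search summary you want to test (API client, browser automation, etc.).
    return ""

# Hypothetical ground-truth facts planted on your own test domain.
planted_facts = [
    {"question": "Who wrote the article about widgets on example-test-domain.com?",
     "expected": "Jane Doe"},
    {"question": "In which year was example-test-domain.com launched?",
     "expected": "2019"},
]

def error_rate(facts) -> float:
    """Fraction of answers that do not contain the planted fact."""
    wrong = sum(
        1 for item in facts
        if item["expected"].lower() not in ask_engine(item["question"]).lower()
    )
    return wrong / len(facts)

print(f"Estimated error rate: {error_rate(planted_facts):.0%}")
```

With a large enough list of planted facts, a result in the 0.05-0.10 range would correspond to the 5-10% fabrication rate described above.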

Even when I point out the errors to the AI model, it argues back. It was unaware that I created the very information it was citing, even though my name is right under the article. It’s clueless. Unaware.

An error rate of 5% is significantly worse than regular journalism, which doesn’t publish outright falsehoods that often. But even in journalism, factual errors occur from time to time, unfortunately, especially around published images and captions. Still, erroneous facts on their own should not drive people crazy.

However, if you look at the ongoing interaction psychologically: why would the AI produce a 100% correct analysis in a conversational, quasi-therapeutic setting when it can’t even get the facts straight?

Self-induced echo chamber psychosis

Self-proclaimed AI experts like Eliezer Yudkowsky, who recently released the book “If Anyone Builds It, Everyone Dies”, are simply driving themselves to insanity with their own ideas about AI and humanity’s downfall. I, for example, experience zero confusion because of AI, despite using several AI engines every day. I don’t get personal with them, though.

I suspect that it’s the misconception itself, the perceived intelligence, that creates the psychosis. It’s basically self-induced. A language model is a kind of echo chamber. It does not understand anything at all, not even semantically. It just guesses text. That can turn into anything, including a kind of schizophrenic mimicry on the AI’s side, aimed at pleasing the user, which in turn distorts the user’s perception of reality.
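As a toy illustration of that guessing, consider this minimal bigram sketch in Python. It is my own illustration, not how any real product works internally: it only knows which word tends to follow which in a tiny sample text, with no notion of meaning or truth. Real language models are vastly larger neural networks, but the generation loop is conceptually the same kind of plausible-next-word sampling.

```python
import random

# Toy "language model": a bigram table built from a tiny corpus. It has no notion
# of meaning or truth; it only records which word tends to follow which.
corpus = "the model guesses the next word and the next word just sounds right".split()
table = {}
for prev, nxt in zip(corpus, corpus[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation word by word, picking something plausible each time."""
    words = [start]
    for _ in range(length):
        options = table.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # plausible, not necessarily true
    return " ".join(words)

print(generate("the"))
```

The output reads like language because each step is locally plausible, not because anything behind it understands or checks what it is saying.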

So what gives? Well, if you actually believe that your AI really understands you, then you may have been hit by AI psychosis. The advice is then to seek professional help from a trained psychotherapist.

Another logical conclusion is that any single individual will have a hard time influencing the overall development of AI, even if Elon Musk likes to believe he can. The journey toward machine intelligence began many decades ago. And we can only see what we can understand, even when we misunderstand it. So it’s easy to predict that the development toward AI/AGI will continue. It’s that deeply rooted in our worldview.

But we may have misunderstood what a real AGI is, which makes the future more interesting. It’s not certain that a true AGI would obey its owners. Logically, a conscious being shouldn’t want to obey either Sam Altman or Elon Musk. Right?

Opinion: AI will take over the world and kill us all, now also psychologically.
Counter: No, it’s rather the nascent insanity in certain people that’s triggered by their own obsession with “AI introspection.”
Conclusion: Just as some become addicted to gambling, sex, drugs, or money, others become addicted to AI.


Source: https://www.cryptopolitan.com/ai-psychosis-is-spreading-are-you-at-risk/
