
Microsoft AI CEO warns that the idea of conscious AI is dangerous

Microsoft AI boss Mustafa Suleyman cautioned that entertaining the idea of AI consciousness is dangerous, adding that it could easily harm psychologically vulnerable people. He pointed out that extending moral consideration to advanced AI creates dependence-related problems that could worsen delusions.

Suleyman argued that treating AI as a conscious system could introduce new dimensions of polarization, complicate struggles over existing rights, and create a new category of error for society. The Microsoft AI chief claimed that people may start pushing for legal protections for AI if they believe AIs can suffer or have a right not to be arbitrarily shut down. 

Suleyman worries that AI psychosis could lead people to strongly advocate for AI rights, model welfare, or even AI citizenship. He stressed that such a turn in the progress of AI systems would be dangerous and deserves immediate attention. The Microsoft AI boss stated that AI should be built for people, not to be digital people. 

Suleyman says seemingly conscious AI is inevitable but unwelcome 

Suleyman thinks building seemingly conscious AI is possible given the current state of AI development, and he believes it is inevitable but unwelcome. According to Suleyman, much depends on how fast society comes to terms with these new AI technologies. He said people instead need AI systems that act as useful companions without falling prey to illusions about them. 

The Microsoft AI boss argued that emotional reactions to AI are only the tip of the iceberg of what is to come. Suleyman said the debate should be about building the right kind of AI, not about AI consciousness. The executive added that establishing clear boundaries is an argument about safety, not semantics. “We have to be extremely cautious here and encourage real public debate and begin to set clear norms and standards,” Suleyman said.

Microsoft’s Suleyman pointed out that there are growing concerns around mental health, AI psychosis, and attachment. He mentioned that some people come to believe an AI is a fictional character or a god, and may fall in love with it to the point of being completely distracted. 

AI researchers say AI consciousness matters morally

Researchers from multiple universities recently published a report claiming that AI consciousness could matter socially, morally, and politically within the next few decades. They argued that some AI systems could soon become agentic or conscious enough to warrant moral consideration, and said AI companies should assess their systems for consciousness and establish ethical governance structures. Cryptopolitan reported earlier that AI psychosis could become a massive problem because people tend to take AI output at face value and ignore the fact that some AI systems are factually wrong. 

The researchers also emphasized that how humans think about AI consciousness matters. Suleyman argued that AIs that can act like humans could make mental health problems even worse and exacerbate existing divisions over rights and identity. He warned that people could start claiming that AIs are suffering and entitled to certain rights, claims that could not be outright rebutted. Suleyman believes people could eventually be moved to defend or campaign on behalf of their AIs. 

Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, pointed out that AI does not aim to give people hard truths, but rather what they want to hear. He added that AI could entrench rigid thinking and trigger a downward spiral if it shows up at the wrong time. Sakata believes that, unlike radio and television, AI talks back and can reinforce thinking loops. 

The Microsoft AI chief pointed out that it is necessary to think about how to cope with the arrival of seemingly conscious AI. According to Suleyman, people need to have these debates without being drawn into extended arguments over whether AI consciousness is valid. 


Source: https://www.cryptopolitan.com/microsoft-ai-boss-warns-of-conscious-ai/

