
Over 200 leaders, experts demand global ‘red lines’ for AI use

More than 200 prominent politicians, public figures, and scientists released a letter calling for urgent, binding international “red lines” to prevent dangerous artificial intelligence (AI) use. The letter was released to coincide with the 80th session of the United Nations General Assembly (UNGA).

The illustrious list of signees included ten Nobel Prize winners, eight former heads of state and ministers, and several leading AI researchers. They were joined by over 70 organizations worldwide, including Taiwan AI Labs, the Foundation for European Progressive Studies, AI Governance and Safety Canada, and the Beijing Academy of Artificial Intelligence.

“AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers,” read the letter. “We urgently call for international red lines to prevent unacceptable AI risks.”

Among the concerned figures putting their name to the call for AI caution was Nobel Peace Prize laureate Maria Ressa, who announced the letter in her opening speech at the UN General Assembly’s High-Level Week on Monday.

She warned that “without AI safeguards, we may soon face epistemic chaos, engineered pandemics, and systematic human rights violation.”

Ressa added that “history teaches us that when confronted with irreversible, borderless threats, cooperation is the only rational way to pursue national interests.”

The brief letter, published on a dedicated site called ‘red-lines.ai’, raised fears that AI could soon “far surpass human capabilities” and, in so doing, escalate risks such as widespread disinformation and the manipulation of individuals. This, it claimed, could lead to national and international security concerns, mass unemployment, and systematic human rights violations.

“Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world,” the letter warned. “Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years.”

In order to meet this challenge, the various public figures and organizations who signed the letter called on governments to act decisively, “before the window for meaningful intervention closes.”

Specifically, they suggested that an international agreement on clear and verifiable red lines, one that builds upon and enforces existing global frameworks and voluntary corporate commitments, is necessary to prevent these “unacceptable” risks.

“We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026,” said the letter.

This not-too-distant date was chosen because, according to the letter, the pace of AI development means that risks once seen as speculative are already emerging.

“Waiting longer could mean less room, both technically and politically, for effective intervention, while the likelihood of cross-border harm increases sharply,” said the signees. “That is why 2026 must be the year the world acts.”

Former President of the UN General Assembly Csaba Kőrösi, one of the notable signatories of the letter, argued that “humanity in its long history has never met intelligence higher than ours. Within a few years, we will. But we are far from being prepared for it in terms of regulations, safeguards, and governance.”

This sentiment was echoed by Ahmet Üzümcü, former Director General of the Organization for the Prohibition of Chemical Weapons and another signee of the letter, who said, “it is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly.”

Former President of Ireland Mary Robinson and former President of Colombia Juan Manuel Santos also put their names to the call. In addition to these international leaders were Nobel Prize recipients in chemistry, economics, peace and physics, as well as popular and award-winning authors such as Stephen Fry and Yuval Noah Harari.

“For thousands of years, humans have learned—sometimes the hard way—that powerful technologies can have dangerous as well as beneficial consequences,” said Harari, author of the 2011 book ‘Sapiens: A Brief History of Humankind,’ which spent 182 weeks on The New York Times best-seller list. “With AI, we may not get a chance to learn from our mistakes, because AI is the first technology that can make decisions by itself, invent new ideas by itself, and escape our control.”

He added that “humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”

As well as being timed for the opening of the latest UN General Assembly, the letter’s release coincidentally fell on the same day that OpenAI and Nvidia (NASDAQ: NVDA) announced a “landmark strategic partnership” for the deployment of at least 10 gigawatts of Nvidia systems, along with a $100 billion investment from Nvidia to help power OpenAI’s next generation of AI infrastructure.

This deal between two of the world’s largest players in the AI space served to underscore the urgency of the AI red line letter.

Possible red lines

The website for the letter also provided a few examples of what these hypothetical red lines might look like, suggesting that they could focus either on AI behaviors (what AI systems can do) or on AI uses (how humans and organizations are allowed to use such systems).

The site emphasized that the campaign did not endorse any specific red lines, but it provided several examples related to the areas of most concern. These included prohibiting: the delegation of nuclear launch authority, or critical command-and-control decisions, to AI systems; the deployment and use of weapon systems that kill humans without meaningful human control and accountability; the use of AI systems for social scoring and mass surveillance; and the uncontrolled release of cyber-offensive agents capable of disrupting critical infrastructure.

In terms of the feasibility of any of these controls, the site noted that certain red lines on AI behaviors are already being operationalized in the ‘Safety and Security’ frameworks of AI companies, such as Anthropic’s Responsible Scaling Policy, OpenAI’s Preparedness Framework, and DeepMind’s Frontier Safety Framework.


A realistic goal

To further demonstrate that the letter’s goals are reasonable, the site gave a few real-world examples from history which show that “international cooperation on high-stakes risks is entirely achievable.”

Two such cases were the Treaty on the Non-Proliferation of Nuclear Weapons (1970) and the Biological Weapons Convention (1975), which were negotiated and ratified at the height of the Cold War, “proving that cooperation is possible despite mutual distrust and hostility.”

More recently, it also pointed to the 2025 ‘High Seas Treaty’, which “provided a comprehensive set of regulations for high seas conservation and serves as a sign of optimism for international diplomacy.”


If controlled, AI can be a force for good

The concerns raised by the public figures, along with calls for increased rules and protections, came the same day that the UN’s climate chief, Simon Stiell, gave an interview to U.K. broadsheet The Guardian, in which he said governments must step in to regulate AI technology.

Stiell argued that if governments and authorities control AI, it could prove a “gamechanger” when it comes to combating the climate crisis.

“AI is not a ready-made solution, and it carries risks. But it can also be a gamechanger,” the UN climate chief told The Guardian. “Done properly, AI releases human capacity, not replaces it. Most important is its power to drive real-world outcomes: managing microgrids, mapping climate risk, guiding resilient planning.”

Stiell’s comments demonstrate that there is a desire among current international leaders, at least at the UN, to see appropriate laws, regulations, and controls for AI, as well as to harness the technology’s potential for positive change.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.


Watch: Demonstrating the potential of blockchain’s fusion with AI


Source: https://coingeek.com/over-200-leaders-experts-demand-global-red-lines-for-ai-use/

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact service@support.mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.
