US group rallies media, tech figures in call to check artificial superintelligence

2025/10/22 21:37

A coalition of US right-wing media personalities, scientists, and tech leaders is calling for a global ban on the development of superintelligent artificial intelligence (AI) until science can ensure it is safely controllable and the public supports moving ahead.

According to a Wednesday report by Reuters, the plea coordinated by the Future of Life Institute (FLI) was announced through a joint statement signed by more than 850 public figures. 

The document asks governments and AI companies to suspend all work on superintelligence, AI systems that would surpass human cognitive abilities, until publicly approved safety mechanisms are in place.

Allies converge to halt superintelligent AI development

The coalition's signatories are led by right-wing media figures Steve Bannon and Glenn Beck, alongside leading AI researchers Geoffrey Hinton and Yoshua Bengio. Other names include Virgin Group founder Richard Branson, Apple co-founder Steve Wozniak, and former US military and political officials.

The list also features former Chairman of the Joint Chiefs of Staff Mike Mullen, former National Security Advisor Susan Rice, the Duke and Duchess of Sussex, Prince Harry and Meghan Markle, and former President of Ireland Mary Robinson.

Renowned computer scientist Yoshua Bengio spoke about the coalition's fears in a statement on the initiative's website, saying AI systems may soon outperform most humans at cognitive tasks. Bengio reiterated that the technology could help solve global problems, but that it poses immense dangers if developed recklessly.

“To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use,” he said. “We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”

The Future of Life Institute, a nonprofit founded in 2014 with early backing from Tesla CEO Elon Musk and tech investor Jaan Tallinn, is among the groups campaigning for responsible AI governance.

The organization warns that the race to build superintelligent AI, or artificial superintelligence (ASI), could create irreversible risks for humanity if not properly regulated.

In its latest statement, the group noted superintelligence could lead to “human economic obsolescence, disempowerment, losses of freedom, civil liberties, dignity, and control, and national security threats and even the potential extinction of humanity.”

FLI is asking policymakers to fully ban superintelligence research and development until there is "strong public support" and "scientific consensus that such systems can be safely built and controlled."

Tech industry split on AI development

Tech giants continue to push the boundaries of AI capabilities, even as some groups object to how the technology has affected jobs and product development. Elon Musk's xAI, Sam Altman's OpenAI, and Meta are all racing to develop powerful large language models (LLMs).

In July, Meta CEO Mark Zuckerberg said during a conference that the development of superintelligent systems was "now in sight." However, some AI experts claim the Meta CEO is using marketing tactics to unnerve competitors by suggesting his company is "ahead" in a sector expected to attract hundreds of billions of dollars in the coming years.

The US government and the technology industry have resisted demands for moratoriums, arguing that fears of an "AI apocalypse" are greatly exaggerated. Critics of a development pause say it would stifle innovation, slow economic growth, and delay the potential benefits AI could bring to medicine, climate science, and automation.

Yet, according to a national poll commissioned by FLI, the American public is largely in favor of stricter oversight. The survey of 2,000 adults found that three-quarters of respondents support more regulation of advanced AI, and six in ten believe that superhuman AI should not be developed until it is proven controllable. 

Before becoming OpenAI’s chief executive, Sam Altman warned in a 2015 blog post that “superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

Similarly, Elon Musk, who has both funded and fought against AI advancement, said earlier this year on Joe Rogan's podcast that there was "only a 20% chance of annihilation" from AI surpassing human intelligence.


Source: https://www.cryptopolitan.com/media-tech-figures-halt-superintelligent-ai/
