The post Vitalik calls out double standards in AI safety regulation dilemma appeared on BitcoinEthereumNews.com.

Vitalik calls out double standards in AI safety regulation dilemma


Vitalik Buterin has shared concerns about the increasingly controversial ways companies and governments are invoking the concept of “AI safety.”

Buterin explained on the social media platform X that leading companies within the AI space, like Anthropic, cannot dictate what measures are suitable or not for safety, as that leads to a system where the rules are crafted by the strongest.

Can ‘AI safety’ be used as a global dominance tool?

Vitalik Buterin recently took to the social media platform X to share his concerns about the concept of AI safety being appropriated by large corporations and national interests.

For example, Anthropic recently received praise for refusing to allow the Department of War (DoW) or other government entities to use its Claude models for mass surveillance or fully autonomous weaponry.

However, the company also abandoned its pause-on-risk safety pledge, which had committed it to unconditionally halt all training and deployment until safety measures caught up if it ever developed an AI model whose capabilities outpaced its ability to prove the model was safe.

Vitalik pointed out that Anthropic’s earlier criticism of competitors for learning from Claude’s outputs drew sharp backlash, particularly from critics in China, who argued that Anthropic itself trained Claude on the vast public knowledge of the internet.

Anthropic claims that its problem with open-source competitors is that they lack the necessary safety guardrails and therefore pose risks. But, Buterin asks, why should Anthropic get to decide which safety measures are suitable?

Buterin stated that Anthropic’s actions suggest a system where “rules are crafted by the strongest.”

He expressed a fear that if AI safety becomes indistinguishable from an “our company/our country deserves to run the world” mentality, it will create a more dangerous world.

He argues that if safety regulations inevitably exempt national security organizations, the regulations will become fragile. This is especially relevant as recent news confirms that major AI labs are increasingly seeking multi-billion-dollar partnerships with defense contractors to provide secure AI environments for military use.

Is restricting AI dangerous?

Years ago, Vitalik became one of the Future of Life Institute’s (FLI) largest donors. In 2021, the creators of Shiba Inu (SHIB) gifted him a massive supply of the token. At the peak of the dog-coin bubble, the holdings’ book value exceeded $1 billion. Vitalik hurried to donate the funds before interest faded, sending roughly $500 million in SHIB to FLI.

At the time, the FLI was focused on risks like bio-threats and nuclear war. However, FLI has since shifted its focus toward aggressive political action and lobbying, often pushing for regulations that Vitalik finds worrying. Specifically, he disagrees with their focus on putting guards into AI models to make them refuse “bad stuff.”

Vitalik views these restrictions as fragile solutions because they can be easily bypassed by jailbreaking or fine-tuning.

More importantly, he fears these strategies lead to a dark place where open-source AI is banned to maintain a good-guy monopoly.

Vitalik is instead advocating for a system called defensive accelerationism (d/acc). This philosophy suggests that the best way to handle dangerous technology is to build and open-source the shields first.

He recently allocated $40 million toward projects like secure hardware, biodefense, and cybersecurity to support his ideology.

Secure hardware aims to make computer chips resistant to compromise so they cannot be used for mass spying. Biodefense involves developing advanced air filtering and passive PCR testing to detect and stop pandemics early. Cybersecurity investments will improve software verifiability so that AI-driven attacks cannot easily take down critical infrastructure.

Source: https://www.cryptopolitan.com/vitalik-calls-out-double-standards-ai-safety/

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.