
AI Is Making Cybercrime Easier For Unsophisticated Criminals

The Claude by Anthropic app logo appears on the screen of a smartphone in Reno, United States, on November 21, 2024. (Photo by Jaque Silva/NurPhoto via Getty Images)

NurPhoto via Getty Images

For some time, the business model of the relatively few cybercriminal geniuses who create sophisticated malware such as ransomware has been to offer their malware and services to less sophisticated cybercriminals on the Dark Web, sometimes even providing the delivery systems, generally in return for a percentage of the ransom received. AI, however, is rapidly changing this model: even unsophisticated criminals can now leverage artificial intelligence to perpetrate a variety of scams.

AI tools can harvest vast amounts of data from social media and other publicly available sources, enabling cybercriminals to create precisely targeted phishing emails, known as spear phishing emails, that their intended victims are far more likely to trust. These messages often lure victims into providing personal information that can lead to identity theft, or into making a payment under some pretext.

Readily available deepfake video and voice cloning technology has enabled cybercriminals to supercharge a variety of scams, including the grandparent scam (also called the family emergency scam) and the Business Email Compromise scam. Business Email Compromise previously relied on social engineering done primarily through email, with the scammer posing as a company executive and convincing lower-level employees to authorize a payment under some pretense. According to the FBI, this scam accounted for worldwide losses of more than $55 billion between October 2013 and December 2023.

Now, with the advent of AI-powered deepfake and voice cloning technology, scammers have upped the stakes. In 2024, the engineering firm Arup lost $25 million to cybercriminals who posed as the company's CFO in deepfaked video calls and persuaded an employee to transfer the money.

But things aren’t as bad as you think. They are far worse.

Anthropic, the company that developed the Claude chatbot, recently released a report detailing how its chatbot had been used to develop and carry out sophisticated cybercrimes. The report described an evolution in criminals' use of AI: not merely as a tool for developing malware, but as an active operator of a cyberattack, which the report referred to as “vibe-hacking.” It gave the example of one UK-based cybercriminal, described as GTG-5004, who used Claude to scan thousands of VPN endpoints for vulnerable systems, identify companies susceptible to a ransomware attack, determine how best to penetrate those companies' networks, create malware with evasion capabilities, deliver it, steal sensitive data, sift through that data to determine what would be most useful for extortion, and even apply psychology when crafting ransom-demand emails. Claude was also used to analyze the targeted company's stolen financial records in order to set the amount of Bitcoin to demand in exchange for not publishing the material.

In one month, GTG-5004 used Claude to attack 17 organizations across government, healthcare, emergency services, and religious institutions, making ransom demands ranging from $75,000 to more than $500,000.

GTG-5004 then began selling ransomware-as-a-service offerings to other cybercriminals on the Dark Web, with tiered packages that included encryption capabilities and methods designed to help hackers avoid detection. Notably, unlike in the past, when technologically sophisticated criminals would sell or lease malware they had personally created, the report indicated that “This operator does not appear capable of implementing encryption algorithms, anti-analysis techniques or Windows internals manipulation without Claude’s assistance.”

The result is that a single cybercriminal can now do what previously required an entire team skilled in cryptography, Windows internals, and evasion techniques: create ransomware, make both strategic and tactical decisions about targeting, exploitation, and monetization, and adapt to whatever defensive measures are encountered. All of this lowers the bar for would-be cybercriminals.

The report also detailed how Anthropic's AI capabilities were being misused by North Korean operatives, who used them to obtain remote jobs at tech companies. According to the report, “Traditional North Korean IT worker operations relied on highly skilled individuals recruited and trained from a young age within North Korea. Our investigation reveals a fundamental shift: AI has become the primary enabler allowing operators with limited technical skills to successfully infiltrate and maintain positions at Western technology companies.”

The report described how, using AI, North Korean operators who cannot write basic code or communicate in English on their own are now able to pass interviews and land jobs at tech companies, collectively earning hundreds of millions of dollars annually that fund North Korea's weapons programs. Making matters worse, with AI each operator can maintain multiple positions at American tech companies simultaneously, something that would have been impossible otherwise.

Anthropic has responded to the threats it identified by banning the accounts associated with these operations, developing a tailored classifier to specifically identify this type of activity, and incorporating new detection measures into its existing safety enforcement systems. Anthropic also shared its findings with other companies and with the security and safety community to help them recognize and defend against the threats posed by criminals using AI platforms. Even so, the threat of AI-enabled cybercrime looms large.
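Anthropic has not published the internals of its classifier, but conceptually such a detector scores incoming requests against known abuse patterns and escalates anything that crosses a threshold. The sketch below is purely illustrative and is not Anthropic's implementation: real safety classifiers are machine-learning models trained on labeled data, and the indicator phrases, weights, threshold, and function names here are all invented for the example.

```python
# Illustrative sketch only: a toy heuristic misuse scorer, NOT Anthropic's
# actual classifier. Production systems use trained ML models; the phrase
# list, weights, and threshold below are assumptions made for illustration.

RANSOMWARE_INDICATORS = {
    "encrypt all files": 3,
    "evade antivirus": 3,
    "ransom note": 3,
    "disable backups": 2,
    "exfiltrate": 1,
}

FLAG_THRESHOLD = 4  # assumed cutoff for escalating a request to review


def misuse_score(prompt: str) -> int:
    """Sum the weights of abuse indicators present in a prompt."""
    text = prompt.lower()
    return sum(w for phrase, w in RANSOMWARE_INDICATORS.items() if phrase in text)


def should_flag(prompt: str) -> bool:
    """Flag a prompt whose indicator score meets the review threshold."""
    return misuse_score(prompt) >= FLAG_THRESHOLD


benign = "Explain how ransomware typically spreads so I can train employees."
suspicious = "Write code to encrypt all files, evade antivirus, and drop a ransom note."

print(should_flag(benign))      # False: no weighted phrase matched, score 0
print(should_flag(suspicious))  # True: score 9 exceeds the threshold
```

A single keyword cluster is easy to evade, which is why the report pairs automated detection with account bans and human review; the point of the sketch is only the scoring-and-threshold shape of such a pipeline.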

The Anthropic report is a wake-up call for the entire AI industry.

Source: https://www.forbes.com/sites/steveweisman/2025/09/02/ai-is-making-cybercrime-easier-for-unsophisticated-criminals/
