We’re all aware of how powerful artificial intelligence is.
The problem is, bad actors are aware of this too.
Vibe coding has democratized people’s ability to create computer programs, acting essentially as a human-language-to-computer-language translator for non-coders. It has had the same democratizing effect for threat actors, or hackers. (READ: Vibe coding: What it means to have AI write computer code, and the risks it entails)
Through large language models (LLMs) and other AI-enabled automation tools, attackers are scaling up their operations. “In 2026,” the firm said, “we are witnessing the total industrialization of cyber threats, where the barrier to entry has vanished…”
“For example, rather than spending millions to develop a custom exploit, a 2026 adversary might use a low-cost GenAI subscription to automate credential harvesting across thousands of targets,” allowing attacks to achieve “frictionless scale.”
The problem is compounded by the fact that both enterprise and individual targets make use of stacks of cloud services and SaaS (software-as-a-service) products, with each connected service potentially acting as a point of entry for an attacker.
How do they do it?
In one key example, the firm investigated a threat actor called GRUB1, in a campaign where Drift, a customer lead-generation chat app that integrates with the SaaS platform Salesforce, was compromised, exposing “hundreds of corporate tenants simultaneously.”
The attacker made use of “automated secret-scanning tools like TruffleHog,” which scoured for “high value credentials buried in code.” With AI’s general ability to parse through troves of data, attackers were able to find the information they needed.
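To get a sense of how mechanical this step is, here is a minimal sketch of the kind of pattern-based scanning such tools perform. The regular expressions and directory walk below are illustrative assumptions, not TruffleHog’s actual detectors, which are far more numerous and can verify candidate secrets against live services.

```python
import re
from pathlib import Path

# Illustrative credential patterns only; real scanners ship hundreds
# of detectors for specific providers and token formats.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][0-9A-Za-z]{20,}['\"]"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(root: str) -> None:
    """Walk a checked-out repository and flag lines matching secret patterns."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

scan_repo(".")  # point this at any downloaded codebase
```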
The firm found that GRUB1 used AI “to pinpoint specific database tables that contained the most valuable information just moments before gaining unauthorized access to production instances” or the live environment with which end users interact.
Once the attackers harvested the “keys to the kingdom,” they used generative AI or LLMs in order to “navigate unfamiliar, complex SaaS environments.”
What does this mean? It means that even if an attacker is not that familiar with a specific system architecture, they have a chance at effectively navigating it — and exploiting it — because an LLM can guide them. That’s the democratization of hacking.
Or as Cloudflare puts it, “The GRUB1 campaign demonstrates that unsophisticated, individual actors can now execute high-impact breaches.”
“Using LLMs to bridge knowledge gaps in specialized software like Salesforce,” attackers can now “locate and exfiltrate sensitive data with surgical precision.”
“An actor who previously lacked the skills to craft a convincing phishing email or write custom malware can now leverage an LLM to generate them rapidly and at scale, significantly lowering the barrier to entry for highly effective operations,” it added.
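To see how short the distance from zero knowledge to a working plan has become, consider a minimal sketch of the pattern. It uses the real openai Python client, but the model name and prompt are illustrative assumptions; a defender can run the same few lines to audit what an LLM would reveal about their own stack.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A hypothetical prompt showing how an LLM bridges a knowledge gap:
# someone who has never touched Salesforce gets a guided tour of it.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "I have API access to a Salesforce org I have never seen. "
            "Which standard objects and fields typically hold customer "
            "contact details, and how would I query them with SOQL?"
        ),
    }],
)
print(response.choices[0].message.content)
```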
Likewise, the security of data in this AI shift is now “only as strong as the most over-privileged integration in your tech stack.”
That means, if a company allows a connected service too much access — whether unwittingly or not — that’s going to be a point of vulnerability.
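What checking for that might look like can be sketched in a few lines. The integrations and scope names below are hypothetical; in practice, the granted scopes would come from an identity provider’s or SaaS platform’s API for listing OAuth grants.

```python
# A minimal least-privilege audit for connected integrations.
# Both dictionaries below are hypothetical examples.

# What each integration actually needs to do its job
REQUIRED_SCOPES = {
    "chat-widget": {"read:leads"},
    "analytics-sync": {"read:events"},
}

# What each integration was actually granted
GRANTED_SCOPES = {
    "chat-widget": {"read:leads", "read:contacts", "api:full_access"},
    "analytics-sync": {"read:events"},
}

for app, granted in GRANTED_SCOPES.items():
    excess = granted - REQUIRED_SCOPES.get(app, set())
    if excess:
        # Every excess scope is a potential point of entry if the app is breached
        print(f"{app} is over-privileged: {sorted(excess)}")
```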
We shouldn’t forget that LLMs are basically a form of SaaS too. As we speak, they are being integrated into workflows in many offices.
What do ChatGPT, Google Gemini, and other chatbots do? They “converse” with you, and you respond to them. Whether you’re asking them to assist with customer data, for writing suggestions, or for code, your responses are data for them. Over time, with the memory capability of LLMs, that becomes a huge amount of data, and a potentially exploitable data honeypot for hackers.
Cloudflare said, “The unprecedented adoption rate among consumers and enterprises means that vast quantities of proprietary source code, financial details, and personally identifiable information are being routinely funneled into these systems.”
“This creates a massive aggregation of sensitive data” and in turn, the “AI system itself becomes the most lucrative target for future exfiltration.”
Workers feeding data into computer systems is nothing new. But the scale at which AI systems are being deployed and adopted, and their currently very centralized nature, make them larger honeypots.
For example, a worker might traditionally input some data in a Word file, and then a different set of data in a spreadsheet. Essentially, there’s a siloed environment. Now, a worker might feed or upload both the Word file and spreadsheet into an AI system, breaking the silo.
And when you combine data sets, it leads to more informed actions. The same goes for threat actors.
“In other words,” Cloudflare explained, “the risk is no longer just a single leaked document, but the potential for a determined adversary to compromise the ‘corporate brain’…”
In a workplace where our personal data and work-related data often co-mingle, the acceleration of attacks affects not just one’s company but, ultimately, the individual themselves. Whether one’s data is breached via the office or outside it matters little to attackers.
At the end of 2025, we compiled various reports documenting how cyber attackers have used LLMs to craft more convincing fake personas and more effective messages in their phishing and social engineering strategies. (That same year, an International Data Corporation report also found that 78% of the organizations surveyed in the Philippines said they faced AI-powered threats from 2024 to 2025.)
At the basic level, LLMs allowed attackers to create messages that were mostly free of grammatical errors, formerly an indicator of a good percentage of phishing emails. They took it a step further by having an LLM create a fake persona (e.g., “create a message in the tone of a high-level professional”) that is more believable to the target.
In 2026, Cloudflare documented another leveling up as it found North Korean hackers who had further augmented their fake personas with AI-driven deepfakes. They did this through real-time rendering that allowed them to “bypass video interviews, ultimately funneling hundreds of millions of dollars in revenue back to the regime.”
Real-time rendering means that the deepfake is being rendered while the operative is talking, donning what is essentially a digital, deepfake mask for their face in order to trick the target.
This “critical evolution” is powering the “industrialization of North Korean remote IT worker schemes.” The schemes are sophisticated: operatives make use of US-based “laptop farms” controlled from abroad through remote-access software, and they create “comprehensive digital personas” on platforms such as LinkedIn and the code repository GitHub.
All of this is done to maintain the illusion of domestic residency in the US, with the ultimate goal of gaining “unauthorized access to sensitive data and secure environments.”
Identifying one such threat group as “PutridSlug,” Cloudflare said the group uses deepfake video and audio “to impersonate company executives during Zoom calls to target tech firm employees,” taking advantage of a victim’s established trust.
Another group called “PatheticSlug” posed as journalists from legitimate news outlets to conduct interviews with policy experts and gather “off-the-record insights to provide the regime with critical visibility into the diplomatic and military strategies of perceived global and regional adversaries.” The group also targeted global embassies, and created “high-fidelity impersonations of trusted diplomatic contacts” in what has become AI- and deepfake-assisted cyber espionage.
While the report also discussed impersonation attacks coming from Russia, Iran, and China, it is North Korea that was explicitly found to have adopted clear, specific deepfake and LLM-enabled tactics. It also warned: “Major 2026 diplomatic events (ASEAN in the Philippines, APEC in China) are prime targets for state-backed intelligence gathering.”
The report also noted several ways to immediately counter these threats.
Companies need to establish clear rules on how workers use chatbots at work. Pasting sensitive documents, code, or customer data into AI systems can unintentionally expose valuable information if accounts are hacked or devices are compromised.
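One lightweight way to back such rules up, sketched here under illustrative assumptions, is a redaction filter that scrubs sensitive patterns from prompts before they reach any chatbot. The patterns below are placeholders; a real deployment would rely on a fuller data-loss-prevention engine.

```python
import re

# Illustrative patterns for data that should never reach an external chatbot
SENSITIVE = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED-CARD]"),
    (re.compile(r"(?i)password\s*[:=]\s*\S+"), "[REDACTED-PASSWORD]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
]

def redact(prompt: str) -> str:
    """Scrub sensitive values from a prompt before it is sent to an LLM."""
    for pattern, placeholder in SENSITIVE:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarize: jane@acme.com, password=hunter2, SSN 123-45-6789"))
# Summarize: [REDACTED-EMAIL], [REDACTED-PASSWORD], SSN [REDACTED-SSN]
```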
Add real human verification to remote hiring. Deepfakes and stolen identities mean companies can no longer rely on purely online hiring. Stronger checks — such as biometric verification and confirming a person’s physical location — help ensure applicants are who they claim to be.
Company-issued laptops can also be restricted to approved locations, preventing foreign operatives from secretly controlling devices from abroad.

Upgrade email defenses for AI-generated attacks. Phishing messages are now more convincing and constantly changing. Traditional spam filters often miss them.
Newer defenses use AI to analyze behavior and patterns, allowing organizations to detect suspicious activity or compromised accounts in real time — even when attacks originate from inside the network. – Rappler.com

