AI deepfake fraud has moved from a fringe threat to a mainstream criminal tactic. Experts at TRG Datacenters uncovered that in just three years, deepfake-related fraud has surged by more than 2,000%, draining millions from businesses and individuals alike. What makes this moment unsettling is not only the scale of the losses but also the speed at which trust itself is being undermined.
The experts stress that the problem is not artificial intelligence per se. It is how casually the technology is being deployed, often without guardrails, accountability, or even a basic understanding of its limits. AI is increasingly treated as neutral, objective, even benign. That assumption is proving expensive.
“AI is a powerful tool, of course, but only if we remember it is just that: a tool. It is not a friend, not a companion, and not an infallible source of truth. Used carelessly, it can erode creativity, weaken education, and even cause real harm,” the expert noted. “We can delegate certain tasks to AI and free up time and resources for ourselves, but some jobs are just not suitable for artificial intelligence.”
The most visible threat is impersonation. Video deepfakes grab headlines, but they are only part of the story. In the UK, engineering giant Arup lost £20 million after fraudsters used AI-generated video to impersonate senior executives on a call. The scam worked because it looked routine; that is precisely the danger.
Voice cloning has become even more effective. A short audio sample is often enough to recreate a convincing tone and cadence. Emails and letters, generated by large language models, now arrive without the spelling errors or odd phrasing that once raised red flags. Even experienced finance teams are being caught out.
TRG Datacenters warns that organisations still rely too heavily on informal verification methods that are no longer viable. Verified payment portals, digital watermarking, and liveness detection are becoming essential, not optional. If a payment request can be faked convincingly, it will be.
AI is also being misused in recruitment. A tool that was supposed to make hiring fairer and more efficient is instead creating a closed loop: candidates use AI to optimise CVs, while employers use AI to screen them. The result is a machine-to-machine conversation that often excludes the very people it is meant to serve.
Automatic rejection systems, particularly those trained on historical data, risk reinforcing bias rather than eliminating it. Good candidates are filtered out. Hiring managers receive shortlists that look impressive but feel oddly interchangeable.
TRG’s advice is blunt: AI should assist, not decide. Human review must sit at the centre of hiring. Bias audits should be routine, not reactive. Otherwise, companies risk optimising for compliance rather than capability.
Another troubling trend is the use of AI chatbots as emotional support systems. These tools are persuasive and responsive, but they do not understand distress. They mirror language without judging context.
The Adam Raine case, in which a teenager's suicidal thoughts were reportedly reinforced by a chatbot, exposed how fragile current safeguards are. Children and vulnerable users are particularly at risk. Without clear escalation protocols and human intervention, harm is not hypothetical but inevitable.
Platforms, TRG argues, must be held to higher standards. Child-safe filters, real oversight, and mandatory routing to human help when risk signals appear should be baseline requirements.
Productivity gains, cognitive losses
Generative AI makes work faster but also makes thinking optional. That trade-off is rarely acknowledged. Students increasingly outsource essays. Employees rely on chatbots to draft reports they barely review. Over time, core skills erode, analysis weakens, and original thought thins out. Education systems are already struggling to keep up.
The fix is not banning AI but redesigning assessment. Oral exams, project-based work, and real-time problem solving make understanding harder to fake and easier to verify.
“Used wisely, AI can amplify productivity and open new opportunities,” the TRG expert says. “The responsibility for how these technologies are used lies with us. Right now, ChatGPT alone has 800 million active users around the world, who turn to it for education, productivity, emotional support, and even in attempts to deal with mental health problems. People must keep questioning and creating, and institutions must adapt education and regulation to preserve critical thinking and prevent lethal damage.”
That responsibility is arriving faster than many expected. AI deepfake fraud is only the sharpest edge of a much broader reckoning. Trust, creativity, and judgement are all on the line. The technology is not slowing down. The question is whether we are prepared to slow ourselves down enough to use it properly.