TikTok has announced a sweeping restructuring plan that could see hundreds of jobs in the UK trust and safety division cut or relocated, as the company leans more heavily on artificial intelligence (AI) to moderate content.
The platform, which employs over 2,500 staff in the UK, revealed that certain roles in content moderation may be shifted to other European hubs or outsourced to third-party providers. At the same time, TikTok emphasized its increasing reliance on automated systems to detect harmful content more quickly and efficiently.
According to company figures, over 85% of content removed for community guideline violations is flagged by AI tools, and 99% of problematic posts are taken down before any user reports them. These statistics underscore the rapid rise of algorithmic moderation and its growing influence on how social media platforms manage harmful or inappropriate material.
The move has not gone unchallenged. The Communication Workers Union (CWU) has strongly criticized TikTok’s decision, arguing that the restructuring not only puts hundreds of livelihoods at risk but also undermines the effectiveness of moderation.
Union representatives noted that the announcement coincided with a scheduled staff vote on union recognition, raising suspicions that the layoffs are partly aimed at curbing organized labor efforts within the company.
The criticism reflects a growing debate within the tech sector. While AI can process significantly larger volumes of content faster than human moderators, concerns remain about accuracy, fairness, and potential bias in automated systems.
TikTok’s restructuring also coincides with the rollout of the UK’s Online Safety Act, which came into force earlier this year. The law requires social media companies to swiftly remove harmful content or face steep financial penalties.
Analysts suggest that regulatory compliance costs may be driving platforms like TikTok to accelerate automation, as AI systems can more consistently meet the law’s demand for rapid response times. However, this efficiency comes with trade-offs. Critics argue that overreliance on AI could lead to mistakes, including the wrongful removal of harmless content or the failure to identify harmful posts that require deeper human assessment.
TikTok’s restructuring reflects a larger industry shift. Studies show that AI systems can process up to 20 times more content than human moderators, giving companies strong financial incentives to automate.
Reports also suggest that automation has reduced graphic content exposure for remaining human staff by as much as 60%, lowering psychological risks associated with moderation work.
The global content moderation market is projected to grow at a rate of 10.7% annually through 2027, with automation playing a central role in shaping its future. TikTok’s decision to restructure in the UK is just one part of this transformation, signaling what could soon become the new standard across social media platforms worldwide.
The post AI Replaces Human Moderators in TikTok’s UK Restructuring appeared first on CoinCentral.