TikTok’s AI Moderation Gamble Faces German Union Resistance

Key takeaways
  • TikTok is cutting 150 Berlin trust and safety roles, replacing them with AI systems and outsourced moderation.
  • The layoffs remove nearly 40% of its German workforce and affect moderation for 32 million German-speaking users.
  • The shift follows similar cuts in the Netherlands and Malaysia, reflecting an industry-wide move toward automated content moderation.
  • Union ver.di is demanding extended notice periods and higher severance, and warns of immigration risks for non-German employees.
  • Workers cite AI misclassifications, including false flags on harmless content and missed violations, as proof that human oversight remains critical.
  • The EU’s Digital Services Act imposes strict moderation obligations, making accuracy and transparency essential to avoid fines.
  • Outsourced labor may lack access to in-house mental health resources for moderators exposed to graphic material.
  • For advertisers, changes raise brand safety considerations and underscore the need to monitor platform content integrity.

Platform replaces human moderators with AI despite EU content safety rules and accuracy concerns.

TikTok’s decision to dismantle its Berlin-based trust and safety team marks one of the most consequential shifts in its European operations to date. The move will eliminate 150 positions, nearly 40% of its German workforce, and affect moderation for a German-speaking market of 32 million users.

In place of the existing human team, TikTok will rely on artificial intelligence systems and outsourced labor to review content for policy violations, a pivot that has triggered strikes, union demands, and growing scrutiny from policymakers.

Inside the Decision to Cut Berlin’s Trust and Safety Team

According to company statements, the layoffs are part of a plan to “streamline workflows and improve efficiency” by consolidating moderation operations into fewer locations. The Berlin team’s responsibilities extended beyond traditional trust and safety work—covering content review for harmful or illegal material, as well as oversight of the Live department, which manages relationships with content creators.

Moderators reviewed up to 1,000 videos per day, often working alongside AI tools, but the new structure will remove this embedded human layer almost entirely.

The affected work will now be split between algorithmic systems trained by ByteDance, TikTok’s Chinese parent company, and external contractors. While TikTok says these changes will enhance the speed of harmful content removal and reduce the psychological toll on in-house staff, they also raise operational and compliance questions—particularly in the EU, where content safety obligations are legally binding.

A Global Pattern of AI-First Moderation

Germany is not an isolated case. Over the past year, TikTok has laid off moderation teams across multiple markets, replacing hundreds of roles in the Netherlands and Malaysia with AI-powered processes. This pattern mirrors a broader industry shift: Meta, X, and Snap have all reduced their trust and safety headcounts, increasingly leaning on automated tools to filter content.

These moves reflect a strategic calculus that automated moderation is now “good enough” to justify large-scale reductions in human staffing. However, internal reports and union statements challenge this assumption, citing persistent classification errors, especially in non-English languages and culturally specific contexts.

In Germany, workers have documented cases where TikTok’s automated systems flagged harmless content—such as symbols of social identity—as violations, while failing to catch actual policy breaches.

Union Pushback and Worker Demands

The ver.di trade union, representing the Berlin staff, has staged multiple strikes after TikTok refused to negotiate severance terms and extended notice periods. Workers are seeking a 12-month extension of their notice periods and severance worth up to three years’ salary, citing the high-intensity and distressing nature of moderation work. They argue that removing trained moderators erodes the platform’s ability to detect nuanced harmful content, increasing the risk of manipulative campaigns and disinformation.

The union also points out that many affected employees are non-German citizens, meaning layoffs could jeopardize their residency status. Additionally, outsourcing to external contractors—often in jurisdictions with fewer workplace protections—raises concerns that moderators handling graphic material may not have access to the mental health resources available to in-house teams.

Regulatory Risks Under the Digital Services Act

The EU’s Digital Services Act (DSA), which came into force in 2022, imposes strict obligations on large platforms to prevent the spread of illegal and harmful content. Failure to meet these standards can result in substantial fines. While TikTok insists that AI-driven workflows will maintain compliance, content safety experts warn that over-reliance on automation could create enforcement gaps.

The DSA also requires transparency in moderation decisions, a standard that can be difficult to meet when automated systems make classification calls without clear, interpretable reasoning.

For platforms with large advertiser bases, the combination of regulatory exposure and potential brand safety issues makes moderation accuracy not just a compliance issue, but a commercial one.

Strategic Implications for Platforms and Advertisers

For TikTok, the Berlin cuts are part of a larger operational strategy to centralize and automate trust and safety. While this may reduce costs and increase speed, it shifts risk onto the platform’s AI capabilities and its ability to maintain consistent, context-aware moderation.

For advertisers and brand managers, the change underscores the importance of closely monitoring a platform’s content safety infrastructure when evaluating campaign placements. Brand reputation can be quickly compromised if ads appear alongside harmful or unmoderated material. In highly regulated markets like the EU, where consumer trust is shaped by platform integrity, alignment with a platform’s moderation quality is not optional—it’s a core part of risk management.

The Road Ahead

Union representatives have signaled that strikes will continue unless TikTok agrees to negotiations, and the possibility of long-term industrial action remains on the table. Meanwhile, EU regulators are likely to pay close attention to whether the AI-first model can deliver on both the speed and accuracy that compliance demands.

Whether TikTok’s gamble delivers efficiency without eroding trust will depend on its ability to refine automated moderation to match the cultural and contextual awareness that human teams have historically provided.

For now, the move illustrates a defining tension in the social media industry: the push for operational efficiency through AI versus the necessity of safeguarding users—and brands—in an increasingly scrutinized content environment.
