In recent months, the BBC has looked into the unsettling world of online content moderation, revealing the grim realities faced by moderators who sift through some of the most distressing material on the internet. The images and videos involved, ranging from beheadings and mass killings to child abuse and hate speech, fall to an often unseen and underappreciated workforce. Content moderators review material flagged by users or identified by automated tools and decide whether it should remain visible to the public.
The conversation around online safety has gained significant traction, with technology firms under growing pressure to remove harmful content from their platforms more swiftly. Yet despite advances in automated tools, it is still predominantly human moderators who make the final call about what stays online and what goes. Most are contracted through third-party companies but work closely with major social media platforms such as Instagram, TikTok, and Facebook to ensure that harmful content is taken down.
The harrowing stories of these workers come to light in a BBC series, “The Moderators,” produced for Radio 4 and BBC Sounds. Those interviewed, largely from East Africa, had since left the industry and were still struggling with the psychological toll of their experiences. In some cases, the accounts recorded during the interviews were deemed too brutal for broadcast, illustrating how severely the work had affected their mental health.
One former moderator, Mojez, described the stark contrast between the upbeat content popular on platforms like TikTok and the horrific videos he was required to filter out. “If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things,” he explained, “but in the background…I personally was moderating, in the hundreds, horrific and traumatizing videos.” His account captures the struggle many moderators face, sacrificing their own mental health so that mainstream users can browse these platforms in relative safety.
As discussion of the mental toll of content moderation has intensified, legal actions have emerged seeking accountability from tech companies for the harm endured by their moderators. In one prominent case, Facebook (now Meta) agreed in 2020 to a $52 million settlement with moderators who said the work had damaged their mental health. Selena Scola, a key figure in the legal action, described moderators as “keepers of souls,” a reference to the burden of witnessing graphic footage showing the final moments of victims’ lives.
Across varied personal testimonies, “trauma” was a recurring theme among ex-moderators. Some reported struggles with insomnia and anxiety due to the distressing nature of their work. Others found it hard to interact with loved ones after being exposed to graphic instances of child abuse.
Interestingly, many moderators also expressed pride in their roles, identifying themselves as part of a vital emergency service. One compared his work to that of emergency service professionals, saying he found a sense of accomplishment and meaning in the service moderators provide to society. He also called for better support, camaraderie, and working conditions, suggesting that a moderators’ union could pave the way for much-needed reforms in the industry.
There is an ongoing conversation about using artificial intelligence (AI) tools in moderation to ease the psychological burden on human workers, but skepticism remains. While AI can help identify and remove harmful content, many believe it lacks the nuanced judgment needed to replace human moderators entirely: it can inadvertently suppress legitimate speech or miss content whose meaning only becomes clear in context.
As tech firms grapple with the challenges of moderation, they have responded to critics by pointing to support systems for moderators, including clinical assistance and efforts to create healthier working environments. They have also acknowledged the value this human workforce brings to refining their algorithms, as human judgment remains essential in handling the complexities of content moderation.
In summary, the often-overlooked challenges faced by content moderators are central to understanding the broader stakes of online safety and mental health. The moderators’ own accounts underline the need for greater awareness of the weight of their work and the psychological toll it takes, and they strengthen the case for better working conditions and mental health support across the industry.