Building a Solidarity Ecosystem for AI
The digital economy’s story often centers on stock prices and initial public offerings, but the processes and people behind it reveal a very different reality. Across outsourcing hubs like Nairobi, Manila, and Hyderabad, content moderators working for Facebook, OpenAI, and their subcontractors spend hours each day reviewing beheadings, sexual violence, child abuse, and hate speech to train and police AI systems. Many of these workers report severe psychological harm from the job, including depression, anxiety, and post-traumatic stress disorder.
Investigations have documented suicide attempts among moderators in Kenya and the Philippines, alongside widespread reports of suicidal ideation linked to relentless exposure to traumatic content, low pay, and a lack of mental-health support. These incidents are not isolated tragedies but symptoms of an industry structured to offload risk downward through opaque contracting chains while concentrating profit and control at the top.

