
Wed Sep 17 03:00:00 UTC 2025: A summary of the article, followed by a rewritten version as a news article:
**Summary:**
The article highlights the often-overlooked human labour involved in training AI and Machine Learning models. While AI systems are presented as automated and efficient, they rely heavily on data annotation performed by workers, often located in developing countries, who label raw data (images, audio, video, text) to create training sets. This work, frequently outsourced and poorly paid, is crucial for AI functions such as recognising objects for self-driving cars or generating accurate responses from large language models. The article points out that these “ghost workers” face harsh working conditions, low wages, exposure to disturbing content, and a lack of recognition, in effect fuelling the AI revolution through exploitation. It calls for stricter laws to regulate AI companies and digital platforms, ensuring transparency, fair pay, and ethical treatment of these crucial yet invisible workers.
**News Article:**
**AI’s “Automated Economy” Built on the Backs of Exploited Data Annotators, Critics Say**
**New Delhi, September 17, 2025** – As the world races toward an “automated economy” driven by Artificial Intelligence, a new report exposes the hidden human cost behind the seemingly effortless capabilities of AI systems. While tech giants tout the efficiency and accuracy of AI, the reality is that these systems are heavily reliant on a vast, often invisible, workforce of data annotators, primarily located in developing nations, working under exploitative conditions.
According to a report in The Hindu, these data annotators are the unsung heroes – or rather, the “ghost workers” – training AI and Machine Learning (ML) models. They meticulously label raw data, including images, audio, video, and text, to teach AI to perform tasks from recognizing objects in self-driving cars to providing contextually relevant responses from large language models like ChatGPT. Without their painstaking work, AI systems would be unable to function.
“A machine cannot process the meaning behind raw data,” the report stated. “For example, an LLM cannot recognise the colour ‘yellow’ unless the data has been labelled as such.”
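The labelling step the report describes can be pictured as attaching human-supplied tags to raw records, which are then fed to a model as supervised training pairs. A minimal sketch, with hypothetical field names and labels not drawn from the article:

```python
# Illustrative sketch: what human-annotated records might look like
# before reaching an ML model. The "text"/"label" schema is a
# hypothetical convention, not a specific company's format.

annotated_examples = [
    {"text": "The taxi stopped at the light.", "label": "vehicle"},
    {"text": "A banana ripens from green to yellow.", "label": "colour"},
]

def to_training_pairs(examples):
    """Convert annotated records into (input, target) pairs for supervised training."""
    return [(ex["text"], ex["label"]) for ex in examples]

pairs = to_training_pairs(annotated_examples)
print(pairs[1])  # the second (input, target) pair
```

Each such pair is one unit of the manual work described above: a human reads the raw datum and supplies the tag the machine cannot infer on its own.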
The work is often outsourced by Silicon Valley tech companies to countries such as Kenya, India, Pakistan, China, and the Philippines, where workers face low wages and long hours. Concerns are also growing about the use of non-expert annotators to label data in technical subject areas.
“In Kenya, these US companies are undermining the local labor laws, the country’s justice system and violating international labor standards. Our working conditions amount to modern-day slavery,” a letter from AI tech workers from Kenya sent to former U.S. President Joe Biden read.
The work itself can be harrowing. Data annotators may be exposed to graphic and disturbing content, leading to mental health issues such as PTSD, anxiety, and depression. Strict deadlines that require completing a task within seconds or minutes, along with constant surveillance, add to the stress.
Because the work is often outsourced through intermediary digital platforms and offered as “microtasks”, many workers do not know which company they ultimately serve, and they lack the rights afforded to formal employees. This arrangement allows AI companies to minimise costs and perpetuate a system of labour exploitation.
Critics are calling for stricter laws and regulations to govern AI companies and digital platforms, demanding transparency in their labor supply chains, fair pay, and ethical treatment of data annotators. The future of AI, they argue, cannot be built on the exploitation of those who make it possible.