This article is from the Big Technology newsletter by Alex Kantrowitz.
Richard Mathenge felt he had found the perfect role when he started training GPT models for OpenAI in 2021. After years of customer service jobs in Nairobi, Kenya, he had finally landed work that felt meaningful and offered a future. But the position left a scar. Mathenge led a team that spent nine hours a day, five days a week, teaching AI models to recognize explicit content so that such material could be kept away from users. It still haunts him.
On the job, Mathenge and his team repeatedly reviewed explicit text and labeled it for the model. They classified the content as child sexual abuse material, erotic sexual content, illegal, non-sexual, or other categories. Much of what they read horrified them. One passage, Mathenge said, depicted a father having sex with an animal in front of his children. Others involved scenes of child rape. Some were so disturbing that Mathenge refused to talk about them. “It was beyond what I could have imagined,” he told me.
The kind of work Mathenge did was critical to making bots like ChatGPT and Google’s Bard work, and feel like magic. But the human cost of that effort is largely ignored. In a process called reinforcement learning from human feedback, or RLHF, bots get smarter as humans label content and teach them how to optimize based on that feedback. AI leaders, including OpenAI’s Sam Altman, praise the technical effectiveness of the practice, but say little about the price some humans pay to align AI systems with our values. Mathenge and his colleagues lived that reality firsthand.
Mathenge earned his degree from Africa Nazarene University in Nairobi in 2018 and immediately went to work in the city’s technology sector. In 2021, he applied for a job at Sama, an AI annotation firm that counts OpenAI among its clients. After hiring Mathenge, Sama assigned him to label LiDAR images for self-driving cars. He reviewed the images and outlined people, other vehicles, and objects to help the models better understand what they encountered on the road.
After that project ended, Mathenge moved on to work on models for OpenAI. There, he encountered the disturbing text. OpenAI told me it believed it was paying Sama’s contractors $12.50 an hour, but Mathenge and his colleagues took home about $1 an hour, sometimes less. The team began to withdraw as their days filled with depictions of incest, bestiality, and other explicit material.
“I know when my team is not doing well. I know when my team is not interested in reporting to work,” Mathenge told me. “My team was sending a signal that they weren’t prepared to work with such text.”
Mophat Okinyi, a quality assurance analyst on Mathenge’s team, is still dealing with the impact. He said repeated exposure to explicit text caused insomnia, anxiety, depression, and panic attacks. Okinyi’s wife saw a change in him, he said, and she left him shortly afterward. “It’s a joy to see ChatGPT become famous and used by so many people around the world,” Okinyi said. “But making ChatGPT safe destroyed my family. It destroyed my mental health. As we speak, I’m still battling trauma.”
These workers were supposed to receive regular counseling, but Okinyi and Mathenge found what was offered inadequate. “At some point, a counselor reported [to duty],” Mathenge said, “but I could see that he was not a professional. Unfortunately, he was not qualified.”
In a statement to me, OpenAI said it takes the mental health of its employees and contractors very seriously. “One of the reasons we engaged Sama in the first place was their commitment to good practices,” a spokesperson said. “Our prior understanding was that wellness programs and one-on-one counseling were offered, workers could opt out of any work without penalty, there were limits on exposure to explicit content, and sensitive information would be handled by workers who were specifically trained to do so.”
An OpenAI spokesperson said the company asked Sama for more information about working conditions in early 2022. Sama subsequently told OpenAI it was exiting the content moderation space, the spokesperson said. Sama did not respond to a request for comment.
For Mathenge, the idea of weighing the trade-offs before taking on this work sounded like a luxury. With Kenya’s economy destabilized by the turmoil of the pandemic, he was happy just to be hired. “It was the coronavirus period,” he said. “Getting a job in a developing country is a blessing in itself.”
Despite all this, Mathenge and his colleagues are proud of the work they did. And it worked: ChatGPT now refuses to produce the explicit scenes the team helped it screen out and warns against potentially illegal sexual acts. “It’s something to be very proud of, for me and for us,” Mathenge said. They are proud, but still in pain.
Listen to the full conversation between Alex Kantrowitz and Richard Mathenge on this week’s episode of the Big Technology Podcast.
