Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments, after reports that Mark Zuckerberg's Meta planned to automate the checks.
Ofcom said it was considering the concerns raised, after a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.
Under the UK's online safety laws, social media platforms are required to gauge how harm could occur on their services and how they plan to mitigate those potential harms, with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the legislation.
In a letter to Ofcom's chief executive, Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as "a retrograde and highly alarming step".
They said: "We urge you to publicly assert that risk assessments will not normally be considered 'suitable and sufficient' where these have been wholly or predominantly produced through AI."
The letter also urged the watchdog to "challenge any assumption that platforms can choose to water down their risk assessment processes".
An Ofcom spokesperson said: "We've been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course."
Meta said the letter deliberately misrepresented the company's approach to safety, and that it was committed to high standards and to complying with regulations.
"We are not using AI to make decisions about risk," a Meta spokesperson said. "Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content, and our technological advancements have significantly improved safety outcomes."
The Molly Rose Foundation organised the letter after the US broadcaster NPR reported last month that updates to Meta's algorithms and new safety features were set to be mostly approved by an AI system and no longer scrutinised by staff.
According to a former Meta executive who spoke to NPR anonymously, the change would allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly, but would create "higher risks" for users, because potential problems would be less likely to be caught before new products are released to the public.
NPR also reported that Meta was considering automating reviews in sensitive areas, including risks to young people and the monitoring of the spread of falsehoods.