According to a report from NPR, Meta plans to shift the task of assessing its products’ potential harms away from human reviewers, instead leaning more heavily on AI to speed up the process. Internal documents seen by the publication indicate that Meta aims to have up to 90 percent of risk assessments handled by AI, and that it is considering AI reviews even in sensitive areas such as youth risk and “integrity,” which covers violent content, misinformation and more. Unnamed current and former Meta employees who spoke with NPR warned that AI may overlook serious risks a human team would have been able to identify.
Updates and new features for Meta’s platforms, including Instagram and WhatsApp, have long been subject to human review before reaching the public, but Meta has reportedly doubled down on AI over the last two months. Now, according to NPR, product teams fill out a questionnaire about their product and submit it for review by the AI system, which generally provides an “instant decision” that flags the risk areas it has identified. The team must then address the requirements the system lays out before the product can be released.
A former Meta executive told NPR that reducing scrutiny “means you’re creating higher risks. Negative externalities of product changes are less likely to be prevented before they start causing problems in the world.” In a statement to NPR, Meta said it would still tap “human expertise” to evaluate “novel and complex issues,” leaving the “low-risk decisions” to AI. Read the full report over at NPR.
The news comes just days after Meta released its latest quarterly integrity report, the first since it ended its third-party fact-checking program and loosened its content moderation policies earlier this year. The amount of content taken down has, unsurprisingly, decreased in the wake of those changes, per the report. But there was a small rise in bullying and harassment, as well as violent and graphic content.