Meta is restructuring its content enforcement approach across Facebook, Instagram and its other platforms by deploying new artificial intelligence systems and reducing reliance on external contractors.
The new AI models are being trained to detect and remove high-priority policy violations, including terrorism-related material, child exploitation, illicit drug sales, fraud and scams. Meta states that these systems will be rolled out at scale only after they consistently outperform current enforcement methods, which depend heavily on third-party human moderators.
Internal testing data reported by Meta indicates that the models can identify approximately twice as much adult sexual solicitation content as human review teams, while reducing error rates by more than 60 percent. The company also reports improved detection of impersonation accounts targeting public figures and enhanced identification of potential account takeovers by monitoring indicators such as atypical login locations, abrupt password changes and rapid profile modifications.
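The account-takeover detection described above amounts to combining several weak behavioral signals into a single risk estimate. Meta has not published its detection logic; as a purely illustrative sketch, where every signal name, weight, and threshold is hypothetical, such signal combination might look like:

```python
# Illustrative only — not Meta's system. Signal names mirror the indicators
# reported in the article (unusual login location, abrupt password change,
# rapid profile edits); weights and thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    login_from_new_location: bool
    password_changed_recently: bool
    profile_edits_last_hour: int

# Hypothetical weights: no single signal is conclusive on its own.
HYPOTHETICAL_WEIGHTS = {
    "new_location": 0.5,
    "password_change": 0.3,
    "rapid_profile_edits": 0.4,
}

def takeover_risk_score(s: SessionSignals) -> float:
    """Combine weak signals into one risk score, capped at 1.0."""
    score = 0.0
    if s.login_from_new_location:
        score += HYPOTHETICAL_WEIGHTS["new_location"]
    if s.password_changed_recently:
        score += HYPOTHETICAL_WEIGHTS["password_change"]
    if s.profile_edits_last_hour >= 3:  # "rapid" cutoff is illustrative
        score += HYPOTHETICAL_WEIGHTS["rapid_profile_edits"]
    return min(score, 1.0)

# All three indicators together cross a review threshold; none alone does.
suspicious = SessionSignals(True, True, 5)
benign = SessionSignals(False, False, 0)
print(takeover_risk_score(suspicious))  # 1.0
print(takeover_risk_score(benign))      # 0.0
```

In practice, production systems replace hand-set weights like these with learned models, which is presumably where the AI systems described in the article come in.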
Meta reports that the AI systems are currently blocking about 5,000 scam attempts per day in which attackers try to obtain user login credentials. The company’s stated rationale is that automating repetitive, high-volume enforcement tasks can improve overall safety outcomes while reducing human moderators’ exposure to graphic or disturbing material.
Human involvement will continue in key areas. Meta indicates that specialists will design, train and audit the AI models and will retain responsibility for high-impact decisions, including handling appeals of account bans and determining when to refer cases to law enforcement.
This transition occurs amid increased political and regulatory scrutiny of Meta’s impact on children and young users, and broader oversight of how social platforms manage misinformation, extremism and other harmful content. In parallel, Meta has been relaxing some general moderation rules and testing more personalized approaches to political and news content, changes that have attracted criticism from safety advocates.
In addition to enforcement changes, Meta is launching a Meta AI support assistant, a virtual tool intended to provide continuous customer support within Facebook and Instagram on both mobile and desktop platforms.