Summary: Meta Platforms Inc. is moving away from outside content moderation vendors toward internally developed artificial intelligence systems that detect and remove material violating its terms of service. The company announced a global rollout of a Meta AI support assistant for Facebook and Instagram and said it will expand AI enforcement across its apps as the systems demonstrate consistently stronger performance than existing methods.
Meta said on Thursday that the newly introduced Meta AI support assistant is available worldwide on Facebook and Instagram to provide continuous help with account-related issues. The assistant is designed to respond to user requests in under five seconds and to handle a range of account tasks, including reporting scams, resetting passwords, managing privacy settings, and updating profile information.
In early testing, Meta reported several performance measures it considered encouraging. The company said the AI identifies and blocks about 5,000 scam attempts per day that previously went undetected by review teams. It also said user reports concerning the most-impersonated celebrities have fallen by more than 80%.
On content enforcement, Meta said the AI detects twice as much violating adult sexual solicitation content as review teams did while cutting the error rate by more than 60%. The company also highlighted language coverage improvements: the systems now operate in languages spoken by 98% of people online, up from prior coverage in roughly 80 languages.
Meta described the AI as adaptable to cultural nuance and rapid shifts in online communication, including evolving code words, emoji meanings, and slang. Over the next several years, the company plans to deploy these AI systems across its applications once they consistently outperform current content enforcement approaches.
As it reduces dependence on third-party vendors for content enforcement, Meta said it will retain human review for complex, sensitive, or high-stakes decisions. Examples specified by the company include appeals of account disablement and situations that may require reporting to law enforcement.
Key points:
- Meta is shifting content moderation from third-party vendors to advanced AI systems and will roll out AI across its apps once the systems consistently outperform current enforcement methods - impacts social media platforms and content moderation services.
- The Meta AI support assistant is now live globally on Facebook and Instagram, offering under-five-second responses for account tasks - relevant to user experience and customer support operations.
- Early tests show the AI prevents roughly 5,000 undetected scam attempts per day, reduces impersonation reports by over 80%, and catches twice as much violating adult sexual solicitation content while lowering error rates by over 60% - implications for platform safety metrics and moderation effectiveness.
Risks and uncertainties:
- The timeline for full deployment depends on the AI systems consistently outperforming current enforcement methods - uncertainty for moderation vendors and operational planning.
- Meta will continue to use human oversight for complex decisions, indicating potential limits to automation in high-stakes cases such as appeals and law enforcement reporting - ongoing dependence on human review processes.
- Performance claims are based on early tests; future outcomes across the full user base may vary - potential variability affecting content safety and trust metrics.