
June 3, 2025, 12:07 a.m.

Meta to Automate Up to 90% of Risk Assessments, Sparking Privacy and Safety Concerns

For years, Meta’s teams of reviewers assessed potential risks whenever new features launched on Instagram, WhatsApp, and Facebook, evaluating concerns like threats to user privacy, harm to minors, or the spread of misleading or toxic content. These privacy and integrity reviews were primarily conducted by human evaluators. However, internal documents obtained by NPR reveal that Meta plans to automate up to 90% of these risk assessments soon. This means that critical algorithm updates, new safety features, and changes in content sharing will mostly be approved by AI systems, without the usual staff scrutiny that considers unforeseen repercussions or misuse.

Within Meta, this shift is seen as advantageous for product developers because it speeds up the release of updates and features. Yet current and former employees worry the automation could lead to inadequate judgment of risks and cause real-world harm. A former Meta executive expressed concern that faster launches with less rigorous review increase the likelihood of negative outcomes, since fewer problems are caught beforehand.

Meta stated it has invested billions to protect user privacy and that the new risk review changes aim to streamline decision-making while retaining human expertise for novel or complex issues. It claims only “low-risk” decisions will be automated. However, internal documents suggest automation may extend to sensitive areas like AI safety, youth risk, and overall platform integrity, which covers violent content and misinformation.

The new process involves product teams completing a questionnaire to receive an “instant decision” from AI, outlining risks and necessary mitigations. Previously, risk assessors had to approve product updates before rollout; now engineers largely self-assess risks unless they specifically request a human review. This shift empowers engineers and product teams, who often aren’t privacy experts, to make these judgments, raising concerns about the quality of assessments.

Zvika Krieger, Meta’s former director of responsible innovation, warned that product teams are primarily evaluated on rapid launches, not safety, and that self-assessments risk becoming mere box-checking that misses significant issues. He acknowledged room for automation but cautioned that excessive reliance on AI could degrade review quality. Meta downplayed these fears, noting that it audits AI decisions for projects without human review and that its European operations, governed by strict regulations like the Digital Services Act, will maintain human oversight from its Ireland headquarters.

Some of the changes coincide with Meta ending its fact-checking program and loosening its hate speech policies, signaling a broader company shift toward faster updates and fewer content restrictions, a loosening of longstanding guardrails aimed at preventing platform misuse. This approach follows CEO Mark Zuckerberg’s efforts to align with political figures like former President Trump, whose election Zuckerberg described as a “cultural tipping point.” The push for automation also ties into Meta’s years-long strategy of using AI to accelerate operations amid competition from TikTok, OpenAI, and others.

Meta has recently increased its reliance on AI to enforce content moderation, employing language models that outperform humans in certain policy areas, which frees human reviewers to focus on more complex cases. Katie Harbath, a former Facebook public policy expert, supports using AI to enhance speed and efficiency but stresses the need for human checks. Conversely, another ex-Meta employee questioned whether speeding up risk assessments is wise, noting that new products face intense scrutiny which often uncovers overlooked issues.

Michel Protti, Meta’s chief privacy officer for product, described the changes as empowering product teams and evolving risk management to simplify decision-making; the automation rollout accelerated through April and May. However, some insiders criticize this characterization, arguing that removing humans from risk evaluations discards a critical human perspective on potential harms, and calling the move “irresponsible” given Meta’s mission.

In summary, Meta is shifting from human-led to predominantly AI-driven risk assessments for platform changes, aiming to accelerate innovation but raising serious internal concerns about compromised scrutiny, potential harms, and the adequacy of AI in handling complex ethical and safety issues.



Brief news summary

Meta is transitioning from human-led to predominantly AI-driven risk assessments for Instagram, WhatsApp, and Facebook updates, automating up to 90% of its privacy and integrity reviews. The change aims to accelerate product launches by allowing developers to self-assess risks with less human oversight. Meta claims automation will mainly handle low-risk cases, with human experts reserved for complex or novel issues. However, critics, including former employees, warn the shift could overlook serious harms involving privacy breaches, youth safety, and misinformation. The change coincides with the end of Meta’s fact-checking program and a loosening of content controls, reflecting CEO Mark Zuckerberg’s focus on rapid development amid competition from TikTok and OpenAI. While AI enhances efficiency, experts emphasize human judgment’s vital role in preventing unchecked harm. Meta cites ongoing audits and EU regulations as safeguards, but insiders fear reduced human scrutiny may underestimate real-world impacts and lead to harmful consequences.