Meta Accused of Failing to Protect Kids on Instagram, Facebook in Europe

European authorities are intensifying scrutiny of Meta, accusing the tech giant of systemic failures in protecting children on Instagram and Facebook. Despite repeated commitments to user safety, growing evidence suggests that Meta’s age verification systems are porous, its content moderation inconsistent, and its algorithmic recommendations prone to exposing minors to harmful material. These allegations stem from formal investigations by Ireland’s Data Protection Commission (DPC), France’s CNIL, and Germany’s Federal Office of Justice, which enforces the Network Enforcement Act (NetzDG). All point to the same conclusion: Meta’s safeguards are reactive, not preventative.

The core issue isn’t merely technical—it’s structural. Meta’s business model thrives on engagement, and children, with their developing brains and impressionable behaviors, represent high-engagement users. Yet, the company continues to deploy design features like infinite scroll, auto-play videos, and algorithmic content feeds on platforms widely used by underage users. Regulators argue these features, combined with weak age checks, create a perfect storm for harm.

Systemic Gaps in Age Verification

Meta claims to restrict Instagram and Facebook to users aged 13 and older. In practice, enforcement is minimal. No identity verification is required during account creation. A simple birthdate entry is the only gatekeeping mechanism—easily bypassed by any child.
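
To make the gap concrete, the sketch below shows roughly what a birthdate-only gate amounts to (a hypothetical illustration, not Meta’s actual signup code). The check derives an age from whatever date the user types, so entering an earlier year defeats it entirely:

    from datetime import date

    MINIMUM_AGE = 13

    def self_declared_age_check(claimed_birthdate: date, signup_day: date) -> bool:
        """Gate an account on nothing but a self-reported birthdate.

        Hypothetical sketch: there is no document check and no
        cross-referencing, so the input is whatever the user types.
        """
        age = signup_day.year - claimed_birthdate.year - (
            (signup_day.month, signup_day.day)
            < (claimed_birthdate.month, claimed_birthdate.day)
        )
        return age >= MINIMUM_AGE

    signup_day = date(2025, 1, 15)
    # A real 9-year-old's birthdate is rejected...
    print(self_declared_age_check(date(2015, 6, 1), signup_day))   # False
    # ...but typing an earlier year defeats the gate entirely.
    print(self_declared_age_check(date(2005, 6, 1), signup_day))   # True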

European data regulators highlight this as a foundational flaw. In a 2023 investigation, the Irish DPC found that Meta’s reliance on self-declared age data violates the General Data Protection Regulation (GDPR), particularly Article 5’s principles of data minimization and accuracy. Without reliable age confirmation, Meta cannot ensure that minors aren’t exposed to content inappropriate for their age group, ranging from violent imagery to targeted advertising.

Worse, Meta has resisted implementing robust identity checks, citing privacy concerns. But regulators counter that privacy and child protection aren’t mutually exclusive. Solutions like document scanning, biometric verification, or third-party identity providers exist and are used in age-restricted online spaces, such as gambling sites in the UK. Meta’s hesitation suggests a prioritization of growth over compliance.
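
The pattern behind such solutions is consistent: a trusted verifier inspects a document once, then hands the platform nothing but a minimal signed claim such as “over 13,” with no name or birthdate attached. Here is a minimal sketch of that handshake; the names are invented, and HMAC with a shared key stands in for the public-key signatures an accredited identity provider would actually use:

    import hashlib
    import hmac
    import json

    # Hypothetical sketch of privacy-preserving age attestation. A real
    # deployment would use an accredited verifier and public-key
    # signatures; the shared HMAC key here is a stand-in.
    VERIFIER_KEY = b"demo-key-shared-by-verifier-and-platform"

    def issue_attestation(opaque_user_token: str, over_13: bool) -> dict:
        """Verifier side: after checking documents out of band, sign a
        minimal claim. No name or birthdate ever reaches the platform."""
        claim = {"subject": opaque_user_token, "over_13": over_13}
        payload = json.dumps(claim, sort_keys=True).encode()
        signature = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
        return {**claim, "sig": signature}

    def platform_accepts(attestation: dict) -> bool:
        """Platform side: verify the signature, then trust the one boolean."""
        claim = {k: v for k, v in attestation.items() if k != "sig"}
        payload = json.dumps(claim, sort_keys=True).encode()
        expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(attestation["sig"], expected) and claim["over_13"]

    attestation = issue_attestation("opaque-token-8f3a", over_13=True)
    print(platform_accepts(attestation))  # True: age confirmed, identity withheld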

Algorithmic Amplification of Harmful Content

Even when children gain access to the platforms (typically via falsified birthdates), Meta’s algorithms often deepen the risk. Instagram’s recommendation engine, designed to maximize time-on-platform, frequently surfaces content related to self-harm, eating disorders, or extreme fitness regimes to young users.

Internal research leaked in 2021 showed that, among UK teens on Instagram who reported suicidal thoughts, 13% traced those thoughts to the app. Similar findings emerged in France, where a parliamentary inquiry revealed that TikTok and Instagram algorithms were actively promoting pro-anorexia content to minors searching for “healthy recipes.”

Meta has since introduced features like “Take a Break” reminders and reduced visibility of certain hashtags. But these are opt-in tools—buried in settings and rarely used by teenagers. The default behavior remains unchanged: auto-play reels, algorithmic discovery, and engagement-driven content feeds.

Weak Enforcement of Community Guidelines

Meta’s Community Guidelines prohibit nudity, bullying, and sexual exploitation. Yet enforcement on content targeting minors remains inconsistent. The DPC’s 2024 preliminary report noted that automated detection systems failed to flag 64% of child sexual abuse material (CSAM) in test scenarios—a staggering shortcoming.

Human moderation, while present, is overwhelmed. Contractors in low-wage countries review thousands of images daily, often without adequate mental health support. This model leads to both over-censorship and dangerous under-enforcement. A 2023 case in Belgium saw a predator using Instagram DMs to solicit minors for explicit content—reported multiple times before Meta intervened.

Moreover, Meta’s reporting tools are often ineffective. Users complain of generic responses, lack of follow-up, and no escalation path. One parent in Germany reported an account promoting self-harm to her 14-year-old daughter—Meta’s response took 11 days and resulted in no action.

Regulatory Pressure Mounts Across Europe

Europe is not a monolith, but regulators from Dublin to Berlin are aligning on one point: Meta must do better.

  • Ireland’s DPC is leading a cross-border GDPR investigation into Meta’s handling of children’s data on Instagram.
  • France’s CNIL issued a €50 million fine in 2022 for inadequate privacy settings for minors.
  • Germany’s Federal Office of Justice, which enforces the Network Enforcement Act (NetzDG), treats Facebook and Instagram as large social networks under that law, triggering transparency reporting and complaint-handling obligations.
  • The European Commission has signaled intent to invoke the Digital Services Act (DSA), which requires very large online platforms to conduct annual risk assessments for minors and mitigate identified harms.

Under the DSA, Meta could face fines up to 6% of global revenue—potentially billions of dollars—if found non-compliant. The first official DSA assessment of Instagram is expected in late 2024.
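
For scale, here is an illustrative calculation (not an announced penalty), applying the 6% ceiling to Meta’s reported full-year 2023 revenue of roughly $134.9 billion:

    # Illustrative only: the DSA caps fines at 6% of global annual turnover.
    # Meta's reported full-year 2023 revenue is used as a stand-in figure.
    meta_2023_revenue_usd = 134.9e9
    fine_cap = 0.06 * meta_2023_revenue_usd
    print(f"Maximum DSA fine: ${fine_cap / 1e9:.1f} billion")  # ~$8.1 billion

Even a fraction of that cap would dwarf the national fines issued to date.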

Design Choices Prioritize Engagement Over Safety

The deeper issue lies in Meta’s product design. Features like Reels, Stories, and Explore are optimized for virality, not safety. These are the same features most used by young people—and most likely to expose them to harmful content.

Consider the Explore page on Instagram. It’s a gateway to algorithmically curated content far outside a user’s follow list. A teen searching for “dance videos” might quickly be shown clips of dangerous challenges, substance use, or sexualized content. Meta knows this: internal documents show awareness that Explore drives “higher risk surface area” for minors.

Yet, instead of limiting access, Meta has expanded it. In 2023, it introduced “Suggested Posts” below the main feed—extending algorithmic reach even to users who don’t actively explore. Children are not offered simplified, safer interface modes by default.

Compare this to YouTube Kids, which, while imperfect, provides distinct content filtering, watch-time limits, and parental controls as standard. Meta offers similar tools, but they are off by default and must be actively enabled, meaning most families never use them.

Parental Controls Exist—But Are Hard to Find and Use

Meta does provide parental supervision tools. Parents can link their account to a teen’s, set screen time limits, and view activity logs. However, adoption is low—only 12% of parents in the EU use them, according to a 2023 Eurobarometer survey.

Why? Because the tools are difficult to discover and set up. To enable supervision, both parent and child must agree to link accounts. Many teens resist, viewing it as surveillance. Parents, unaware of the risks, often don’t initiate the process.

Even when activated, the tools have limitations. Parents can’t see direct messages—only that they’re being sent. They can’t filter specific hashtags or content types. And there’s no way to disable algorithmic recommendations entirely.

A more effective model would be default safety settings for users under 18: restricted Explore access, no targeted ads, and algorithmic feeds limited to followed accounts. Meta could grandfather in existing users but apply stricter rules to new accounts registered with underage birthdates.
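
One way to picture that regime is as a settings profile derived from the declared birthdate, with restrictive values as the starting point. The sketch below is hypothetical; the field names are invented for illustration and are not Meta’s actual settings:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class SafetyProfile:
        # Invented field names, for illustration only.
        explore_enabled: bool
        targeted_ads: bool
        feed_source: str            # "followed_only" or "algorithmic"
        dms_from_strangers: bool

    def default_profile(birthdate: date, today: date) -> SafetyProfile:
        """Derive defaults from age; minors start restricted by default."""
        age = today.year - birthdate.year - (
            (today.month, today.day) < (birthdate.month, birthdate.day)
        )
        if age < 18:
            # A guardian could loosen these, but safety is the baseline.
            return SafetyProfile(explore_enabled=False, targeted_ads=False,
                                 feed_source="followed_only",
                                 dms_from_strangers=False)
        return SafetyProfile(explore_enabled=True, targeted_ads=True,
                             feed_source="algorithmic",
                             dms_from_strangers=True)

    print(default_profile(date(2010, 3, 2), date(2025, 1, 15)))
    # SafetyProfile(explore_enabled=False, targeted_ads=False,
    #               feed_source='followed_only', dms_from_strangers=False)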

What Needs to Change—and Fast

Meta isn’t alone in struggling with child safety. But as the owner of two of the world’s most popular platforms among teens, it carries disproportionate responsibility.

Real change requires more than PR statements. It demands structural shifts:

  • Mandatory age verification using privacy-preserving identity tools.
  • Default safety settings for underage users: no algorithmic recommendations, no targeted ads, restricted DMs.
  • Transparent reporting on content moderation effectiveness, especially for CSAM and harmful mental health content.
  • Independent audits of risk assessments under the DSA.
  • Simpler, more effective parental tools, enabled by default (opt-out) rather than left for families to discover and switch on (opt-in).

Until then, European regulators have little reason to trust Meta’s self-policing. The evidence suggests a company that responds to pressure only when fines loom—rather than acting out of ethical responsibility.

The Path Forward: Accountability, Not Promises

Meta has spent years promising to keep children safe. But promises without enforcement are meaningless. European regulators now have the tools—GDPR, DSA, national laws—to hold Meta accountable. The question isn’t whether the company can protect kids, but whether it will.

Parents, educators, and policymakers must push for defaults that prioritize safety over engagement. Tech companies should not be allowed to profit from underage users while evading responsibility for their well-being.

The time for half-measures is over. Meta must redesign its platforms with children’s safety as a core principle—not an afterthought. If it won’t act voluntarily, Europe must force it to.

Frequently Asked Questions

Why can children easily access Instagram and Facebook? Because Meta only requires a self-reported birthdate during signup, with no identity verification. This makes it simple for minors to lie about their age and create accounts.

What is the Digital Services Act’s role in holding Meta accountable? The DSA requires large platforms like Instagram to assess risks to minors annually and implement mitigations. Non-compliance can lead to fines up to 6% of global revenue.

Does Meta have tools to protect children? Yes, it offers parental supervision tools and “Take a Break” reminders, but these are opt-in and underused. Safer defaults are not enabled automatically for underage users.

How do algorithms harm children on Meta’s platforms? Algorithms prioritize engagement, often pushing extreme or harmful content—like self-harm or eating disorder material—to minors who interact with related topics.

What have European regulators done so far? Ireland, France, and Germany have launched investigations and issued fines. The EU is using the DSA to demand transparency and risk mitigation from Meta.

Can parents monitor what their children see on Instagram? Partially. With parental supervision enabled, parents can see activity time and followed accounts, but not direct messages or specific content viewed.

Is Meta the only social media company failing children? No, but it faces particular scrutiny due to Instagram’s popularity with teens and its history of downplaying internal research on youth mental health risks.
