Summary

A Danish study found Instagram’s moderation of self-harm content to be “extremely inadequate,” with Meta failing to remove any of 85 harmful posts shared in a private network created for the experiment.

Despite Meta’s claim that it proactively removes 99% of such content, the platform’s algorithms were shown to actively promote self-harm networks by connecting users.

Critics, including psychologists and researchers, accuse Meta of prioritizing engagement over safety, with vulnerable teens at risk of severe harm.

The findings suggest potential non-compliance with the EU’s Digital Services Act.

  • brucethemoose@lemmy.world

    I feel like we need a phrase for this.

    “It’s the algorithm, stupid!”

    And if they can be sued for algorithmic recommendations at all, I feel like their fallback would be going back to pure “user” recommendation: community-managed feeds, following others’ recs, and such. Which would kill SO MANY birds with one stone.