A new common sense has emerged regarding the perils of predictive algorithms. As the groundbreaking work of scholars like Safiya Noble, Cathy O’Neil, Virginia Eubanks, and Ruha Benjamin has shown, big data tools—from crime predictors in policing to risk predictors in finance—increasingly govern our lives in ways unaccountable and often unknown to the public. They replicate bias, entrench inequalities, and distort institutional aims. They devalue much of what makes us human: our capacities to exercise discretion, act spontaneously, and reason in ways that can’t be quantified. And far from being objective or neutral, technical decisions made in system design embed the values, aims, and interests of mostly white, mostly male technologists working in mostly profit-driven enterprises. Simply put, these tools are dangerous; in O’Neil’s words, they are “weapons of math destruction.”
These arguments offer an essential corrective to the algorithmic solutionism peddled by Big Tech—the breathless enthusiasm that promises, in the words of Silicon Valley venture capitalist Marc Andreessen, to “make everything we care about better.” But they have also helped to reinforce a profound skepticism of this technology as such. Are the political implications of algorithmic tools really so different from those of our decision-making systems of yore? If human systems already entrench inequality, replicate bias, and lack democratic legitimacy, might data-based algorithms offer some promise in addition to peril? If so, how should we approach the collective challenge of building better institutions, both human and machine?
I think this isn’t just about moderation algorithms being imperfect; the nature of the moderation problem itself demands a fuzzy kind of answer.
No single moderation system can ever produce an acceptable and stable solution. The distortion from edge cases, the trolls that exploit them, and outside bullshit inevitably lead to toxicity and centralization under a harsh groupthink (the censorship of Palestinian voices on Bluesky being one example).
Any real solution must take the form of an architecture of diversity: many different moderation policies and authorities that agree in spirit 99% of the time (until they don’t, at which point a difficult parting of ways sometimes occurs).
It is much more difficult to prey on a community as a nazi troll when every damn fediverse instance seems to draw the red line somewhere different. Not only does that make being a toxic edgy boi much harder, especially if you have no functioning moral compass left… but somehow, even with all that disgusting lack of conformity, every single progressive community immediately recognizes you for being a nazi.
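To make that architecture concrete, here is a minimal Python sketch of what per-instance policy diversity might look like. Everything in it (the instance names, the policies, the flagged-author list) is invented for illustration; it is not drawn from any actual fediverse software.

```python
# A minimal, hypothetical sketch of "an architecture of diversity":
# each instance applies its own moderation policy, so there is no
# single global red line for a bad actor to optimize against.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str

# Each instance defines its own policy: a function from Post -> bool
# (True = allowed in). Names and rules below are purely illustrative.
Policy = Callable[[Post], bool]

def strict_instance(post: Post) -> bool:
    # One community bans a specific vocabulary outright.
    banned = {"slur_a", "slur_b", "dogwhistle_x"}
    return not any(word in post.text.lower() for word in banned)

def context_instance(post: Post) -> bool:
    # Another authority draws the line elsewhere: it also refuses
    # posts from authors its own moderators have already flagged.
    flagged_authors = {"troll@edgelord.example"}
    return post.author not in flagged_authors and strict_instance(post)

INSTANCES: dict[str, Policy] = {
    "artcollective.example": strict_instance,
    "solidarity.example": context_instance,
}

def federate(post: Post) -> dict[str, bool]:
    """Each instance decides independently; disagreement is a
    feature, not a bug -- there is no central policy to capture."""
    return {name: policy(post) for name, policy in INSTANCES.items()}

if __name__ == "__main__":
    post = Post(author="troll@edgelord.example", text="just asking questions")
    print(federate(post))
    # -> {'artcollective.example': True, 'solidarity.example': False}
```

The point of the sketch is that the same post gets different answers from different authorities, and a troll who tunes their behavior to slip past one policy still trips another.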
I mean, this isn’t actually that complicated; we just live in a moment where we have materially disempowered the artists and creatives who understand this kind of thing implicitly.
“embed the values, aims, and interests of mostly white, mostly male technologists working in mostly profit-driven enterprises.”
This is the correct take, I think. Algorithms could (and can) produce much better outcomes than human-driven systems. But when they’re wielded by capitalists, that’s not what they’re built to do. Humans wielded by capitalists suck, but they also (often) have compassion, conscience, and some awareness of the outcomes they might be contributing to, so they don’t always make decisions or take action exactly the way the capitalists would have them do. Algorithms, on the other hand, do exactly what they’re told to do, exactly the way they’re told to do it, no matter who is telling them or why. A computer will NEVER have a problem sending children to death camps, if that’s what it’s programmed to do.
In today’s world, the credo must become “Seize the means of computation!”