The best part of the fediverse is that anyone can run their own server. The downside of this is that anyone can easily create hordes of fake accounts, as I will now demonstrate.
Fighting fake accounts is hard and most implementations do not currently have an effective way of filtering out fake accounts. I’m sure that the developers will step in if this becomes a bigger problem. Until then, remember that votes are just a number.
It could be implemented on both the server and the client, with the client trusting the server most of the time and spot checking occasionally to keep the server honest.
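The spot-checking idea could be sketched roughly like this — a hypothetical client that trusts server-reported totals most of the time, but occasionally re-tallies the raw votes for a random sample of posts. All names and data shapes here are illustrative, not anything Lemmy actually exposes:

```python
import random

# Hypothetical sketch: trust server-reported vote totals, but
# occasionally recount the raw votes for a random sample of posts
# to keep the server honest. Names and data shapes are made up.

def spot_check(server_totals, raw_votes, sample_rate=0.1, rng=random):
    """Return the post ids whose server-claimed total disagrees
    with a recount of the raw votes (+1 / -1 per vote)."""
    mismatches = []
    for post_id, claimed in server_totals.items():
        if rng.random() > sample_rate:
            continue  # trust the server most of the time
        recount = sum(raw_votes.get(post_id, []))
        if recount != claimed:
            mismatches.append(post_id)
    return mismatches

server_totals = {"post/1": 3, "post/2": 5}               # what the server claims
raw_votes = {"post/1": [1, 1, 1], "post/2": [1, 1, -1]}  # the actual votes
bad = spot_check(server_totals, raw_votes, sample_rate=1.0)
print(bad)  # "post/2" claims 5 but recounts to 1
```

With `sample_rate=1.0` every post is checked; in practice a small rate keeps the cost negligible while still making sustained cheating risky.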
The origins of upvotes and downvotes are already revealed on objects on Lemmy and most other fediverse platforms. However, this is not an absolute requirement; there are cryptographic solutions that allow verifying vote aggregation without identifying vote origins, but they are mathematically expensive.
Given that Lemmy isn’t that popular yet, how large would the payload and computational cost be for the votes on highly active threads? For example, a thread in !fediverse@lemmy.world … 1.5k votes with 960 comments, or the highly active https://lemmy.world/post/1033769 (3k votes, 1081 comments) from earlier this week.
It’s nothing. You don’t recompute everything on each page refresh. Your client slurps in the data, computes reputation totals over time, and discards the old raw data when your local cache is full.
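The "keep the total, drop the raw data" step might look something like this — a minimal sketch with a bounded cache, where vote events are folded into a running total before eviction (all names are illustrative):

```python
from collections import deque

# Illustrative sketch: raw vote events live in a bounded local cache;
# each event's contribution is folded into a running reputation total
# when it is recorded, so evicting old raw data loses nothing.

class ReputationCache:
    def __init__(self, max_events=1000):
        self.events = deque()   # recent raw (user, delta) events
        self.max_events = max_events
        self.totals = {}        # user -> accumulated reputation

    def record(self, user, delta):
        self.events.append((user, delta))
        self.totals[user] = self.totals.get(user, 0) + delta
        if len(self.events) > self.max_events:
            self.events.popleft()   # discard raw data, keep the total

    def reputation(self, user):
        return self.totals.get(user, 0)

cache = ReputationCache(max_events=2)
for _ in range(5):
    cache.record("alice", 1)
# the total survives even though most raw events were evicted
print(cache.reputation("alice"), len(cache.events))  # 5 2
```

The point is that per-refresh cost is O(new events), not O(all historical votes).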
Historical daily data gets packaged, compressed, and cross-signed by multiple high-reputation entities.
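A rough sketch of that packaging step, under stated assumptions: HMAC stands in for real public-key signatures (e.g. Ed25519), and the signer names, keys, and data layout are all invented for illustration:

```python
import json, zlib, hmac, hashlib

# Sketch: package a day's vote data, compress it, and have several
# high-reputation entities each sign it. HMAC is a stand-in for a
# real asymmetric signature scheme; keys and names are made up.

def package_day(day, votes):
    blob = json.dumps({"day": day, "votes": votes}, sort_keys=True).encode()
    return zlib.compress(blob)

def sign(package, key):
    return hmac.new(key, package, hashlib.sha256).hexdigest()

def cross_sign(package, signer_keys):
    # each high-reputation entity adds its own signature over the package
    return {name: sign(package, key) for name, key in signer_keys.items()}

package = package_day("2023-07-01", [["alice", "post/1", 1]])
signers = {"instance-a": b"key-a", "instance-b": b"key-b"}
sigs = cross_sign(package, signers)

# anyone holding the keys can check every signature against the package
ok = all(hmac.compare_digest(sig, sign(package, signers[name]))
         for name, sig in sigs.items())
print(len(sigs), ok)
```

Requiring signatures from several independent entities means a single compromised signer can't silently rewrite history.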
When there are doubts about a user’s history, your client drills down into those historical packages and reconstitutes the user’s history to recalculate their reputation.
Whenever a client does that work, it publishes the result, signed with its private key, and that becomes a web-of-trust data point for the entire network.
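The drill-down and republish steps together might look like this sketch: reconstitute a user's history from archived daily packages, recompute their reputation, then sign and publish the result as a web-of-trust data point. As above, HMAC stands in for a real private-key signature, and every name here is hypothetical:

```python
import json, zlib, hmac, hashlib

# Sketch: rebuild a suspect user's history from compressed daily
# packages, recompute their reputation, and publish a signed claim.
# HMAC is a stand-in for a real signature; names are illustrative.

def unpack(package):
    return json.loads(zlib.decompress(package))

def recompute_reputation(user, packages):
    total = 0
    for pkg in packages:
        for voter, target, delta in unpack(pkg)["votes"]:
            if target == user:
                total += delta
    return total

def publish_attestation(user, reputation, private_key):
    claim = json.dumps({"user": user, "reputation": reputation},
                       sort_keys=True).encode()
    sig = hmac.new(private_key, claim, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}   # a web-of-trust data point

day1 = zlib.compress(json.dumps(
    {"day": "2023-07-01",
     "votes": [["a", "suspect", 1], ["b", "suspect", 1]]}).encode())
day2 = zlib.compress(json.dumps(
    {"day": "2023-07-02", "votes": [["c", "suspect", -1]]}).encode())

rep = recompute_reputation("suspect", [day1, day2])
point = publish_attestation("suspect", rep, b"my-private-key")
print(rep)  # 1
```

Other clients that trust this signer can then accept the attestation instead of redoing the drill-down themselves.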
Only the clients and the network matter; servers are just untrustworthy temporary caches.
Any solution that only works because the platform is small and that doesn’t scale is a bad solution though.