- cross-posted to:
- snoocalypse@lemmy.ml
Google executives acknowledged this month they need to do a better job surfacing user-generated content after the recent Reddit blackouts.
Legitimately, the mega corps are the least of the problems with Google search these days. Once you get past the ads and sponsored content at the top, you get tons of blogspam written solely to maximize SEO and rack up page views. This was bad before generative AI, but now people can generate whole websites on “the best impact hammer” or “how to buy solar panels” without even paying a shitty copywriter. Google is literally unusable for anything like that. I have to go watch 10 YouTube videos to get an idea, and even some of THOSE are text-to-speech product spec regurgitators, again just content farming for affiliate links.
The internet is just fucking awful these days. That’s why people look for Reddit links. Reddit was its own community for a very long time, generating content and curating the good content generated elsewhere. It was a filter for all the bullshit filler, but Google indexes everything without nearly as good a separation of quality from affiliate spam as Reddit has.
Yeah this, it’s demented.
I will google something specific that I know is on the internet and it comes back with ten ridiculously off-topic AI spam blogs and “no further results.”
It’s more important than ever then to make sure that this place stays a place for people, and not bullshit.
> I have to go watch 10 YouTube videos to get an idea, and even some of THOSE are text to speech product spec regurgitators, again just content farming for affiliate links.
Not to mention the removal of dislikes on YouTube, which makes it even HARDER to find quality tutorial-type videos.
First we ditched Twitter for Mastodon, now we’re ditching Reddit for Lemmy, and sooner or later we’ll be ditching Youtube for Peertube.
Ever since dislikes were removed I use a plugin that shows the ratio of likes to views to determine if a video is worth watching.
Most of the time, if the likes-to-views ratio is >= 2%, it’s an okay vid.
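That heuristic is simple enough to sketch. A minimal version, assuming you already have the like and view counts (the function name and the 2% threshold are just illustrative):

```python
def worth_watching(likes: int, views: int, threshold: float = 0.02) -> bool:
    """Rough quality heuristic: flag a video as okay when at least
    ~2% of viewers liked it. Hypothetical helper, for illustration."""
    if views == 0:
        return False  # no data, no verdict
    return likes / views >= threshold

# 25k likes on 1M views is a 2.5% ratio, so it passes;
# 5k likes on 1M views is 0.5%, so it doesn't.
print(worth_watching(25_000, 1_000_000))
print(worth_watching(5_000, 1_000_000))
```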
My understanding is the plugin is alright (I have it too), but it’s increasingly inaccurate, especially for videos uploaded after it was created. I believe it took a snapshot of YouTube’s data from before the dislikes were removed, then adds the thumbs up/down of the plugin’s own users and extrapolates trends from the very limited data it has coming in.
The real solution would be YouTube showing the scores again, but I guess their stupid corporate videos getting BTFO was too much for them.
The plugin you’re mentioning is based on dislikes, and yes, it is very inaccurate. The one I mentioned works off the ratio of likes to view count, so the accuracy is always there; it’s a different way of going about it.
I agree that YouTube just needs to bring the dislike button back, it’s a pain trying to find these alternative ways to know if a video is good when the data is there. It’s so greedy of them, outright harming user experience for profit.
There’s a browser plug-in for that.
Which isn’t entirely accurate, if at all. It extrapolates the dislikes from its own database, i.e. users who have it installed. Compared to the entire user base of YouTube, that is an incredibly tiny sample size.
You need a much, much smaller sample than you think. Estimates for YouTube’s monthly unique visitors range from ~2 billion to ~2.7 billion. For a 5% margin of error at a 99.9% confidence level, you’d only need to sample 1083 people to get an accurate estimate.
I’m positive that extension has more than 1000 users.
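That arithmetic checks out against the standard sample-size formula for estimating a proportion, using the worst-case p = 0.5 (the conventional assumption when the true rate is unknown). A quick sketch:

```python
from math import ceil
from statistics import NormalDist

def sample_size(margin_of_error: float, confidence: float, p: float = 0.5) -> int:
    """Minimum n to estimate a proportion within +/- margin_of_error.
    Uses n = (z / E)^2 * p * (1 - p). With a population in the billions,
    the finite-population correction is negligible, so it's omitted."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # two-sided z-score
    return ceil((z / margin_of_error) ** 2 * p * (1 - p))

print(sample_size(0.05, 0.999))  # → 1083, matching the figure above
```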
Don’t you also need to worry about your sample population being biased? You’d only be sampling people who sought out a dislike plugin, these people might be much more likely to dislike a video. Is there any way to account for that?
You’d have to have a separate cohort of non-plugin users and another with a sampling of both, I think. Run some regressions on those data and I think you’d be able to tease out any bias that exists.
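A toy version of that two-cohort idea, with made-up numbers: if you know the dislike rate among plugin users and the rate in a mixed cohort whose plugin-user fraction you also know, you can solve for the non-plugin rate directly (a regression generalizes this to more covariates). All rates below are hypothetical:

```python
def non_plugin_rate(mixed_rate: float, plugin_rate: float, plugin_fraction: float) -> float:
    """Solve mixed_rate = f * plugin_rate + (1 - f) * non_plugin_rate
    for the non-plugin dislike rate. All inputs are hypothetical."""
    return (mixed_rate - plugin_fraction * plugin_rate) / (1 - plugin_fraction)

# Say plugin users dislike 10% of videos, a mixed cohort shows 6%,
# and 20% of that cohort has the plugin installed:
rate = non_plugin_rate(0.06, 0.10, 0.20)
print(rate)  # ≈ 0.05: plugin users dislike twice as often in this toy example
```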