Still an understatement, it deserves it and more.
I don’t even like turn-based games. I don’t like most high fantasy. But holy moly, what a ride BG3 is.
I’m just gonna be pissed if their mixed support of modding (due to WotC) kills the modding community. If Skyrim and Rimworld can have a whole universe of fan content, BG3 should too.
It’s still everywhere in my news/internet diet.
It’s bleeding, for sure, but it’s big. It’s gone bad. But I think it’s premature to say its collapse is a good thing, because it just won’t go away.
It’s not dead though, it’s still linked to everywhere, from big news to niche communities because it still has that critical mass and inertia.
And I have to be cynical about the Fediverse, but realistically, what replaces it, at least here in the US? Discord? No, thanks; I’d at least rather have information be public.
I’m speaking as someone who has never used Twitter, but I can’t ignore it, as much as I’d like to.
The behavior is configurable just like it is on linux, UAC can be set to require a password every time.
But I think it’s not set this way by default because many users don’t remember their passwords, lol. If you think I’m kidding, you should meet my family…
Also, scripts can do plenty without elevation, on linux or Windows.
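For reference, the UAC prompt behavior lives in a registry value; this is a minimal config sketch (the key and value names are real Windows policy settings, but double-check the value meanings for your Windows version before applying):

```
:: ConsentPromptBehaviorAdmin = 1 means "prompt for credentials on the secure
:: desktop", i.e. admins must type a password for every elevation, like sudo.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
    /v ConsentPromptBehaviorAdmin /t REG_DWORD /d 1 /f
```

The default (prompt for consent, no password) is the click-through dialog most people know.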
The problem is that splitting models up over a network, even over LAN, is not super efficient. The entire set of weights has to be run through for every half-word (token) generated.
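To put rough numbers on that, here’s a back-of-envelope sketch (my own illustration; the host counts and latency figures are made-up assumptions, not measurements of Petals): in pipeline-parallel decoding, each new token has to pass through every host’s shard in sequence, so network hops stack up per token.

```python
def decode_tokens_per_sec(n_hosts: int, shard_compute_ms: float, net_rtt_ms: float) -> float:
    """Rough upper bound on generation speed when each token must traverse
    every pipeline stage (host) in order, paying one network hop per stage."""
    per_token_ms = n_hosts * (shard_compute_ms + net_rtt_ms)
    return 1000.0 / per_token_ms

# Single local GPU, no network hops: compute time only.
print(decode_tokens_per_sec(1, 50, 0))    # 20.0 tok/s
# 8 hosts with ~30 ms round trips between them (assumed numbers).
print(decode_tokens_per_sec(8, 20, 30))   # 2.5 tok/s
```

Even with generous per-host compute, the serial network hops dominate, which is why distributed inference lags a single box so badly.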
And the other problem is that Petals just can’t keep up with the crazy dev pace of the LLM community. Honestly they should dump it and fork or contribute to llama.cpp or exllama, as TBH no one wants to split up Llama 2 (or even Llama 3) 70B and be a generation or two behind, on a base instruct model instead of a finetune.
Even the horde has very few hosts relative to users, even though hosting a small model on a 6GB GPU would get you lots of karma.
The diffusion community is very different: the output is one image, and even the largest open models are much smaller. LoRA usage is also standardized there, while it is not in LLM land.
TBH this is a great space for modding and local LLM/LLM “hordes”
^
Futurama had it right, spammers are the ultimate destroyers.
Then the Lemmy title is misleading, no? Isn’t that against the rules?
Please ask him, tape it, and don’t let the campaign managers talk him out of it.
+1
Never attribute to malice that which is adequately explained by wanting to make money.
You wouldn’t steal a car…
Hmm, what if the shadowbanning is ‘soft’? Like if bot comments are locked at a low negative number and hidden by default, that would take away most exposure but let them keep rambling away.
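A minimal sketch of what that could look like (all names and the threshold here are hypothetical, not any real Lemmy mechanism):

```python
# Hypothetical "soft" shadowban: the bot still sees its own comments and can
# keep posting, but its displayed score is clamped so low that default client
# settings collapse the comments for everyone else.

HIDE_THRESHOLD = -5  # assumption: clients hide comments at or below this score

def displayed_score(raw_score: int, author_is_soft_banned: bool) -> int:
    """Clamp a soft-banned author's score so it never rises above the hide threshold."""
    if author_is_soft_banned:
        return min(raw_score, HIDE_THRESHOLD)
    return raw_score

def is_hidden_by_default(raw_score: int, author_is_soft_banned: bool) -> bool:
    return displayed_score(raw_score, author_is_soft_banned) <= HIDE_THRESHOLD
```

The nice property is that the bot’s own API calls still succeed and its comments still exist, so a naive self-check wouldn’t notice anything wrong.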
Top 50% of the population still.
After all, they wrote a review.
Trap them?
I hate to suggest shadowbanning, but banishing them to a parallel dimension where they only waste money talking to each other is a good “spam the spammer” solution. Bonus points if another bot tries to engage with them, lol.
Do these bots check themselves for shadowbanning? I wonder if there’s a way around that…
This. I’m surprised Lemmy hasn’t already done this, as it’s such a huge glaring issue on Reddit (that they don’t care about, because bots are engagement…)
GPT-4o
It’s kind of hilarious that they’re using American APIs to do this. It would be like them buying Ukrainian weapons when they already have the blueprints for them.
Oh, and as for benchmarks, check the Hugging Face Open LLM Leaderboard. The new one.
But take it with a LARGE grain of salt. Some models game their scores in different ways.
There are more niche benchmarks floating around, such as RULER for long-context performance. Amazon ran a good array of models to test their Mistral finetune: https://huggingface.co/aws-prototyping/MegaBeam-Mistral-7B-512k
Honestly I would get away from ollama. I don’t like it for a number of reasons, including:

- Suboptimal quants
- Suboptimal settings
- Limited model selection (as opposed to just browsing Hugging Face)
- Sometimes suboptimal performance compared to kobold.cpp, especially if you are quantizing the cache, double especially if you are not on a Mac
- Frankly, a lot of attention squatting/riding off llama.cpp’s development without contributing a ton back
- Rumblings of a closed-source project

I could go on and on, including some behavior I just didn’t like from the devs, but I think I’ll stop, as it’s really not that bad.
I hate turn-based combat too, but it was super enjoyable in co-op. And it’s quite good for being turn-based.
It’s also real-time outside of combat, FYI.
For solo, I’d probably get the mod that automates your companions, and reduce the difficulty to your taste to compensate.