

I’m starting to have a sneaking suspicion that 24 GB of VRAM isn’t making it onto mainstream cards because they don’t want people running AI models locally. The moment you can expect the modern gamer’s computer to have that kind of local compute is the moment they stop getting to slurp up all of your data.
Lossless Scaling (on Steam) has also shown HUGE promise in dual-GPU setups. I’ve seen some impressive results from people piping their Nvidia card’s output into an Intel GPU (on-die or discrete) and dedicating that second GPU to the upscaling.