• taanegl@beehaw.org · 1 year ago

        An open-source, locally run LLM that runs on a GPU or dedicated open PCIe hardware and never touches the cloud…

    • PixxlMan@lemmy.world · 1 year ago

      To be fair, people don’t know what they want until they get it. In 2005, people would have asked for faster flip phones, not smartphones.

      I don’t have much faith in current-gen AI assistants actually being useful, but the fact that no one has asked for them doesn’t necessarily mean much.