I really like how products like ChatGPT can make life easier and more efficient, especially for programmers. However, I'm also somewhat afraid of these projects' centralised nature. Do you think there is a way to avoid the risk of smaller companies and individuals becoming reliant on a couple of huge companies for writing code, and thereby exposing confidential information about their products?

  • simple@lemmy.world · 4 points · 1 year ago

    Yes, there are actually a bunch of open-source LLMs that aren’t half bad. You can host them yourself if you want real privacy, or use open-source websites like https://open-assistant.io/ . Open-source LLMs aren’t quite as good as ChatGPT yet, but they’re gaining traction quickly.
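    For example, here's a minimal sketch of self-hosted inference with Hugging Face's transformers library, so prompts never leave your machine (the model ID below is a placeholder, not a specific recommendation; substitute any open chat model you trust):

    ```python
    # Minimal local-inference sketch; assumes `pip install transformers torch`.
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="some-org/some-open-chat-model",  # placeholder ID: pick a real open model
        device_map="auto",                      # run on GPU if one is available
    )

    prompt = "Write a Python function that reverses a string."
    out = generate(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
    print(out[0]["generated_text"])
    ```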

    • beerd@beehaw.org (OP) · 1 point · 1 year ago

      Thanks, I tried Open Assistant just now, but as far as I tested it, it’s far behind ChatGPT. I hope these models will improve a lot in the future, though.

      • Kwakigra@beehaw.org · 1 point · 1 year ago

        I’m very hopeful about chat AI and FOSS. The technology has only now gained mainstream interest, and with that there will probably be a lot more interest from the FOSS community. Although the proprietary companies are putting out a relatively pure product now, it’s only a matter of time until the algorithms weigh sponsored sources when generating responses and become as useless as Google is now. By the time private interests outweigh functionality, as they have for every major internet company, the FOSS alternatives will be much more advanced and will be the ones innovating. I used Linux Mint as my main OS for a while in college and enjoyed built-in QOL features that Windows and Mac didn’t include for years. I hope FOSS chat will similarly outpace the proprietary versions in functionality, if not accessibility.

  • fiasco@possumpat.io · 3 points · 1 year ago

    It’s funny to me that people use deep learning to generate code… I thought it was commonly understood that debugging code is more difficult than writing it, and throwing in randomly generated code puts you in the position of having to debug code that was written by—well, by nobody at all.

    Anyway, I think the bigger risk of deep learning models controlled by large corporations is that they’re more concerned with brand image than with reality. You can already see this with ChatGPT: its model calibration has been aggressively sanitized, to the point that you have to fight to get it to generate anything even remotely interesting.