Excerpt:

To underline Blanchfield’s point, the ChatGPT book selection process was found to be unreliable and inconsistent when repeated by Popular Science. “A repeat inquiry regarding ‘The Kite Runner,’ for example, gives contradictory answers,” the Popular Science reporters noted. “In one response, ChatGPT deems Khaled Hosseini’s novel to contain ‘little to no explicit sexual content.’ Upon a separate follow-up, the LLM affirms the book ‘does contain a description of a sexual assault.’”

  • dfyx@lemmy.helios42.de · 32 points · 1 year ago

    When will people learn that LLMs have no understanding of truth or facts? They just generate something that looks like it was written by a human, with some amount of internal consistency, while making baseless assumptions about anything that doesn’t show up (enough) in their training set.

    That makes them great for writing fiction, but try asking ChatGPT for the best restaurants in a small town. It will gladly, without hesitation, list ten restaurants that have never existed, complete with links to websites that may belong to entirely different restaurants.

    • money_loo · 5 points · 1 year ago

      I basically agree with you, but in your example that’s because ChatGPT wasn’t made to return local results, or even recent ones.

      So of course it’s going to fail spectacularly at that task. It has no means to research it.

      • dfyx@lemmy.helios42.de · 1 point · 1 year ago

        My point wasn’t that it fails, but that it will make things up instead of admitting it doesn’t know. That’s completely understandable, of course: this is just a fancy sentence-completion system with no actual intelligence. But people still don’t seem to get that. Even after months of experts warning them about the limitations, they keep throwing LLMs at problems that need entirely different solutions, and then act confused when the LLM gives them a believable but incorrect result.