Excerpt:

To underline Blanchfield’s point, the ChatGPT book selection process was found to be unreliable and inconsistent when repeated by Popular Science. “A repeat inquiry regarding ‘The Kite Runner,’ for example, gives contradictory answers,” the Popular Science reporters noted. “In one response, ChatGPT deems Khaled Hosseini’s novel to contain ‘little to no explicit sexual content.’ Upon a separate follow-up, the LLM affirms the book ‘does contain a description of a sexual assault.’”

  • money_loo · 1 year ago

    I basically agree with you, but for your example, that’s because ChatGPT wasn’t made to return local results, or even recent ones.

    So of course it’s going to fail spectacularly at that task. It has no means to research it.

    • dfyx@lemmy.helios42.de · 1 year ago

      My point wasn’t that it fails, but that it will make things up instead of admitting it doesn’t know. It’s of course completely understandable: this is just a fancy sentence-completion system with no actual intelligence. But people still don’t seem to get that. Even after months of experts warning them about its limitations, they keep throwing LLMs at problems that need entirely different solutions, and then act confused when the LLM gives them a believable but incorrect result.
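
      To make the “sentence completion” point concrete, here is a minimal toy sketch (plain Python; the tokens and probabilities are invented for illustration, not taken from any real model) of why the same prompt can yield contradictory answers: generation samples each next token from a probability distribution instead of looking up a fact.

          import random

          # Hypothetical next-token distribution after a prompt like
          # "Does this book contain explicit content? Answer:".
          # These numbers are made up for the demo.
          next_token_probs = {
              "yes": 0.45,
              "no": 0.40,
              "unclear": 0.15,
          }

          def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
              """Sample one token; temperature rescales the distribution.

              Raising each probability to 1/temperature and renormalizing is
              equivalent to softmax(logits / temperature): higher temperature
              flattens the distribution, lower temperature sharpens it.
              """
              tokens = list(probs)
              weights = [p ** (1.0 / temperature) for p in probs.values()]
              return random.choices(tokens, weights=weights, k=1)[0]

          # Ask the "same question" five times: the answers can contradict
          # each other even though nothing about the model changed.
          for i in range(5):
              print(f"run {i + 1}: {sample_next_token(next_token_probs)}")

      With a distribution this close to 50/50, the “yes” and “no” answers flip between runs roughly half the time, which is the same pattern as the contradictory Kite Runner responses in the excerpt above.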