  • They don’t reason; they’re stochastic parrots. Their internal mechanisms are well understood, and I have no idea where you got the notion that the folks building these don’t know how they work. It can be hard to predict or explain why an LLM produced a given output, because of the huge training corpus and the statistical nature of neural nets in general.

    LLMs work the same way as any other neural net, just with massive training sets (see the toy sketch below). They have no reasoning capabilities of any kind; we are naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding output.

    If you would like the perspective of real scientists instead of a “tech bro” like me, look up Emily Bender and Timnit Gebru. They are experts without a vested interest in the massively overblown hype about what LLMs are actually capable of.
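
    To make the “statistical prediction, not reasoning” point concrete, here is a toy Python sketch of the same generation loop: pick the next token by sampling from probabilities learned over a corpus. The tiny corpus and helper names are purely illustrative; real LLMs learn the probabilities with a huge transformer network rather than a count table, but generation is still just repeated next-token sampling.

    ```python
    import random
    from collections import defaultdict

    # Toy illustration (not how production LLMs are built): generation is
    # sampling the next token from statistics learned over a training corpus.
    # There is no reasoning step anywhere in the loop.

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # "Training": count how often each token follows each other token (a bigram model).
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(prev):
        """Sample the next token in proportion to how often it followed `prev`."""
        options = counts.get(prev)
        if not options:                      # token never seen with a successor
            return None
        tokens, weights = zip(*options.items())
        return random.choices(tokens, weights=weights)[0]

    # "Generation": start from a prompt token and keep sampling.
    token = "the"
    output = [token]
    for _ in range(8):
        token = next_token(token)
        if token is None:
            break
        output.append(token)

    print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
    ```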