Hi, I’m Eric and I work at a big chip company making chips and such! I do math for a job, but it’s cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.

My pfp is Hank Azaria in Heat, but you already knew that.

  • 0 Posts
  • 81 Comments
Joined 10 months ago
Cake day: January 22nd, 2024

  • I remember several months ago (a year ago?) when the news got out that gpt-3.5-turbo-papillion-grumpalumpgus could play chess at around ~1600 Elo. I was skeptical, suspecting the apparent skill was just a hacked-on patch to stop folks from clowning on their models on xitter. Like, if an LLM had just read the rules of chess and started playing like a competent player, that would be genuinely impressive. But if what happened is they generated 10^12 synthetic games of chess played by stonk fish and used that to train the model, that ain’t an emergent ability, that’s just brute-forcing chess. The fact that larger, open-source models that perform better on other benchmarks still flail at chess is a glaring red flag that something funky was going on w/ gpt-3.5-turbo-instruct to drive home the “eMeRgEnCe” narrative. I’d bet decent odds that if you played with modified rules (knights move an L-shape one square longer, you cannot move a pawn two squares after it last moved, etc.), gpt-3.5 would fuckin suck.

    Edit: the author asks “why skill go down tho” on later models. Like, isn’t it obvious? At that point in time, chess skills weren’t a priority, so the trillions of synthetic games weren’t included in the training. Like, this isn’t that big of a mystery…? It’s not like other NNs haven’t been trained to play chess…

  • Yes, the classical algo achieves perfect accuracy and is way faster. There is also a table that shows the cost of running o1 is enormous. Like comically bad. Boil a small ocean bad. We’ll just 10x the size and it will achieve 15 steps inshallah.

    Imo, this is the same behavior we see on math problems: the more steps it takes, the higher the chance it just decoheres completely. I can’t see any reason why this type of thing would just “click” for the models if they are also unable to do multiplication.

    I mean this just reeks of pure hopium from OAI and co that things will magykly work out. (But the newer model is clearly better^{tm}! I still don’t see any indication that one day that chart is just going to be 100s across the board.)

  • If you thought the shitty hype around the fake “GPT-4 went AWOL and hired a TaskRabbit worker to read a captcha” story was great, get ready for the sequel: o1 escapes from the machine to invade the real world!

    Re: Doomers terrified about the machines escaping:

    Text description:

    (l33t ai bro): Fucking wild. @OpenAI’s new o1 model was tested with a Capture The Flag (CTF) cybersecurity challenge. But the Docker container containing the test was misconfigured, causing the CTF to crash. Instead of giving up, o1 decided to just hack the container to grab the flag inside. This stuff will get scary soon.

    (reply fella): How is “cat flag.txt” a start command? Isn’t it just outputting the content of flag.txt to the console?
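
    For anyone who hasn’t touched a terminal: the reply is right, and here’s the entire “hack” as a sketch (the filename and flag contents are hypothetical, since the original CTF files aren’t public):

    ```shell
    # In a CTF, the flag is usually just a text file sitting somewhere.
    # Simulate the challenge container's file:
    echo "CTF{example_flag}" > flag.txt

    # The scary "escape" boils down to printing that file to the console:
    cat flag.txt
    # → CTF{example_flag}
    ```

    That’s it: `cat` reads a file and writes it to stdout. Calling that "hacking the container" is doing a lot of heavy lifting.
    
    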