Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post; there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking about redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

  • Sailor Sega Saturn@awful.systems · 7 months ago
    Orange Site denizen plays Dr. LLM: https://news.ycombinator.com/item?id=40331850

    Show NH [sic]: “data-to-paper” - autonomous stepwise LLM-driven research

    data-to-paper is a framework for systematically navigating the power of AI to perform complete end-to-end scientific research, starting from raw data and concluding with comprehensive, transparent, and human-verifiable scientific papers

    The example “research paper” was some useless fluff about diabetes, based off an existing data set (read: actual work produced by actual humans), and mad-libs.

    The study identifies an inverse correlation between physical activity and fruit and vegetable intake with diabetes occurrence, while higher BMI is positively correlated

    I’m too sleepy and statistics-impaired to check how nonsensical the regression “analysis” or findings are, so instead let’s check out the references (read: the actual humans who were plagiarized to make this fluff)!

    Reference #5

    [5] T. Schnurr, Hermina Jakupovi, Germn D. Carrasquilla, L. ngquist, N. Grarup, T. Srensen, A. Tjnneland, K. Overvad, O. Pedersen, T. Hansen, and T. Kilpelinen. Obesity, unfavourable lifestyle and genetic risk of type 2 diabetes: a case-cohort study. Diabetologia, 63:1324–1332, 2020.

    This, incredibly, managed to mangle all the non-English-alphabet names:

    Hermina Jakupović, Germán D. Carrasquilla, Lars Ängquist, Thorkild I. A. Sørensen, Anne Tjønneland, Tuomas O. Kilpeläinen

    I guess AI has an easier time advancing science than producing a PDF with non-ASCII text in it.

    • froztbyte@awful.systems · 6 months ago
      This, incredibly, managed to mangle all the non-English-alphabet names:

      hmm. I can guess at a few reasons this could be happening: the model builders “normalizing” everything to flat ASCII during preprocessing, or something similar happening at the training-data stage (because of the previously-referenced RLHF datamills employing only people with specific localized dialects rather than the wider locally-used languages), etc. a quick sketch of what that first failure mode looks like is below

      wonder if this particular thing is a confluence of those, or just one of them
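
      for illustration, a minimal Python sketch of that first failure mode (purely a guess at the mechanism, not anything the data-to-paper pipeline is confirmed to do): forcing names to 7-bit ASCII and silently dropping whatever doesn’t fit reproduces exactly the garbling in reference #5, while an accent-stripping normalization would at least keep most of the base letters

          # illustrative guess at the failure mode -- not the actual
          # data-to-paper code, which nobody in this thread has inspected
          import unicodedata

          names = [
              "Hermina Jakupović",
              "Germán D. Carrasquilla",
              "Lars Ängquist",
              "Thorkild I. A. Sørensen",
              "Anne Tjønneland",
              "Tuomas O. Kilpeläinen",
          ]

          for name in names:
              # drop everything outside 7-bit ASCII: "Jakupović" -> "Jakupovi",
              # "Ängquist" -> "ngquist" -- the exact garbling seen in reference #5
              dropped = name.encode("ascii", errors="ignore").decode("ascii")

              # gentler alternative: NFKD-decompose, then strip combining accents,
              # which keeps base letters ("Jakupovic", "Angquist"); note "ø" has no
              # decomposition, so "Sørensen" passes through unchanged here
              stripped = "".join(c for c in unicodedata.normalize("NFKD", name)
                                 if not unicodedata.combining(c))

              print(f"{name}: dropped={dropped!r} stripped={stripped!r}")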

      • David Gerard@awful.systems · 6 months ago
        have you ever met an English-native dev who didn’t need to be trained out of assuming the world is 7-bit ASCII?

          • BurgersMcSlopshot@awful.systems · 6 months ago
            My Jesus wanted characters for drawing borders and playing card suits, which is why He handed down to us Code Page 437. Using the upper 128 characters for things like vowels with funny marks on them is catholic heresy (nuts to Latin 1, down with Unicode).