• 1 Post
  • 1.09K Comments
Joined 2 years ago
Cake day: September 7th, 2023




  • Time to start a new business to distract people from the fact that the previous one is not living up to expectations. Musk style. (A degree of vertical integration will also be involved.)

    E: ‘“The Grok integration with X has made everyone jealous,” says someone working at another big AI lab. “Especially how people create viral tweets by getting it to say something stupid.”’ These are deeply unserious people: billions of dollars just to build AI Dril, which was already a thing. Is there some weird nerd somewhere who said something like ‘culture is downstream from viral memes’ or something just as dumb?

    Related to that: I have basically quit Twitter, and only visit to check whether a quoted thing is real etc, but damn, the site has gotten bad. How does anybody use it when so many replies are either bot replies, AI replies, or people using their checkmark to push their one-word replies to the top? Especially bigger accounts/viral tweets just get swarmed with shit. It has a bit of the ‘comment section of an abandoned blog’ feeling to it. And this is the validation the AI company craves?



  • Yeah, it is fucking scary. My sympathies and solidarity. Small thing, but perhaps a good reminder for everybody to check their opsec (especially if you are an administrator of things: check which data you do not need, or which should not fall into the current (or future) US or other fascist administration’s hands. And remember, while spying on Americans by the various orgs is illegal, trading with other countries for information on them is not. Don’t forget backups). It is horrible that it has come to this. One small point of light is that they are fools and can’t shut up or be subtle. At least that has a chance to motivate more people to do something, so that if it gets to the worst (and it is getting close) more people will actually resist (sadly a lot of people will have to realize that the point is not to win but to impose costs/friction, and any wins are a bonus, which is a horrible realization in itself). Really hope this doesn’t make things worse mentally btw.


  • I don’t know how new it is, but it first dropped on my radar about a year ago from listening to the Risky Business cybersecurity podcast, not to be confused with the recent (and bafflingly named (*)) podcast ‘Risky Business with Nate Silver and Maria Konnikova’ (**) by dweeb Nate Silver. So I don’t know how long it has been going on in the wild, and I’m talking about the Windows key + R attack method, not the GitHub comments; no idea how long they have used comments as a vector. And yes, that part is also good: the added trust of GitHub plus quite an effective attack is clever. Shouldn’t work on Real Nerds however.

    *: The name means that at least one of ‘they didn’t [know|care|google] about the decades-old cybersecurity podcast before naming their own’ is true. Any of those is odd.

    **: In addition to the above, the tagline of that podcast is ‘a weekly podcast about making better decisions’. Look inwards, Nate, look inwards.


  • For a while now I have wondered how many of those “the LLMs all fail at this very basic task” problems that suddenly get fixed are not fixed by the model getting better, but by a band-aid solution which solves that specific problem. (Putting another LLM in front of the input to detect the problem and then sending it to a model that is trained on that specific problem would be a band-aid solution btw, it is just adding more under the trenchcoat; a rough sketch of what that looks like is below.) And even if somebody were to answer this question, the well is so poisoned I’m not sure I could believe them.
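
    To make the trenchcoat point concrete, here is a minimal sketch of that band-aid routing pattern, in plain Python with entirely made-up names (no real vendor API; the "detector" and "patch" functions are just stand-ins for models): a cheap check in front of the input catches one known failure case and hands it to a narrowly patched handler, while everything else still goes to the unchanged base model.

      def looks_like_letter_counting(prompt: str) -> bool:
          # Stands in for the LLM placed in front of the input to detect the
          # known failure case ("how many r in strawberry" style questions).
          lowered = prompt.lower()
          return "how many" in lowered and " in " in lowered

      def letter_counting_patch(prompt: str) -> str:
          # Stands in for the model trained on that one specific problem.
          words = prompt.lower().rstrip("?.! ").replace("'", "").split()
          letter, word = words[2], words[-1]  # e.g. "how many r in strawberry"
          return f"There are {word.count(letter)} '{letter}'s in '{word}'."

      def base_model(prompt: str) -> str:
          # Stands in for the unchanged general-purpose model.
          return f"[base model answer to: {prompt}]"

      def answer(prompt: str) -> str:
          # The trenchcoat: route the known failure case to the patch,
          # pass everything else through untouched.
          if looks_like_letter_counting(prompt):
              return letter_counting_patch(prompt)
          return base_model(prompt)

      print(answer("how many r in strawberry"))  # handled by the patch
      print(answer("summarise this for me"))     # handled by the base model

    The point of the sketch is that the benchmark question now ‘works’ without the underlying model having changed at all, which is exactly why it is so hard to tell the two apart from the outside.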