  • The mod has been going continuously since 2005, so they’ve had a lot of time to build up assets! There are a lot of snazzy new features, but everything still aims to integrate with Freelancer’s original setting and lore. Mixed success, but it works more often than not. There’s a community Discord if you want to take a look around or ask questions.

  • I’d just suggest that this amounts to a de facto ban under the current requirements.

    If bots are going to be command-triggered and require pre-approval by individual community moderators, I think it would be prudent to include an index of registered bots and their commands in the community info pages.

    Right now I can’t think of any reasonable way for a Beehaw user to know which bots are operational and what their commands are. If bots must be command-triggered but there’s no way to discover which ones exist, why approve them in the first place?
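    To make the index idea concrete: the community info page could carry a small machine-readable manifest that apps render into a command list. A toy sketch in Python - every name here is hypothetical, nothing like this exists in Lemmy or Beehaw today:

    ```python
    # Hypothetical manifest a community could publish so users can
    # discover which bots are approved and how to trigger them.
    # Purely illustrative - not an existing Lemmy or Beehaw feature.
    REGISTERED_BOTS = [
        {
            "account": "@remindbot@example.instance",
            "approved_by": "community moderators",
            "commands": {"!remindme <duration>": "replies with a reminder after <duration>"},
        },
        {
            "account": "@mirrorbot@example.instance",
            "approved_by": "community moderators",
            "commands": {"!mirror <url>": "posts an archived copy of <url>"},
        },
    ]

    def render_bot_index(bots: list[dict]) -> str:
        """Render the manifest as the plain-text index a sidebar could show."""
        lines = []
        for bot in bots:
            lines.append(f"{bot['account']} (approved by {bot['approved_by']})")
            for trigger, effect in bot["commands"].items():
                lines.append(f"  {trigger} - {effect}")
        return "\n".join(lines)

    print(render_bot_index(REGISTERED_BOTS))
    ```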

  • Bots can be extremely useful, and the flexibility of where and how bots could work was one of the things that made Reddit popular. Before, well, y’know.

    Bespoke bots can also let particular communities develop local features or functionality. I assume Lemmy’s mod tools are fairly bare-bones right now too, so I suspect someone, somewhere is already working on an automod toolkit.

    Bots should be allowed, but they must be flagged. I don’t know if it’s a default Lemmy option, but the app I use has a toggle to hide bot accounts if you don’t want to see them - there’s a rough sketch of how a client can do that at the end of this comment.

    That said, I would very much prefer bots to be restricted to making comments rather than posts. Some communities have bots that automatically post article links, and they completely blanket feeds sorted by New until you block the account.
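    For what it’s worth, the bot flag surfaces in Lemmy’s public API, so a hide-bots filter is cheap for any client to build. A rough sketch - the endpoint and the creator.bot_account field match my reading of the v3 API, but treat both as assumptions:

    ```python
    # Rough sketch of a client-side "hide bot accounts" filter using
    # Lemmy's HTTP API. Endpoint and field names are my reading of the
    # v3 API - treat them as assumptions, not gospel.
    import requests

    INSTANCE = "https://lemmy.example"  # hypothetical instance

    def posts_without_bots(community: str, limit: int = 20) -> list[dict]:
        resp = requests.get(
            f"{INSTANCE}/api/v3/post/list",
            params={"community_name": community, "sort": "New", "limit": limit},
            timeout=10,
        )
        resp.raise_for_status()
        views = resp.json()["posts"]
        # Each post view carries its creator; drop anything flagged as a bot.
        return [v for v in views if not v["creator"].get("bot_account", False)]

    for view in posts_without_bots("technology"):
        print(view["post"]["name"])
    ```

    Of course, this only works if bot operators actually set the flag - which is exactly why “must be flagged” needs to be a rule rather than a suggestion.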

  • “since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be when it comes to media fluency of the public. Provenance labels do not necessarily mention whether the content is true or accurate.”

    Interesting approach, but I can’t help but feel the actual utility is fairly limited. For example, I could see it being useful for large corporate creative studios that have contractual or union agreements governing AI content usage.

    If they’re using enterprise tools that build in C2PA, it’d give them a metadata audit trail showing exactly when and where AI was used.

    That’s useless in precisely the context where AI content flagging matters most, though. As the quote says, the provenance data is applied at the point of creation, and in a world with open-source forks of generation models there’s no way to ensure provenance tagging is built in.

    The technology is most needed to combat AI-powered misinformation campaigns, yet that is exactly the use case it’s least able to address.
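    To illustrate why presence-checking is the ceiling here: C2PA manifests ride in JUMBF boxes, carried in APP11 segments for JPEGs, so you can heuristically test whether a file even claims provenance. A sketch - the segment-scanning shortcut is mine, and real validation should go through c2patool or an official C2PA SDK:

    ```python
    # Heuristic presence test for a C2PA manifest in a JPEG. C2PA stores
    # its manifest in JUMBF boxes inside APP11 (0xFFEB) segments, so we
    # scan those segments for the "c2pa" label. This detects a *claim* of
    # provenance; it verifies nothing, and absence proves nothing - which
    # is exactly the opt-in problem.
    import struct
    import sys

    def claims_c2pa(path: str) -> bool:
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":      # no SOI marker: not a JPEG
            return False
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:
                break
            marker = data[i + 1]
            if marker == 0xDA:           # SOS: image data begins, stop
                break
            (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
            payload = data[i + 4:i + 2 + seg_len]
            if marker == 0xEB and b"c2pa" in payload:  # APP11 + label
                return True
            i += 2 + seg_len
        return False

    if __name__ == "__main__":
        print(claims_c2pa(sys.argv[1]))
    ```

    A bad actor’s model simply doesn’t embed the segment, and this returns False exactly as it would for an ordinary photo.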

  • So conspiracy probably isn’t the right term, although there are common factors that are causing - or at least influencing - a lot of these trends.

    With inflation a major issue, central banks are reacting by raising interest rates. Those hikes make credit and borrowing more expensive.

    This is significant because central bank rates had been near zero since 2008, with quantitative easing (effectively money printing) pumping billions of additional dollars into asset markets - the stock markets in particular.

    With borrowed cash effectively free, the result was an explosion of activity from hedge funds and venture investors willing to take huge risks on speculative projects. That fuelled the massive boom in tech startups through the 2010s.

    The trouble is, many of those startups weren’t profitable - they were ‘potentially profitable’ and fuelled by credit. Or they had the underpants-gnome model of profit, where the means and mechanism of the ‘???’ stage would be figured out later (WeWork).

    Investors were happy to fund those losses to create products that controlled markets (Uber) or amassed huge userbases that could be flipped from potential to profit in the future (Reddit).

    Only now rates have gone up, and credit is suddenly expensive. Business models that rely on running at a loss no longer work, and the investors who own chunks of those businesses are insisting on actual returns - the back-of-envelope numbers at the end of this comment show how sharply debt-service costs jump.

    You can see the effects all over social media and tech, but Reddit (urgently needs to get profitable for a stock launch, and needs the stock launch for funding) and Twitter (a basket-case debt load at the worst possible time to be carrying debt) are the most obvious examples.

    Techbro austerity means worse products for consumers, or aggressive monetization policies that users will likely dislike. So not a conspiracy, but years of reckless investment by hedge funds that have been caught with their pants down by interest rate risk.
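    The back-of-envelope numbers (entirely invented) make the squeeze obvious:

    ```python
    # Toy figures (invented) showing why near-zero rates made loss-making
    # models viable and why the hikes broke them.
    def annual_interest(principal: float, rate: float) -> float:
        return principal * rate

    borrowed = 500_000_000           # $500M of cheap funding
    for rate in (0.005, 0.055):      # near-zero era vs post-hike era
        print(f"at {rate:.1%}: ${annual_interest(borrowed, rate):,.0f}/year to service")
    # at 0.5%: $2,500,000/year
    # at 5.5%: $27,500,000/year - an 11x jump in the cost of waiting
    # for '???' to turn into profit.
    ```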


  • Data Protection shouldn’t be a relevant issue - at least not in the sense that it forces them to delete accounts. When you process personal data under the GDPR, you have to identify a lawful basis for doing so.

    I assume transactions through the eStore would be handled under the contract basis (Article 6(1)(b)), with hosting the game in your library forming part of the contractual relationship. That would let them maintain an account for as long as the contractual relationship persists.

    That basically means the GDPR doesn’t force them to close an account; they close accounts because their own policies say so. Those policies live in their T&Cs, so things fundamentally circle back to whether the T&Cs are legitimate and lawful.

    It’s possible a data subject could raise a claim for damages under the GDPR, on the grounds that deleting their account is a breach of contract amounting to an availability data breach.
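    To make the lawful-basis point concrete, here’s what the relevant entry in a records-of-processing register might look like - the structure and wording are my own toy illustration, not any official format:

    ```python
    # Toy "record of processing activities" entry illustrating the
    # reasoning above. Field names and structure are purely illustrative.
    PROCESSING_RECORD = {
        "activity": "maintain customer account and game library",
        "lawful_basis": "contract (GDPR Art. 6(1)(b))",
        "data_categories": ["account details", "purchase history", "game library"],
        "retention": "for the life of the contractual relationship",
        # Closure is a business/T&C decision, not a GDPR requirement:
        "deletion_trigger": "account closed under the store's own T&Cs, "
                            "or an erasure request once the contract ends",
    }

    for field, value in PROCESSING_RECORD.items():
        print(f"{field}: {value}")
    ```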