• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • I agree, there would have to be measures in place to prevent the “promote to the level of incompetence” style of meritocracy that is prevalent already. There needs to be a system of recognizing that the person in any given position has the skills and abilities that make them awesome at that specific job, and rewarding them appropriately without requiring them to justify it by taking on tasks that they’re not suited for.

    The idea that workers should always be gunning for a promotion is one of the worst parts of what people think a meritocracy is. But how else do you determine how much they should be paid?




  • Thanks for the link, that sounds like exactly what I was asking for but gone way wrong!

    What do you think is missing to prevent these kinds of outcomes? Is AI simply incapable of categorizing topics as ‘harmful to humans’ on its own, without a human’s explicit guidance? It seems like the philosophical nuances of things like consent, dependence, or death would be difficult for a machine to learn if it isn’t itself sensitive to them. How do you train empathy in something so inherently unlike us?



  • Yeah I haven’t played with it much but it feels like ChatGPT is already getting pretty close to this kind of functionality. It makes me wonder what’s missing to take it to the next level over something like Siri or Alexa. Maybe it needs to be more proactive than just waiting for prompts?

    I’d be interested to know whether current AI would be able to recognize the symptoms of different mental health issues and use the known strategies for dealing with them. Like, if a user shows signs of anxiety or depression, could the AI use CBT tools to conversationally challenge those thought processes without it really feeling like therapy? I guess, just like self-driving cars, this kind of thing would be legally murky if it went awry and accidentally ended up convincing someone to commit suicide or something haha.