For really useless call centers this makes sense.
I have no doubt that an ML chatbot is perfectly capable of being as useless as an untrained human first-level supporter with a language barrier.
And the dude in the article basically admits that’s what his call center was like:
Suumit Shah never liked his company’s customer service team. His agents gave generic responses to clients’ issues. Faced with difficult problems, they often sounded stumped, he said.
So evidently good support outcomes were never the goal.
- works 24/7
- no emotional damage
- easy to train
- cheap as hell
- concurrent, fast service possible
This was pretty much the very first thing to be replaced by AI. I’m pretty sure it’d be a way nicer experience for the customers.
Doubt. These large language models can’t produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn’t in their dataset they can’t help, just like all those cheap Indian call centers operating off a script. It’s just a bigger script. They’ll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it’s an upgrade for their shit automated call systems.
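If it helps to see what “advanced autocomplete” means concretely, here’s a minimal sketch using the Hugging Face transformers library, with GPT-2 standing in for a much larger model (the model, prompt, and settings here are purely illustrative, not anything from the article):

```python
# Minimal sketch: at its core, an LLM extends a prompt with the tokens it
# judges most likely, based purely on patterns in its training data.
# GPT-2 is used as a small stand-in; it has no access to any order system,
# account database, or anything else outside its weights and the prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Thank you for contacting support. Your issue with the delivery will"
result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

Whatever it prints is a plausible-sounding continuation, not a looked-up fact, which is the point being made above.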
Most call centers have multiple levels of teams, where the lower ones are just reading off a script and make up the majority. You don’t have to replace every single one to implement AI. It’s gonna be the same for a lot of other jobs as well, and many will lose jobs.
I know how AI works inside. AI isn’t going to completely replace that kind of work, sure, but it’ll also be the end of said cheap Indian call centers.
Who also don’t have the information or data that I need.
It isn’t going to completely replace whole business departments, only 90% of them, right now.
In five years it’s going to be 100%.
I’d say at best it’s an upgrade to scripted customer service. A lot of the scripted ones are slower than AI and often have more strongly accented staff, making it more difficult for the customer to understand the script entry being read back to them, which leads to more frustration.
If your problem falls outside the realm of the script, I just hope it recognises the script isn’t solving the issue and redirects you to a human. Oftentimes I’ve noticed ChatGPT not learning from the current conversation (if you ask it about this, it will deny it). In that scenario it just regurgitates the same three scripts back to me when I tell it it’s wrong. For me this isn’t so bad, as I can just turn to a search engine, but in a customer service scenario it would be extremely frustrating.
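For what it’s worth, the escalation behaviour being hoped for here isn’t hard to sketch. The following is a hypothetical example only; the function names, the similarity threshold, and the hand-off message are all made up, not any vendor’s actual API:

```python
# Hypothetical sketch of the hoped-for behaviour: if the bot keeps giving
# near-identical answers, stop looping and hand the customer to a human.
from difflib import SequenceMatcher

def too_similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Rough check for the bot regurgitating the same scripted answer."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def next_reply(bot_answer: str, previous_answers: list[str]) -> str:
    if any(too_similar(bot_answer, prev) for prev in previous_answers):
        # The script clearly isn't solving the issue; escalate instead of repeating it.
        return "Let me connect you with a human agent."
    previous_answers.append(bot_answer)
    return bot_answer
```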
Check out this recent paper that finds some evidence that LLMs aren’t just stochastic parrots. They actually develop internal models of things.
You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many areas of work, and it already does.
And the way customer support staff are abused in the US is so dehumanizing. Nobody should have to go through that wrestling ring.
A lot of that abuse is because customer service has been gutted to the point that it is infuriating to a vast number of customers calling about what should be basic matters. Not that it’s justified; it’s just that it doesn’t necessarily have to be such a draining job if not for the greed that puts them in that situation.
There was a recent episode of Ai no Idenshi, an anime that covers such topics. The customer service episode was nuts and hits on these points so well.
It’s a great show for anyone interested in fleshing out some of the more mundane topics of AI. I’ve read and watched a lot of sci-fi and it hit some novel stuff for me.
I’m pretty sure it’d be a way nicer experience for the customers.
Lmfao, in what universe? As if trained humans reading off a script they’re not allowed to deviate from isn’t frustrating enough, imagine doing that with a bot that doesn’t even understand what frustration is…
de facto instant reply, if trained right, way more knowledgeable than the human counterparts, no more support center loop… the current experience is such a low bar.
de facto instant reply
Not with a good enough model, no. Not without some ridiculous expense, which is not what this is about.
if trained right, way more knowledgeable than the human counterparts
Support is not only a question of knowledge. Sure, for some support services, they’re basically useless. But that’s not necessarily the humans’ fault; lack of training and lack of means of action are also a part of it. And that’s not going away by replacing the “human” part of the equation.
At best, the first few iterations will be faster at turning you away, and further down the line, once you hit something that’s outside the expected range of issues, it’ll either go with nonsense or just make you go in circles until you’re put through to someone actually able to do something.
Both “properly training people” and “properly training an AI model” cost money, and this is all about cutting costs, not improving user experience. You can bet we’ll see LLMs better trained to politely turn people away long before they’re able to handle random unexpected stuff.
While properly training a model does take a lot of money, it’s probably a lot less money than paying 1.6 million people for any number of years.
Yeah but are you ready for “my grandma used to tell me $10 off coupon codes as I fell asleep…”
Cheap as hell until you flood it with garbage, because there is a dollar amount assigned for every single interaction.
Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.
I’ve worked in this field for 25 years and don’t think that ChatGPT by itself can handle most workloads, even if it’s trained on them.
There are usually transactions which must be done and often ad hoc tasks which end up being the most important things because when things break, you aren’t trained for them.
If you don’t have a feedback loop to solve those issues, your whole business may just break without you knowing.
I think you’re talking about actual support, that knows their tools and can do things.
This article sounds more like it’s about the generic outsourced call center that will never, ever get anything useful done in any case.
I ordered Chipotle for delivery and I got the wrong order. I don’t eat meat so it’s not like I could just say whelp, I’m eating this chicken today I guess.
The only way to report an issue is to chat with their bot. And it is hell. I finally got a voucher for a free entree but what about the delivery fee and the tip back? Impossible.
I felt like Sisyphus.
I waited for the transaction to post and disputed the charge on my card and it credited me back.
There are so many if-and-or-else scenarios that no amount of scraping the world’s libraries lets today’s AI sort them all out.
Cheaper than outsourcing to poor countries with middling English speaking capability.
Coming to call center lines near you: voiced chatbots to replace the ineffective, useless customer support lines that exist today with the same useless outcomes for consumers but endless juggling back and forth without any real resolutions. Let’s make customer service even shittier, again!
If you bought the product we don’t need to worry about losing money anymore bro
Remember when AI was going to make life better for everyone?
Yeah. That shit’ll be the end of us.
AI will make life better for the shareholders.
Hopefully it’ll be the end of capitalism. How is the economic model supposed to function when nobody is working? Where are people supposed to get money from? How is anything going to be taxed?
Realistically though it’ll somehow push capitalism into hyperdrive and enslave the global population under the control of the AI owners.
It didn’t work that well when people were working anyway.
It won’t, for as long as all the power is in the capital owners’ hands.
I see two inevitable problems:
- we outsourced this to you because it was cheaper, if you’re using ChatGPT what do we need you for?
- companies want people to buy stuff, but if you significantly reduce the workforce you also reduce the availability of funds to buy stuff
1. I assume you mean a business that does outsourced customer service, not an internal department.
2. Universal basic income time, or let’s put people to work on creative, innovative applications, not mind-numbing shit.
We don’t need to keep all bullshit jobs around. The printing press putting scribes who copied texts by hand out of jobs was a good thing. This is similar. New jobs will be created, hopefully involving more productive work.
What if we get it to agree to give us stuff for free? Is it a representative of the company or not?
You also have to have a reasonable belief the company representative is authorized to do whatever they’re doing to be entitled to it.
A lot of jobs are just busy work that does nothing and makes nothing. Talking about automating them misses the point of why the jobs exists in the first place.
“I see that you are throwing a ball at a target that is connected to a platform with a human sitting above a tank of water. Here is an AI-generated picture of a random human underwater to sate your needs. Yay! I have made this process 200% more efficient!”
It’s crazy how people seem fundamentally incapable of looking at the big picture and ask themselves things like, “what even is the purpose of society? Is this the best society humanity is able to come up with? What if I am not ready to accept society as it is presented to me, what are my alternatives, do I even have any? What are my obligations towards a society that marginalizes me and treats me like a second or third tier human, without any hope of ever improving my lot?”
Ask people if they would rather be free and get everything they want without having to work for it. The answers you’ll get will boggle your mind.
I’m surprised by the number of workaholics that exist, like why do you want to work so much? Go explore the world, learn things, make things, but people want to work instead?
We’ve been permeated by the idea that “you have to be financially productive to be a decent human” for so long that even people against excessive/useless work still sometimes miss the point of this crazy race toward making more profit regardless of anything else.
Sometimes, reaching the “it works” point is enough, but higher-ups never stop there. It always has to be “better/more”.
You still need to employ some humans as a backup when the AI catastrophically fucks up, but for the most part it makes sense. Not all jobs need to continue to exist.
Working conditions in this industry are not great. The turnover rate can reach 80% sometimes. It can be a difficult, stressful and low paid job that few people enjoy. At the same time, the demand for this work keeps increasing as more and more of consumer activity shifts online and remote. It seems to me that the technology may be a net benefit in this case. The public and its regulatory authority should, however, keep a close eye on developments to make sure humans are not left behind.
This is just the smallest tip of the iceberg.
I’ve been working with GPT-4 since the week it came out, and I guarantee you that even if it never became any more advanced, it could already put at least 30% of the white-collar workforce out of work.
The only reason it hasn’t is because companies have barely started to comprehend what it can do.
Within 5 years the entire world will have been revolutionized by this technology. Jobs will evaporate faster than anyone is talking about.
If you’re very smart, and you begin to use GPT-4 to write the tools that will replace you, then you MIGHT have 10 good years left in this economy before humans are all but obsolete.
If you’re not staying up nights, scared shitless by what’s coming, it’s because you don’t really understand what GPT-4 can do.
You sound like one of those idiots preaching the apocalypse from a street corner. Humans obsolete in 10 years? Yeah sure buddy, right after all those profits trickle down. This is just another tool, an interesting one to be sure, but still just a tool. If you’re staying up nights worrying about this, you don’t really understand the technology, or maybe you’re just worried someone is going to realize you don’t do shit.
I work with AI stuff. I’m just getting into LLMs, but I have been doing SD work since the public release last year. In just over a year, SD capability has gone from drawing a passable image of a cat at 512x512 pixels, which required a reasonably powerful graphics card, to creating 4k images on the same cards that are nearly indistinguishable from actual photos/paintings. It is the single fastest adoption and development of a technology I have seen in my 30 years in tech.

I have actually been tracking the job market and the impacts this will have, and he is not all that far off in his estimate. The current push in AI development is a nearly ubiquitous existential threat to employment as we view it in the society of the United States. Everyone is on the chopping block, and you’d best believe that the C-level executives want to eliminate as many positions as possible. Labor is viewed as an atrocious expense and the first place where cuts should be made.

I challenge you to come up with a list of 10 jobs that each employ more than 100,000 people in the country that you think would be safe from AI, and I will see how many of them I can find information on someone who is already actively working on eliminating them.
Companies don’t want employees, only paying customers. If they can eliminate employees, they will. Hence self-checkouts in grocers, pay at the pump for gas stations, order kiosks at McDonald’s, mobile ordering for virtually every fast food place, the list goes on and on. These are all recent non-AI replacements that have cut into the employment prospects for people.
Pretty sure, nah. But time will tell. I will believe it when I see it. AI has been coming for jobs since before Terminator. It will replace thousands of jobs, just like:
Washerwomen, lamplighters, human calculators, and all the work that farm labourers used to do. Automation comes for us all.
Some jobs shouldn’t exist anyway. God the amount of office workers moving numbers from one tab to another and getting paid a bucket load.
However, nursing and elderly care, psychology, counselling, mindfulness teaching, and other jobs that are actually useful for society are probably safe. Yes, AI can do some of all these things, but it can’t do them with empathy. Empathy is key to most of these human-focused roles. We need more people in these roles and fewer working just to make more money.
But a lot of jobs did get automated away. And serious consequences did occur from that. Sometimes places rebound from it, but sometimes they did not. And at some point… there will be more people than jobs for them to do, as we continue automating.
In the end, the base foundation for capitalism will be broken, and we will be in an economic crisis of unprecedented scales.
Capitalism doesn’t work. Pretty sure everyone knows that.
We don’t want to work. We can automate away every job. Then we can be free to actually pursue what we want. Humanity isn’t based on how many shiny trinkets we have.
Yes, but the problem is, we are stuck with the system until we force a societal level change. Capitalism works plenty well enough for the powerful, and they aren’t willing to let go that easily.
“It won’t take people’s jobs! And also people’s jobs are stupid and they deserve to have them taken away!”
What jobs are “useful for society” has no impact on what jobs are actually available to society, only what is deemed “profitable” has any place in this capitalist dystopia. Nice idealism though, I hope it won’t sting too bad having it shattered growing up.
I’m grown up. It will remove jobs. I just said that. Jobs that could be automated regardless. Obviously AI will remove jobs, just like computers did. But not ones that we actually need. Pretty easy to understand, or do you need to grow up to understand that?
If you’re staying up nights worrying about this, you don’t really understand the technology
And you think managers, the people deciding who gets replaced by AI, understand the technology?
This is part of the problem. They don’t, and won’t, fully understand the technology or its limitations or long-term impacts. They will understand that the salesman pushing the AI product told them it could eliminate 5-10% of their workforce. Whether or not the product can actually do that effectively won’t matter, they’ll still buy it, implement it, and fire a bunch of people.
I think once SAP and Jira start implementing a lot more AI and make it simpler to use, it could cut a lot of corporate jobs. Not the hands-on stuff, but a lot of the simpler jobs like purchasing and inventory staff could be shrunk down to fewer people and fewer cubicles. At least that’s what we talked about at our company: how everyone is adjusting to the new world, especially advertising, now that everything will be served to you by a bot instead of a search.
You sound like one of those peasants standing on street corners saying, “horses replaced with fuming metal boxes in 10 years? Hah, yeah, sure buddy, right after we put a man on the moon! Getoutta here, you loon!”
There is a video from CGP Grey titled Humans Need Not Apply that is extremely relevant. It was posted 9 years ago. It’s a great video, I highly recommend everyone check it out.
Thanks for sharing. If you look at that list of job types at the end, it’s easy to see which jobs could get replaced within a reasonably short amount of time. Greed will always find a way to profit from whatever development arises. If they have 1 mountain of gold, they want 2 mountains of gold.
I’m a senior Linux sysadmin who’s been following the evolution of AI over this past year just like you, and just like you I’ve been spending my days and nights tinkering with it nonstop, and I have come to more or less the same conclusion as you have.
The downvotes are from people who haven’t used the AI, and who are still in the Internet 1.0 mindset. How people still don’t get just how revolutionary this technology is, is beyond me. But yeah, in a few years that’ll be evident enough, time will show.
I feel sorry for these folks. They have no idea what’s about to happen.
@flossdaily@lemmy.world
@anarchy79@lemmy.world
@SirGolan@lemmy.sdf.org
I quite agree. And, from SirGolan’s ref, submitted on 3 Oct 2023: Language Models Represent Space and Time.
From the summary: “Our analysis demonstrates that modern LLMs acquire structured knowledge about fundamental dimensions such as space and time, supporting the view that they learn not merely superficial statistics, but literal world models.”
https://arxiv.org/abs/2310.02207
What makes it worse (in my opinion) is that LLMs are just one step in this development (which is exponential and not limited by human capabilities).
For example :
Numenta launches brain-based NuPIC to make AI processing up to 100 times more efficient
https://lemmy.world/post/4941919
Since I forgot what I was saying here 4 months ago, I read the whole thread again. Basically, I agree with what you said then (4 months ago), and I added a couple of references/ideas to make this point stronger.
Also, I have no idea why you received this notification only today, 4 months after the discussion. I guess the Lemmy software is buggy; on my account I didn’t receive notifications in a few instances where someone replied to my comments, and I only happened to see those replies because I was rereading everything.
take care, 👍