This was pretty much the very first thing to be replaced by AI. I’m pretty sure it’d be a way nicer experience for the customers.
And the way customer support staff can be/is abused in the US is so dehumanizing. Nobody should have to go through that wrestling ring.
A lot of that abuse is because customer service has been gutted to the point that it is infuriating to a vast number of customers calling about what should be basic matters. Not that it’s justified, it’s just that it doesn’t necessarily have to be such a draining job if not for the greed that puts them in that situation.
There was a recent episode of Ai no Idenshi, an anime about exactly these topics. The customer service episode was nuts and hits on these points so well.
It’s a great show for anyone interested in fleshing out some of the more mundane topics of AI. I’ve read and watched a lot of sci-fi and it still hit some novel stuff for me.
https://reddit.com/r/anime/s/0uSwOo9jBd
Doubt. These large language models can’t produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn’t in their dataset they can’t help, just like all those cheap Indian call centers operating off a script. It’s just a bigger script. They’ll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it’s an upgrade for their shit automated call systems.
Most call centers have multi-level teams where the lower tiers are just reading off a script and make up the majority. You don’t have to replace every single one to implement AI. It’s gonna be the same for a lot of other jobs as well, and many will lose jobs.
I’d say at best it’s an upgrade to scripted customer service. A lot of the scripted ones are slower than AI, and often have people with stronger accents, making it more difficult for the customer to understand the script entry being read back to them, leading to more frustration.
If your problem falls outside the realm of the script, I just hope it recognises the script isn’t solving the issue and redirects you to a human. Oftentimes I’ve noticed ChatGPT not learning from the current conversation (though if you ask it about this, it will deny doing it). In that scenario it just regurgitates the same 3 scripts back to me when I tell it it’s wrong. In my case that isn’t so bad, since I can just turn to a search engine, but in a customer service scenario it would be extremely frustrating.
I know how AI works on the inside. AI isn’t going to completely replace such things, yes, but it’ll also be the end of said cheap Indian call centers.
Who also don’t have the information or data that I need.
It isn’t going to completely replace whole business departments, only 90% of them, right now.
In five years it’s going to be 100%.
Check out this recent paper that finds some evidence that LLMs aren’t just stochastic parrots. They actually develop internal models of things.
Your description of AI limitations sounds a lot like the human limitations of the reps we deal with every day. Sure, if some outlier situation comes up then it has to go to a human, but let’s be honest - those calls are usually going to a manager anyway, so I’m not seeing your argument. An escalation is an escalation. The article itself even says it’s not a literal 100% replacement of humans.
You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many work areas, and it already does.
Lmfao, in what universe? As if trained humans reading off a script they’re not allowed to deviate from isn’t frustrating enough, imagine doing that with a bot that doesn’t even understand what frustration is…
De facto instant replies, if trained right way more knowledgeable than the human counterparts, no more support center loop… the current experience is such a low bar.
Not with a good enough model, no. Not without some ridiculous expense, which is not what this is about.
Support is not only a question of knowledge. Sure, for some support services, they’re basically useless. But that’s not necessarily the humans’ fault; lack of training and lack of means of action are also part of it. And that’s not going away by replacing the “human” part of the equation.
At best, the first few iterations will be faster at brushing you off, and further down the line, once you hit something outside the expected range of issues, it’ll either spout nonsense or just make you go in circles until you’re put through to someone actually able to do something.
Both “properly training people” and “properly training an AI model” cost money, and this is all about cutting costs, not improving user experience. You can bet we’ll see LLMs better trained to politely turn people away long before they’re able to handle random unexpected stuff.
While properly training a model does take a lot of money, it’s probably a lot less money than paying 1.6 million people for any number of years.
Yeah but are you ready for “my grandma used to tell me $10 off coupon codes as I fell asleep…”
Cheap as hell until you flood it with garbage, because there’s a dollar amount assigned to every single interaction.
Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.