• Kazumara@feddit.de · 1 year ago

    For really useless call centers this makes sense.

    I have no doubt that an ML chatbot is perfectly capable of being as useless as an untrained human first-level supporter with a language barrier.

    And the dude in the article basically admits that’s what his call center was like:

    Suumit Shah never liked his company’s customer service team. His agents gave generic responses to clients’ issues. Faced with difficult problems, they often sounded stumped, he said.

    So evidently good support outcomes were never the goal.

    • mr_tyler_durden@lemmy.world · 1 year ago

      Agreed. Should we also mourn for the horse and buggy drivers? The gas station attendants? And the whole slew of jobs that have become obsolete over the centuries?

      I do think we need something like UBI, and I’m not ignoring the lost jobs, but shit jobs shouldn’t have to exist. I’ll mourn for the workers but not for the job. Continuing to employ people to do thankless/hard/dangerous/etc. jobs is just silly.

  • Praise Idleness@sh.itjust.works · 1 year ago
    • works 24/7
    • no emotional damage
    • easy to train
    • cheap as hell
    • concurrent, fast service possible

    This was pretty much the very first thing to be replaced by AI. I’m pretty sure it’d be a way nicer experience for the customers.

    • GALM@lemmy.world · 1 year ago

      And the way customer support staff can be/is abused in the US is so dehumanizing. Nobody should have to go through that wrestling ring.

      • fluxion@lemmy.world · 1 year ago

        A lot of that abuse is because customer service has been gutted to the point that it is infuriating to a vast number of customers calling about what should be basic matters. Not that the abuse is justified; it’s just that it wouldn’t necessarily be such a draining job if not for the greed that puts them in that situation.

        • BlanketsWithSmallpox@lemmy.world · 1 year ago

          There was a recent episode of Ai no Idenshi, an anime covering such topics. The customer service episode was nuts and hits on these points so well.

          It’s a great show for anyone interested in fleshing out some of the more mundane topics of AI. I’ve read and watched a lot of sci-fi and it hit some novel stuff for me.

          https://reddit.com/r/anime/s/0uSwOo9jBd

    • applebusch@lemmy.world · 1 year ago

      Doubt. These large language models can’t produce anything outside their dataset. Everything they do is derivative, pretty much by definition. Maybe they can mix and match things they were trained on but at the end of the day they are stupid text predictors, like an advanced version of the autocomplete on your phone. If the information they need to solve your problem isn’t in their dataset they can’t help, just like all those cheap Indian call centers operating off a script. It’s just a bigger script. They’ll still need people to help with outlier problems. All this does is add another layer of annoying unhelpful bullshit between a person with a problem and the person who can actually help them. Which just makes people more pissed and abusive. At best it’s an upgrade for their shit automated call systems.
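      The “advanced autocomplete” comparison can be made concrete with a toy sketch (purely illustrative; real LLMs are enormously larger, but the core move of predicting the next token from training statistics is the same in spirit — including the failure mode where anything outside the dataset draws a blank):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word-pair frequencies: the whole 'model' is just statistics of its dataset."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower seen in training, or None if the word is unseen."""
    if word not in counts:
        return None  # outside the dataset: the predictor has nothing to offer
    return counts[word].most_common(1)[0][0]

corpus = "please reset your password . please restart your router . please reset your password"
model = train_bigram(corpus)
print(predict_next(model, "reset"))    # "your" — the most common continuation in training
print(predict_next(model, "billing"))  # None — never seen, so no answer at all
```

      A trained chatbot is this idea scaled up by many orders of magnitude, but the “bigger script” objection above is about exactly this: it can only recombine what was in its training data.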

      • RogueBanana@lemmy.zip · 1 year ago

        Most call centers have multi-level teams where the lower tiers are just reading off a script and make up the majority. You don’t have to replace every single one to implement AI. It’s gonna be the same for a lot of other jobs as well, and many will lose jobs.

      • thetreesaysbark@sh.itjust.works · 1 year ago

        I’d say at best it’s an upgrade to scripted customer service. A lot of the scripted ones are slower than AI and often staffed by people with stronger accents, making it more difficult for the customer to understand the script entry being read back to them, leading to more frustration.

        If your problem falls outside the realm of the script, I just hope it recognises the script isn’t solving the issue and redirects you to a human. Oftentimes I’ve noticed ChatGPT not learning from the current conversation (if you ask it about this, it will say that it does not do this). In that scenario it just regurgitates the same 3 scripts back to me when I tell it it’s wrong. For me this isn’t so bad, as I can just turn to a search engine, but in a customer service scenario it would be extremely frustrating.
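        The hoped-for behavior — detect that the script isn’t working and hand off — can be sketched in a few lines. This is a hypothetical wrapper, not any vendor’s actual product; `reply_fn` stands in for whatever model or script the company uses, and the handoff rule (escalate once the bot starts repeating itself) is just one plausible heuristic:

```python
def support_bot(reply_fn, messages, max_repeats=2):
    """Run a scripted bot over a conversation, escalating to a human
    once the same reply has been given more than max_repeats times."""
    seen = {}
    transcript = []
    for msg in messages:
        reply = reply_fn(msg)
        seen[reply] = seen.get(reply, 0) + 1
        if seen[reply] > max_repeats:
            # The bot is looping: stop regurgitating and hand off.
            transcript.append("ESCALATE: transferring to a human agent")
            break
        transcript.append(reply)
    return transcript

# A canned "script" that only knows one answer, like the bots described above.
canned = lambda msg: "Have you tried turning it off and on again?"
print(support_bot(canned, ["it's broken", "still broken", "STILL broken"]))
```

        Whether real deployments bother to implement any such escape hatch is, of course, exactly the worry in this thread.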

      • Praise Idleness@sh.itjust.works · 1 year ago

        I know how AI works inside. AI isn’t going to completely replace such things, yes, but it’ll also be the end of said cheap Indian call centers.

        • Ann Archy@lemmy.world · 1 year ago

          It isn’t going to completely replace whole business departments, only 90% of them, right now.

          In five years it’s going to be 100%.

      • guacupado@lemmy.world · 1 year ago

        Your description of AI limitations sounds a lot like the human limitations of the reps we deal with every day. Sure, if some outlier situation comes up then it has to go to a human, but let’s be honest: those calls are usually going to a manager anyway, so I’m not seeing your argument. An escalation is an escalation. The article itself even says it’s not a literal 100% replacement of humans.

      • Ann Archy@lemmy.world · 1 year ago

        You can doubt it all you want; the fact of the matter is that AI is provably more than capable of taking over the roles of humans in many work areas, and it already does.

    • DessertStorms@kbin.social · 1 year ago

      I’m pretty sure it’d be a way nicer experience for the customers.

      Lmfao, in what universe? As if trained humans reading off a script they’re not allowed to deviate from weren’t frustrating enough, imagine doing that with a bot that doesn’t even understand what frustration is.

      • Praise Idleness@sh.itjust.works · 1 year ago

        De facto instant replies; if trained right, way more knowledgeable than the human counterparts; no more support-center loop… the current experience is such a low bar.

        • cley_faye@lemmy.world · 1 year ago

          de facto instant replies

          Not with a good enough model, no. Not without some ridiculous expense, which is not what this is about.

          if trained right, way more knowledgeable than the human counterparts

          Support is not only a question of knowledge. Sure, some support services are basically useless, but that’s not necessarily the humans’ fault; lack of training and lack of means of action are also part of it. And that’s not going away by replacing the “human” part of the equation.

          At best, the first few iterations will be faster at brushing you off, and further down the line, once you hit something outside the expected range of issues, it’ll either go with nonsense or just make you circle around until you’re put through to someone actually able to do something.

          Both “properly training people” and “properly training an AI model” cost money, and this is all about cutting costs, not improving user experience. You can bet we’ll see LLMs better trained to politely turn people away long before they’re able to handle random unexpected stuff.

          • testfactor@lemmy.world · 1 year ago

            While properly training a model does take a lot of money, it’s probably a lot less money than paying 1.6 million people for any number of years.
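            As a back-of-the-envelope comparison (every dollar figure below is an assumption for illustration, not a number from the article; only the 1.6 million headcount comes from the comment above):

```python
# Illustrative arithmetic only: wage and training figures are assumptions.
agents = 1_600_000           # headcount cited above
annual_wage = 6_000          # assumed average annual cost per outsourced agent (USD)
training_cost = 100_000_000  # assumed one-off cost to train/fine-tune a model (USD)

wage_bill = agents * annual_wage
print(f"annual wage bill: ${wage_bill:,}")  # annual wage bill: $9,600,000,000
# Even a very expensive model pays for itself in a small fraction of a year.
print(f"payback period: {training_cost / wage_bill:.3f} years")
```

            The specific numbers hardly matter: the wage bill scales with headcount every year, while training is closer to a one-off cost, which is the asymmetry the comment is pointing at.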

    • philodendron@lemdro.id · 1 year ago

      Yeah but are you ready for “my grandma used to tell me $10 off coupon codes as I fell asleep…”

    • gravitas_deficiency@sh.itjust.works · 1 year ago

      Cheap as hell until you flood it with garbage, because there is a dollar amount assigned for every single interaction.

      Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.

  • realitista@lemm.ee · 1 year ago

    I’ve worked in this field for 25 years and don’t think that ChatGPT by itself can handle most workloads, even if it’s trained on them.

    There are usually transactions which must be done and often ad hoc tasks which end up being the most important things because when things break, you aren’t trained for them.

    If you don’t have a feedback loop to solve those issues, your whole business may just break without you knowing.

    • cley_faye@lemmy.world · 1 year ago

      I think you’re talking about actual support, that knows their tools and can do things.

      This article sounds more like it’s about the generic outsourced call center that will never, ever get anything useful done in any case.

    • RalphFurley@lemmy.world · 1 year ago

      I ordered Chipotle for delivery and I got the wrong order. I don’t eat meat so it’s not like I could just say whelp, I’m eating this chicken today I guess.

      The only way to report an issue is to chat with their bot. And it is hell. I finally got a voucher for a free entree but what about the delivery fee and the tip back? Impossible.

      I felt like Sisyphus.

      I waited for the transaction to post and disputed the charge on my card and it credited me back.

      There are so many if-and-or-else scenarios that no amount of scraping the world’s libraries will let today’s AI sort them out.

      • realitista@lemm.ee · 1 year ago

        Yes, these kinds of transactions really need to be hand-coded to be handled well. LLMs are very poorly suited to this kind of thing (though I doubt you were dealing with an LLM at Chipotle just yet).

    • guacupado@lemmy.world · 1 year ago

      Maybe you work at a decent place but in my experience you’re really overestimating the people who answer calls and give generic responses.

  • BobTheBoozer@lemmy.world · 1 year ago

    I see two inevitable problems:

    1. We outsourced this to you because it was cheaper; if you’re using ChatGPT, what do we need you for?

    2. Companies want people to buy stuff, but if you significantly reduce the workforce, you also reduce the availability of funds to buy stuff.

    • HappycamperNZ@lemmy.world · 1 year ago

      1. I assume you mean a business that does outsourced customer service, not an internal department.

      2. Universal basic income time, or let’s put people to work on creative, innovative applications, not mind-numbing shit.

    • 👁️👄👁️@lemm.ee · 1 year ago

      We don’t need to keep all bullshit jobs around. The printing press putting handwritten scribes out of jobs was a good thing. This is similar. New jobs will be created that will hopefully provide more productive work.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · 1 year ago

    On one hand, they’re crap jobs. On the other hand, in most economies we have crap jobs not because they’re necessary for productivity, but to give us an excuse to pay people to live.

    Maybe if enough jobs are lost to automation, we’ll start to rethink the structure of a society that only allows people to live if they’re useful to a rich person.

    Essentially, we’re just still doing feudalism with extra steps, and it’s high time we cut that nonsense out.

    • 1984@lemmy.today · 1 year ago

      I think once workers can be replaced, there will be some virus that wipes out most of humanity. No point keeping billions of people around if they aren’t needed.

      • Chaotic Entropy@feddit.uk · 1 year ago

        Username checks out… suffice it to say that a time of increasing social unrest is on the way, when it’s even easier for the haves to sideline the have-nots than it already was.

        • 1984@lemmy.today · 1 year ago

          I don’t know, I just think it’s obvious that the rich view ordinary people as useless eaters.

    • Chaotic Entropy@feddit.uk · 1 year ago

      We have crappy jobs because jobs need doing and it was still cheaper to get humans to do them without a substantial loss in functionality. They don’t exist because of some form of social altruism, as evidenced by the fact that as soon as a semi-viable alternative is offered, the jobs are gone.

      With the dynamic shifting to automation, prematurely I would add, employers are seeing a much cheaper way to achieve 80% of what they currently offer.

  • Pratai@lemmy.ca · 1 year ago

    Remember when AI was going to make life better for everyone?

    Yeah. That shit’ll be the end of us.

    • Ann Archy@lemmy.world · 1 year ago

      Hopefully it’ll be the end of capitalism. How is the economic model supposed to function when nobody is working? Where are people supposed to get money from? How is anything going to be taxed?

      Realistically though it’ll somehow push capitalism into hyperdrive and enslave the global population under the control of the AI owners.

  • 👁️👄👁️@lemm.ee · 1 year ago

    You still need to employ some humans as a backup when the AI catastrophically fucks up, but for the most part it makes sense. Not all jobs need to continue to exist.

    • locuester@lemmy.zip · 1 year ago

      Exactly. As the article ends:

      Not every customer service employee should worry about being replaced, but those who simply copy and paste responses are no longer safe, according to Shah.

      “That job is gone,” he said. “100 per cent.”

  • Corhen@lemmy.world · 1 year ago

    Seems like a good way to get the “agent” to agree it’s in the wrong and get a 100% refund.

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · 1 year ago

      I’m interested in whether the AI agent has the power to issue refunds, or at least return authorizations.

      One of the things fascinating to me is that some of the problems humans are bad at handling (such as social engineering) AI tends to be even worse at.

      • Corhen@lemmy.world · 1 year ago

        I mean, if you go to your credit card provider with a copy of the log with their rep, and the rep says “I authorize a refund”, you can at least make the argument.

        Any company scummy enough to trust an AI for this wouldn’t give it the authority, though.

  • xenomor@lemmy.world · 1 year ago

    Working conditions in this industry are not great. The turnover rate can reach 80% sometimes. It can be a difficult, stressful and low paid job that few people enjoy. At the same time, the demand for this work keeps increasing as more and more of consumer activity shifts online and remote. It seems to me that the technology may be a net benefit in this case. The public and its regulatory authority should, however, keep a close eye on developments to make sure humans are not left behind.

  • flossdaily@lemmy.world · 1 year ago

    This is just the smallest tip of the iceberg.

    I’ve been working with gpt-4 since the week it came out, and I guarantee you that even if it never became any more advanced, it could already put at least 30% of the white collar workforce out of business.

    The only reason it hasn’t is because companies have barely started to comprehend what it can do.

    Within 5 years the entire world will have been revolutionized by this technology. Jobs will evaporate faster than anyone is talking about.

    If you’re very smart, and you begin to use gpt-4 to write the tools that will replace you, then you MIGHT have 10 good years left in this economy before humans are all but obsolete.

    If you’re not staying up nights, scared shitless by what’s coming, it’s because you don’t really understand what gpt-4 can do.

    • applebusch@lemmy.world · 1 year ago

      You sound like one of those idiots preaching the apocalypse from a street corner. Humans obsolete in 10 years? Yeah sure buddy, right after all those profits trickle down. This is just another tool, an interesting one to be sure, but still just a tool. If you’re staying up nights worrying about this, you don’t really understand the technology, or maybe you’re just worried someone is going to realize you don’t do shit.

      • BrianTheeBiscuiteer@lemmy.world · 1 year ago

        If you’re staying up nights worrying about this, you don’t really understand the technology

        And you think managers, the people deciding who gets replaced by AI, understand the technology?

        • NaibofTabr@infosec.pub · 1 year ago

          This is part of the problem. They don’t, and won’t, fully understand the technology or its limitations or long-term impacts. They will understand that the salesman pushing the AI product told them it could eliminate 5-10% of their workforce. Whether or not the product can actually do that effectively won’t matter, they’ll still buy it, implement it, and fire a bunch of people.

      • variants@possumpat.io · 1 year ago

        I think once SAP and Jira start implementing a lot more AI and make it simpler to use, it could cut down a lot of corporate jobs. Not the hands-on stuff, but a lot of the simpler roles like purchasing and inventory staff could be shrunk down to fewer people and fewer cubicles. At least that’s what we talked about at our company: how everyone is adjusting to the new world, especially advertising, now that everything will be served to you by a bot instead of a search.

      • Ann Archy@lemmy.world · 1 year ago

        You sound like one of those peasants standing on street corners saying, “horses replaced with fuming metal boxes in 10 years? Hah, yeah, sure buddy, right after we put a man on the moon! Getoutta here, you loon!”

      • flossdaily@lemmy.world · 1 year ago

        Yup. This is why it is vital that we all get behind Universal Basic Income.

        The jobs will leave and they won’t come back. UBI is inevitable, but if we don’t get there soon enough there will be years of suffering and poverty for hundreds of millions.

      • Papanca@lemmy.world · 1 year ago

        Thanks for sharing. If you see that list of types of jobs at the end, it’s easy to see which could get replaced within a reasonably short amount of time. Greed will always find a way to profit from whatever development arises. If they have 1 mountain of gold, they want 2 mountains of gold.

    • Ann Archy@lemmy.world · 1 year ago

      I’m a senior Linux sysadmin who’s been following the evolution of AI over this past year just like you, and just like you I’ve been spending my days and nights tinkering with it non-stop, and I have come to more or less the same conclusion as you have.

      The downvotes are from people who haven’t used the AI, and who are still in the Internet 1.0 mindset. How people still don’t get just how revolutionary this technology is, is beyond me. But yeah, in a few years that’ll be evident enough, time will show.

        • A_A@lemmy.world · 1 year ago

          @flossdaily@lemmy.world
          @anarchy79@lemmy.world
          @SirGolan@lemmy.sdf.org
          I quite agree.

           And, from SirGolan’s ref (submitted 3 Oct 2023): Language Models Represent Space and Time.
           From the abstract: “…Our analysis demonstrates that modern LLMs acquire structured knowledge about fundamental dimensions such as space and time, supporting the view that they learn not merely superficial statistics, but literal world models.”
           https://arxiv.org/abs/2310.02207


          What makes it worse (in my opinion) is that LLMs are just one step in this development (which is exponential and not limited by human capabilities).
          For example :
          Numenta launches brain-based NuPIC to make AI processing up to 100 times more efficient
          https://lemmy.world/post/4941919

            • A_A@lemmy.world · 9 months ago

              Hi @anarchy79@lemmy.world,

              since I forgot what I was saying here 4 months ago, I read the whole thread again, and basically what I said is that I agree with what you said then (4 months ago), adding a couple of references/ideas to make that point stronger.

              Also, I have no idea why you received this notification only today, 4 months after the discussion. I guess the Lemmy software is buggy, since on my account I did not receive notifications in a few instances where someone replied to my comments; I only happened to see those replies because I was reading everything again.

              take care, 👍