

It’s not that obvious. Corporations are investing heavily in automating customer relations. There are metrics for how much work falls back to humans because the machine couldn’t process it, and managers are motivated to improve those metrics and make the humans redundant.
Of course, LLMs are pure garbage that produce more work for everyone and achieve nothing. In business especially, they are a great way to reduce efficiency. Users dumb themselves down, believe any bullshit, drop all critical thinking, and the people on the receiving end of their bullshit have to filter more stupidity than ever.
But as a manager you don’t understand this. A piece of AI code that produces the same result as a piece of human code, or close enough, seems equivalent. Potential side effects are just noise that managers don’t understand or don’t want to hear about.
Managers also don’t understand that AI doesn’t scale. If it can write a Python program to calculate prime numbers, surely it can also write something like Netflix, or a payment processor, right?
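To be fair about the scale gap: the prime-number task is exactly the kind of thing an LLM tends to get right, because the whole problem fits in a dozen self-contained lines. A sketch of what that looks like (my own illustration, not anything from the thread):

    def primes_up_to(limit):
        """Sieve of Eratosthenes: return all primes <= limit."""
        if limit < 2:
            return []
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for n in range(2, int(limit ** 0.5) + 1):
            if sieve[n]:
                # Mark every multiple of n as composite.
                for multiple in range(n * n, limit + 1, n):
                    sieve[multiple] = False
        return [n for n, is_prime in enumerate(sieve) if is_prime]

    print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

No state, no concurrency, no failure modes, no integration with anything else. A payment processor is all of those things, and that’s the part of “scaling” that gets lost.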
Then there’s exactly what you point out. Other managers claim they’re doing it. So there must be something to it.
Once they’ve wasted their budget renting this technology temporarily, cuts have to be made to protect the bottom line.
Maybe AI isn’t replacing your job, but the stupid investment might cost you your job anyway.
It’s also important to realize that a corporation doesn’t need quality work or a quality product to be financially successful. The AI industry itself is the best example.
That sounds concerning. Quick and easy are not attributes I necessarily want in a medical evaluation. But maybe I’m just biased, because I missed out on the more efficient approach.