Mount Sinai and other elite hospitals are pouring millions of dollars into chatbots and AI tools, as doctors and nurses worry the technology will upend their jobs.
Hospital bosses love AI. Doctors and nurses are worried.
All it will take for AI to go away is one serious mistake that injures or kills a patient. The wrongful death and negligence lawsuits would run into the high millions, or even more if the patient is a young child. In my opinion, AI is a very bad idea. I would sooner put my trust in a human being than in AI. I can see AI being a tool to assist a doctor in making a diagnosis, but certainly not to replace doctors or reduce their numbers.
I am kind of - no, I am really - anti-capitalist. In some evil sort of way, if the hospital ‘boss’ decides to replace enough medical professionals with AI and robots and this causes a patient to die from inadequate care, then not a small part of me believes that the hospital boss should pay with their own life. Yeah, I have an anger management problem when it comes to the wealthy.
AI won’t replace doctors, but doctors that use AI might replace doctors that don’t, and I’m ok with that. Keep the human in the loop, by all means, but make use of powerful tooling that might make things better.
I’m of the same mindset. A doctor equipped with all the latest technology will be able to offer a far more accurate diagnosis and custom treatment plan, rather than the traditional “make an educated guess and throw shit at it till something works” approach.
IMO it is a double-edged sword. On the one hand, a doctor who uses AI to flag something they might not have thought of, and who confirms what the AI says before treatment, can get a big benefit. But on the flip side, people leaning too much on it, not verifying the output at all, and taking what it says at face value as if it cannot be wrong will lead to some very bad situations.
I can see most people wanting to pull towards the former, but cost cutting, overworking employees and trying to maximise profits will pull things towards the latter. And ATM I don’t know which force is stronger - we really need to get the profit motives out of our healthcare systems.
I think it’s a more modern version of what we in EMS call “treat the patient, not the monitor.” AKA, if your patient looks like they’re in distress, is having trouble breathing, etc., but you throw them on the monitor to get vitals and it’s reading that everything is within normal levels, don’t just sit back and go, “Well, clearly you are fine, stop saying you can’t breathe, because my little LIFEPAK says otherwise.” Either the monitor is wrong or they’re doing some hardcore compensation to keep themselves within normal ranges, so let’s treat them and not what the computer says.
deleted by creator
Meh, the liability is on the doctor, and once the AI is proven good enough, it’ll move to the company.
Well. In hell you’ll find company. The greed of capitalism is violence; in a fair world it would be punished accordingly.
Doctors aren’t held accountable for their mistakes. They cover shit up. It is only by accident that a patient finds out and is able to sue.
Thing is, human doctors already make a lot of mistakes that cause wrongful deaths. It wouldn’t surprise me if it ends up being similar to the situation we’re seeing with Tesla’s self driving cars, where they clearly have safety issues, but still end up being twice as safe as human drivers.
It probably won’t look like that.
There are already computer programs in place that assist doctors and nurses, like programs that check for drug interactions.
AI will probably be added to assist practitioners rather than replace them at first: a junior general doctor uses AI as a diagnosis tool, or a radiologist uses AI to help diagnose tumors.
Over time, costs go down, the tools get better, and each doctor’s time goes further as the AI gets relied on for more of the grunt work, with the doctor just there to make sure nothing bad happens. By the time they propose outright replacing humans with AI, they’ll have a large body of evidence showing that AI has a better record than people.
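For what it’s worth, the drug-interaction checkers mentioned above are usually simple rule-based lookups, not AI at all. Here’s a toy sketch of the idea — the interaction table is made up for illustration and is NOT real clinical data:

```python
from itertools import combinations

# Illustrative interaction table only -- NOT real clinical reference data.
# Real systems use large, curated databases maintained by pharmacists.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
}

def check_interactions(medications):
    """Return (drug_a, drug_b, warning) for every known interacting
    pair found in the patient's medication list."""
    warnings = []
    for a, b in combinations(medications, 2):
        key = frozenset({a.lower(), b.lower()})
        if key in INTERACTIONS:
            warnings.append((a, b, INTERACTIONS[key]))
    return warnings

print(check_interactions(["Warfarin", "Aspirin", "Metformin"]))
```

The point is that the human stays in the loop: the tool raises a flag, and the prescriber decides what to do with it.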
So, you’re saying we should invent a bad AI to kill children? Consider it done. 🤖