Let’s imagine the possibilities and sketch a theoretical demo of the results based on current knowledge:

1. AI made the process fast and the patient did not die unnecessarily.
2. Same, but the patient died well.
3. Same, but the patient died.
4. Same as 1, 2, or 3, but AI made things slower.
Demo:
Pharmacy: The patient requires amoxicillin for a painful ear infection but is allergic to penicillin.
AI: Sure! You will find penicillin in Aisle 23, box number 5.
Pharmacy: The patient needs amoxicillin, actually.
AI: Sure! The patient must be allergic to more commonly used anti-inflammatory medications.
Pharmacy: Actually, amoxicillin is more of an antibiotic. Where can I find it?
AI: Sure! While you are correct that amoxicillin is an antibiotic, it is a well-studied result that inflammation is reduced after an infection. You can find the inflammation throughout the body, including the region where the infection is located.
Pharmacy: Amoxicillin location!
AI: Sure! Amoxicillin was invented at Beecham Research Laboratories.
I made the mistake of asking GitHub Copilot about a namespacing issue I was having today. It suggested a course of action that didn’t pan out but was at least semi-sensical, yet the accompanying example code did the opposite of what the text said. It read like it gave me the text of a Stack Overflow accepted answer and then the code from the question.
Is your demo real? I don’t like AI, but this is not characteristic of modern gen-AI.
Is this in regard to a specific recent event or article, or is it purely hypothetical?
In practice, an AI trained on drug-drug interactions, duplicate therapies, and common dosings would be beneficial. We already have specialized models helping scientists make groundbreaking discoveries, such as recent advances in detecting cancers years earlier than traditional methods allow.
Let’s look at your hypothetical. A prescriber sends an order to their in-house pharmacy for amoxicillin, and the patient has a recorded penicillin allergy. Under ideal circumstances, the pharmacist would review the patient’s chart, note the potential for a reaction (while they are different antibiotics, there is still potential for a reaction because the drugs are related), and contact the prescriber to verify the therapy and discuss whether a change to another antibiotic is in order. (This is all ignoring the fact that for an ear infection you’d likely get an otic ear drop, not an oral suspension; something like Neo-Poly-Dex or ofloxacin.)
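To make that cross-sensitivity logic concrete, here’s a minimal sketch in Python. The drug-class table and function are illustrative only, not any real formulary API; a production system would query a curated drug database instead of a hard-coded dict:

```python
# Minimal sketch of the drug-allergy cross-sensitivity check described above.
# DRUG_CLASSES is illustrative, not a real formulary.

DRUG_CLASSES = {
    "amoxicillin": {"penicillin", "beta-lactam"},
    "penicillin": {"penicillin", "beta-lactam"},
    "ofloxacin": {"fluoroquinolone"},
}

def allergy_conflicts(prescribed: str, allergies: list[str]) -> list[str]:
    """Return recorded allergies whose drug classes overlap the prescription's."""
    prescribed_classes = DRUG_CLASSES.get(prescribed.lower(), set())
    return [
        allergy for allergy in allergies
        if prescribed_classes & DRUG_CLASSES.get(allergy.lower(), {allergy.lower()})
    ]

# Patient with a recorded penicillin allergy, prescribed amoxicillin:
print(allergy_conflicts("amoxicillin", ["penicillin"]))  # ['penicillin']
```

A real check would also weigh documented reaction severity and whether the record reflects a true allergy or just an intolerance, which is part of why the pharmacist’s judgement stays in the loop.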
Unfortunately, the pharmacy hellscape we’re in today leads to rushed verifications, where therapies aren’t checked closely and many things get missed. Pharmacies already have systems in place to warn techs and pharmacists about interactions with recorded allergies, but if you’re traveling or need to go to a new pharmacy or doctor, things slip through.
An AI trained on these specific things would help alleviate some of the pressure on already overworked pharmacy staff while giving concise and consistent information. If a pharmacist misses an allergy or interaction, the AI could send a warning to them and the prescriber, as in the sketch below.
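Here’s a hypothetical sketch of that warning step. `CROSS_REACTIVE` and `notify()` are stand-ins I made up for illustration, not any real pharmacy system’s API, and the model and system integration upstream are out of scope:

```python
# Hypothetical post-verification safety net.
# CROSS_REACTIVE and notify() are illustrative stand-ins only.

CROSS_REACTIVE = {("amoxicillin", "penicillin")}  # both are beta-lactams

def notify(recipient: str, message: str) -> None:
    # Stub: a real system would route this through the pharmacy's
    # existing messaging, not stdout.
    print(f"[alert -> {recipient}] {message}")

def post_verification_check(drug: str, allergies: list[str],
                            pharmacist: str, prescriber: str) -> None:
    """After an order is verified, re-check it and warn both parties on a miss."""
    for allergy in allergies:
        if drug == allergy or (drug, allergy) in CROSS_REACTIVE:
            msg = (f"{drug} was verified for a patient with a recorded "
                   f"{allergy} allergy; please re-review before dispensing.")
            notify(pharmacist, msg)
            notify(prescriber, msg)

post_verification_check("amoxicillin", ["penicillin"],
                        pharmacist="verifying RPh", prescriber="prescriber")
```

The point of the design is that the check runs after verification, as a second set of eyes, rather than replacing the pharmacist’s review.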
Note that I’m referring to job-specific AIs that are trained for specific purposes. A general LLM, which is what it sounds like you’re referring to, would not be able to work in these environments.
Source: I audit pharmacy claims, with training in retail, LTC, and PBM pharmacy settings. It’s literally my job to catch the errors (both billing and clinical) that pharmacies make.
ML has plenty of potential value as a supplement to a doctor/medical professional’s judgement.
LLMs do not.
“Pharmacists are finding they can see more patients a day by just prescribing the closest thing within arm’s reach”
Seeing more patients is not a good thing by itself
There’s a guy who works as a product owner at my employer. He has a PharmD. He got fed up with the metrics for how many prescriptions he had to fill. Now he does software.
It’s crazy to think that someone has a terminal degree in a really technical field and he nope’d out because of how bad it got.
Someone at Microsoft must’ve had a stroke while writing this