Who needs “AI” when the simple algorithm they already use works perfectly well?
while 1==1: deny_coverage = True
I hate that you are absolutely right.
Medical directors do not see any patient records or put their medical judgment to use, said former company employees familiar with the system. Instead, a computer does the work. A Cigna algorithm flags mismatches between diagnoses and what the company considers acceptable tests and procedures for those ailments. Company doctors then sign off on the denials in batches, according to interviews with former employees who spoke on condition of anonymity.
“We literally click and submit,” one former Cigna doctor said. “It takes all of 10 seconds to do 50 at a time.”
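The process described above (flag diagnosis/procedure mismatches, then batch-approve denials with one click) can be sketched in a few lines. Everything here is a hypothetical illustration; the data structures, diagnosis names, and function names are invented, not Cigna's actual system:

```python
# Hypothetical sketch of the flag-and-batch-deny flow described
# in the article excerpt. All names and data are invented.

ACCEPTED_TESTS = {
    "sinusitis": {"nasal endoscopy"},
    "back pain": {"x-ray", "physical therapy"},
}

def flag_claim(diagnosis, test):
    """Flag a claim when the billed test is not on the
    insurer's accepted list for that diagnosis."""
    return test not in ACCEPTED_TESTS.get(diagnosis, set())

claims = [("sinusitis", "ct scan"), ("back pain", "x-ray")]

# Flagged claims are queued up; one "click and submit"
# then denies the whole batch at once.
flagged = [c for c in claims if flag_claim(*c)]
denials = [{"claim": c, "denied": True} for c in flagged]
```

The point of the sketch is how little "medical judgment" is left in the loop: the only human step is approving a list the lookup table already produced.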
Cruel AND unusual??
Sure, that’ll stop them.
They will add someone whose job it is to click okay on every decision the AI makes. Therefore the AI isn't making the decision; the human always clicking okay is.
(In other words, the human in the loop is a formality, not a safeguard.)
I’m sure it was a stern warning.
Here’s a wild idea: make them publish the exact criteria and formulae used to determine coverage. Their decisions should be verifiable and reproducible.
This isn’t rocket science.
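"Verifiable and reproducible" has a concrete meaning here: if the criteria were published as data, anyone could re-run a decision and check it. A minimal sketch, assuming invented rules and hypothetical names throughout:

```python
# Sketch of a reproducible coverage decision: the rules are
# published data, and each decision records a hash of the exact
# rules it was made under. All names and rules are hypothetical.

import hashlib
import json

published_rules = {"back pain": ["x-ray", "physical therapy"]}

def decide(diagnosis, procedure, rules):
    covered = procedure in rules.get(diagnosis, [])
    record = {"diagnosis": diagnosis,
              "procedure": procedure,
              "covered": covered}
    # Tie the decision to the exact published criteria,
    # so a denial can be independently re-derived and audited.
    record["rules_hash"] = hashlib.sha256(
        json.dumps(rules, sort_keys=True).encode()).hexdigest()
    return record

r = decide("back pain", "mri", published_rules)
```

Given the same published rules, any third party recomputes the same record, so a denial that can't be reproduced is immediately suspect.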
I nominate you for US Senate
They already do. For example: https://www.bluecrossma.org/medical-policies/sites/g/files/csphws2091/files/acquiadam-assets/088 Preimplantation Genetic Testing prn.pdf
well what were they using before
A doctor reviewing 600 medical cases a minute. https://www.youtube.com/watch?v=tCJcrIpgrr0
Oh, that’s some serious finger wagging, sure to make them think twice.
I am not from the US but it baffles me how someone can be cut off from health care in a supposed first world country.
Because greed.
Yeah, sure, ok. We pinky promise not to use AI to generate leads that are then printed out on paper and put in front of a doctor’s assistant’s autopen for signatures denying insurance or coverage.
There is absolutely ZERO way to practically enforce this. An AI team can act like a black box, ingesting data and outputting hard copies that cannot be traced back to them. There is no way this will not happen.
“We’ll audit the company!” -> they’ll send the data to an offshore shell company that doesn’t follow the law, then the recommendations will be sent back.
Prove that legislation can stop this, just try.
AI will deny the care after being rubber-stamped by a doctor who graduated last in his class and for whom this is the only job he can get: being a traitor for the insurance companies.