AI vs. AI: Patients deploy bots to battle health insurers that deny care
By Anna Claire Vollers, Stateline.org

As states strive to curb health insurers' use of artificial intelligence, patients and doctors are arming themselves with AI tools to fight claims denials, prior authorizations and soaring medical bills.

Several businesses and nonprofits have launched AI-powered tools to help patients get their insurance claims paid and confront byzantine medical bills, creating a robotic tug-of-war over who gets care and who foots the bill for it.

Sheer Health, a three-year-old company that helps patients and providers tackle health insurance and billing, now has an app that allows consumers to connect their health insurance account, upload medical bills and claims, and ask questions about deductibles, copays and covered benefits.

"You would think there would be some sort of system that could explain in real English why I'm getting a bill," said cofounder Jeff Witten. The company uses both AI and humans to provide the answers for free, he said. Patients who want extra assistance in challenging a denied claim or dealing with out-of-network reimbursements can pay Sheer Health to handle those for them.

In North Carolina, the nonprofit Counterforce Health designed an AI assistant to help patients appeal their denied health insurance claims and fight large medical bills. The free service uses AI models to analyze a patient's denial letter, then look through the patient's plan and outside medical research to draft a customized appeal letter.

Other consumer-focused services use AI to catch billing errors or parse medical jargon. Some
patients are even turning to AI chatbots like Grok for help. A quarter of adults under age 30 said they used an AI chatbot at least once a month for health information or advice, according to a poll the health care research nonprofit KFF published in August. But most adults reported they were not confident that the health information is accurate.

State legislators on both sides of the aisle, meanwhile, are scrambling to keep pace, passing new regulations that govern how insurers, physicians and others use AI in health care. Already this year, more than a dozen states have passed laws regulating AI in health care, according to Manatt, a consulting firm.

"It doesn't feel like a satisfying outcome to just have two robots argue back and forth over whether a patient should access a particular type of care," said Carmel Shachar, assistant clinical professor of law and the faculty director of the Health Law and Policy Clinic at Harvard Law School. "We don't want to get on an AI-enabled treadmill that just speeds up."

A black box

Health care can feel like a black box. If your health care provider says you need surgery, for example, the cost depends on a dizzying number of factors, including your health insurance provider, your specific health plan, its copayment requirements, your deductible, where you live, the facility where the surgery will be performed, whether that facility and your doctor are in-network, and your specific diagnosis.

Some insurers may require prior authorization before a surgery is approved. That can entail extensive clinical documentation. After a surgery, the resulting bill can be hard to parse.

Witten of Sheer Health said his company has seen thousands of instances of patients whose doctors recommend a certain procedure, like surgery, and then, a few days before the surgery, the patient learns insurance didn't approve it.

In recent years, as more health insurance companies have turned to AI to automate claims
processing and prior authorizations, the share of denied claims has risen. More physicians and other providers said this year that their claims are frequently denied than said so three years ago, according to a September report from credit reporting company Experian. Insurers on Affordable Care Act marketplaces denied nearly one in five in-network claims, and more than a third of out-of-network claims, according to the most recent available data from KFF.

Insurance giant UnitedHealth Group has come under fire in the media and from federal lawmakers for using algorithms to systematically deny care to seniors, while Humana and other insurers face lawsuits and regulatory investigations alleging they've used sophisticated algorithms to block or deny coverage for medical procedures.

Insurers say AI tools can improve efficiency and reduce costs by automating tasks that can involve analyzing vast amounts of material. And companies say they're monitoring their AI to identify possible problems. A UnitedHealth representative pointed Stateline to the company's AI Review Board, a team of clinicians, scientists and other experts that reviews its AI models for accuracy and fairness.

Health plans are "committed to responsibly using artificial intelligence to create a more seamless, real-time customer experience and to make claims management faster and more effective for patients and providers," a spokesperson for America's Health Insurance Plans, the national trade group representing health insurers, told Stateline.

But states are stepping up oversight. Arizona, Maryland, Nebraska and Texas, for example, have banned insurance companies from using AI as the sole decisionmaker in prior authorization or medical necessity denials.

Dr. Arvind Venkat is an emergency room physician in the Pittsburgh area. He's also a Democratic Pennsylvania state representative and the lead sponsor of a bipartisan bill to regulate the use of AI in
health care. He's seen new technologies reshape health care during his years in medicine, but AI feels wholly different, he said. It's an active player in people's care in a way that other technologies haven't been.

"If we're able to harness this technology to improve the delivery and efficiency of clinical care, that is a huge win," said Venkat. But he's worried about AI use without guardrails. His bill would require insurers and health care providers in Pennsylvania to be more transparent about how they use AI, require a human to make the final decision any time AI is used, and mandate that they show evidence of minimizing bias in their use of AI.

"In health care, where it's so personal and the stakes are so high, we need to make sure we're mandating in every patient's circumstance that we're applying artificial intelligence in a way that looks at the individual patient," Venkat said.

Patient supervision

Historically, consumers rarely challenge denied claims. A KFF analysis found that only a small fraction of health coverage denials are appealed. And even when they are, patients lose more than half of those appeals.

New consumer-focused AI tools could shift that dynamic by making appeals easier to file and the process easier to understand. But there are limits: without human oversight, experts say, the AI is vulnerable to mistakes.

"It can be difficult for a layperson to understand when AI is doing good work and when it is hallucinating or giving something that isn't quite accurate," said Shachar of Harvard Law School.

For example, an AI tool might draft an appeals letter that a patient thinks looks impressive. But because most patients aren't health care experts, they may not recognize if the AI misstates medical information, derailing an appeal, she said. "The challenge is, if the patient is the one driving the process, are they going to be able to properly supervise the AI," she said.

Earlier this year, Mathew Evins learned just hours before his scheduled back surgery that his
insurer wouldn't cover it. Evins, a public relations executive who lives in Florida, worked with his physician to appeal, but got nowhere. He used an AI chatbot to draft a letter to his insurer, but that failed too.

On his son's recommendation, Evins turned to Sheer Health. He said Sheer identified a coding error in his medical records and handled communications with his insurer. The surgery was approved about three weeks later.

"It's unfortunate that the health care system is so broken that it requires a third party to intervene on the patient's behalf," Evins told Stateline. But he's grateful the service made it possible to get life-changing surgery.

"AI in and of itself isn't an answer," he said. "AI when used by a professional that understands the issues and ramifications of a particular problem, that's a different story. Then you've got an effective tool."

Most experts and lawmakers agree a human is needed to keep the robots in check. AI has made it possible for insurance companies to rapidly assess claims and make decisions about whether to authorize surgeries or cover certain health care. But that ability to make lightning-fast determinations should be tempered with a human, Venkat said. "It's why we need government regulation, and why we need to make sure we mandate an individualized assessment with a human decisionmaker."

Witten said there are situations in which AI works well, such as when it sifts through an insurance policy, which is essentially a contract between the company and the consumer, and connects the dots between the policy's coverage and a corresponding insurance claim. But he said there are complicated cases out there AI just can't resolve. That's when a human is needed to review.

"I think there's a huge opportunity for AI to improve the patient experience and overall provider experience," Witten said. "Where I worry is when you have insurance companies or other players using AI to comprehensively replace
customer support and human interaction."

Furthermore, a growing body of research has found that AI can reinforce bias that exists elsewhere in medicine, discriminating against women, ethnic and racial minorities, and those with public insurance.

"The conclusions from artificial intelligence can reinforce discriminatory patterns and violate privacy in ways that we have already legislated against," Venkat said.

Stateline reporter Anna Claire Vollers can be reached at avollers@stateline.org. States Newsroom. Visit at stateline.org. Distributed by Tribune Content Agency, LLC.