Attorney Bill Meyer joins producer/host Coralie Chun Matayoshi to discuss how the medical field lends itself to the use of AI, how insurance companies like HMSA are using AI, and why California consumer protection laws will likely apply to Hawaii residents and other Blue Shield customers throughout the country.
Attorney Bill Meyer’s practice focuses on intellectual property, including copyright, trademark, entertainment, trade secret, music, art, and advertising, and he has represented many of the top recording artists, record labels, and filmmakers in the world.
Q. How does the medical field lend itself to the use of AI?
The medical field lends itself unusually well to algorithmic decision-making for several reasons: the structure and practice of medicine, its data richness, and its outcome orientation. Medicine is already algorithmic in nature; even before computers, medicine ran on structured decision trees. For example, if you have X and Y, you’re likely to have Z. These are explicit conditional rules derived from experience and statistical evidence, and machine learning simply scales and refines that logic using massive data sets instead of anecdotal or cohort-based reasoning.

The medical field also generates more structured, codified data than almost any other human activity. This structured and continuously updated data landscape makes medicine fertile ground for pattern recognition, which is the core strength of algorithms. Medical decisions have measurable outcomes that can often be quantified, such as recovery time, survival rate, lab improvement, or symptom reduction. Measurable outcomes allow for what’s known as feedback loops, enabling algorithms to self-correct and learn with each iteration, something that is much harder to do in fields where success is more subjective.

Moreover, a huge portion of clinical decision-making boils down to classification (e.g., is a particular lab result normal or abnormal?). Machine learning is optimized for exactly this kind of work: mapping complex inputs to discrete outputs. Finally, the field’s complexity often exceeds the limits of human cognition, but algorithms can parse millions of cases, weigh multiple interactions, and rank probable best options, making them ideal decision-support tools.
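The “if you have X and Y, you’re likely to have Z” structure described above can be sketched in a few lines of code. This is purely an illustration of the rule-and-classification pattern, not clinical guidance; the lab (hemoglobin A1c), the thresholds, and the follow-up rule are examples chosen for the sketch.

```python
def classify_a1c(a1c_percent: float) -> str:
    """Classify a hemoglobin A1c lab value into a discrete category.

    Thresholds here follow commonly cited reference ranges and are
    used purely for illustration.
    """
    if a1c_percent < 5.7:
        return "normal"
    elif a1c_percent < 6.5:
        return "prediabetes"
    else:
        return "diabetes"


def flag_follow_up(a1c_percent: float, fasting_glucose_mg_dl: float) -> bool:
    """An explicit conditional rule of the 'if X and Y, then Z' form:
    if the A1c is abnormal (X) and fasting glucose is elevated (Y),
    flag the patient for follow-up (Z)."""
    return (classify_a1c(a1c_percent) != "normal"
            and fasting_glucose_mg_dl >= 100)
```

Machine learning scales this same idea: instead of two hand-written thresholds, a model learns thousands of such boundaries from millions of records, but the output is still a classification of complex inputs into discrete categories.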
Q. HMSA recently announced partnering with healthcare tech company Stellarus, which uses artificial intelligence to combine over 60 data sets, ranging from clinical health and social demographic information to provider, billing and claims data, into a unified and secure source for health plans. It’s designed to help improve health care and reduce costs, but could AI affect what insurance covers?
The insurance business is integral to the delivery of medical services. Stellarus was created by Blue Shield, a national insurance provider, to improve operational efficiency (e.g., automating claims, evaluating pharmacy benefit workflows, and providing provider integration), enhance data transparency (e.g., centralizing and analyzing drug cost and utilization data), and serve as a member engagement tool (e.g., deploying digital portals and analytics that help members understand and optimize their care choices). So, at this point in time, Stellarus is positioned as an enabler and infrastructure layer, not as a utilization review body making yes-or-no decisions about individual treatments. However, the line between infrastructure and decision support can blur very quickly when algorithms enter the mix.
Q. Let’s discuss the two directions that AI tools such as Stellarus could go – tool or decision maker.
Health insurers may use algorithms to decide whether to pay for tests or treatment recommended by your doctor. Oftentimes, your doctor needs to receive prior authorization from your health insurer before providing care. The algorithm can decide whether the test or treatment is medically necessary and how much care the patient needs (e.g., how many days in the hospital or how much physical therapy is required post-surgery). If Stellarus evolves like other payer tech systems, it could eventually use algorithms to flag tests or procedures as “low-value” or non-evidence-based. It could require additional justification before the insurer is willing to make payment. And it could automatically approve or deny certain requests based upon established guidelines and algorithmic cost containment, which is what most patients and clinicians fear when they hear “AI optimization.”
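As a rough sketch of how such a prior-authorization triage might work in code, the following is a hypothetical rules engine. The procedure name, the “low-value” list, the benefit score, and the threshold are all invented for illustration; notably, this sketch never auto-denies, it only escalates to human review.

```python
# Hypothetical list of procedures flagged as "low-value" for illustration.
LOW_VALUE_PROCEDURES = {"imaging_low_back_acute"}


def triage_prior_auth(procedure: str,
                      guideline_match: bool,
                      predicted_benefit: float) -> str:
    """Triage a prior-authorization request into one of three outcomes:
    'needs-justification', 'auto-approve', or 'human-review'.

    predicted_benefit is a hypothetical 0-1 score from an outcomes model.
    """
    if procedure in LOW_VALUE_PROCEDURES:
        # Require additional documentation before payment.
        return "needs-justification"
    if guideline_match and predicted_benefit >= 0.8:
        # Clearly within established guidelines: approve automatically.
        return "auto-approve"
    # Everything else escalates to a clinician rather than being denied.
    return "human-review"
```

Whether a system auto-denies at the last step, or escalates to a physician as this sketch does, is precisely the design choice that the California rules discussed below are meant to constrain.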
A second approach is what’s known as efficacy-based algorithmic decisioning, otherwise known as clinical quality improvement. Under this approach, Stellarus might run clinical outcome analyses (e.g., which drugs or therapies produce the best real-world outcomes for specific patient profiles, or which combinations of treatments yield the best cost-benefit ratios) that nudge doctors toward evidence-based, effective treatments rather than automatically denying coverage. This approach uses algorithms not to deny care, but to inform coverage policies with stronger evidence and to guide physicians through dashboards showing predicted outcomes.
California has passed legislation that will govern Stellarus’ activities in the state of California, and assuming those activities comply with California law, there is no reason to believe they would be any different when Stellarus is interfacing with HMSA in Hawaii. Under California law, humans have to be in the decision-making loop: any decision to deny or alter access to tests or treatments based upon algorithmic output would need physician review and could not rely exclusively on the algorithm. The law also requires individualized assessment; algorithms would have to incorporate individual patient data such as history and clinical context, and could not rely solely on generic or population-level inputs when medical necessity determinations are being made.

The law further requires transparency to policyholders. The insurer, HMSA, would have to disclose how the decision process works, including the methodology and criteria, and maintain documentation for regulators and customers. The use of AI would also have to be non-discriminatory; the algorithms would be audited and monitored to ensure that they do not propagate or worsen historical access disparities, for example, a bias against a certain patient group. If AI is used in patient-facing communications, such as automated results, interpretations, or suggestions, there would have to be clear notice that AI generated the communication, and patients would have to be able to contact a human healthcare provider. Finally, California law, again if followed in Hawaii, would require data governance and auditability: the underlying algorithms would have to be auditable by government officials, with appropriate governance of training data updates and bias mitigation, especially if generative AI or large models are used.
So, I don’t think there are immediate concerns right now, provided they follow how they are rolling it out in California. Over time, we will see whether all of these protections are available to Hawaii residents.
Q. What about the rest of the country? Is there anything happening on the federal level to ensure that AI is used in a safe, effective, and fair manner?
Insurance AI tools are largely unregulated and do not have to undergo Food and Drug Administration review. Moreover, insurance companies often argue that their algorithms are proprietary or constitute trade secrets. However, California often acts as the tide that raises all boats: because Blue Shield, I believe, does business throughout the country, Stellarus is unlikely to cherry-pick jurisdictions and subject itself to a patchwork of regulatory regimes. It will probably raise the bar across the country for anyone within the Blue Shield family of companies.
Q. Does Medicare require prior authorization before medical tests or treatment?
While 99% of Medicare Advantage beneficiaries have prior authorization requirements in their plans, traditional Medicare beneficiaries do not. But the Centers for Medicare & Medicaid Services (CMS) recently announced a new Medicare pilot program in six states that will require prior authorization for traditional Medicare plans beginning in 2026 to address “crushing fraud, waste and abuse.” If AI is used to make such decisions, it is possible that the number of prior authorization rejections could increase.
To learn more about this subject, tune into this video podcast.
Disclaimer: this material is intended for informational purposes only and does not constitute legal advice. The law varies by jurisdiction and is constantly changing. For legal advice, you should consult a lawyer who can apply the appropriate law to the facts in your case.
