Stop Chasing AI. Start Solving Health Plan Problems

If you work in healthcare right now, you have probably been asked some version of the same question:

“So… what is your AI strategy?”

If you have not been asked that, you are a unicorn.

AI has evolved from a collection of targeted tools into an umbrella term that now includes everything from robotic process automation to generative AI systems that can synthesize and recommend actions based on massive datasets. What used to be niche experimentation has become a strategic priority for almost every health insurer. At the same time, regulators are paying far closer attention to where and how AI is being deployed, particularly in processes that influence coverage decisions and patient outcomes.

This creates two pressures for health plans. Executives, boards, and clients want a clear AI story that signals innovation and competitiveness. Providers, regulators, and members want assurance that AI is used responsibly, safely, and with meaningful oversight. Starting with the question “What can AI do for us?” rarely leads to the right outcome.

As a product leader, I start with a different question:

“What business problem do we need to solve?”

This question grounds the entire AI conversation. Health plan challenges have not fundamentally changed because AI entered the scene. Plans still need to ensure that members receive appropriate, high-quality care. They still face complex regulatory requirements that shape payment rules and operational processes. They continue to struggle with streamlining claims processing, premium billing, provider network management, and risk adjustment. They must understand emerging cost drivers, modernize benefit design, retain members, and differentiate in an increasingly competitive market.

AI might help address these challenges, but only when it is treated as a tool and not a goal. Good AI strategy is really just good problem-solving strategy, and it is as important to recognize when AI is not the best solution to your problem as when it is.

What AI Is — And What It Is Not

Inside a health plan, the term “AI” can refer to multiple technologies with very different strengths. Deterministic automation tools can eliminate repetitive manual tasks. Machine learning models can surface fraud risks or analyze population trends. Natural language processing systems can classify, summarize, and route unstructured text. Large language models can draft, recommend, and synthesize information in ways traditional systems never could.

AI is powerful when applied to the right kind of work. It excels at handling repetitive, rules-driven tasks where the decision criteria are clear. Claims intake, data extraction from PDFs, eligibility checks, and standard edits can be redesigned to leverage AI tools that reduce manual work, lower error rates, and speed up processing. AI is also remarkably good at finding patterns in large datasets, which can help identify anomalies, pinpoint drivers of cost, and surface opportunities for intervention.

Language-heavy workflows offer another promising set of use cases. Prior authorization requests, member appeals, provider inquiries, and internal messages all require reading, summarization, and judgment. AI systems can help classify and summarize this information and draft responses that human staff review and finalize.
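The classify-then-review pattern can be sketched in a few lines. This is a minimal illustration, not a production design: the keyword scorer stands in for a real NLP model, and the category names and threshold are hypothetical. The point it demonstrates is structural: coverage-affecting categories are routed to a person regardless of model confidence.

```python
from dataclasses import dataclass

# Hypothetical routing categories for incoming plan correspondence.
CATEGORIES = {
    "prior_auth": ["authorization", "auth request"],
    "appeal": ["appeal", "grievance"],
    "provider_inquiry": ["claim status", "fee schedule"],
}

@dataclass
class TriageResult:
    category: str
    confidence: float
    needs_human_review: bool

def triage(text: str, threshold: float = 0.5) -> TriageResult:
    """Toy keyword scorer standing in for an NLP classifier:
    score each category, route low-confidence items to a person."""
    text_lower = text.lower()
    scores = {
        cat: sum(kw in text_lower for kw in kws) / len(kws)
        for cat, kws in CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    conf = scores[best]
    # Coverage-affecting categories always get human review,
    # regardless of how confident the model is.
    review = conf < threshold or best in ("prior_auth", "appeal")
    return TriageResult(best, conf, review)

result = triage("Member appeal regarding denied authorization request")
print(result.category, result.needs_human_review)
```

In a real deployment the scorer would be a trained model, but the mandatory-review rule would look much the same: it lives in the workflow, not in the model.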

AI can even support strategic analysis. Analysts and product teams can use AI to explore “what if” questions about benefit design, care management, or network composition faster than they can with traditional tools.

But AI has clear limitations. It struggles in environments with poor or fragmented data. When underlying data is inconsistent or incomplete, AI amplifies problems instead of solving them. It also creates risk when used without transparency in high-stakes decisions. Regulators are increasingly scrutinizing AI in utilization management and prior authorization, demanding clarity about how decisions are influenced and ensuring human oversight.

AI is also not a replacement for clinical or operational expertise. It can surface insights and identify patterns, but expert judgment is still essential. And generative models can produce confident but incorrect output, especially when prompts lack context. This can become dangerous in healthcare settings where accuracy directly affects people’s lives.

The essential point is that AI is highly effective at pattern recognition and automation under the right circumstances. It is not an all-purpose brain or a universal fix. Matching the right AI technique to the right class of problem is where the real value lies.

Start With Problems, Not Platforms

Before launching any AI initiative, it is worth asking several foundational questions.

The first question is simple: What specific outcome are we trying to improve? Plans often cite broad objectives like efficiency or modernization, but AI implementation requires much more precision. Are you trying to increase first-pass payment rates, reduce prior authorization turnaround time, shorten call center handle time, or improve provider satisfaction? Clear outcomes lead to clear design.

Next, how do you measure that outcome today? Without a baseline, it is impossible to demonstrate ROI or even know whether AI is helping.
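Establishing that baseline can be as simple as computing today's numbers for the KPIs you intend to move. The sketch below uses hypothetical claim records and two of the outcomes named above, first-pass payment rate and turnaround time; the field names and figures are illustrative.

```python
# Hypothetical claim records captured before any AI is introduced.
claims = [
    {"first_pass_paid": True,  "turnaround_days": 4},
    {"first_pass_paid": False, "turnaround_days": 11},
    {"first_pass_paid": True,  "turnaround_days": 3},
    {"first_pass_paid": True,  "turnaround_days": 6},
]

def baseline_metrics(records):
    """Compute the pre-AI baseline for two illustrative KPIs:
    first-pass payment rate and average turnaround time."""
    n = len(records)
    return {
        "first_pass_rate": sum(r["first_pass_paid"] for r in records) / n,
        "avg_turnaround_days": sum(r["turnaround_days"] for r in records) / n,
    }

print(baseline_metrics(claims))
# first_pass_rate = 0.75, avg_turnaround_days = 6.0
```

Whatever an AI pilot later reports gets compared against these numbers, not against intuition.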

Then, what is truly blocking progress? Sometimes the challenge is volume and complexity, but often it is outdated workflows, unclear policies, or siloed systems. AI is not always the right answer. Sometimes cleaning up processes or data is the higher-impact first step.

Finally, who owns the result? AI initiatives succeed when business and operational leaders share ownership with technical teams. When AI lives only in IT or innovation labs, pilots often look impressive while real-world outcomes fall short.

Once you answer these questions, you can design AI solutions that are grounded, effective, and aligned to outcomes rather than trends.

The AI Capabilities Health Plans Actually Need

When the focus is on solving problems, certain categories of AI capability consistently rise to the top.

One is automation. Health plans perform countless routine tasks, from data entry and validation to assignment and routing. These tasks are critical but not complex. AI-enabled automation can streamline these steps so that employees can focus on nuanced, judgment-driven cases. This approach increases efficiency and reduces errors without removing human oversight from sensitive workflows.

Another major opportunity lies in AI-supported research and analytics. Plans have enormous datasets yet often struggle to extract actionable insights. AI can help reveal patterns in clinical trends, member behavior, benefit utilization, and cost drivers. It can help identify gaps in care, surface emerging risks, and highlight areas where interventions might be most effective. In these cases, AI accelerates the work of analysts and clinicians by synthesizing vast amounts of information quickly.

Training and development is another area where AI can meaningfully assist, starting with content definition. Health plans deal with constant updates to regulations, policies, and procedures. Generative AI can help translate those updates into clear, role-specific guidance, draft training materials, and create realistic practice scenarios. This reduces the lag between policy change and frontline execution.

Choosing the right AI partners is equally important. Not all AI vendors understand the operational and regulatory realities of health plans. The best partners combine technical capability with deep domain knowledge. They can articulate when AI is not the right solution, help design workflows that incorporate human oversight appropriately, and provide transparency into how their models work. In a crowded AI market, selecting partners who understand payer operations is as essential as selecting the technology itself.

Finally, internal AI literacy is a must-have. Health plans cannot rely on organic, informal learning when it comes to AI. Before adopting AI solutions, teams should have formal training on what AI can and cannot do, how to construct effective prompts, how to evaluate AI output, and how to consider issues such as bias and equity. This is especially important in functions like clinical review, compliance, customer service, and network management, where decisions carry real-world consequences.

Build Or Buy? A Practical Decision Framework

Most health plans will eventually adopt a hybrid approach, building some AI capabilities while purchasing others. The key is deciding deliberately rather than reactively.

The first consideration is whether a capability is central to competitive differentiation. If a model or algorithm gives you a unique strategic advantage, building or co-developing it may make sense.

The next question is whether the problem is common or specialized. Many AI capabilities, such as document extraction or basic triage, are widely available and well understood. Others, like supporting a niche network strategy or a unique benefit structure, may require custom work.

Talent and infrastructure also matter. Building AI is not just about training a model. It requires engineers, governance, monitoring, and ongoing maintenance. If the organization is not prepared to support those functions long term, buying or partnering is the safer choice.

Lifecycle cost is another critical factor. Building can appear less expensive up front, but costs often grow once maintenance, monitoring, and regulatory updates are included. Buying may be more predictable over time.
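The lifecycle-cost point is easy to make concrete. The figures below are entirely hypothetical, but they show the shape of the comparison: a build that looks cheaper in year one can cost more over a multi-year horizon once maintenance, monitoring, and regulatory updates are included.

```python
def total_cost(upfront, annual_run_cost, years):
    """Illustrative lifecycle cost: upfront build or license cost
    plus recurring maintenance, monitoring, and update costs."""
    return upfront + annual_run_cost * years

# Hypothetical figures: building looks cheaper at the start,
# but recurring costs change the picture over five years.
build = total_cost(upfront=400_000, annual_run_cost=250_000, years=5)
buy   = total_cost(upfront=600_000, annual_run_cost=120_000, years=5)
print(build, buy)  # build = 1,650,000 vs buy = 1,200,000
```

The real exercise involves many more line items, but the discipline is the same: compare total lifecycle cost, not year-one cost.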

And timing is crucial. If a business need is urgent and a proven solution exists, buying is the pragmatic choice. Building can happen later when time allows.

A thoughtful mix of build and buy decisions creates resilience and flexibility as AI capabilities evolve.

Designing For ROI From Day One

The most compelling AI stories are not about technology. They are about outcomes.

The question to answer is not “How did we use AI?” but “What did AI help us improve?”

To design for ROI from the start, organizations should create a clear value hypothesis that ties AI to specific goals. They should choose a small set of measurable metrics that cover efficiency, quality, compliance, experience, and outcomes. They should pilot AI with controlled groups and compare results to existing processes. They should track where AI suggestions are used, overridden, or adjusted, and why.
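Tracking where AI suggestions are used, overridden, or adjusted is itself a simple measurement exercise. The sketch below assumes a hypothetical pilot log with one outcome per reviewed suggestion; the data is illustrative.

```python
# Hypothetical pilot log: each entry records whether a reviewer
# accepted, overrode, or adjusted an AI suggestion.
pilot_log = ["accepted", "accepted", "overridden", "adjusted",
             "accepted", "overridden", "accepted", "accepted"]

def pilot_summary(log):
    """Share of suggestions in each outcome bucket."""
    n = len(log)
    return {outcome: log.count(outcome) / n
            for outcome in ("accepted", "overridden", "adjusted")}

summary = pilot_summary(pilot_log)
# A 25% override rate would prompt a closer look at the model
# or the workflow before any wider rollout.
print(summary)
```

The reasons behind overrides matter as much as the rate, so a production version would capture a free-text or coded reason alongside each outcome.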

Most importantly, plans should expect and plan for iteration. AI systems evolve, and so do organizations. Tuning and adjustment should be part of the roadmap, not a surprise.

When AI implementation is anchored to measurable outcomes, ROI becomes part of the strategy rather than an afterthought.

Governance, Guardrails, And Sustainable AI

As AI becomes more embedded in health plan operations, governance must keep pace. Cross-functional governance groups that include legal, compliance, clinical, IT, product, and operations can ensure that AI is deployed responsibly. Maintaining an inventory of all models in use, their data sources, and their areas of influence is essential.
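A model inventory does not need to start as anything elaborate. The sketch below shows one plausible record structure; the fields and model names are hypothetical, but the key idea is captured in one flag: models that influence coverage are where governance looks first.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """One entry in the plan's AI model inventory (illustrative fields)."""
    name: str
    owner: str                 # accountable business owner, not just IT
    data_sources: list
    influences_coverage: bool  # triggers mandatory human oversight
    last_bias_review: str

inventory = [
    ModelRecord("pa-triage-v2", "Utilization Mgmt",
                ["auth requests"], True, "2025-Q3"),
    ModelRecord("call-summary", "Member Services",
                ["call transcripts"], False, "2025-Q2"),
]

# Coverage-affecting models are the first place governance should look.
high_risk = [m.name for m in inventory if m.influences_coverage]
print(high_risk)
```

Even a spreadsheet with these columns is a large step up from not knowing which models are in production and what they touch.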

Plans should incorporate principles of fairness, transparency, and accountability into AI policies, ensure human oversight in any process that affects coverage or care, and regularly monitor for bias or disparate impact.

Strong governance does not slow innovation. It enables innovation by providing confidence, clarity, and trust.

From “AI Strategy” to “Learning Strategy”

The most important shift for health plans is recognizing that AI is not a one-time strategy. It is a continuous learning journey.

Whatever an organization thinks it knows about AI today will evolve quickly. Regulations, expectations, and capabilities will continue to change. The organizations that thrive will be those that build adaptive learning systems, grounded in clear problem definition, measurable outcomes, responsible implementation, and continuous improvement.

Instead of saying, “We are piloting AI,” successful plans will be able to say:

“We are using AI carefully and deliberately to simplify work, improve experiences, and make better informed decisions. We know where it works and where it does not, because we measure and learn.”

That is the story that will matter most in the years ahead.

Norah Brennan is Vice President of Product at HealthAxis. She leads product strategy and development focused on helping health plans modernize operations and deliver better outcomes for members and providers.
