AI in Healthcare: Balancing Innovation and Risk

The Double-Edged Sword of AI in Healthcare
Artificial intelligence (AI) is reshaping healthcare, offering breakthroughs in diagnostics, treatment personalization, and operational efficiency. But as Dr. Danny Tobey, a physician-attorney and global co-chair of DLA Piper’s AI practice, warns, this innovation comes with a “Pandora’s Box” of legal and ethical risks.

In an era where generative AI can fabricate lifelike answers and “black box” algorithms operate inscrutably, healthcare leaders face unprecedented challenges. Balancing innovation with governance isn’t just advisable—it’s existential.

The Regulatory Gray Zone: Why AI in Healthcare Lacks Clear Rules

Healthcare is no stranger to regulation, but AI sits in a fragmented landscape. Dr. Tobey describes the current environment as a “patchwork” of global policies, in which guidance from federal agencies like the FDA and HHS collides with state-level rules and a rising wave of litigation.

For instance, the EU’s AI Act classifies healthcare AI tools as “high-risk,” demanding stringent transparency, while the U.S. relies on existing laws like HIPAA and FDA guidelines for medical devices.

Key Challenges:

  • Overlapping Mandates: Organizations must comply with conflicting international, federal, and state regulations.
  • Litigation Surges: Early cases involving AI “hallucinations” (e.g., fabricated legal precedents) and algorithmic bias (e.g., discriminatory insurance practices) set risky precedents.
  • Lack of Standards: While the WHO and OECD propose ethical frameworks, binding laws remain scarce, leaving institutions in a “Wild West” scenario.

Generative AI: The Ultimate Risk Multiplier

Generative AI’s ability to solve complex problems probabilistically makes it uniquely powerful—and dangerous. Unlike traditional AI, it can produce plausible but inaccurate outputs, such as misdiagnoses or flawed treatment plans.

For example, an EHR vendor’s sepsis-detection AI faced scrutiny for frequent false alarms, risking patient trust and legal liability.

Mitigating Generative AI Risks
Dr. Tobey emphasizes four guardrails (a brief illustrative sketch follows the list):

  1. Source Reliability: Ensure AI draws from peer-reviewed, up-to-date medical databases.
  2. Boundary Setting: Program AI to decline tasks outside its scope (e.g., clinical decisions).
  3. Transparency: Educate clinicians on AI’s limitations to prevent over-reliance.
  4. Continuous Testing: Monitor outputs to catch “drift” in AI behavior.
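
To make the second and fourth guardrails slightly more concrete, here is a minimal sketch of how a scope check and an output audit log might be wrapped around a model call. Everything in it is an assumption for illustration: the task names, the is_in_scope check, and the audit file are hypothetical, not features of any particular product or of Dr. Tobey’s framework.

```python
# Hypothetical sketch of guardrails 2 (boundary setting) and 4 (continuous testing).
# All names are illustrative, not a real product API.
import json
from datetime import datetime, timezone

ALLOWED_TASKS = {"summarize_note", "draft_patient_letter", "answer_admin_question"}

def is_in_scope(task: str) -> bool:
    """Boundary setting: only tasks on an approved list are ever sent to the model."""
    return task in ALLOWED_TASKS

def run_with_guardrails(task: str, prompt: str, call_model) -> str:
    """Wrap a model call with a scope check and an audit log for later drift review."""
    if not is_in_scope(task):
        # Decline anything outside the approved scope (e.g., clinical decisions).
        return "This assistant cannot help with that request. Please consult a clinician."

    output = call_model(prompt)  # call_model is whatever LLM client the organization uses

    # Continuous testing: persist inputs and outputs so reviewers can sample them
    # and watch for drift in accuracy, tone, or refusal behavior over time.
    with open("ai_output_audit.jsonl", "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "task": task,
            "prompt": prompt,
            "output": output,
        }) + "\n")
    return output
```

In practice, the audit log would be sampled on a schedule by the kind of cross-functional review team discussed later in this article, rather than left to individual users.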

Build vs. Buy: Which Poses Greater Risk?

Health systems often debate whether to develop AI tools internally or license third-party solutions. Tobey argues neither approach is inherently safer:

  • Off-the-Shelf Tools: Vendors may lack domain-specific training, leading to mismatches with hospital workflows. For instance, an AI trained on urban patient data might fail in rural settings.
  • In-House Development: Without robust governance, internal teams risk bias from incomplete datasets or poor model validation. A 2023 study found AI tools using cost-of-care data disproportionately underserve Black patients due to systemic inequities.

The Governance Imperative:
Whether building or buying, institutions need frameworks that mandate:

  • Board-level accountability for AI ethics.
  • Budgets prioritizing safety over rapid deployment.
  • Cross-functional teams (clinicians, lawyers, data scientists) to audit AI systems.

The Hidden Costs of “Easy” AI Adoption

Many healthcare leaders underestimate the resources required for responsible AI. General-purpose tools like ChatGPT are cheap to implement but demand heavy investment in governance.

For example, Mass General Brigham’s AI for social determinants of health (SDOH) required integrating non-clinical data (e.g., housing, transportation) to avoid gaps in vulnerable populations—a process far costlier than initial development.

Budgeting for Responsible AI:

  • Compliance: Align with evolving standards like the EU AI Act’s transparency requirements.
  • Liability Protection: Insure against errors, as seen in no-fault compensation funds recommended by the WHO.
  • Training: Educate staff to interpret AI outputs critically, reducing “automation bias”.

Pillars of a Future-Proof AI Governance Framework

Dr. Tobey outlines four non-negotiables for healthcare organizations:

  1. Leadership Commitment: Boards must prioritize AI safety as a strategic pillar.
  2. Dedicated Funding: Allocate budgets for audits, updates, and ethical reviews.
  3. Multidisciplinary Oversight: Include ethicists, cybersecurity experts, and patient advocates in AI governance.
  4. Proactive Testing: Simulate real-world scenarios to identify biases or errors before deployment (a simple illustration follows this list).
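
As one narrow example of what proactive testing can look like, the sketch below compares a model’s false-negative rate across patient subgroups before go-live. The record fields and the gap threshold are assumptions made for illustration, not a regulatory standard or a method attributed to Dr. Tobey.

```python
# Hypothetical pre-deployment bias check: compare false-negative rates across
# patient subgroups. Field names and the gap threshold are illustrative only.
from collections import defaultdict

def false_negative_rates(records):
    """records: dicts with 'group', 'label' (1 = condition present),
    and 'prediction' (1 = model flagged the condition)."""
    positives = defaultdict(int)
    misses = defaultdict(int)
    for r in records:
        if r["label"] == 1:
            positives[r["group"]] += 1
            if r["prediction"] == 0:
                misses[r["group"]] += 1
    return {g: misses[g] / positives[g] for g in positives if positives[g] > 0}

def flag_disparities(records, max_gap=0.05):
    """Flag the model for human review if subgroup false-negative rates differ
    by more than max_gap (an assumed governance threshold)."""
    rates = false_negative_rates(records)
    if not rates:
        return False, rates
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates
```

A check like this would typically run on held-out validation data before deployment and again at regular intervals, which is where proactive testing overlaps with the continuous monitoring described earlier.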

Case in Point:
DLA Piper’s work with biopharma clients includes stress-testing AI against edge cases (e.g., rare diseases) and ensuring compliance with GDPR and HIPAA during cross-border data transfers.

Don’t Fear Innovation—Manage It

While risks abound, Tobey cautions against paralysis. Healthcare’s human-centric system already tolerates errors, from misdiagnoses to administrative delays. AI can democratize access, for example through chatbots that provide mental health support in underserved regions.

The key is balancing caution with ambition: “Throw out the baby with the bathwater, and you’ll stifle lifesaving progress.”

Embrace AI—But Anchor It in Governance
AI’s potential to revolutionize healthcare is undeniable, but its risks require equally transformative governance.

By adopting Dr. Tobey’s pillars—leadership buy-in, rigorous testing, and multidisciplinary oversight—health systems can unlock AI’s rewards without unleashing its Pandora’s Box of liabilities.
