AI + HIPAA: What You Can (and Absolutely Cannot) Do with AI in Healthcare

Because “We didn’t know that was a violation” doesn’t work as a legal defense.

AI is changing healthcare—fast. It’s helping predict patient outcomes, assist with documentation, and streamline workflows in ways we couldn’t have imagined five years ago. But it’s also raising a lot of eyebrows (and red flags) when it comes to HIPAA compliance.

And contrary to what you may have heard, HIPAA is responding.

In late 2024, the Department of Health and Human Services (HHS) released a Notice of Proposed Rulemaking (NPRM) to modify the HIPAA Security Rule—specifically addressing how emerging technologies like AI must be governed in regulated environments.

So no, the old "HIPAA hasn’t caught up to AI yet" excuse doesn’t fly anymore. Let’s talk about what the rules actually say, what the proposed changes mean, and how your organization can use AI in smart, compliant ways.

🧠 What Counts as AI in Healthcare?

Let’s start simple: what do we mean when we say “AI in healthcare”? AI could be:

  • Machine learning tools that analyze lab results

  • Voice-to-text documentation systems

  • Predictive analytics platforms in your EHR

  • Chatbots for billing or symptom checking

  • Generative AI tools like ChatGPT used to summarize clinical notes

If any of those systems interact with PHI (Protected Health Information)—even indirectly—they’re subject to HIPAA.

🏥 What HIPAA Says (and What’s Changing)

HIPAA was passed in 1996, so it didn’t exactly foresee deep learning models or AI-driven diagnostics. But the HIPAA Security Rule still applies to any technology that accesses, processes, or transmits ePHI.

In late 2024, HHS proposed new changes to modernize the Security Rule. These are the most relevant to AI:

🔍 1. Mandatory Asset Inventories

Entities would be required to create and maintain written inventories of all technology assets, including AI tools that process or analyze PHI. That means if you’ve deployed a generative AI tool in a clinical workflow—it must be tracked, documented, and evaluated for risk.
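A written inventory doesn’t need to be elaborate to satisfy the intent here. Below is a minimal sketch of what a single tracked entry might look like, assuming a simple internal Python record (the field names are illustrative, not terms taken from the proposed rule):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIAssetRecord:
    """One entry in a written technology asset inventory (illustrative fields only)."""
    name: str                          # e.g., "Ambient clinical documentation tool"
    vendor: str                        # who supplies or hosts the tool
    touches_phi: bool                  # does it access, process, or transmit ePHI?
    baa_signed: bool                   # is a Business Associate Agreement in place?
    last_risk_review: Optional[date]   # when was it last evaluated for risk?
    owners: list[str] = field(default_factory=list)  # who is accountable internally

# Hypothetical entry for a voice-to-text documentation tool
inventory = [
    AIAssetRecord(
        name="Voice-to-text clinical notes",
        vendor="ExampleVendor",        # hypothetical vendor name
        touches_phi=True,
        baa_signed=True,
        last_risk_review=date(2025, 1, 15),
        owners=["Compliance Office", "Clinical Informatics"],
    )
]
```

Even a spreadsheet with the same columns would serve the purpose; the point is that every AI tool in a clinical workflow shows up somewhere you can audit it.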

🔐 2. Stronger Risk Analysis Requirements

The new rule would require organizations to evaluate:

  • How much PHI an AI tool interacts with

  • Who has access to the data and outputs

  • Whether the tool’s output could result in inappropriate disclosure

It’s not just “does this tool touch PHI?” anymore—it’s how, where, and who’s involved.
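To make those three questions concrete, here’s a rough sketch of how they might be captured in a structured triage step. The categories and thresholds are assumptions for illustration, not language from the NPRM:

```python
def triage_ai_tool_risk(phi_volume: str, access_groups: list[str],
                        output_can_disclose_phi: bool) -> str:
    """Rough first-pass triage of an AI tool's PHI risk (illustrative only).

    phi_volume: "none", "limited", or "broad" -- how much PHI the tool interacts with
    access_groups: who can see the underlying data and the tool's outputs
    output_can_disclose_phi: could the output itself expose PHI inappropriately?
    """
    if phi_volume == "none":
        return "low"           # still inventory it, but PHI safeguards don't apply
    if output_can_disclose_phi or phi_volume == "broad":
        return "high"          # escalate to a formal risk analysis and safeguards
    if len(access_groups) > 2:
        return "medium"        # broad internal access warrants documented review
    return "medium"

# Example: a chatbot that summarizes visit notes for billing staff
print(triage_ai_tool_risk("broad", ["billing", "clinical"], output_can_disclose_phi=True))
# prints "high"
```

A step like this doesn’t replace anything; its output feeds the formal, documented risk analysis the Security Rule already requires.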

📋 3. AI Governance Programs

Perhaps the biggest shift: the proposed rule highlights the need for AI governance within HIPAA-regulated entities. That means documented policies, usage boundaries, and safeguards tailored to how your organization uses AI.

This moves us from passive compliance to active oversight—and fast.

💡 Read more from HHS: NPRM Fact Sheet

😬 Where Organizations Get It Wrong

You don’t need to go rogue with AI to violate HIPAA. Plenty of violations happen with good intentions and bad assumptions. A few of the most common mistakes we’ve seen:

❌ Uploading patient data into ChatGPT “just to help write a summary”

Unless you’re using a HIPAA-compliant version (with a signed BAA), you’ve just disclosed PHI to an unauthorized third party.

❌ Using AI transcription tools with no BAA in place

Some popular voice-to-text apps offer amazing features. But if they don’t offer a Business Associate Agreement, they cannot legally process PHI.

❌ Letting AI models “learn” on real patient data

Training an internal model on PHI without appropriate de-identification or controls could violate both HIPAA and your own internal policies.

✅ What You Can (and Should) Do with AI in Healthcare

HIPAA doesn’t prohibit AI. It just demands that you use it safely, transparently, and with documented oversight.

✔ Use vendors that are HIPAA-compliant and offer a BAA

If the tool touches PHI, this is non-negotiable. No BAA = no-go.

✔ De-identify data where possible

Removing identifiers (names, dates, SSNs, facial images, etc.) can let you explore AI capabilities with much lower risk.
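As a rough illustration of the idea (not a substitute for real de-identification), here is a naive scrub of a few obvious identifier patterns. Under HIPAA’s Safe Harbor method you’d need to remove 18 categories of identifiers, or use expert determination, so treat this as a sketch of the concept only:

```python
import re

def scrub_obvious_identifiers(text: str) -> str:
    """Naive redaction of a few identifier patterns (illustrative only).

    This is NOT full HIPAA de-identification: Safe Harbor requires removing
    18 categories of identifiers; expert determination is the alternative path.
    """
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)                   # SSNs
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)            # dates
    text = re.sub(r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b", "[PHONE]", text)  # phone numbers
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)           # email addresses
    return text

sample = "Pt. Jane Doe, DOB 4/12/1986, call 555-867-5309 with results."
print(scrub_obvious_identifiers(sample))
# prints "Pt. Jane Doe, DOB [DATE], call [PHONE] with results."
```

Notice the patient’s name sails right through. That’s exactly why real de-identification usually means purpose-built tooling or expert review, not a handful of regexes.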

✔ Define your governance model now

Even though the NPRM isn’t final yet, get ahead of it. That means asset inventories, use policies, and documented risk evaluations for every AI tool in use.
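One lightweight way to turn that into practice is an internal approval gate: before anyone pilots a new AI tool with patient data, it has to pass a documented check. The conditions below are assumptions about what a reasonable internal rule set might look like, not requirements quoted from HIPAA:

```python
def approved_for_phi_use(tool: dict) -> tuple[bool, str]:
    """Check a proposed AI tool against a simple internal use policy (illustrative only)."""
    if not tool.get("touches_phi"):
        return True, "Tool does not touch PHI; standard IT review applies."
    if not tool.get("baa_signed"):
        return False, "No BAA in place; the tool may not process PHI."
    if not tool.get("risk_review_on_file"):
        return False, "No documented risk evaluation; hold until one is completed."
    return True, "BAA signed and risk evaluation documented."

# Example: a generative summarization tool someone wants to pilot next week
print(approved_for_phi_use({"touches_phi": True, "baa_signed": False}))
# prints (False, 'No BAA in place; the tool may not process PHI.')
```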

✔ Educate your teams

Your clinicians, support staff, and even marketing teams need to understand that not all AI is safe to use with sensitive data. Internal policy is your first line of defense.

🐴 How Ferrous Equine Helps

We work with healthcare organizations navigating this exact challenge: “How do we explore AI tools without ending up on the wrong side of HIPAA?”

We help by:

  • Reviewing current and proposed HIPAA requirements through an AI lens

  • Auditing your tech stack for unauthorized or unvetted tools

  • Helping create practical, usable AI governance frameworks

  • Advising on vendor reviews, BAAs, and internal approval processes

  • Including AI usage in your risk assessments and documentation

We're not just trying to slow you down—we're here to keep your innovation sustainable.

Final Thought: AI Is Here. HIPAA Is Catching Up. Are You?

AI isn’t a HIPAA loophole—it’s a compliance challenge. But it's also a massive opportunity for those willing to build carefully, document clearly, and lead responsibly.

The NPRM makes it crystal clear: if you’re using AI in a way that impacts PHI, you must treat it like any other sensitive asset—with governance, safeguards, and visibility.

So if you’re pushing forward with AI in your organization: good. Now make sure your compliance program is moving just as fast.

👉 Need help aligning your AI strategy with HIPAA today—and what’s coming tomorrow?
We’ll help you move forward confidently, with smart policy, clear oversight, and no guessing.

 
