Where HIPAA Might Go Next: Preparing for AI in 2025+
🚨 The Current Reality:
HIPAA, written in 1996, was never built for generative AI, LLMs, or predictive models. Yet today, solo providers are using tools like ChatGPT, Abridge, and ambient scribe apps in daily care delivery.
📈 What Might Change:
🧠 1. Revised Definitions of PHI
AI models may force regulators to redefine what counts as PHI. Generated notes, voice-to-text outputs, and inferred conditions could be regulated as part of a patient’s Designated Record Set (DRS).
🔁 2. Tighter Vendor Rules
We expect HHS to issue explicit guidance for AI vendors in healthcare — likely requiring more BAAs, greater transparency, and data-minimization requirements.
🧾 3. Mandatory AI Documentation
Policies, risk assessments, and audit logs will likely become required during OCR audits, not just best practice.
👥 4. Shared Liability Models
If an AI tool “goes rogue,” providers and vendors may both bear liability — especially when no human-review process is in place.
✅ How to Prepare Now:
Write an AI Acceptable Use Policy
Keep a prompt/output audit log
Vet vendors for a signed BAA, clear data-use terms, and model security
Don’t wait — assume AI tools will be regulated like any other vendor that handles PHI
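One practical starting point for the prompt/output audit log above is a simple append-only JSON-lines file. This is a minimal sketch, not a compliance-certified implementation: the file path, field names, and `log_ai_interaction` helper are all illustrative assumptions. It stores SHA-256 hashes rather than raw text so the log itself does not duplicate PHI; if you log raw prompts instead, the log becomes PHI and must be secured accordingly.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical path — store it wherever your access controls and retention policy apply.
AUDIT_LOG_PATH = "ai_audit_log.jsonl"


def log_ai_interaction(user_id: str, tool: str, prompt: str, output: str) -> dict:
    """Append one prompt/output event to a JSON-lines audit log.

    Records SHA-256 hashes of the prompt and output (so raw PHI is not
    copied into the log), plus character counts for basic triage and a
    flag for documenting human review.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "prompt_chars": len(prompt),
        "output_chars": len(output),
        "human_reviewed": False,  # flip to True after clinician sign-off
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Hashing is a design trade-off: it proves *that* a given prompt/output pair occurred (you can re-hash a disputed transcript to match it) without retaining the content itself.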