Behind the Breach: What Happens When AI Tools Go Rogue
Case: The Therapist, the AI Note Tool, and the Unexpected Disclosure
In 2024, a solo behavioral health therapist used a free AI note-taking app she found online. It transcribed her therapy sessions in real time — and it worked like magic. Until a patient requested their records.
When she pulled the transcripts, she realized the app had been auto-saving sessions to the cloud, unencrypted, with no BAA (Business Associate Agreement) and no access controls. Worse: the company was using her transcripts to train their language model.
The result?
Patient complaint
Reportable HIPAA breach
Legal costs, reputational harm
Complete loss of trust with affected clients
What went wrong?
She didn’t read the privacy policy
No risk assessment was done before trying the tool
No BAA was signed
No audit trail was kept
No offboarding or deletion policy existed
⚖️ Lessons for Providers:
✅ ALWAYS vet an AI tool’s privacy policy before use
Look for: data use in training, cross-border transfers, deletion timelines
Ask: Do they sign a HIPAA-compliant BAA?
✅ NEVER store patient transcripts in third-party tools without clear controls
If it touches PHI, it must be logged, risk-assessed, and covered by a BAA
✅ ALWAYS log your usage — even if it’s just once
One “test” use can still count as a breach if it involves PHI (see the simple log sketch below)
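If you're comfortable with a small script, that log can be as simple as one row per use. Here is a minimal, illustrative Python sketch; the file name, column names, and the vendor "ExampleScribe" are made-up placeholders, not a prescribed HIPAA format, and a paper log or spreadsheet works just as well.

```python
# Minimal sketch of a personal AI-tool usage log (illustrative only).
# The file path, field names, and example vendor are assumptions, not an official format.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_tool_usage_log.csv")  # hypothetical location; keep it somewhere you control
FIELDS = ["timestamp_utc", "tool_name", "purpose", "phi_involved", "baa_in_place", "notes"]

def log_ai_tool_use(tool_name, purpose, phi_involved, baa_in_place, notes=""):
    """Append one row per use, so even a one-off 'test' is documented."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
            "tool_name": tool_name,
            "purpose": purpose,
            "phi_involved": phi_involved,
            "baa_in_place": baa_in_place,
            "notes": notes,
        })

# Example: documenting a single trial run of a hypothetical transcription tool
log_ai_tool_use(
    tool_name="ExampleScribe",
    purpose="Tested session transcription",
    phi_involved=True,
    baa_in_place=False,
    notes="Trial only; no BAA signed yet. Do not use with real sessions.",
)
```

The point isn't the tool you use to keep the log; it's that every use, including trials, is dated, attributed, and flagged for PHI and BAA status.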
👁️ What to Watch For (Red Flags):
🚩 “We use your data to improve our model”
🚩 “We may share anonymized data with partners”
🚩 “You must delete your data manually”
🚩 “We do not sign BAAs” (or they charge $$$ for one)
📣 Moral of the Story:
AI doesn’t have to be risky. But if you don’t vet an AI tool the way you would any other health tech vendor, it absolutely can become your next liability.