Red Flags in AI Privacy Policies: What to Look For

When reviewing privacy policies or terms of use for AI tools, keep an eye out for these common risk signals:

❌ Red Flag #1: “We may use your data to improve our models.”

→ This often means your inputs may be stored, reviewed by humans, and used to train future versions of the model.

❌ Red Flag #2: “We do not accept responsibility for incorrect outputs.”

→ Because the vendor disclaims liability for accuracy, that risk falls on you. You need strong disclaimers and manual review protocols.

❌ Red Flag #3: “We do not allow use for sensitive information, including health data.”

→ This disqualifies the tool from handling any protected health information (PHI).

❌ Red Flag #4: No mention of HIPAA, a Business Associate Agreement (BAA), or compliance frameworks

→ A vendor that serves healthcare clients should, at minimum, reference recognized security standards.

❌ Red Flag #5: “You agree to indemnify us...”

→ Look out for legal clauses that shift all liability to you, the user.

🟩 Green Flags to Look For:

  • HIPAA or SOC 2 Type II mentioned

  • Ability to turn off model training on your data

  • Option to enter into a BAA

  • Enterprise plan with enhanced security
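
If you review many tools, a simple keyword scan can help triage policies before a human reads them closely. Below is a minimal Python sketch that checks a policy's text for the red- and green-flag phrases discussed above. The phrase lists and the scan_policy helper are illustrative assumptions, not a vetted screening tool; a match is a prompt for closer legal review, never a verdict.

RED_FLAGS = {
    "trains on user data": ("improve our models", "train our models"),
    "disclaims output accuracy": ("do not accept responsibility", "incorrect outputs"),
    "prohibits sensitive data": ("sensitive information", "health data"),
    "shifts liability to user": ("indemnify",),
}

GREEN_FLAGS = {
    "names compliance frameworks": ("hipaa", "soc 2 type ii"),
    "offers training opt-out": ("turn off training", "opt out of training"),
    "offers a baa": ("business associate agreement",),
}

def scan_policy(text):
    """Return the red- and green-flag categories whose phrases appear in the text."""
    lowered = text.lower()
    def hits(flags):
        return [name for name, phrases in flags.items()
                if any(p in lowered for p in phrases)]
    return {"red": hits(RED_FLAGS), "green": hits(GREEN_FLAGS)}

# Example: a policy that trains on user data but offers a BAA.
sample = ("We may use your data to improve our models. "
          "Enterprise customers may execute a Business Associate Agreement.")
print(scan_policy(sample))
# -> {'red': ['trains on user data'], 'green': ['offers a baa']}

Note that a clean scan proves nothing on its own: vendors phrase these clauses many different ways, so the checklist above still has to be applied by a human reader.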
