Latest HIPAA AI Compliance News: Guidance, Enforcement, and What’s Changing
HIPAA Security Rule Updates
The most significant proposed overhaul of the HIPAA Security Rule in two decades arrived on January 6, 2025, when HHS/OCR published a Notice of Proposed Rulemaking (NPRM) after previewing it on December 27, 2024. The NPRM aims to harden cybersecurity expectations for protecting electronic protected health information (ePHI) in light of escalating attacks against the health sector. As of February 19, 2026, the Security Rule changes are not yet final, but the proposal remains a key signal of where enforcement is headed. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/security/hipaa-security-rule-nprm/index.html?utm_source=openai))
What’s on the table? The NPRM would tighten long-debated areas—most notably by elevating today’s “addressable” standards (e.g., certain forms of data encryption) toward firm, “required” expectations, introducing clearer mandates for risk analysis and risk management, and sharpening business associate accountability. Draft text also contemplates encrypting ePHI in transit and at rest, with only narrow exceptions documented through formal risk-based justifications. ([arnoldporter.com](https://www.arnoldporter.com/en/perspectives/advisories/2025/01/ocr-proposes-major-changes-to-hipaa-security-rule?utm_source=openai))
In parallel, NIST finalized SP 800-66 Revision 2 on February 14, 2024—a comprehensive resource developed with OCR that maps HIPAA safeguards to practical security activities and to the NIST Cybersecurity Framework. Many organizations are using SP 800-66r2 right now to guide Security Rule gap assessments while they monitor the NPRM’s status. ([csrc.nist.gov](https://csrc.nist.gov/News/2024/nist-publishes-sp-80066-revision-2-implementing-th?utm_source=openai))
Risk Analysis Initiative
OCR’s “Risk Analysis Initiative” has become a centerpiece of recent enforcement. In 2025 alone, OCR announced multiple settlements highlighting failures to conduct an “accurate and thorough” risk analysis for ePHI—often uncovered during ransomware investigations. One high‑profile example was the August 18, 2025 BST & Co. settlement, which OCR labeled its 15th ransomware enforcement action and 10th action under the initiative. ([hhs.gov](https://www.hhs.gov/press-room/hhs-ocr-bst-hipaa-settlement.html?utm_source=openai))
Earlier that year, OCR resolved a ransomware investigation with a neurology practice, again underscoring that incomplete risk analysis and risk management drive Security Rule findings. OCR’s public messaging continues to call out core security practices—maintaining audit logs, reviewing system activity, implementing data encryption, and updating training—as non‑negotiable parts of risk management. ([hhs.gov](https://www.hhs.gov/press-room/ocr-hipaa-racap-np.html?utm_source=openai))
Industry coverage of these actions echoes the trend: OCR is scrutinizing whether you have identified where ePHI resides, assessed threats and vulnerabilities, and implemented and documented mitigation plans. If AI systems process ePHI, those systems must be explicitly in scope of your risk analysis. ([techtarget.com](https://www.techtarget.com/healthtechsecurity/feature/HIPAA-compliance-in-the-era-of-OCRs-risk-analysis-initiative?utm_source=openai))
AI Compliance Integration
AI does not change who must comply with HIPAA, but it expands where compliance must show up. Map every AI use case that creates, receives, maintains, or transmits ePHI into your security program and apply the Security Rule’s administrative, physical, and technical safeguards—starting with risk analysis and risk management under 45 C.F.R. §164.308(a)(1). ([law.cornell.edu](https://www.law.cornell.edu/cfr/text/45/164.308?utm_source=openai))
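One practical way to keep AI use cases in scope is to maintain a structured inventory that records how each system touches ePHI and which safeguards apply. The sketch below is illustrative only, assuming hypothetical names (`AIUseCase`, `TranscribeCo`); it is not a prescribed OCR format, just one minimal shape such an inventory record could take.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIUseCase:
    """One row of an AI asset inventory feeding the HIPAA risk analysis."""
    name: str
    ephi_actions: list          # subset of: create, receive, maintain, transmit
    vendor: Optional[str] = None      # an external vendor touching ePHI implies a BAA
    safeguards: list = field(default_factory=list)

    def baa_required(self) -> bool:
        # A third party that handles ePHI on your behalf is a business associate
        return self.vendor is not None and bool(self.ephi_actions)

scribe = AIUseCase(
    name="ambient-clinical-scribe",
    ephi_actions=["create", "transmit"],
    vendor="TranscribeCo",  # hypothetical vendor name
    safeguards=["encryption-in-transit", "audit-logging", "access-control"],
)
print(scribe.baa_required())  # → True
```

Feeding records like this into the §164.308(a)(1) risk analysis makes it straightforward to demonstrate that every AI data flow was identified and assessed.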
Pay special attention to technical safeguards. Audit controls are required; you should be able to record and examine activity in systems that handle ePHI—including model prompts, outputs, and integrations. Implement access controls, authentication, integrity protections, and data encryption mechanisms consistent with §164.312. Where encryption decisions are risk‑based today, document them rigorously and anticipate stricter expectations if the NPRM is finalized. ([law.cornell.edu](https://www.law.cornell.edu/cfr/text/45/164.312?utm_source=openai))
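As a sketch of what auditable AI activity records might look like, the snippet below builds a log entry for one model interaction. The function name and fields are assumptions, not a standard; note the design choice of storing SHA-256 digests of the prompt and output rather than the raw text, so the audit trail itself does not become an additional ePHI repository while still letting reviewers verify integrity against the primary store.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id: str, system: str, prompt: str, output: str) -> dict:
    """Build an audit entry for one AI interaction.

    Raw prompt/output text (which may contain ePHI) is not stored in the
    log; only SHA-256 digests are kept, so reviewers can verify integrity
    against the primary data store without duplicating ePHI into logs.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "system": system,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

entry = audit_record("clinician-42", "triage-assistant",
                     "Summarize chart for MRN 000123", "Patient summary ...")
print(json.dumps(entry, indent=2))
```

Entries like this can then feed the regular system-activity reviews that OCR expects under §164.308(a)(1)(ii)(D).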
Third‑party AI vendors that handle ePHI are business associates. You must execute a HIPAA‑compliant business associate agreement (BAA), even if the vendor only stores encrypted ePHI without a decryption key, and you must evaluate their controls as part of your risk analysis. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/special-topics/health-information-technology/cloud-computing/index.html?utm_source=openai))
If your AI capabilities are embedded in certified EHR technology, ONC’s HTI‑1 Final Rule adds algorithm transparency and risk management expectations for “predictive decision support interventions” (DSIs). These requirements help clinical users assess fairness, appropriateness, validity, effectiveness, and safety—factors you should already be documenting for AI that influences care. ([hipaajournal.com](https://www.hipaajournal.com/onc-publishes-hti-1-final-rule/?utm_source=openai))
Continuous Monitoring vs. Annual Audits
Annual audits alone no longer meet the moment. HIPAA requires periodic evaluation of your security program under §164.308(a)(8), and leading practice is to operationalize continuous monitoring so you can detect issues in near‑real time and adapt controls as your AI and data pipelines evolve. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/audit/protocol-edited/index.html?utm_source=openai))
NIST SP 800‑137 details how to build an information security continuous monitoring (ISCM) program—integrating asset inventories, vulnerability management, configuration baselines, log analytics, and incident response into ongoing risk decisions. For AI, extend ISCM to model and data lifecycle risks: track model versions, datasets, feature stores, drift, adversarial testing, and prompt/output audit logs tied to ePHI. ([csrc.nist.gov](https://csrc.nist.gov/pubs/sp/800/137/final?utm_source=openai))
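A minimal sketch of one such model-lifecycle check, assuming a standardized mean-shift statistic as the drift signal (the 3.0 threshold is an assumption to tune per feature, not a NIST-specified value):

```python
from statistics import mean, stdev

def drift_score(baseline: list, current: list) -> float:
    """Standardized shift of a model input feature's mean between a
    baseline window and a current window; large values suggest drift."""
    base_sd = stdev(baseline)
    if base_sd == 0:
        return float("inf") if mean(current) != mean(baseline) else 0.0
    return abs(mean(current) - mean(baseline)) / base_sd

baseline = [0.48, 0.52, 0.50, 0.49, 0.51]   # historical feature values
current = [0.70, 0.72, 0.69, 0.71, 0.73]    # recent feature values
score = drift_score(baseline, current)
print(f"drift score: {score:.2f}")
if score > 3.0:  # threshold is an assumption; tune per feature
    print("ALERT: feature distribution shifted; review model inputs")
```

Wiring checks like this into the same alerting pipeline as vulnerability and log analytics keeps model risk inside the ISCM loop rather than in a separate, slower review cycle.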
Enforcement Emphasis on Security Compliance
OCR’s 2025 actions reinforced a broader enforcement posture: ransomware enforcement and Security Rule gaps remain front and center. Alongside the BST & Co. settlement, OCR’s resolution agreements page shows additional 2025 matters (including a New York ambulatory surgery center) focused on risk analysis failures and breach notification timeliness—both of which directly affect AI‑enabled environments that store or transmit ePHI. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/agreements/index.html?utm_source=openai))
The compliance takeaway is consistent: demonstrate that your administrative controls, data encryption decisions, access management, and audit logs are implemented, reviewed, and improved as systems change—especially when AI services, models, or data flows are introduced. ([hhs.gov](https://www.hhs.gov/press-room/hhs-ocr-bst-hipaa-settlement.html?utm_source=openai))
AI-Specific Compliance Risks
ePHI leakage through AI pipelines is a concrete risk. Recent research on medical document OCR with vision‑language models shows that masking alone may not prevent recovery of structured identifiers, highlighting the need for layered controls and post‑processing before storage or downstream use. Treat training, fine‑tuning, inference prompts, and outputs as potential ePHI flows and monitor them accordingly. ([arxiv.org](https://arxiv.org/abs/2511.18272?utm_source=openai))
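One layered control the research points toward is a post-processing scrub that runs after model output and before storage. The sketch below is illustrative only: the regex patterns cover a few common structured identifiers and are not a complete de-identification method (HIPAA's Safe Harbor standard enumerates 18 identifier types); a production pipeline would need a vetted de-identification tool behind it.

```python
import re

# Illustrative patterns only; not a complete de-identification method.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace common structured identifiers before storage or
    downstream use; a second layer behind upstream masking."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

ocr_output = "Pt seen 03/14/2025, MRN: 00012345, callback 555-867-5309."
print(scrub(ocr_output))
# → "Pt seen [DATE], [MRN], callback [PHONE]."
```

Running the scrub at the pipeline boundary, and logging when it fires, also produces evidence for the risk analysis that leakage paths were considered and mitigated.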
Vendor exposure remains a top concern. If prompts, datasets, or outputs containing ePHI are transmitted to external services, you’ve likely made a HIPAA disclosure that requires a BAA and corresponding safeguards. Incorporate AI vendors into your risk analysis, ensure minimum necessary data use, and verify audit logs and retention/deletion practices. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/special-topics/health-information-technology/cloud-computing/index.html?utm_source=openai))
Bias and safety risks intersect with HIPAA when AI influences care. Adopt bias audits and performance monitoring to surface disparate impacts, and document your “intervention risk management” for predictive DSIs where applicable. These steps align with transparency expectations in HTI‑1 and with NIST’s AI Risk Management Framework emphasis on trustworthy, measurable AI. ([hipaajournal.com](https://www.hipaajournal.com/onc-publishes-hti-1-final-rule/?utm_source=openai))
Compliance Frameworks for AI
Build on proven standards and sector guidance
- Use NIST SP 800‑66r2 to translate HIPAA requirements into concrete security activities for AI systems that handle ePHI (asset inventories, audit logs, encryption decisions, contingency planning). ([csrc.nist.gov](https://csrc.nist.gov/News/2024/nist-publishes-sp-80066-revision-2-implementing-th?utm_source=openai))
- Adopt the NIST AI Risk Management Framework to structure governance and risk controls across AI lifecycles (govern, map, measure, manage), including fairness, explainability, robustness, and security. ([nist.gov](https://www.nist.gov/news-events/events/2023/01/nist-ai-risk-management-framework-ai-rmf-10-launch?utm_source=openai))
- Consider ISO/IEC 42001 to formalize an AI management system (policy, roles, risk, transparency, monitoring) that complements HIPAA’s administrative controls. ([iso.org](https://www.iso.org/standard/42001?utm_source=openai))
- Leverage HHS 405(d) Health Industry Cybersecurity Practices (HICP) to prioritize sector‑specific controls against the most common threats targeting healthcare. ([405d.hhs.gov](https://405d.hhs.gov/Documents/405d-security-operations-center-n-incident-response-pulse-check_R.pdf?utm_source=openai))
- If you develop or deliver certified EHR capabilities, align with ONC HTI‑1’s DSI transparency and risk management expectations for predictive models used in clinical workflows. ([hipaajournal.com](https://www.hipaajournal.com/onc-publishes-hti-1-final-rule/?utm_source=openai))
Conclusion
The bottom line: risk analysis, documented safeguards, and continuous monitoring are now table stakes for AI that touches ePHI. Track the Security Rule NPRM, fold AI systems into your HIPAA program, harden administrative controls and data encryption, and keep defensible audit logs. Doing so positions you for ransomware‑era enforcement and for the transparency regulators expect from AI in healthcare. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/security/hipaa-security-rule-nprm/factsheet/index.html?utm_source=openai))
FAQs
What are the recent updates to the HIPAA Security Rule?
HHS/OCR issued a Security Rule NPRM on December 27, 2024 (published January 6, 2025) proposing stronger cybersecurity requirements—tightening risk analysis/management expectations, clarifying business associate obligations, and moving toward mandatory encryption of ePHI at rest and in transit with limited exceptions. The proposal is not final as of February 19, 2026, but it signals higher baseline expectations. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/security/hipaa-security-rule-nprm/index.html?utm_source=openai))
How does AI impact HIPAA compliance requirements?
HIPAA’s scope is unchanged, but AI broadens where you must apply it. Any AI use case that creates, receives, maintains, or transmits ePHI must be in your risk analysis, with safeguards across administrative controls, access and authentication, audit logs, and data encryption under §164.312; third‑party AI vendors that handle ePHI require BAAs. If AI is packaged in certified EHR technology, ONC’s HTI‑1 rule adds algorithm transparency and risk management duties. ([law.cornell.edu](https://www.law.cornell.edu/cfr/text/45/164.312?utm_source=openai))
What enforcement actions has OCR taken related to AI and HIPAA?
OCR’s public actions have focused on Security Rule gaps that apply directly to AI environments—especially risk analysis failures revealed during ransomware incidents. In 2025, OCR announced multiple settlements, including BST & Co. (August 18, 2025), and highlighted its Risk Analysis Initiative and ongoing ransomware enforcement. These expectations attach to AI systems whenever they process ePHI. ([hhs.gov](https://www.hhs.gov/press-room/hhs-ocr-bst-hipaa-settlement.html?utm_source=openai))
How can continuous monitoring improve HIPAA compliance for AI systems?
Continuous monitoring operationalizes HIPAA’s required periodic evaluation by giving you near‑real‑time visibility into AI and data pipelines. Applying NIST SP 800‑137 principles—asset and config awareness, vulnerability and patch cadence, log analytics, incident response—plus model‑specific metrics (drift, performance, fairness) strengthens detection, response, and documentation across the AI lifecycle. ([hhs.gov](https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/audit/protocol-edited/index.html?utm_source=openai))
Ready to assess your HIPAA security risks?
Join thousands of organizations that use Accountable to identify and fix their security gaps.
Take the Free Risk Assessment