Who’s auditing AI? Safeguarding compliance in ambulatory care
From predictive algorithms that flag high-risk patients to virtual assistants that support triage in urgent care, AI is embedded in acute and ambulatory settings today.
Oct 16, 2025

Leaders are excited about the potential for efficiency, accuracy, and cost savings. Yet, as AI expands, so do questions about its reliability, compliance, and patient safety. The future of healthcare innovation cannot be separated from the future of clinical auditing. The challenge for hospitals and clinics is not just how to adopt AI, but how to govern it responsibly.
The overlooked gap: Upholding clinical accountability in AI
Healthcare organizations have long relied on audits to validate compliance, safeguard reimbursement, and reduce risk. Clinical documentation audits, coding reviews, and regulatory compliance checks are routine. But, in many systems, the algorithms now guiding documentation, coding prompts, or clinical decision-making go largely unexamined.
Consider the questions most boards and compliance leaders cannot yet answer:
- How accurate are AI-generated notes compared to the medical record?
- Could AI-driven clinical documentation integrity (CDI) tools introduce systematic overcoding or undercoding?
- What mechanisms exist to catch bias or error when predictive analytics are used for patient stratification or triage?
- How does AI output align with Centers for Medicare & Medicaid Services (CMS), Health Insurance Portability and Accountability Act (HIPAA), and Joint Commission expectations?
Without a structured audit framework, these risks are hidden. Organizations may unknowingly rely on flawed outputs, leaving themselves exposed to regulatory penalties, payor denials, or even patient harm.
AI and documentation integrity: A new frontier for CDI
One of the most immediate areas of risk is AI-enabled clinical documentation. Natural language processing and ambient listening tools promise to relieve clinician burden by drafting notes automatically. Meanwhile, advanced CDI systems use AI to suggest diagnoses or prompt documentation clarification.
Although these innovations can improve efficiency, they also introduce risk, including:
- Inaccurate clinical capture: AI-generated notes may misrepresent what was said or omit key context.
- Overcoding/undercoding: Suggested diagnoses may not align with clinical judgment, leading to compliance concerns.
- Provider detachment: Clinicians may sign off on notes they did not fully review, increasing liability.
Auditing is essential to validate that AI-assisted documentation is accurate, compliant, and truly reflective of the encounter.
Expanding clinical audit’s role in the AI era
Traditional audit has focused on coding accuracy, compliance, and clinical quality. But as AI expands, so must the audit scope. Future-ready clinical auditing must evaluate not only what is documented but also how it was generated. Key areas of AI-focused auditing include:
1. Algorithm validation
- Are AI tools producing accurate outputs when compared to human-reviewed cases?
- Do outputs align with clinical and regulatory standards?
2. Bias and equity monitoring
- Do predictive tools disproportionately affect certain populations?
- Are audit processes designed to catch systemic disparities that AI introduces?
3. Compliance with standards
- Are AI-generated outputs consistent with CMS guidelines, ICD-10-CM coding standards, and Joint Commission requirements?
- Are HIPAA safeguards applied to AI data processing and storage?
4. Human oversight
- Are clinicians consistently reviewing and validating AI-assisted documentation?
- Do policies ensure a “human in the loop” safeguard?
5. Revenue integrity
- Could AI-driven CDI or coding tools create patterns that lead to payor denials or fraud scrutiny?
- Are there audit trails to show how final codes and charges were derived?
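To make the algorithm-validation and bias-monitoring ideas above concrete, the sketch below compares AI-suggested codes against human-reviewed "gold standard" codes, both overall and by patient subgroup. It is a minimal illustration, not a prescribed audit methodology: the record fields, subgroup labels, and sample codes are all hypothetical assumptions.

```python
# Illustrative sketch only: measures agreement between AI-suggested and
# human-reviewed codes, overall and per patient subgroup. All field
# names and sample data are hypothetical.

from collections import defaultdict

def audit_agreement(records):
    """Return (overall_rate, per_group_rates) of AI/human code agreement."""
    overall = {"match": 0, "total": 0}
    by_group = defaultdict(lambda: {"match": 0, "total": 0})
    for r in records:
        match = r["ai_code"] == r["reviewed_code"]
        overall["total"] += 1
        overall["match"] += match
        g = by_group[r["group"]]
        g["total"] += 1
        g["match"] += match
    rate = lambda c: c["match"] / c["total"] if c["total"] else None
    return rate(overall), {k: rate(v) for k, v in by_group.items()}

# Hypothetical audit sample: each record pairs the AI's suggested code
# with the code a human auditor validated, plus a subgroup label.
sample = [
    {"ai_code": "E11.9", "reviewed_code": "E11.9", "group": "A"},
    {"ai_code": "I10",   "reviewed_code": "I10",   "group": "A"},
    {"ai_code": "J18.9", "reviewed_code": "J15.9", "group": "B"},
    {"ai_code": "N39.0", "reviewed_code": "N39.0", "group": "B"},
]

overall, groups = audit_agreement(sample)
print(overall)   # 0.75
print(groups)    # {'A': 1.0, 'B': 0.5}
```

A real audit would also retain the per-record comparisons as an audit trail, so reviewers can trace how any final code was derived; the per-subgroup breakdown is what surfaces the kind of systematic disparity the bias-monitoring step is meant to catch.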
Acute and ambulatory care: Different contexts, same risks
AI is not just an inpatient hospital issue. Ambulatory and urgent care settings are adopting AI at a rapid pace—from chatbots that triage patients before appointments to AI scribes that streamline outpatient visits.
- In acute care, risks include algorithm-driven decision support for sepsis, readmission prediction, and documentation integrity for high-risk diagnosis-related groups (DRGs).
- In ambulatory care, risks range from front-end scheduling tools that mis-prioritize patients to outpatient notes generated with incomplete context.
No matter the setting, the same truth applies: If AI influences care delivery, it must be auditable.
The future: A dual mandate
Looking ahead, clinical audit’s role will evolve into a dual mandate:
- Audit clinical documentation, coding, and outcomes—the traditional scope.
- Audit the technology tools shaping those outcomes—the new frontier.
This dual approach ensures innovation does not come at the expense of compliance or safety. It also reassures providers, payors, and regulators that organizations are applying appropriate governance to AI adoption.
How Kodiak can help
At Kodiak, we believe clinical auditing is not just about “checking the box”—it’s about creating resilience. Our audit frameworks already span acute and ambulatory settings, combining expertise in coding, CDI, regulatory compliance, and clinical risk. As AI expands, we are helping organizations design AI-aware audit strategies that:
- Validate AI-assisted documentation and coding outputs.
- Assess compliance with CMS, HIPAA, and Joint Commission standards.
- Identify risks before payors, regulators, or patients do.
- Provide leaders with the confidence that innovation is matched by oversight.
The question is no longer if AI will shape healthcare—it already has. The real question is: Who is auditing the AI?
For hospitals and ambulatory clinics, the answer must be: You are—with the right partners by your side.