
COSO Just Dropped Its GenAI Internal Control Guidance. Here's What Internal Auditors Need to Do With It.

The Committee of Sponsoring Organizations released something in February 2026 that internal audit teams have needed for months: a principle-based playbook for GenAI governance that actually maps to the framework you already use.

No parallel structure to build. No competing model to reconcile. The existing COSO Internal Control Integrated Framework applies. The terrain underneath it just changed.

We're breaking down the key insights, connecting them to what we've been writing about at Cherry Hill Advisory, and identifying the action items that matter most for audit leaders right now.

Why This Publication Matters

COSO has been the backbone of internal control frameworks since 1992. When COSO releases guidance, it reshapes how organizations design, test, and report on controls.

Their latest publication, Achieving Effective Internal Control Over Generative AI, does something the profession desperately needed. It maps GenAI risks directly to the five COSO components and 17 principles that auditors already know.

That distinction matters.

Organizations don't need to build a parallel governance structure for AI. They need to extend the one they already have, with sharper risk identification, updated control activities, and monitoring that keeps pace with how fast GenAI systems evolve.

And the timing is urgent. 77% of employees have been observed sharing sensitive, proprietary information with tools like ChatGPT. GenAI-related data loss prevention (DLP) incidents have increased more than 2.5x and now comprise 14% of all DLP incidents.

Shadow AI isn't theoretical. It's already happening in your organization.

The Capability-First Lens: A Structural Shift for Audit Planning

One of the most useful contributions in this publication is the eight capability types framework.

Instead of organizing GenAI by vendor, product name, or department, COSO classifies it by what the AI actually does in the data-to-decision lifecycle:

  • Data extraction and ingestion (capturing raw data from structured and unstructured sources)
  • Data transformation and integration (cleaning, normalizing, combining)
  • Automated transaction processing and reconciliation (high-volume execution)
  • Workflow orchestration and autonomous task execution (multi-step agent coordination)
  • Judgment, forecasting, and insight generation (producing analysis and forecasts)
  • AI-powered monitoring and continuous review (anomaly scanning)
  • Knowledge retrieval and summarization (condensing large information volumes)
  • Human-AI collaboration (chat-based augmentation of human work)

This isn't academic taxonomy. It's an audit scoping tool.

When you classify GenAI by capability type, you can trace where risk originates, how it propagates downstream, and where controls need to sit. A hallucination in capability type 1 (ingestion) has a different blast radius than a hallucination in capability type 5 (judgment). The control response should be different too.

For internal auditors: Map every GenAI use case in your organization to one of these eight types. That single exercise will expose gaps in your current risk assessment faster than any vendor demo or maturity model.

Five Foundational Characteristics That Change How You Think About Controls

Before diving into the COSO component mapping, the publication establishes five foundational characteristics of GenAI that should inform every control decision:

GenAI is probabilistic, not deterministic. Outputs are claims, not facts. Controls must treat them that way. For general-purpose LLMs like GPT-4, hallucinations occur in roughly 3% of RAG-based responses. In specialized domains, rates can spike as high as 60–80%.

GenAI is dynamic. Models, prompts, and retrieval data change frequently. Annual risk assessments are insufficient.

GenAI is easily scalable (for better or worse). It can scale quality. It can also scale errors and bias at the same rate.

GenAI has a low barrier to entry. Shadow AI usage in some industries has increased as much as 250% year over year.

GenAI can help govern GenAI. Multi-model validation, automated monitoring, and documentation generation are legitimate governance tools.

The fifth characteristic is worth highlighting. The publication explicitly endorses using GenAI as part of the control structure itself, provided the controls around those governance tools are properly designed and tested.

This is a meaningful signal from COSO. It validates the direction many forward-thinking audit functions are already moving.

Connecting the Dots: How This Ties to What We've Been Saying at Cherry Hill Advisory

We've been writing and speaking about GenAI governance for internal audit since before the frameworks caught up. This COSO publication validates several themes we've been pushing hard.

The Vanishing Audit Trail Problem

In our recent article, The Vanishing Audit Trail: What Internal Audit Needs to Know About AI Reasoning, we flagged a specific risk: AI models may stop producing readable reasoning traces. Chain of Thought (CoT) monitorability, one of the few transparency mechanisms available to auditors, is fragile and already degrading in some systems.

COSO's publication reinforces this concern under Principle 13 (Uses relevant, quality information) and Principle 16 (Conducts ongoing and/or separate evaluations).

The guidance states that processes using GenAI should capture and store all information needed to understand, validate, and assess outputs. That includes prompts, inputs, outputs, source references, model versions, and confidence scores.

The connection is direct: if your AI systems stop exposing how they reason, you can't meet COSO's information quality requirements. The audit trail doesn't just get harder to interpret. It disappears.
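In practice, meeting that information-quality bar means logging a complete record for every GenAI interaction. This is a minimal sketch, assuming a simple list-like store; the function name, field names, and hashing choice are illustrative, not prescribed by COSO, but the captured fields track the ones the guidance lists: prompts, inputs, outputs, source references, model versions, and confidence scores.

```python
import datetime
import hashlib
import json

def log_genai_interaction(prompt, model_version, output, sources, confidence, store):
    """Append an auditable record of one GenAI interaction to `store`."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "model_version": model_version,
        "output": output,
        "source_refs": sources,
        "confidence": confidence,
    }
    # Hash the canonical record so later integrity checks can detect tampering.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    store.append(record)
    return record
```

The point of the hash is that an audit trail only counts as evidence if you can show it hasn't been altered after the fact.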

Ethics in the Age of Algorithms

Our CPE program, Ethics in the Age of Algorithms, has been delivered to IIA chapters globally. It focuses on how internal auditors should navigate the ethical implications of AI, apply the IIA Code of Ethics to modern scenarios, and use structured decision-making models for AI-related dilemmas.

COSO's guidance under Principle 1 (Integrity and ethical values) calls for a GenAI Acceptable Use Policy that addresses bias avoidance, prohibited data types, and transparency commitments. Under Principle 8 (Assesses fraud risk), the publication identifies deepfakes, synthetic records, and model manipulation as emerging fraud vectors.

These aren't abstract governance ideas. They're the operational reality our CPE program prepares auditors to handle.

AI Governance as a Service Line

Cherry Hill Advisory's AI Governance and Emerging Risk practice exists because we saw this moment coming.

Our AI Auditing Guide, built on the IIA AI Auditing Framework and the NIST AI Risk Management Framework, gives practitioners the evaluation tools this COSO publication now demands.

The COSO paper and our existing tools are complementary. COSO tells you what needs to be controlled. Our guide helps you test whether those controls are working.

Seven Insights Internal Auditors Should Not Miss

After reading this publication cover to cover, here are the insights that stand out most for practitioners.

1. Prompts Are Configuration Items. Treat Them Like Code.

The publication is explicit: prompts, system prompts, retrieval connectors, and transformation rules are governed configurations. They require version history, approval workflows, and rollback plans.

Most organizations treat prompts as informal text. COSO is saying they should be subject to the same change management rigor as any other controlled system setting.

That means access control, segregation of duties between configurators and reviewers, and documented approvals for changes.

If your organization has no prompt governance today, this is your first remediation item.

2. Shadow AI Is the GenAI Equivalent of Shadow IT. But Faster.

The publication repeatedly flags the low barrier to entry as a control environment risk. GenAI tools are accessible enough that unauthorized implementations can start outside formal channels with zero IT involvement.

The recommended response: periodic scans or surveys to detect shadow AI use, combined with an Acceptable Use Policy that sets clear boundaries.

This isn't optional. Under Principle 9 (Identifies and analyzes significant change), the publication requires organizations to track changes that materially alter risk profiles. Shadow AI deployments qualify.
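A periodic scan can be as simple as comparing egress logs against a list of known GenAI endpoints. The domains and function name below are illustrative assumptions, not a vetted detection list; a real program would maintain these lists with IT security.

```python
# Illustrative lists; a real program would curate and update both.
APPROVED_AI_DOMAINS = {"copilot.internal.example.com"}
KNOWN_GENAI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.internal.example.com",
}

def shadow_ai_hits(proxy_log):
    """Return (user, domain) pairs hitting GenAI endpoints outside the approved list."""
    unapproved = KNOWN_GENAI_DOMAINS - APPROVED_AI_DOMAINS
    return [(user, domain) for user, domain in proxy_log if domain in unapproved]
```

Each hit is a candidate entry for the use-case inventory and, under Principle 9, a potential change to the organization's risk profile.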

3. "Human in the Loop" Is Not Binary. It's a Spectrum.

The publication identifies five leading approaches for operationalizing COSO principles in AI environments:

  • Human-in-the-loop review (ranging from full re-performance to risk-based sampling)
  • Performance testing (test populations and edge case stress tests)
  • Multi-model validation (cross-checking outputs across independent models)
  • Data analytics monitoring (continuous anomaly detection with calibrated thresholds)
  • Third-party validation (independent review or certification)

The key takeaway: human review is not always full re-performance. The level of human corroboration should be proportionate to the risk.

For low-risk, high-volume processes, sampling and automated monitoring may be sufficient. For high-judgment outputs that inform material decisions, more intensive review is warranted.
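The proportionality idea can be made concrete with a small policy table. The tiers, rates, and method labels below are illustrative assumptions, not figures from the publication; the point is the shape: review intensity is set by risk, not by volume.

```python
def review_plan(risk_tier: str, monthly_volume: int) -> dict:
    """Map a risk tier to a proportionate human-corroboration plan (illustrative)."""
    policies = {
        "low":    {"method": "automated monitoring + sampling", "sample_rate": 0.02},
        "medium": {"method": "risk-based sampling + edge-case tests", "sample_rate": 0.15},
        "high":   {"method": "full human re-performance", "sample_rate": 1.00},
    }
    plan = dict(policies[risk_tier])
    # Always review at least one item, even at tiny volumes.
    plan["items_reviewed"] = max(1, round(plan["sample_rate"] * monthly_volume))
    return plan
```

A high-judgment output feeding a material decision lands in the 100% tier regardless of volume; a low-risk, high-volume process gets a thin sample backed by automated monitoring.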

4. The "Vanishing Accrual" Case Study Is a Warning for Every Finance Team

One of the integrated case examples describes a corporate accounting team that used GenAI to automatically accrue expenses based on historical invoice patterns.

When a supplier shifted from monthly to quarterly billing, the model didn't detect the change and missed the accrual. The error was caught during variance analysis, but it extended the close and required a subsequent adjustment.

The fix: pattern-change alerts in the data transformation stage, controller review of auto-accrual logic each close cycle, and variance thresholds that trigger immediate human investigation.

This scenario will happen at your organization if it hasn't already.

The lesson isn't "don't automate accruals." It's "automate accruals with controls that detect when underlying patterns shift."
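A pattern-change alert for exactly this failure mode fits in a few lines. This is a minimal sketch, assuming you have each supplier's invoice dates; the function name, the standard-deviation tolerance, and the seven-day guard band are illustrative choices, not the case study's actual control.

```python
from datetime import date
from statistics import mean, pstdev

def billing_cadence_alert(invoice_dates, as_of, tolerance=1.5):
    """Flag when the silence since the last invoice exceeds the supplier's
    historical cadence by more than `tolerance` standard deviations
    (with a 7-day guard band so tight histories don't trigger on noise)."""
    dates = sorted(invoice_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    expected, spread = mean(gaps), pstdev(gaps)
    silence = (as_of - dates[-1]).days
    return silence > expected + max(tolerance * spread, 7)
```

A supplier that billed monthly and goes quiet for three months trips the alert, which is the signal the accounting team in the case study never got before the accrual vanished.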

5. GenAI Monitoring Systems Need Their Own Monitoring

The publication contains a subtle but important observation under capability type 6 (AI-powered monitoring): monitoring systems themselves need monitoring to ensure detection logic stays accurate and relevant.

This is the governance equivalent of "who watches the watchers."

If you deploy GenAI to scan for anomalies, you need recalibration schedules, hindsight analysis of detection accuracy, and controls around updates to detection thresholds. Without these, your monitoring system degrades silently while giving you false confidence.
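One form of that hindsight analysis is a periodic precision/recall check: compare a period's alerts against the incidents humans later confirmed, and flag recalibration when either metric drops below a floor. The function name and floor values here are illustrative assumptions.

```python
def monitor_health(alerts: set, confirmed_incidents: set,
                   precision_floor=0.5, recall_floor=0.8):
    """Hindsight check on an AI monitoring system for one review period."""
    true_pos = len(alerts & confirmed_incidents)
    precision = true_pos / len(alerts) if alerts else 1.0
    recall = true_pos / len(confirmed_incidents) if confirmed_incidents else 1.0
    return {
        "precision": precision,   # share of alerts that were real incidents
        "recall": recall,         # share of real incidents that were alerted
        "recalibrate": precision < precision_floor or recall < recall_floor,
    }
```

Falling precision means the system is drowning reviewers in noise; falling recall means it is silently missing real incidents, which is the false-confidence failure mode the publication warns about.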

6. Vendor-Driven Model Changes Are a Significant Change Under COSO Principle 9

Many organizations consume GenAI through third-party SaaS platforms. When those vendors update models, the risk profile of every process that depends on those models can shift without internal awareness.

The publication is clear: vendor-driven changes require specific notification obligations or independent verification. Risk assessment must track these changes and trigger re-evaluation before they affect operations.

For third-party risk management programs, this means updating vendor contracts to include model change notification requirements, and building internal processes to validate performance after vendor updates.

7. The Implementation Roadmap Is Cyclical, Not Linear

The six-step roadmap (establish governance, inventory use cases, assess risks, design controls, implement and communicate, monitor and adapt) is designed to repeat. Once step 6 is complete, you return to step 1 to re-evaluate.

This matters because many organizations treat AI governance as a project with an end date.

The publication is saying it's an ongoing operational cycle. The pace of GenAI change demands it.

Gartner predicts that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs, or unclear business value.

What to Do Next

If you're a Chief Audit Executive: Start with the inventory. Identify every active and planned GenAI use case across the organization, classify each by capability type, and assign an owner. You can't assess risks you haven't cataloged.

If you're an internal audit manager or senior auditor: Read Appendix B, the detailed COSO mapping by capability type. It contains specific risks, control considerations, illustrative metrics, and common artifacts for each of the eight types. That appendix is your audit program starter kit for GenAI engagements.

If you serve on a board or audit committee: Ask management two questions:

  1. Do we have a complete inventory of GenAI use cases across the organization?
  2. Are our existing COSO-aligned controls designed to address GenAI-specific risks like hallucinations, model drift, and shadow AI?

If the answer to either is no, there's work to do.

How Cherry Hill Advisory Can Help

We help internal audit teams, boards, and management navigate GenAI governance with confidence.

Our services include:

  • AI Governance and Emerging Risk Advisory for designing and testing GenAI controls aligned to COSO, IIA, and NIST frameworks
  • External Quality Assessments (EQAs) that evaluate whether your audit function is keeping pace with AI-driven risk
  • Thought Leadership, Training, and Speaking including our NASBA-approved CPE programs on AI ethics, AI auditing, and emerging risk
  • Free tools including our AI Auditing Guide built on the IIA AI Auditing Framework and NIST AI RMF

The COSO GenAI publication gives the profession a common language. We help you turn that language into results.

Schedule a call to discuss how your organization can operationalize COSO's GenAI guidance.

Stay connected: follow us on LinkedIn and explore more at www.CherryHillAdvisory.com.

Subscribe to The Risk Register for weekly insights on internal audit, risk, and compliance.
