The EU Artificial Intelligence Act, formally adopted in 2024, represents a global milestone. It is the world’s first comprehensive legislation focused entirely on the regulation of Artificial Intelligence. The law sets mandatory requirements for developers, users, importers, and distributors of AI systems that affect the EU market or EU citizens, regardless of whether the company is based in the EU.
For internal auditors, this is not just another compliance topic. The law imposes fines of up to €35 million or 7 percent of global turnover, placing AI governance firmly on the audit agenda.
Internal auditors are uniquely positioned to assess organizational readiness, promote accountability, and support long-term compliance. The risks are not hypothetical. Non-compliance carries legal, financial, and reputational consequences that demand immediate attention.
Understanding the enforcement timeline is essential. Here are the critical dates:
Date: Obligation or Milestone
August 1, 2024: Regulation entered into force
February 2, 2025: Ban on prohibited AI practices takes effect; literacy obligations begin
August 2, 2025: Compliance requirements begin for general-purpose AI systems
August 2, 2026: Full enforcement for high-risk AI systems
What this means: The real compliance deadline for high-risk AI systems is August 2, 2026. There are no grace periods or extensions expected.
The EU AI Act categorizes AI systems into four levels of risk. Internal auditors must understand how systems across the enterprise are classified.
Unacceptable risk. These systems are banned entirely beginning February 2, 2025. Examples include government social scoring, AI that manipulates behavior through subliminal techniques, and untargeted scraping of facial images to build recognition databases.
High risk. These include AI used in critical sectors such as employment and worker management, education, credit scoring and access to essential services, critical infrastructure, law enforcement, and migration and border control.
Obligations for these systems include risk management, data governance, technical documentation and logging, human oversight, accuracy and cybersecurity measures, and conformity assessment before deployment.
Limited risk. These systems, such as chatbots and emotion detection tools, must provide transparency to users. For example, users should be notified when they are interacting with AI.
Minimal risk. Examples include spam filters and AI used in video games. These are subject only to voluntary codes of conduct.
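The four tiers above can be thought of as a decision rule applied to each system in the enterprise inventory. The sketch below is purely illustrative: the tier names follow the Act, but the example use cases and keyword matching are our own simplification and are no substitute for a legal classification exercise.

```python
# Illustrative mapping of AI systems to the Act's four risk tiers.
# The sets below are invented examples, not the Act's exhaustive lists.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_DOMAINS = {"employment", "credit scoring", "education", "law enforcement"}
TRANSPARENCY_USES = {"chatbot", "emotion detection"}

def risk_tier(use_case: str, domain: str) -> str:
    """Return the Act's risk tier for a (use case, business domain) pair."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"   # banned outright from February 2, 2025
    if domain in HIGH_RISK_DOMAINS:
        return "high"           # full compliance required by August 2, 2026
    if use_case in TRANSPARENCY_USES:
        return "limited"        # transparency / user-notification duties
    return "minimal"            # voluntary codes of conduct only

print(risk_tier("chatbot", "customer service"))      # limited
print(risk_tier("resume screening", "employment"))   # high
```

Note the ordering: prohibition is checked first, because a banned practice stays banned regardless of the sector it is deployed in.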
The EU AI Act outlines tiered penalties. Violating the ban on prohibited practices carries fines of up to €35 million or 7 percent of global annual turnover, whichever is higher; breaches of other obligations can reach €15 million or 3 percent; and supplying incorrect or misleading information to authorities can reach €7.5 million or 1 percent.
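Because the Act's fines are "whichever is higher" between a fixed cap and a share of worldwide turnover, exposure grows with company size. The tier amounts below come from the Act; the function itself is just our arithmetic illustration, not an official formula.

```python
# Sketch of the Act's "whichever is higher" fine structure.
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Upper bound of a fine: the fixed cap or the percentage of
    worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Top tier (prohibited practices): up to EUR 35M or 7% of turnover.
# For a company with EUR 1B turnover, the 7% figure dominates.
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0
```

For smaller organizations the fixed cap dominates instead, so even modest firms face material exposure.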
Internal auditors can play a decisive role in AI governance and regulatory readiness. Practical steps include building an enterprise-wide inventory of AI systems, mapping each system to its risk tier, assessing the design of governance and oversight controls, and testing documentation against the Act's requirements.
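A foundational audit step is maintaining a structured AI system inventory. The sketch below shows one possible shape for such a record; the field names and the findings rule are our own suggestion, not a structure mandated by the Act.

```python
# Hypothetical AI system inventory record for audit tracking.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str                          # accountable business owner
    risk_tier: str                      # unacceptable / high / limited / minimal
    in_scope_eu: bool                   # affects the EU market or EU citizens?
    documentation_complete: bool = False
    human_oversight_defined: bool = False

def open_findings(inventory: list[AISystemRecord]) -> list[str]:
    """Flag in-scope high-risk systems missing key compliance artifacts."""
    return [s.name for s in inventory
            if s.risk_tier == "high" and s.in_scope_eu
            and not (s.documentation_complete and s.human_oversight_defined)]
```

Running `open_findings` over the inventory gives a simple, repeatable way to surface high-risk systems that still lack documentation or defined human oversight ahead of the 2026 deadline.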
To enhance efficiency and coverage, internal audit teams should consider technologies such as governance, risk, and compliance (GRC) platforms, model inventory and monitoring tools, and automated documentation and audit-trail solutions.
These solutions can improve auditability and help meet the transparency and accountability requirements of the law.
To prepare for full enforcement in 2026, internal auditors should act now: inventory the organization's AI systems, classify each by risk tier, review third-party and vendor AI contracts, and build AI governance into the annual audit plan.
August 2, 2026 is not a suggested target. It is a binding compliance deadline for high-risk AI systems. Internal auditors bring the necessary skills, independence, and enterprise-wide visibility to help their organizations meet this challenge.
Early action will not only reduce regulatory and operational risk. It will also strengthen stakeholder trust in how the organization governs its use of artificial intelligence.
The time to act is now.