Complimentary Guide
Internal Audit Playbook for Artificial Intelligence
How internal audit can govern AI risks and promote compliance
With the buzz around artificial intelligence (AI) reaching a fever pitch, companies have been grappling with where to place their bets on use cases, how to improve their operations and business models, and how to gain greater returns on their investments. As these capabilities mature, many leaders are also retroactively coming to terms with the need for greater governance — in which chief audit executives (CAEs), drawing on their skills in understanding and mitigating risk, must have a seat at the table.


The dynamism of AI and generative AI (GenAI) has added enormous complexity across most functions in an organization, often at the urgent behest of the C-suite, while regulations around the world are still taking shape. CAEs and internal audit functions face a tall order: guarding against risks from technologies that they may not fully understand and that continue to evolve, without hamstringing functions that see AI and GenAI adoption as a do-or-die imperative. To stay ahead, internal audit must get up to speed on AI risks and controls so it can verify alignment and provide assurance that the organization's AI systems are used responsibly.


Explore our recent POV, where we dive deeper into the following topics and outline actionable takeaways. In brief:

  • Internal audit faces challenges in managing AI risks, requiring a proactive approach to governance.
  • Chief audit executives should develop annual AI audit plans, educate teams on risks, and integrate AI governance into existing frameworks to promote responsible use.
  • Collaboration with executive leaders and risk committees is essential for effective AI oversight.

Discover how internal audit functions can adapt to AI complexities while also fostering innovation and learning.

Gain Access