The Internal Auditor’s Artificial Intelligence Strategy Playbook
Exploring AI for Internal Audit
Table of Contents:
- Introduction
- The ABCs of AI – A Glossary
- Getting Started with AI in Your Internal Audit Function
- Exploring the Benefits and Concerns of Leveraging AI in Internal Audit
- AI Use Cases for Internal Audit
- A Strong Data Foundation is Key to Building AI Maturity
- Understanding the Risks of AI
- 9 Questions to Ask Yourself When Assessing the Risks Associated with AI in Internal Audit
- Generative AI in Action
- BDO’s Holistic Approach to AI
Introduction:
The internal audit function has significantly matured. Historically siloed, time-consuming, and highly manual, the internal audit process is evolving into an increasingly data-driven and holistic operation touching virtually every aspect of a business.
Now, artificial intelligence (AI) is poised to further transform the internal audit function. The World Economic Forum (WEF) predicts new technologies like AI will disrupt 85 million jobs globally between 2020 and 2025—and create 97 million new roles. Humans and machines, however, go hand in hand. AI is a tool used to augment human skills and create efficiencies that enable auditors to spend more time on tasks that require their specialized expertise. This shift is ushering in a new era when human-machine partnerships boost productivity and deliver exceptional value.
As AI evolves, its effects will likely amplify and force many businesses to rethink job roles, skill sets, and how work gets done. When implemented strategically and responsibly to aid internal auditors, AI has the potential to modernize, elevate, and improve the internal audit function. The function will benefit from leveraging AI as internal auditors play an increasingly strategic role in driving value throughout their organization.
The ABCs of AI – A Glossary
Data arms auditors with insights and takeaways that inform smarter decisions. Internal audit teams can add AI tools to their processes to empower even greater analysis and gain the ability to not only learn from the past but also to predict the future.
The power of the human brain stems from its proficiency in solving problems, making connections, and recognizing patterns and emotions. AI functions as a human enabler by augmenting these abilities and powering decision-making throughout many aspects of the audit.
As AI and related topics continue to make their way into conversations among internal audit teams, it is important to understand and differentiate the many terms you may hear, including the following:
Big Data | Massive data sets that are statistically analyzed to extract detailed insights. The data might encompass billions of discrete data points that require large-scale computer-processing power and can be structured and read in many ways. Analyzing this data with the help of AI tools is beneficial, as it expands parameters into previously unconsidered fields and brings new trends and patterns to the surface. |
Chatbot | The generic term for a virtual assistant that converses with users through previously scripted dialogue. With chatbots supplying automated responses to frequently asked questions and recurring tasks, employees are free to use their time and skills in other areas. |
ChatGPT | Developed by OpenAI, ChatGPT is an interface built on technology that harnesses large amounts of textual data from the internet to automatically undertake a range of time-consuming, language-based tasks normally performed by humans. These tasks might include summarizing long texts, drafting pro forma documents and emails, and completing translations. As with chatbots, human oversight is essential, as current GPT technology cannot fully capture all the nuances of human language. |
Data Analytics Maturity | The ability of an organization to gather, maintain, and analyze large amounts of data, using both human and AI tools, to arrive at the best decisions for the business. |
Deep Learning | Built on artificial neural networks, this type of AI adopts the human method of “learning by doing.” Currently, deep learning is utilized in industries like manufacturing and aerospace for tasks as varied as detecting when a worker is too close to a machine or classifying images taken from satellites. |
Edge Computing | The practice of bringing computation closer to the sources of data, at the “edge” of the network, to help organizations reduce infrastructure requirements. It also helps address bandwidth constraints, processing time, latency, and energy usage. |
Generative AI | Generative AI, such as ChatGPT, is a type of AI technology that broadly describes machine learning systems capable of generating text, images, code, or other types of content, often in response to a prompt. |
Generative Adversarial Network (GAN) | Generative models that create new data instances that resemble training data. For example, GANs can create images that look like photographs of human faces, even though the faces don’t belong to a real person. |
Generative Pre-Trained Transformer (GPT) | A family of neural network models that uses transformer architecture to power generative AI applications like ChatGPT. GPT models give applications the ability to create human-like text and content and deliver conversational responses to prompts or questions. |
Large Language Models (LLMs) | LLMs harness massive amounts of human-generated writing and apply machine learning algorithms to “predict” or generate new texts. |
Machine Learning | The process by which a chatbot or other algorithm is empowered to “learn” or improve automatically from data and user inputs, rather than from explicit programming. Over time, for example, an AI system becomes better at discerning the needs of its users, resulting in smoother interactions and streamlined workflows. |
Getting Started with AI in Your Internal Audit Function
AI has the potential to transform the way internal audits are conducted, as well as how assurance is approached more broadly. No matter the size of your internal audit team, BDO recommends the following foundational steps to get started with AI efficiently, effectively, and responsibly.
- Educate: Before your internal audit team can implement AI, it first needs to understand it. All team members need to grasp what AI is, what it can do, and its implications for the function. This foundational knowledge can help auditors make informed decisions and set realistic expectations.
- Identify use cases: AI is not a one-size-fits-all solution. The real power of AI emerges when it is tailored to address specific challenges or enhance processes within the internal audit function. It’s important to pinpoint those areas where AI has the potential to deliver the most significant impact, like improving an auditor’s day-to-day processes, enabling ongoing risk monitoring, or reviewing larger data sets during testing.
- Prepare and build: Just like constructing a building, the strength of an internal audit team’s AI initiatives lies in the foundation. Strong data is critical: it feeds AI, and its quality determines the system’s success. Security, governance, privacy, and responsible AI practices are not just checkboxes; they are essential. Confirming these elements are robust and followed closely can help safeguard the organization.
- Enable and adopt: A tool is only as good as its user. Integrating AI into an internal audit workflow requires a cultural shift. It’s about training, providing resources, dispelling myths, and most importantly, helping the team understand the ‘why’ behind AI. If the internal audit team can see the value, they will not only adopt it but become champions for the technology.
- Go and grow: Once the groundwork is laid, it’s time to activate. The internal audit team’s journey to AI does not end at launch; it’s an ongoing, iterative process. As the internal audit function evolves, so too will its AI needs. Regularly revisiting and refining your AI systems helps them remain relevant and continue to deliver value.
Read on to dive deeper into each step of the AI journey.
Exploring the Benefits and Concerns of Leveraging AI in Internal Audit
To date, AI tools have mainly been applied to more transactional and finance-focused areas — for example, revenue testing — but there is a growing appetite and opportunity to integrate AI into other areas of the internal audit function.
Using the power of AI to analyze data quickly and accurately and identify patterns and anomalies can lead to more valuable insights for executive leadership teams and unlock quicker fixes for control issues.
Leveraging AI, internal audit teams can improve efficiencies in areas such as payroll expenses, vendor management, order and sales activity, and error identification in accounts payable processes. By automating routine tasks, auditors can focus on high-risk areas, providing more value to the organization.
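As one hypothetical illustration of the accounts payable example above, the sketch below flags exact and likely duplicate invoices in a small, invented extract. The column names and the 30-day window are assumptions rather than a prescribed method, and pandas simply stands in for whatever analytics tooling a team already uses.

```python
import pandas as pd

# Hypothetical accounts payable extract; column names are assumptions,
# not a specific ERP schema.
invoices = pd.DataFrame({
    "vendor_id": ["V001", "V001", "V002", "V003", "V001"],
    "invoice_no": ["1001", "1001", "2001", "3001", "1002"],
    "amount": [5000.00, 5000.00, 1200.50, 980.00, 5000.00],
    "invoice_date": pd.to_datetime(
        ["2024-01-05", "2024-01-05", "2024-01-07", "2024-01-09", "2024-01-20"]),
})

# Exact duplicates: same vendor, invoice number, and amount.
exact_dupes = invoices[invoices.duplicated(
    subset=["vendor_id", "invoice_no", "amount"], keep=False)]

# Possible duplicates: same vendor and amount within 30 days even when the
# invoice number differs (e.g., a re-keyed or re-submitted invoice).
pairs = invoices.merge(invoices, on=["vendor_id", "amount"], suffixes=("_a", "_b"))
possible_dupes = pairs[
    (pairs["invoice_no_a"] < pairs["invoice_no_b"])
    & ((pairs["invoice_date_b"] - pairs["invoice_date_a"]).dt.days.abs() <= 30)
]

print(exact_dupes)
print(possible_dupes[["vendor_id", "invoice_no_a", "invoice_no_b", "amount"]])
```

In practice, a generative assistant could draft the exception memo from output like this, while the auditor decides which flagged items actually warrant follow-up.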
Advances in AI will allow internal audit functions to collaborate more effectively with others across the assurance landscape, harnessing data and technology to create stronger controls and forecast risks and breakdowns before they happen. AI can also improve data security, operational productivity, and more, including aiding in:
Fraud Detection
Generative AI tools like ChatGPT can analyze large quantities of information within minutes, identifying suspicious patterns that an auditor may miss. For example, an internal audit team could use ChatGPT to automatically review customer interactions for abnormal behaviors. It could also automate analytical tasks during fraud investigations to save valuable time and reduce operational expenditure (OPEX).
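In practice, a generative tool typically sits on top of a screening step that surfaces candidates for review. The sketch below shows only that screening step, using a hypothetical transaction log and two simple rules (a robust outlier test on amounts and an off-hours check); the field names and thresholds are illustrative assumptions, not a recommended rule set.

```python
import pandas as pd

# Hypothetical customer transaction log; fields and thresholds are illustrative.
tx = pd.DataFrame({
    "customer_id": ["C1"] * 6 + ["C2"] * 4,
    "amount": [120, 135, 110, 4900, 125, 130, 60, 55, 58, 62],
    "hour": [10, 11, 14, 3, 9, 16, 10, 11, 13, 15],
})

def robust_z(amounts: pd.Series) -> pd.Series:
    """Outlier score based on median absolute deviation (less sensitive to extremes)."""
    mad = (amounts - amounts.median()).abs().median()
    return (amounts - amounts.median()) / (1.4826 * mad if mad else 1.0)

# Score each transaction against the customer's own history, then flag
# large deviations and activity outside assumed business hours (8am to 6pm).
tx["amount_z"] = tx.groupby("customer_id")["amount"].transform(robust_z)
tx["off_hours"] = ~tx["hour"].between(8, 18)

suspicious = tx[(tx["amount_z"].abs() > 3.5) | tx["off_hours"]]
print(suspicious)
```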
Resource Management
Today's businesses generate and manage large amounts of information as part of their day-to-day operations. Managing this information, however, becomes increasingly difficult over time simply because of the volume of data. Generative AI can support knowledge and resource management by organizing and retrieving the resources auditors need on demand. Further, it can update business documents and generate new ones as needed, which can streamline internal audit processes and boost efficiency.
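One common pattern behind on-demand retrieval is ranking internal documents against a question and handing the best matches to a generative model for drafting. The sketch below covers only the ranking step, using TF-IDF similarity; the documents and query are invented, and a production system would draw on the organization's own document store and, typically, richer embedding models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base; in practice these would be workpapers,
# policies, and prior reports loaded from a document store.
documents = [
    "FY23 accounts payable audit report: duplicate invoice findings and remediation",
    "Vendor management policy: onboarding, due diligence, and periodic review",
    "Payroll controls testing workpaper: segregation of duties exceptions",
    "Order-to-cash process narrative and key control matrix",
]

query = "Which prior work covers duplicate invoices in accounts payable?"

# Represent documents and the question as TF-IDF vectors and rank by similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform([query]), doc_matrix).ravel()

for idx in scores.argsort()[::-1][:2]:
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```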
While the opportunities for using AI are extensive, internal auditors still have some common concerns about using generative AI in their function, including:
Common concern: “AI will take my job.”
BDO’s point of view: Humans and machines go hand in hand. AI is a tool used to augment human skills and create efficiencies that enable auditors to spend more time on tasks that require higher-level thinking and analysis. AI can transform your job in that it will free up significant time, create efficiencies, and increase the breadth of information that you can gain. Importantly, AI models will always require a person to review the model’s suggestions and make final decisions. A human-in-the-loop (HITL) approach to AI integration aims to achieve what neither auditor nor AI can achieve alone, creating a continuous feedback loop and endless potential to drive value. With constant feedback, the algorithm learns and iterates on its results every time.
Common concern: “Large language models (LLMs) are not secure or ethical.”
BDO’s point of view: LLMs, like ChatGPT, require large amounts of data to learn and generate responses. This data may include sensitive customer data, personal information, or confidential business data, which inevitably raises risk concerns. However, by leveraging a secure in-house GPT system, you can train the AI on your internal data within your existing, secure corporate IT environment, helping to safeguard against risk. It’s important to prioritize ethical and responsible AI practices to help safeguard transparency, fairness, and accountability. Organizations should define and implement ethics policies and consider deploying specific tools for Responsible AI. A third-party AI consultant can also help you address ethical considerations, mitigate bias, and comply with relevant regulations and guidelines.
Common concern: “I can’t validate the outputs generated by generative AI.”
BDO’s point of view: By leveraging a secure, in-house GPT system, you connect and synchronize relevant data from your systems, including your Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and other internal databases. By leveraging your internal data, the system can generate more accurate and context-aware responses, driving advanced efficiency gains across your internal audit team. Because the model is trained on your systems and data, you can more confidently validate the outputs. While effective and efficient, AI models will always require auditors to review the model’s suggestions and make final decisions accordingly.
AI Use Cases for Internal Audit
AI developments hold massive potential for transforming the internal audit landscape. AI technologies will enable auditors to analyze vast amounts of data more efficiently and effectively and identify trends, anomalies, and risks that may have gone unnoticed.
Generative AI can be used in several areas of the internal audit, including in the planning, testing, reporting, and monitoring phases. Specifically:
Planning
During the planning phase, the auditor sets the tone for the audit and defines the objectives, scope, and methodology. AI can take something that was once very manual — analyzing large volumes of data — and identify patterns and trends that may not have been immediately obvious. Not only can generative AI help save auditors time during planning, but it can also help identify areas of opportunity to improve the audit process before testing begins.
Testing
When internal auditors gather evidence, they can use generative AI to analyze and interpret the data, including financial data from procurement, order to cash, financial close, and other areas of the organization. AI can help identify trends, patterns, and most importantly, anomalies in the data that may require further investigation, including potential instances of fraud. Generative AI also allows auditors to look at the data in totality during the testing phase to paint a better, fuller picture of the risk landscape, instead of limiting themselves to a smaller sample size.
Reporting
During reporting, auditors communicate their findings with the appropriate stakeholders. Generative AI can lend support by helping to generate comprehensive and timely reports. AI can also generate code to help create audit memos with key findings, supply data-driven insights, and even provide recommendations using predictive analysis that the auditor may not have previously considered.
Monitoring
Internal audit teams are responsible for monitoring to affirm the appropriate actions are taken to address audit findings. During ongoing monitoring, AI can monitor implementation and help identify additional areas for improvement as well as emerging risks. With intelligent automation, internal audit teams can enable continuous or perpetual auditing, helping safeguard organizations from ongoing risk.
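To make the testing and monitoring ideas above concrete, the sketch below applies simple exception tests to every record in a hypothetical journal entry extract rather than a sample; rerunning the same script against new postings on a schedule is one minimal form of continuous monitoring. The field names, authorized-poster list, and approval threshold are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical journal entry extract; in an engagement this would be the full
# population pulled from the ERP rather than a sample.
je = pd.DataFrame({
    "je_id": [1, 2, 3, 4, 5],
    "posted_by": ["akim", "jlee", "svc_batch", "akim", "mroy"],
    "post_date": pd.to_datetime(
        ["2024-03-01", "2024-03-02", "2024-03-04", "2024-03-09", "2024-03-11"]),
    "amount": [2500, 18000, 560, 9999, 75000],
})

authorized_posters = {"akim", "jlee", "mroy"}  # assumed control listing

# Apply each test criterion to every record in the population.
je["weekend_post"] = je["post_date"].dt.dayofweek >= 5          # Saturday/Sunday
je["unauthorized_poster"] = ~je["posted_by"].isin(authorized_posters)
je["near_approval_limit"] = je["amount"].between(9500, 9999)    # just under an assumed $10,000 limit

exceptions = je[je[["weekend_post", "unauthorized_poster", "near_approval_limit"]].any(axis=1)]
print(exceptions)

# Rerunning this script against newly posted entries on a schedule is one
# minimal form of continuous monitoring.
```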
A Strong Data Foundation is Key to Building AI Maturity
The path to AI maturity must start with a sturdy foundation of data. For internal audit teams working toward AI maturity, success depends on the availability and cleanliness of their data. Cracks in the data foundation will compromise everything built upon it — from descriptive intelligence to data-backed decision-making. Insights are only as good as their underlying data.
A significant portion of enterprise data is trivial, irrelevant, or unreadable by the systems in place. The process of extracting insights from data is often constrained by inconsistent naming conventions, duplicate data, and incomplete records. When data initiatives fall flat, the failure can often be attributed to a lack of investment in building a strong foundation.
Yet, internal audit teams cannot allow perfection to become the enemy of good. Every new digital initiative is an opportunity to drive incremental improvement in data management and provisioning, integrating multiple sources of data and edging closer to a single source of truth. It is a process that takes time and patience but will increase the value an internal audit team can extract from its data and prepare the function to build AI maturity.
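As a small, hypothetical example of the incremental clean-up described above, the sketch below standardizes column names, normalizes values, removes duplicates, and flags incomplete records in an invented vendor master. Real data foundations involve far more than this, but the same basic steps apply.

```python
import pandas as pd

# Hypothetical vendor master combined from two source systems; names and
# values are invented to show common data-quality issues.
vendors = pd.DataFrame({
    "Vendor Name ": ["Acme Corp", "ACME CORP.", "Globex", None],
    "vendor_id": ["A-01", "A-01", "G-07", "G-08"],
    "Country": ["US", "us", "United States", "US"],
})

# 1. Standardize inconsistent column naming conventions.
vendors.columns = (vendors.columns.str.strip()
                                  .str.lower()
                                  .str.replace(" ", "_"))

# 2. Normalize values before de-duplicating (case, punctuation, aliases).
vendors["vendor_name"] = (vendors["vendor_name"].str.upper()
                                                .str.replace(".", "", regex=False)
                                                .str.strip())
vendors["country"] = vendors["country"].str.upper().replace({"UNITED STATES": "US"})

# 3. Drop duplicate records and flag incomplete ones for follow-up.
deduped = vendors.drop_duplicates(subset=["vendor_id", "vendor_name"])
incomplete = deduped[deduped.isna().any(axis=1)]

print(deduped)
print(incomplete)
```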
Understanding the Risks of AI
As with any new technology, we must be aware of the risks inherent to adopting innovative processes — making effective governance a vital component of any AI strategy. Planning how you will address AI risks now can help your function prepare for the future.
Here are three ethical concerns related to emerging generative AI that internal audit teams must consider.
Bias and Discrimination
While AI itself has no biases, it can adopt any implicit or explicit biases in its training data, including those related to race, sex, gender, age, religion, geographic location, and more. AI cannot understand other perspectives because it hasn’t been exposed to them. Training an AI using diverse data sets is therefore critical and can help reduce the risk of creating a biased algorithm.
To further manage bias and discrimination risks and maximize the benefits of incorporating AI, there are several approaches and criteria that organizations should keep in mind:
- Because AI bias originates from people, businesses can address it just as they address human bias — through training. Expanding existing employee bias training and awareness initiatives to include the potential impact of AI could go a long way in preventing AI bias.
- Routine testing of data sources and training modules can help identify undetected patterns that could cause AI bias.
- There is no set-and-forget solution. AI bias can be introduced at any point in the creation or integration process, and companies must establish a regular cadence of review as models are updated and inputs change.
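As part of that regular cadence of review, one simple check is comparing how often a model flags items across groups. The sketch below computes group-level selection rates and a disparate impact ratio on invented data; the 0.80 figure is a common rule of thumb rather than a regulatory threshold for this context, and the grouping attribute is purely illustrative.

```python
import pandas as pd

# Hypothetical output of a model that flags expense claims for manual review,
# joined to a grouping attribute being tested for disparate impact.
results = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1, 0, 1, 0, 1, 1, 1, 0],
})

# Selection rate per group, and the ratio of the lowest to the highest rate.
rates = results.groupby("group")["flagged"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")  # rule of thumb: investigate if below 0.80
```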
Responsibility and Accountability
When using AI to help make decisions for your organization, who do you hold accountable if something goes wrong? In other words, is the person who trained the AI responsible for the decisions it makes? The internal audit team of the future will inevitably need to consider these questions.
Accountability considerations also encompass questions about the rationale behind an AI’s decisions. Generative LLMs are known to produce “hallucinations,” meaning false, nonsensical, or fabricated information presented as fact.
Internal audit teams should approach these issues proactively by setting clear rules for who is responsible for what the AI does as a part of the internal audit process. Implementing an AI governance framework that defines each stakeholder's roles and responsibilities helps create a transparent, accountable system. Teams should keep in mind these key areas of Responsible AI when shaping their governance framework:
- Fairness
- Bias
- Effectiveness
- Robustness
Privacy and Data Security
Because integrating AI requires enormous quantities of user data, many are raising concerns about user information security. Internal audit teams must establish robust security measures that prevent sensitive personal data from ending up in the wrong hands.
By incorporating privacy protections into the AI development process from the start, you can better protect your data at every stage. This precaution is especially important for applications like payroll, where user data cannot be anonymized.
By leveraging a secure in-house GPT system, you are operating within your existing, secure corporate IT environment, helping to safeguard against risk. Some organizations may also consider creating separate LLMs by department to keep data private.
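One precaution some teams take before data ever reaches an AI tool is to strip or pseudonymize direct identifiers. The sketch below shows a minimal version on an invented payroll extract, using a salted hash so records stay joinable; the field names are assumptions, and a real implementation would manage the salt in a secrets manager and follow the organization's privacy policies.

```python
import hashlib

import pandas as pd

# Hypothetical payroll extract; fields and values are invented.
payroll = pd.DataFrame({
    "employee_id": ["E100", "E101", "E102"],
    "name": ["Ana Diaz", "Ben Okoro", "Cara Wu"],
    "gross_pay": [5200.00, 4800.00, 6100.00],
})

SALT = "rotate-this-secret"  # in practice, pulled from a secrets manager, never hard-coded

def pseudonymize(value: str) -> str:
    """Deterministic, salted hash so records stay joinable without exposing raw IDs."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

# Replace the identifier with a pseudonym and drop direct identifiers the
# analysis does not need before the data leaves the secure environment.
masked = (payroll.assign(employee_id=payroll["employee_id"].map(pseudonymize))
                 .drop(columns=["name"]))

print(masked)
```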
Generative AI in Action
You are ready to review a Revenue Flux Analysis as part of a Financial Close Audit, and you receive a report from an ERP system with monthly revenue data by product, customer, and region. Everything might look normal at first glance, but your professional skepticism and years of experience prompt you to ask your internal GPT whether any of your Business Units had significant revenue fluctuations above the 10% threshold defined by the organization. The AI responds, “Yes, ‘Business Unit XYZ’ reported a significant reduction in revenue for three consecutive months in 2022.”
Now, you think, “But why?” You return to the AI with a variety of clarifying questions, among which are:
- What drove the reduction in revenue? Was it based on variances in unit price, quantity, or both?
- Did the Control owner explain the revenue variances?
- Is it documented in the lead sheet?
- Is there a seasonality effect that should be taken into consideration based on historical sales data?
- How much did the Business Unit provide in discounts to customers?
Internal auditors, like you, will consider all situations and contexts that can explain an anomaly in the data. They use their vast knowledge of and experience with the company, internal control weaknesses, internal audit standards, and more, to think about why the anomaly may have occurred and then test those hypotheses until proof is found.
While AI can gather or analyze data, it falls short of humans when it comes to understanding nuance, making creative decisions, providing strategic advice, and ensuring regulatory compliance. Consequently, we still need human intelligence to define action plans that align the audit with the business strategy. The internal audit team will still have to provide qualitative insights to make strategic business decisions or interpret and apply complex, evolving regulations and standards.
AI can help answer internal audit questions, but an auditor’s intuition is still driving investigation and validation.
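The deterministic part of this scenario, checking which business units breached the 10% month-over-month threshold for three or more consecutive months, can be expressed directly. The sketch below uses invented figures and hypothetical column names; an internal GPT would layer natural-language questioning and explanation on top of a check like this.

```python
import pandas as pd

# Hypothetical monthly revenue by business unit exported from the ERP;
# figures are invented so that "Business Unit XYZ" breaches the threshold.
rev = pd.DataFrame({
    "business_unit": ["XYZ"] * 6 + ["ABC"] * 6,
    "month": pd.to_datetime(["2022-01-01", "2022-02-01", "2022-03-01",
                             "2022-04-01", "2022-05-01", "2022-06-01"] * 2),
    "revenue": [100_000, 98_000, 85_000, 74_000, 64_000, 65_000,
                200_000, 205_000, 198_000, 210_000, 202_000, 207_000],
})

THRESHOLD = 0.10  # 10% month-over-month change, as defined by the organization

rev = rev.sort_values(["business_unit", "month"])
rev["pct_change"] = rev.groupby("business_unit")["revenue"].pct_change()
rev["breach"] = rev["pct_change"].abs() > THRESHOLD

def longest_breach_streak(breaches: pd.Series) -> int:
    """Length of the longest run of consecutive months above the threshold."""
    runs = (breaches != breaches.shift()).cumsum()
    return int((breaches.groupby(runs).transform("size") * breaches).max())

streaks = rev.groupby("business_unit")["breach"].apply(longest_breach_streak)
print(rev[rev["breach"]])
print(streaks[streaks >= 3])  # business units with three or more consecutive significant swings
```

The follow-up questions above are where the auditor adds value: the code can say that Business Unit XYZ breached the threshold, but not why.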
BDO’s Holistic Approach to AI
BDO's comprehensive suite of AI services is designed to assist organizations at every stage of their AI journey, providing end-to-end support for successful AI implementation and adoption. Whether your internal audit team is just beginning to explore the possibilities of AI, or you are already leveraging it for audit transformation, we provide tailored assistance and help your AI adoption mature over time.
BDO can help with:
- AI Consulting and Strategy
- AI Risks and Control Design
- AI Technical Implementation
- AI Adoption and Change Management
- Responsible AI
- Ongoing Monitoring and Support
At BDO, we strive to future-proof internal audit functions like yours. Interested in exploring AI for internal audit? Request a free consultation today.