ISO 42001: Helping To Build Trust in AI

Artificial intelligence has become prevalent in organizations across virtually every industry, and its adoption isn’t slowing down. As it becomes ingrained in everyday processes, AI presents new and exciting use cases to help organizations become more efficient and productive. Yet with its rapid expansion and need for large quantities of high-quality data, AI raises serious questions about responsible use, the source and quality of training data, privacy protections, the extent of testing and monitoring, and other risk guardrails. This calls for a strong governance framework. While organizations may assert they are taking measures to use the new technology responsibly, the reality is that there is currently no standardized, measurable way to verify those claims.

But that’s all changing — and soon.


What Organizations Need To Know

In December 2023, the International Organization for Standardization (ISO) released ISO 42001, a new standard that outlines requirements for the responsible use of AI. It will also be the first standard to offer a certification through which companies can demonstrate they are employing best practices when developing or implementing AI systems. The standard specifies requirements and provides guidance for establishing, implementing, maintaining, and continually improving an artificial intelligence management system (AIMS), which in turn helps organizations foster the responsible development and use of AI systems.

ISO 42001 contains several elements regarding AI that apply to organizations in every industry, including controls addressing risks related to:

  • Ethics, bias, and inclusion
  • Data protection and data quality, including representation of diverse populations
  • Cybersecurity of AI systems

ISO 42001 is designed and structured like other ISO management system standards: It includes mandatory management clauses with a set of technical controls, but it offers organizations flexibility in which controls to implement based on their specific use cases and the risks related to developing or using AI systems.


Understanding ISO 42001

Like other ISO standards, ISO 42001 will require organizations to meet certain expectations to achieve certification, but there is good news for companies already compliant with other ISO frameworks. By adapting and enhancing the compliance programs already in place, organizations may need only incremental adjustments to meet the criteria for ISO 42001.

At the outset, the standard requires organizations to define and document an AI policy that outlines the principles guiding all AI-related activities. It also mandates that the policy contain general requirements for AI system impact assessments and system development. Additionally, the policy must highlight alignment with other organizational policies and provide a process for internal and external users to report concerns related to AI.

None of this should alarm organizations still in the early phases of their AI journey; even companies with mature AI processes have work to do to become compliant. Though not yet a “must-have” certification, ISO 42001 may reach that status in the near future. In the interim, organizations of all sizes can set themselves up for success by beginning to prepare now.

Laying the groundwork for conscientious AI implementation may provide additional benefits. Responsible AI use is something regulatory bodies around the world are monitoring closely, and the concepts of ISO 42001 may help organizations meet current and future regulations, including the European Union AI Act, which has already taken effect and shares many of the same principles found in ISO 42001.


Key Considerations To Meet Requirements

ISO 42001 applies to organizations across a wide range of industries; however, from a practical standpoint, a one-size-fits-all approach is not a viable path to certification. That is why the standard was designed to let companies work within the context of their unique situations. The standard requires an organization to document its intended use of an AI system and the role the organization plays in managing the technology. Further, a risk assessment is part of aligning with the standard, but that evaluation will weigh different factors depending on an organization’s risk exposure and structure. For example, the risk assessment for a healthcare provider will consider factors that may not apply to a company in the manufacturing space. The results of the risk assessment help the organization determine its AI footprint within its operations so it can identify the controls that are most relevant.
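
To make the idea concrete, below is a minimal sketch of how a risk assessment might be captured in structured form. ISO 42001 does not prescribe a format, scoring scheme, or tooling; the fields, 1-5 scales, threshold, and healthcare example here are illustrative assumptions only.

    from dataclasses import dataclass

    # Illustrative risk register entry; ISO 42001 does not mandate this schema.
    @dataclass
    class AIRiskEntry:
        ai_system: str      # AI system being assessed
        org_role: str       # e.g., "developer", "deployer", "user"
        intended_use: str   # documented purpose of the system
        risk: str           # description of the identified risk
        likelihood: int     # assumed scale: 1 (rare) to 5 (almost certain)
        impact: int         # assumed scale: 1 (negligible) to 5 (severe)

        @property
        def score(self) -> int:
            """Simple likelihood x impact score used to rank risks."""
            return self.likelihood * self.impact

    # Hypothetical entry for a healthcare organization deploying a triage model.
    entry = AIRiskEntry(
        ai_system="triage-assistant",
        org_role="deployer",
        intended_use="Prioritize incoming patient messages",
        risk="Model under-triages symptoms for under-represented groups",
        likelihood=3,
        impact=5,
    )

    # Risks above a documented threshold drive control selection.
    RISK_THRESHOLD = 10
    if entry.score >= RISK_THRESHOLD:
        print(f"Score {entry.score}: select mitigating controls for this risk")

A ranked register along these lines makes it easier to map each high-scoring risk to the specific controls the organization chooses to implement.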

In addition to the risk assessment, this standard requires organizations to perform a system impact assessment at a more granular and technical level. Doing so allows the organization to assess the AI system’s impact on both groups and individuals while also addressing users’ expectations and feedback.

Some of the areas of impact to be considered are:

  • Fairness
  • Accountability
  • Trustworthiness
  • Transparency and explainability 
  • Security and privacy
  • Safety and health
  • Financial consequences
  • Accessibility and human rights

Similarly, the societal impact of AI systems must be assessed, with consideration for key areas including environmental sustainability, economic factors, government regulation, and cultural values.
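
As one hedged illustration (the standard does not define a record format), an impact assessment could be tracked as a simple checklist that flags any of the areas above left without a documented finding. The area names mirror the list in this section; everything else below is an assumption.

    # Areas of impact drawn from the list above; the record format is assumed.
    IMPACT_AREAS = [
        "fairness",
        "accountability",
        "trustworthiness",
        "transparency_and_explainability",
        "security_and_privacy",
        "safety_and_health",
        "financial_consequences",
        "accessibility_and_human_rights",
    ]

    def unaddressed_areas(assessment: dict[str, str]) -> list[str]:
        """Return impact areas that have no documented finding yet."""
        return [a for a in IMPACT_AREAS if not assessment.get(a, "").strip()]

    # Example: a partially completed assessment for a hypothetical system.
    assessment = {
        "fairness": "Evaluated outcome parity across demographic groups.",
        "security_and_privacy": "Threat model and privacy review completed.",
    }

    print("Areas still requiring findings:", unaddressed_areas(assessment))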


Creating Structure

Organizations should develop comprehensive, detailed processes and controls around the AI system life cycle, similar to the software development life cycle (SDLC). They should also document an AI system requirements and specifications plan, with notes addressing the concerns identified in the impact assessment. Further, the plan should contain details on how the model will be trained and how data requirements will be met.
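
A requirements and specifications plan is ultimately a document, but sketched as structured data it might resemble the following. Every field name and value is a hypothetical illustration, not a format defined by ISO 42001.

    import json

    # Hypothetical requirements and specifications plan; all fields assumed.
    system_spec = {
        "system": "triage-assistant",
        "requirements": [
            "Flag high-urgency messages for clinician review",
            "Support human review and override of every AI decision",
        ],
        "impact_assessment_concerns": [
            "Mitigate under-triage risk for under-represented groups",
        ],
        "training_plan": {
            "data_sources": ["de-identified historical triage records"],
            "data_requirements": "Labels reviewed by licensed clinicians",
        },
    }

    print(json.dumps(system_spec, indent=2))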

The company will need to document the system’s architectural design, with information about the required hardware and software components, the machine learning approach, the learning algorithms, and the types of machine learning models to be used. The organization also needs to establish robust validation and verification methods, including steps for selecting test data and requirements to ensure the AI system is representative of its user base. Finally, it is imperative that the system include an evaluation plan confirming that reliability, safety, acceptable error rates, and other key metrics are met. The organization should continuously monitor performance for errors or failures and for the accuracy and precision of outputs.
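
As a final sketch, an evaluation plan’s acceptance criteria can be expressed as explicit thresholds and checked both before release and during continuous monitoring. The metric names, threshold values, and gate function below are assumptions for illustration, not requirements of the standard.

    # Documented acceptance thresholds; the values here are purely illustrative.
    THRESHOLDS = {
        "min_accuracy": 0.95,    # minimum acceptable accuracy
        "max_error_rate": 0.02,  # maximum acceptable error rate
    }

    def evaluation_gate(metrics: dict[str, float]) -> list[str]:
        """Return threshold violations; an empty list means the gate passes."""
        failures = []
        if metrics["accuracy"] < THRESHOLDS["min_accuracy"]:
            failures.append(f"accuracy {metrics['accuracy']:.3f} is below minimum")
        if metrics["error_rate"] > THRESHOLDS["max_error_rate"]:
            failures.append(f"error rate {metrics['error_rate']:.3f} exceeds maximum")
        return failures

    # During continuous monitoring, run the same gate on each evaluation window.
    window_metrics = {"accuracy": 0.93, "error_rate": 0.03}
    for problem in evaluation_gate(window_metrics):
        print("ALERT:", problem)  # route to the organization's alerting process

Running one documented gate at release time and again on live traffic keeps pre-deployment validation and continuous monitoring aligned with a single set of criteria.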


The Next Steps

While the ISO 42001 certification process has not yet begun, that phase is likely to come in the latter half of 2024. Organizations considering certification when it becomes available should begin evaluating the requirements now. Even those not currently planning to pursue ISO 42001 can benefit from reviewing the standard’s guidance on best practices and general AI implementation.

No matter where you are in your AI journey, BDO can guide your organization through the adoption and implementation of AI and new technology. Our knowledgeable, experienced professionals can help you avoid potential pitfalls. Engage BDO’s Cybersecurity Advisory Services or learn more about our artificial intelligence solutions. BDO’s Third Party Attestation team is also ready to help organizations with audit readiness assessments and future ISO certifications.