
Approaching the OECD Framework for the Classification of AI Systems

Managing the risks associated with AI development can be challenging, but the OECD’s set of standards can give you a framework to help guide your efforts

Bex Evans
Senior Product Marketing Manager
July 13, 2023


Artificial Intelligence (AI). It’s everywhere. And not just recently - although you can thank a certain chatbot for making it a hot topic in the cultural zeitgeist. AI has been powering the technology all around us for quite some time. It provides real-time route suggestions in navigation apps, informs ETAs for rideshare services, powers transcription technology, and filters your email to spare you from thousands of spam messages. It also powers higher-risk technology, including autonomous vehicles and fraud detection.

When any technology is thrust into the mainstream, there are risks. The advent of the automobile was followed by the formation of the National Highway Traffic Safety Administration (NHTSA). New drugs go through extensive studies before the Food and Drug Administration (FDA) deems them viable for the masses. And we’re seeing it now with AI, as calls from both tech and policy leaders to regulate AI grow louder. The UK Information Commissioner’s Office recently commented on the widespread adoption of AI, urging businesses not to rush products to market without first properly assessing the attached privacy risks. Industry leaders have also acknowledged the rapid pace at which this technology is evolving, as seen in the voluntary pact being developed by Alphabet and the European Union to help govern the use of AI in lieu of formal legislation.

For most businesses, entering into a voluntary pact with the European Union is unlikely to be an option, meaning you’ll have to turn to alternative measures. So, how can you approach the development of novel technologies with a defined, risk-based approach? There are a number of frameworks currently available for you to leverage when building AI products and solutions. These frameworks offer guidance and practical advice to help you approach the development of these new technologies in an ethical and risk-aware manner. The National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) are just some of the bodies that have published frameworks for AI and similar technologies. In this article, we take a closer look at the Organisation for Economic Co-operation and Development (OECD) Framework for the Classification of AI Systems and its companion checklist, how it compares to its NIST and ISO counterparts, and how you can approach adopting it.

What is the OECD Framework for the Classification of AI Systems? 

The OECD Framework for the Classification of AI Systems is a guide, aimed at policy makers, that sets out several dimensions against which to characterize AI systems while linking those characteristics to the OECD AI Principles. The framework defines a clear set of goals: to establish a common understanding of AI systems, inform data inventories, support sector-specific frameworks, and assist with risk management and risk assessments, as well as to further understanding of typical AI risks such as bias, explainability, and robustness.

The OECD AI Principles establish a set of standards for AI that are intended to be practical and flexible. These include:

Inclusive growth, sustainable development, and well-being - This principle recognizes AI as a technology with a potentially powerful impact. It can, and should, be used to help advance some of the most critical goals for humanity, society, and sustainability. However, it is critical to mitigate AI’s potential to perpetuate or even aggravate existing social biases, or to skew risks and negative impacts based on wealth or geography. AI should be built to empower everyone.

Human-centered values and fairness - This principle places human rights, equality, fairness, the rule of law, social justice, and privacy at the center of the development and functioning of AI systems. Like the concept of Privacy by Design, it ensures that these elements are considered throughout each stage of the AI lifecycle, and in particular during the design stages. Failing to maintain this principle can infringe on basic human rights, lead to discrimination, and undermine the public’s trust in AI more generally.

Transparency and explainability - This principle is based on disclosing when AI is being used and enabling people to understand how the AI system is built, how it operates, and what information it is fed. Transparency aligns with the commonly understood definition, whereby individuals are made aware of the details of the processing activity, allowing them to make informed choices. Explainability focuses on enabling affected individuals to understand how the system reached its outcome. When provided with transparent and accessible information, individuals can challenge an outcome more easily.

Robustness, security, and safety - This principle ensures that AI systems are developed to withstand digital security risks and do not present unreasonable risks to consumer safety when used. Central considerations for maintaining this principle include traceability (maintaining records of data characteristics, data sources, and data cleaning) as well as applying a risk management approach, along with appropriate documentation of risk-based decisions.

Accountability - This principle will be familiar to most privacy professionals. Accountability underpins the AI system’s life cycle with the responsibility to ensure that the AI system functions properly and is demonstrably aligned with the other principles.  

How does the OECD framework compare to other frameworks?

The OECD Framework for the Classification of AI Systems is not the only framework that focuses on establishing governance processes for the trustworthy and responsible use of AI and similar technologies. In recent years, several industry bodies have developed and released frameworks aimed at helping businesses build a program for governing AI systems. When assessing which approach is right for you, there are a few frameworks to consider, and while the OECD framework can be used to support your approach, its breadth means it can easily supplement other available frameworks. In other words, you won’t need to choose between OECD, NIST, or ISO; instead, you can fit these frameworks together so they work harmoniously. Here we compare the NIST and ISO frameworks.

NIST AI Risk Management Framework 

On January 26, 2023, NIST released the Artificial Intelligence Risk Management Framework (AI RMF), a guidance document for voluntary use by organizations that design, develop, or use AI systems. The AI RMF aims to provide a practical framework for measuring and mitigating the potential harm posed by AI systems, unlocking opportunity, and raising the trustworthiness of AI systems. NIST outlines the following characteristics against which organizations can measure the trustworthiness of their systems:

  • Valid and reliable 

  • Safe, secure, and resilient 

  • Accountable and transparent 

  • Explainable and interpretable 

  • Privacy enhanced

  • Fair with harmful biases managed

In addition, the NIST AI RMF provides actionable guidance across four core functions: govern, map, measure, and manage. These functions give organizations a framework for understanding and assessing risk, as well as for staying on top of those risks with defined processes.
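
To make those four functions a little more concrete, here is a minimal, hypothetical Python sketch of a risk register that loosely follows a govern/map/measure/manage loop. The class names, fields, and risk-tolerance threshold are illustrative assumptions for this article, not definitions from the NIST AI RMF itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a tiny risk register loosely organized around the
# NIST AI RMF functions (govern, map, measure, manage). Names, fields, and
# thresholds are illustrative assumptions, not NIST definitions.

@dataclass
class AIRisk:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigation: str = "none planned"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class AISystemRiskRegister:
    system_name: str
    risk_tolerance: int = 9          # "govern": threshold set by organizational policy
    risks: list[AIRisk] = field(default_factory=list)

    def map_risk(self, risk: AIRisk) -> None:
        """'Map': identify and record a risk in its context."""
        self.risks.append(risk)

    def measure(self) -> list[AIRisk]:
        """'Measure': surface risks that exceed the governed tolerance."""
        return [r for r in self.risks if r.score > self.risk_tolerance]

    def manage(self) -> None:
        """'Manage': flag risks above tolerance that still lack a mitigation."""
        for risk in self.measure():
            if risk.mitigation == "none planned":
                print(f"[{self.system_name}] needs treatment: {risk.description} (score {risk.score})")

register = AISystemRiskRegister("resume-screening-model")
register.map_risk(AIRisk("Historical hiring data encodes demographic bias", likelihood=4, impact=5))
register.map_risk(AIRisk("Model drift degrades accuracy over time", likelihood=3, impact=2,
                         mitigation="quarterly re-evaluation"))
register.manage()
```

The point of the sketch is simply that governance sets the tolerance once, while mapping, measuring, and managing repeat as the system and its risks evolve.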


ISO guidance on artificial intelligence risk management 

On February 6, 2023, ISO published ISO/IEC 23894:2023, a guidance document for artificial intelligence risk management. Like the NIST AI RMF, the ISO guidance is aimed at helping organizations that develop, deploy, or use AI systems to introduce risk management best practices. The guidance outlines a set of guiding principles stating that risk management should be:

  • Integrated

  • Structured and comprehensive

  • Customized

  • Inclusive

  • Dynamic

  • Informed by the best available information

  • Considerate of human and cultural factors

  • Continuously improved

Again, like the NIST AI RMF, the ISO guidance defines processes and policies for applying AI risk management. These include communicating and consulting, establishing the context, and assessing, treating, monitoring, reviewing, recording, and reporting on the risks attached to the development and use of AI systems, taking the AI system life cycle into account.
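
As a rough illustration of how those process stages might be captured in practice, the hypothetical sketch below models a single risk record that carries its context, assessment, treatment, and monitoring notes through the AI system life cycle. The structure, field names, and sample content are assumptions made for this example; they are not defined by ISO/IEC 23894.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: one risk record tracked through the process stages
# described above (establish context, assess, treat, monitor/review,
# record/report). Fields are illustrative assumptions, not ISO/IEC 23894 terms.

@dataclass
class RiskRecord:
    risk_id: str
    context: str                      # establishing the context
    assessment: str                   # risk assessment summary
    treatment: str                    # chosen treatment / controls
    lifecycle_phase: str              # e.g. "design", "deployment", "operation"
    review_log: list[str] = field(default_factory=list)

    def review(self, note: str) -> None:
        """Monitoring and review: append a dated note for later reporting."""
        self.review_log.append(f"{date.today().isoformat()}: {note}")

    def report(self) -> str:
        """Recording and reporting: produce a short summary for stakeholders."""
        history = "; ".join(self.review_log) or "no reviews yet"
        return (f"[{self.risk_id}] phase={self.lifecycle_phase} | context={self.context} | "
                f"assessment={self.assessment} | treatment={self.treatment} | reviews: {history}")

record = RiskRecord(
    risk_id="R-001",
    context="Chatbot answers customer billing questions",
    assessment="Medium: may expose personal data in responses",
    treatment="Output filtering plus human escalation path",
    lifecycle_phase="deployment",
)
record.review("Output filter false-negative rate re-tested")
print(record.report())
```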


How do these frameworks compare to the OECD framework?

The application of the OECD framework is intended to be broader than other AI risk management frameworks and aims to help organizations understand and assess AI systems across multiple contexts, or dimensions as they are referred to in the OECD framework, including:

  • People & Planet 

  • Economic Context 

  • Data & Input 

  • AI Model 

  • Task & Output 

Unlike the NIST and ISO frameworks, the OECD framework promotes building a fundamental understanding of AI and related language to help inform policies within each defined context. The OECD framework also supports sector-specific frameworks and can be used in tandem with financial or healthcare-specific regulation or guidance related to AI usage. It also aims to support the development of risk assessments as well as governance policies for the ongoing management of AI risk.
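
As a purely illustrative example of what classifying a system against these five dimensions might look like, the sketch below records short answers for a hypothetical customer-support chatbot. The field names and sample answers are assumptions made for this example; the OECD framework itself defines the dimensions and their underlying criteria.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical sketch: recording a classification of one AI system against the
# five OECD dimensions. The sample answers are illustrative assumptions only.

@dataclass
class OECDClassification:
    system_name: str
    people_and_planet: str
    economic_context: str
    data_and_input: str
    ai_model: str
    task_and_output: str

chatbot = OECDClassification(
    system_name="customer-support-chatbot",
    people_and_planet="Consumers interacting voluntarily; low impact on well-being or environment",
    economic_context="Deployed in retail customer service; non-critical business function",
    data_and_input="Proprietary support transcripts plus user-submitted queries; includes personal data",
    ai_model="Fine-tuned large language model; limited explainability",
    task_and_output="Generates free-text answers; recommendations only, no automated decisions",
)

# Export as JSON so the classification can feed an inventory or assessment workflow
print(json.dumps(asdict(chatbot), indent=2))
```

Recording answers in a structured way like this is one simple route to making the classification reusable across inventories, assessments, and sector-specific reviews.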

Both the NIST and ISO AI risk management frameworks place a deeper focus on the specific controls and requirements for auditing risk. As such, they can complement the more ‘policy-centric’ parts of the OECD framework and help organizations further mature and operationalize their approach to AI system development and usage.


How can organizations implement the OECD framework for classification of AI systems?

OneTrust has added a checklist template based on the OECD Framework for the Classification of AI Systems to the recently introduced AI Governance solution. The OECD AI Checklist template aims to help you evaluate AI systems from a policy perspective and can be applied to a range of AI systems across the five dimensions outlined in the OECD framework.

The checklist also helps ensure that efficient triaging and policies are in place within your organization to tackle the broad range of domains and potential concerns linked to the use of AI systems. In short, you can use the checklist to validate whether your business has the right set of policies to address gaps in AI systems under each of the principles, whether the right owners are in place to manage the risks identified through the checklist and oversee their mitigation, and whether all of these domains are correctly represented within the life cycle of AI system development and/or usage.
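
For instance, a very simple sketch of that kind of validation could look like the following, where each principle is checked for at least one documented policy and a named owner. The checklist entries shown are hypothetical and are not taken from the OneTrust template.

```python
# Hypothetical sketch: validating that every principle in a checklist has at
# least one policy and a named owner. Entries are illustrative only and are
# not the actual OneTrust OECD AI Checklist content.

checklist = {
    "Transparency and explainability": {"policies": ["Model disclosure standard"], "owner": "AI Governance Lead"},
    "Robustness, security, and safety": {"policies": [], "owner": None},
    "Accountability": {"policies": ["AI incident response policy"], "owner": "CISO"},
}

def find_gaps(entries: dict) -> list[str]:
    """Return human-readable gaps: principles missing policies or an owner."""
    gaps = []
    for principle, entry in entries.items():
        if not entry["policies"]:
            gaps.append(f"No policy documented for: {principle}")
        if not entry["owner"]:
            gaps.append(f"No owner assigned for: {principle}")
    return gaps

for gap in find_gaps(checklist):
    print(gap)
```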

To help organizations develop and deploy AI systems responsibly, OneTrust has created a comprehensive assessment template within the AI Governance tool that includes the OECD Framework for the Classification of AI Systems checklist. The OneTrust AI Governance solution is designed to help organizations inventory, assess, and monitor the wide range of risks associated with AI.

Speak to an expert today to learn more about how OneTrust can help your organization manage AI systems and their associated risks.
