
Understanding the EU AI Act’s risk levels

The EU AI Act has created four different risk levels to characterize the use of AI systems. Learn more about each of them, and how they can impact the use of AI in your organization.

Laurence McNally
Product Manager, OneTrust AI Governance
November 30, 2023


The proposed EU AI Act takes a comprehensive approach to regulating artificial intelligence, laying down obligations for providers and deployers in an effort to ensure the safe and ethical use of AI technology. It is the EU's first regulatory framework for AI, and at its core is a set of risk categories for different systems.


Breaking down risk levels

The draft EU AI Act breaks down risk for AI systems into four categories (sketched as code just after this list): 

  1. Unacceptable risk

  2. High risk

  3. Limited risk

  4. Minimal risk
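
For readers who think in code, here's a minimal sketch of the four tiers as a simple Python enum. The names and comments are illustrative assumptions, not text from the Act or any real library:

```python
# Hypothetical model of the AI Act's four risk tiers (illustrative only).
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed only once strict requirements are met
    LIMITED = "limited"            # allowed, with transparency obligations
    MINIMAL = "minimal"            # allowed, with no extra obligations
```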


Unacceptable risk  

AI systems in this category pose a clear threat to people's safety or fundamental rights and are banned outright. Examples include toys with voice assistants that encourage dangerous behavior and government-run social scoring that could lead to discrimination.


High risk 

The draft EU AI Act considers AI systems that pose a threat to human safety or fundamental rights to be high risk. This can include systems used in toys, aviation, cars, medical devices, and elevators – all products that fall under the EU’s product safety regulations. 

High-risk systems can also include critical infrastructure (like transport networks) where a malfunction could endanger lives. Beyond physical safety, this tier is also designed to protect fundamental rights and quality of life; for instance, using AI to score exams or sort resumes is considered high risk, as it could shape someone's career path and future.

Various law enforcement activities are also considered high risk, such as evaluating the reliability of evidence, verifying travel documents at immigration control, and performing remote biometric identification.


Limited and minimal risk 

This level targets AI systems with specific transparency obligations, such as a customer service chatbot. Users must be made aware that they're interacting with a machine and given the opportunity to opt out and speak to a human instead. Limited-risk systems rely on transparency and the informed consent of the user, along with an easy option to withdraw; the sketch below shows what that pattern might look like.
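
As a rough illustration (and not OneTrust's or any vendor's actual API), a chatbot honoring that obligation might disclose itself up front and offer a handoff on request. Every function here is an invented placeholder:

```python
# Minimal sketch of the limited-risk transparency pattern for a chatbot:
# disclose the AI up front and offer an easy human handoff.

AI_DISCLOSURE = (
    "You're chatting with an AI assistant. "
    "Type 'human' at any time to speak with a person instead."
)

def escalate_to_human() -> str:
    # Placeholder: a real system would route the session to a live agent.
    return "Connecting you with a member of our support team..."

def generate_reply(user_message: str) -> str:
    # Placeholder for the chatbot's actual response logic.
    return f"(AI) You asked about: {user_message}"

def handle_chat_turn(user_message: str, first_turn: bool = False) -> str:
    if user_message.strip().lower() == "human":
        reply = escalate_to_human()
    else:
        reply = generate_reply(user_message)
    # Lead with the disclosure on the first turn so consent is informed.
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

print(handle_chat_turn("Where is my order?", first_turn=True))
```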

Minimal risk is the lowest level of risk set forth in the AI Act, and covers applications like AI-powered video games or email spam filters. 


Generative AI

Recently, the EU Parliament introduced amendments that address Generative AI and impose additional transparency rules for its use. Negotiations between the European Parliament, Council, and Commission have also focused on the tiering of foundation models, and so we may see additional clarifications around their use. 

AI tools like chatbots backed by large language models (think ChatGPT) may have to follow additional rules, such as revealing that content was produced using AI, ensuring that the model isn't creating illegal content, and publishing summaries of the copyrighted data used for training. 


Putting risk levels into context

It’s one thing to see the EU AI Act’s risk levels laid out, but it’s another to understand how they fit into your daily business operations. 


Using automated workflows

Unacceptable risk systems, as the name suggests, are prohibited by the AI Act and therefore can't be used by anyone in your organization. A tool like OneTrust AI Governance will automatically flag and reject any system that falls into this risk level, protecting your organization and freeing up your team's time for other reviews.

On the other end of the risk spectrum, limited and minimal risk systems can be automatically approved by your AI Governance tool, allowing your broader team to move forward with their project and continue to innovate with AI. 

In both of these cases, the decision to approve or deny use of the system can be made automatically by your tool, as the guidelines are clear either way; a rough sketch of that triage logic follows. Where things get less clear is when it comes to high-risk systems.
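
Here's what that triage might look like in code. This is an illustrative sketch only, not OneTrust's actual rules engine (which isn't public); it repeats the hypothetical RiskLevel enum from the earlier sketch so it runs on its own:

```python
# Illustrative triage: auto-reject banned systems, auto-approve the low-risk
# tiers, and queue high-risk systems for human review.
from enum import Enum

class RiskLevel(Enum):  # repeated from the earlier sketch
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def triage(system_name: str, risk: RiskLevel) -> str:
    if risk is RiskLevel.UNACCEPTABLE:
        return f"{system_name}: rejected (prohibited under the AI Act)"
    if risk in (RiskLevel.LIMITED, RiskLevel.MINIMAL):
        return f"{system_name}: approved (transparency duties may still apply)"
    # High risk is the only tier that calls for human judgment.
    return f"{system_name}: flagged for manual compliance review"

print(triage("Resume-sorting model", RiskLevel.HIGH))
```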


Diving into high-risk systems

Systems that are deemed high risk aren't automatically banned under the draft EU AI Act, but more requirements need to be met before such a system can be deployed. Meeting these requirements demonstrates that the technology and its use don't pose a significant threat to health, safety, or fundamental rights. 

Developers of AI systems determine their system's risk category themselves, using standards set forth by the EU. Once an organization decides to put a high-risk system to use, it takes on ongoing obligations as a deployer: compliance, monitoring, human oversight, and transparency.

This ongoing compliance and monitoring can take a lot of manual labor, so finding ways to automate these reviews will save your team significant time. OneTrust flags high-risk systems for manual review rather than rejecting them automatically; your compliance team then performs due diligence to decide whether using the system, and taking on the additional operational responsibility, is worth it, or whether it would be better to pursue a different system instead.


See it in practice: Building personalized marketing emails using GPT-4

To see how a real project might move forward using an AI system and an AI Governance tool, here’s a practical example. 

Suppose your marketing team wants to use OpenAI’s GPT-4 to create personalized marketing emails. This is the project initialization phase, where a team identifies a use case for an AI system and needs to get it approved. 

Your compliance team would then need to conduct an assessment to determine whether the system makes sense and is safe to use. OneTrust offers accessible, concise assessments in which the project owner can lay out their goals and intentions. 

From there, the project needs to be assigned a risk categorization. OneTrust automates this step, assessing the project and assigning a risk level as explained above. 

Depending on the level of risk assigned, your team can then expedite deployment of the project. In this case, the AI Governance tool has placed the use of GPT-4 in a lower-risk tier, and the marketing team is automatically approved to move forward; the end-to-end sketch below walks through the flow.
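
To make the walkthrough concrete, here's a hypothetical end-to-end sketch. Every name and field below is invented for illustration; none of it is OneTrust's or OpenAI's API, and the assigned tier is an assumption for the example:

```python
# Hypothetical intake-to-approval flow for the GPT-4 marketing email use case.
from dataclasses import dataclass

@dataclass
class AIProject:
    name: str
    use_case: str
    risk: str  # "unacceptable" | "high" | "limited" | "minimal"

def review(project: AIProject) -> str:
    # Mirrors the triage sketched earlier: only high risk needs human review.
    if project.risk == "unacceptable":
        return "rejected"
    if project.risk == "high":
        return "manual review"
    return "approved"

project = AIProject(
    name="GPT-4 personalized marketing emails",
    use_case="Draft personalized marketing copy for opted-in contacts",
    risk="limited",  # assumption: generative content carries transparency duties
)

if review(project) == "approved":
    email_body = "Hi Jane, here's an offer we picked out for you..."
    # Generative AI transparency duty: label the AI-produced content.
    email_body += "\n\n(This message was drafted with the help of AI.)"
    print(email_body)
```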


Innovate with speed and safety

The OneTrust AI Governance solution offers more than just compliance with the EU AI Act. It's a complete solution for overseeing AI activities in your organization. 

We help you innovate faster without neglecting safety, whether you're dealing with the EU's regulations or other governance challenges. 

Request a demo today to explore how OneTrust can play an integral role in your AI journey.

