
EU AI regulation: Artificial intelligence provides us with many new and promising opportunities. But how do you know if your algorithms are ethical and trustworthy? The European Union is currently in the process of drafting legislation to regulate AI. In this video, we help you understand how EU regulations affect your algorithm.

 

EU principles

The EU has set out 7 key requirements that determine if an AI system is trustworthy. These are:

 

  1. Human agency and oversight
  2. Robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination, and fairness
  6. Societal and environmental well-being
  7. Accountability

Depending on the nature of your algorithm, the EU defines four risk levels that guide your development work. To help understand this, we have visualised it as a volcano, starting with the lowest risk level at the bottom and the highest one at the top. Let’s break this down.
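The four tiers can be thought of as an ordered classification. The sketch below is purely illustrative, assuming a hypothetical keyword-to-tier lookup; in reality the tier follows from the legal text and a case-by-case assessment, not from a table.

```python
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"            # e.g. spam filters
    LIMITED = "limited"            # e.g. chatbots, deep fakes
    HIGH = "high"                  # e.g. credit scoring, law enforcement
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned

# Hypothetical mapping for illustration only.
USE_CASE_RISK = {
    "spam_filter": RiskLevel.MINIMAL,
    "chatbot": RiskLevel.LIMITED,
    "credit_scoring": RiskLevel.HIGH,
    "social_scoring": RiskLevel.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskLevel:
    """Look up the risk tier for a known use case; fail loudly otherwise."""
    if use_case not in USE_CASE_RISK:
        raise ValueError(f"Unknown use case {use_case!r}: assess it against the regulation")
    return USE_CASE_RISK[use_case]
```

The point of the enum is that each tier carries different obligations, which the sections below walk through from bottom to top.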

 

Minimal Risk for EU AI Regulation

Firstly, at the bottom are algorithms that pose minimal risk in relation to the 7 requirements, which means they are unlikely to negatively impact people’s well-being. Think of spam filters – we all know them from our emails. This means that your application is trustworthy according to EU guidelines. So, do not worry, you can proceed with production and implementation.

 

Limited Risk for EU AI Regulation

A limited risk occurs when your application interacts with or imitates people, for example chatbots or deep fakes. In that case, you have an obligation to be transparent. In practice, this means that users must be made aware that AI is being applied and that they are not interacting with a human being. Based on this information, the user can then make their own assessment of whether or not to proceed with using the application.
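In code, this transparency obligation can be as simple as disclosing the AI up front. A minimal sketch, assuming a hypothetical `chatbot_reply` helper where `first_turn` marks the opening message of a conversation:

```python
def chatbot_reply(model_reply: str, first_turn: bool) -> str:
    """Prepend an AI disclosure to the opening reply of a conversation.

    Illustrative only: the disclosure wording and placement should follow
    your own legal review, not this sketch.
    """
    disclosure = "Please note: you are chatting with an AI assistant, not a human."
    if first_turn:
        return disclosure + "\n\n" + model_reply
    return model_reply
```

The design choice here is to disclose once, at the start of the session, so the user can decide whether to continue before the conversation goes any further.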

 

High Risk for EU AI Regulation

Now we are moving to a more challenging level. When AI is used in specific areas that may affect many people, trustworthy and ethical algorithms are essential. For example, when used in law enforcement, credit scoring, college admissions, critical infrastructures or types of biometric identification.

Applying AI in these areas requires you to perform a conformity assessment to ensure you have the right systems in place, such as a risk management system, record-keeping, technical documentation, data governance, human oversight and expert judgement, accuracy, robustness and security. Explainability, transparency and the provision of information play a key role when working with high-risk AI applications.
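A conformity assessment is, at its core, a gap analysis against that list of required systems. A minimal sketch, with the system names paraphrased from the text above (the real assessment involves evidence and audits, not a string comparison):

```python
# The systems named in the text, treated as a simple checklist.
REQUIRED_SYSTEMS = (
    "risk management system",
    "record-keeping",
    "technical documentation",
    "data governance",
    "human oversight",
    "accuracy, robustness and security",
)

def conformity_gaps(systems_in_place: set) -> list:
    """Return the required systems that are not yet in place."""
    return [s for s in REQUIRED_SYSTEMS if s not in systems_in_place]
```

For example, an organisation that has only record-keeping in place would see every other item flagged as a gap to close before deployment.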

 

Unacceptable Risk for EU AI Regulation

Now we are at the top of the volcano, which is about AI applications that are deemed unacceptable in the EU because they pose a risk to people’s fundamental rights. Think of social scoring, public biometric identification or the manipulation of free will. In this case, it is quite simple: your algorithm is banned by the EU and you are not allowed to put it into production.

So, there you go! A crash course on how EU regulations affect your AI algorithm. Want to know more about how to assess your algorithms, mitigate risks or implement trustworthy and ethical AI? Amsterdam Data Collective can help you with this. Please contact Elianne Anemaat (eanemaat@adc-consulting.com) or check our contact page.

 

 

Source: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en

 
