The EU’s Proposed Regulations on Artificial Intelligence

Reading Time: 3 minutes

The European Union has recently announced plans to introduce new regulations on the use of AI. The main idea behind these regulations is to ensure that AI is used ethically and responsibly. They are designed to support innovation while limiting technologies that are harmful to EU citizens, such as those that violate the right to privacy. The regulations are still at the proposal stage and have not yet been finalized, but how will they impact society if they come into force?

The use of AI technologies has been divided into four risk categories, in a framework known as the EU Pyramid of Risks of Artificial Intelligence. At the top of the pyramid are technologies that should be banned outright; at the bottom are technologies with minimal risk to society. Let's look at how each category is defined.


Starting from the bottom:

  1. Level 1 – Minimal Risk AI systems: this level covers roughly 80–90% of the AI technologies in use today, and they would continue to operate as they do now, unaffected by the legislation. Examples: e-mail spam filters, smart home devices, and personal assistants.
  2. Level 2 – Limited Risk AI systems: AI systems that pose a limited risk to society or individuals and therefore require a higher level of oversight. Example: chatbots, where an individual may not know whether they are talking to a real human being or a robot.
  3. Level 3 – High Risk AI systems: AI systems that pose a high risk to society and individuals, which is why there is a greater need to regulate them. Example: algorithms used to review job applications, where a program decides whether someone is hired or fired.
  4. Level 4 – Unacceptable AI systems: systems that directly threaten and violate individuals' right to privacy, and where the consequences of an error or malfunction could be catastrophic for society. Example: biometric technologies such as facial recognition used in parks, city centers, or offices.

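The four tiers above form a simple ordered taxonomy, which can be sketched in code. This is purely illustrative: the tier names mirror the list, and the example systems and the `EXAMPLES` mapping are hypothetical assignments for demonstration, not classifications from the actual proposal.

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    """The four tiers of the EU AI risk pyramid, ordered lowest to highest."""
    MINIMAL = 1       # e.g. spam filters, smart home devices
    LIMITED = 2       # e.g. chatbots (transparency obligations)
    HIGH = 3          # e.g. algorithms screening job applications
    UNACCEPTABLE = 4  # e.g. facial recognition in public spaces

# Hypothetical example systems mapped to tiers (illustrative only).
EXAMPLES = {
    "email spam filter": RiskLevel.MINIMAL,
    "customer service chatbot": RiskLevel.LIMITED,
    "hiring algorithm": RiskLevel.HIGH,
    "public facial recognition": RiskLevel.UNACCEPTABLE,
}

def is_banned(system: str) -> bool:
    """Only the top tier of the pyramid would be banned outright."""
    return EXAMPLES[system] == RiskLevel.UNACCEPTABLE
```

Because `IntEnum` values are ordered, tiers can be compared directly: `RiskLevel.HIGH > RiskLevel.LIMITED` holds, matching the pyramid's bottom-to-top ordering.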
Who would enforce these regulations?

The EU has not yet answered this question definitively, but there are some plans for enforcement. One idea is for each member state to establish its own authority responsible for defining the rules, but that could create fragmentation, with different countries ending up with different regulations. Another, probably better, option is to establish a European AI Center involving not only representatives of the member states but also other stakeholders.

What will be the consequences for non-compliance with EU regulations?

Fines for non-compliance with EU regulations can be quite substantial and may be imposed on both companies and individuals. Everything will depend on how serious the offense is, but fines can reach up to 4% of a company's annual global turnover or €20 million.
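To see what that cap means in practice, here is a minimal sketch. It assumes, as in comparable EU regulations such as the GDPR, that the applicable ceiling is whichever of the two amounts is higher; the final AI regulation may define this differently.

```python
def max_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound on a fine: 4% of annual global turnover or EUR 20 million.

    Assumption (borrowed from the GDPR's fining model): whichever of the
    two amounts is higher applies.
    """
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# A company with EUR 2 billion turnover: 4% = EUR 80 million, above the flat cap.
print(max_fine(2_000_000_000))  # 80000000.0
# A company with EUR 10 million turnover: the flat EUR 20 million cap applies.
print(max_fine(10_000_000))     # 20000000.0
```

Under this assumption, the flat €20 million figure only matters for companies whose global turnover is below €500 million; above that, the 4% rule dominates.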


Is there a need to regulate the AI sector?

We have regulation in almost every sector, but none in AI, even though the field is large and its role in everyday life keeps growing. Since AI is developing so fast, it may be a good idea to regulate it in some form, to spot potential dangers or biases before they cause real harm to society. On the other hand, it is hard to regulate AI without getting in the way of innovation.

Share your opinion on regulating AI in the comments!

Resources:

A European approach to artificial intelligence | Shaping Europe’s digital future (europa.eu)

UE chce uregulować sztuczną inteligencję [The EU wants to regulate artificial intelligence] | UE-Polska-Niemcy – Wiadomości po polsku | DW | 21.04.2021

The European Approach to Regulating Artificial Intelligence – YouTube
