The European Union is moving to introduce common regulations on the use of artificial intelligence. The proposed rules would ban mass surveillance (with an exception for maintaining public safety), social scoring of citizens' behavior, and biometric identification (such as facial recognition). The intentions are certainly well-meaning, but we should remember that the more regulations there are, the greater the risk of over-regulation and the lower the EU's competitiveness.
If you have ever wondered whether a scenario from the series Black Mirror could happen in real life, the European Union is currently working to ensure that it does not. Last week the European Commission unveiled its plan to regulate the use of AI, which would prohibit its use for government scoring of citizens, mass surveillance, and biometric identification. For the last of these, applying the technology in certain instances would require special authorization from the authorities. Additionally, AI applications considered high-risk would have to undergo detailed inspection before implementation and general use. High-risk AI systems include:
- Vocational education or training (e.g. assessment of exams)
- Product safety features (e.g. the use of AI in robot-assisted surgery)
- Employment management (e.g. CV sorting software for recruitment procedures)
- Basic private and public services (e.g. creditworthiness assessment)
- Managing migration, asylum and border control (e.g. assessing authenticity of travel documents)
- Administration of justice and democratic processes (e.g. applying the law to a specific set of facts)
In these cases, appropriate risk assessment and mitigation systems would have to be implemented, along with human oversight measures to minimize potential risks. The EU also foresees issuing certificates of conformity, valid for a maximum of five years, covering both companies established in the EU and those from abroad. Companies that fail to comply with the adopted rules on the use of AI would face very high penalties, which could reach 6% of their revenue.
Needless to say, the European Union's initiative to introduce standardized norms is a step in the right direction. By implementing these regulations, the EU would pave the way for ethical technology, something absolutely essential given the enormous role technology will play in the future. Every tech-based innovation should be verified not to harm or discriminate against any individual, and trust in AI should be a requirement, not an optional extra. But is enforcing these laws really possible? The fact is that AI is used by the biggest technology companies in the world, and so far the EU has not even found a way to appropriately tax them for their business activities within its territory. Threatening huge penalties is also nothing new, being already familiar from the GDPR rules. Perhaps it is simply about taking these first steps and paving the way so that regulation can reach a larger scale in the future.