• C/BALMES 76, PRAL 1ª, 08007 BARCELONA +34 934 878 030 bmk@bmk.es

EU Parliament: Green light for European AI Act

“Dear ChatGPT, are you safe to use?”

– not safe enough, according to European lawmakers. The use of AI is therefore now regulated by the European Artificial Intelligence Act: the world’s first comprehensive set of rules for dealing with AI technology. The EU Commission submitted a legislative proposal back in April 2021; since then, the regulation has been the subject of lengthy and contentious negotiations between the EU institutions. On 13 March 2024 – earlier than expected – the European Parliament finally paved the way for its enactment. The first law of its kind is intended to protect EU citizens from abuse, increase confidence in the use of AI, promote innovation and ensure that the EU takes a leading role in this area.

The new rules follow a risk-based approach and do not differentiate between users: the higher the potential risk of an AI application – so the logic goes – the stricter and more intrusive the requirements. For example, the act sets out numerous requirements concerning data quality, documentation and risk assessments that must be completed before an application is put to use. Certain applications, such as social scoring, are banned altogether. Sanctions for violations are to be determined by the member states.

The parties obliged under the act are developers, importers and distributors as well as “deployers” (in other words: users in a professional context) of AI systems. Most companies will be affected simply through the rules on general-purpose AI systems, i.e. AI systems suitable for a wide variety of purposes. These models were only added to the act at the end of 2023, presumably in response to the popularity of ChatGPT, which had been made available to the public shortly before.

The AI Act also aims to boost innovation by requiring the member states to set up real-world laboratories in which companies can test AI systems under real-life conditions. These must be accessible to small and medium-sized enterprises (SMEs) and start-ups and create scope for testing innovative AI. Whether this will truly foster innovation for SMEs remains questionable, as the law’s main focus remains its requirements, which bring bureaucracy and a lack of clarity with them – burdens that Big Tech companies can shoulder far more easily than SMEs.

So what happens next? The final step is approval by the member states, which is considered a mere formality. Six months after the law enters into force, prohibited systems must be phased out. After two years, the law must be implemented in full. Companies are therefore well advised to familiarize themselves with the AI Act now and take appropriate steps to comply with it.


Seray Kantarci

18/03/2024