Regulatory Update

The European Artificial Intelligence Act: Key Elements and Best Practices

Medical device makers will want to read the second story in our series covering the implications of the EU Artificial Intelligence Act.


May 20, 2024

By Sade Sobande

This is the second in our series of regulatory updates on the EU Artificial Intelligence Act (AIA).

We introduced the EU AIA in our first regulatory update. The EU AIA establishes a comprehensive framework for the regulation of AI-enabled devices in the EU. In this regulatory update we discuss key elements of the regulation and best practices, enforcement and oversight, and AI regulatory sandboxes and real-world testing.

AI regulatory best practices

While the AIA is the world’s first comprehensive regulation of AI systems, it builds on established AI best practices. The EU previously took a soft-law approach, issuing guidelines for developing these products. One such guideline is the Ethics Guidelines for Trustworthy Artificial Intelligence. Presented by the High-Level Expert Group on AI on April 8, 2019, these guidelines set out seven requirements for assuring trustworthy AI across the AI system’s entire life cycle.

Established best practices for AI:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Environmental and societal well-being
  • Accountability

Although such ‘soft’ approaches lacked the enforcement power needed to support effective regulation, guidelines such as the EU Ethics Guidelines for Trustworthy AI provided a foundation and framework that is reflected in the current regulation.

It would be remiss not to mention the Council of Europe treaty on AI. Titled the ‘Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law’, the treaty was adopted on May 17, 2024 by the 46 Council of Europe member states, the European Union, and 11 non-member states. It is the first internationally binding treaty on AI. With a focus on governance and fundamental rights across the AI system life cycle, it complements the AIA, echoing principles of risk management, transparency, human oversight, bias mitigation, reliability and innovation, among others.

The principles outlined in these documents are important because, while the AIA is very much focused on high-risk AI systems and GPAI models, its requirements and obligations may still be applied to minimal-risk AI systems, as they are grounded in best practices. The AI Office and member states will therefore encourage providers and deployers of minimal-risk AI systems to create codes of conduct, including governance mechanisms, that foster the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems.

Enforcement and oversight

Different levels of enforcement and oversight exist under the AIA. Each member state must designate at least one notifying authority and at least one market surveillance authority as national competent authorities. An AI Board (the Board) and an AI Office are established under the regulation. The Board has a largely advisory role and assists the Commission and member states in facilitating compliance with the AIA. Among other roles and responsibilities, it is tasked with promoting AI literacy and issuing opinions, recommendations and advice, and with contributing to guidance related to implementing the AIA. The AI Office is responsible for the enforcement and supervision of GPAI models and for facilitating codes of conduct that encourage the voluntary application of the mandates for high-risk AI systems.

AI regulatory sandboxes and real-world testing

To foster innovation and encourage investment in AI, each member state must ensure that its national competent authorities establish at least one AI regulatory sandbox at the national level. These sandboxes provide a controlled environment in which AI systems can be developed, trained, tested and validated for a limited time before being placed on the market or put into service. Access to these sandboxes by small and medium-sized enterprises, including start-ups, must be prioritized. Testing under real-world conditions may also be conducted within the framework of an AI regulatory sandbox. Additionally, certain high-risk AI systems may be tested outside an AI regulatory sandbox under specific conditions, in line with a real-world testing plan whose elements will be specified by the Commission via implementing acts.

Concluding remarks

Now is the time for providers and deployers to start developing strategies and quality plans to support compliance with the AIA on the effective dates.

Look for our next regulatory update for details on penalties and timelines. We at Emergo by UL are available to assist manufacturers with their AIA regulatory strategy and compliance plans.
