Reining in the risks of Artificial Intelligence technology
DCU Business Alumni
Neil Redmond
New EU legislation will protect citizens from potential abuses of Artificial Intelligence (AI) technology, writes Neil Redmond
In April 2021, the European Commission published its proposal for a European Union regulatory framework on artificial intelligence (AI). The proposed AI regulation aims to protect European citizens from the potential risks of AI technologies and practices. While most AI systems pose little risk, certain systems create risks that must be addressed to avoid harm to individuals. The AI Act is the first attempt globally to regulate AI. Many concepts in the regulation take direct inspiration from the General Data Protection Regulation (GDPR), in particular its extra-territorial scope, enforcement via administrative fines, and the establishment of a European Artificial Intelligence Board (EAIB).
Risk-based approach
The AI Act proposes a risk-based approach to the use and governance of AI systems. The regulation defines several levels of risk:

− Unacceptable risk: AI systems deemed a clear threat to the safety, livelihoods and rights of people will be prohibited.
− High risk: AI systems that could threaten the safety of individuals or affect their livelihoods; these systems will be subject to strict obligations, such as risk assessments, before they can be introduced.
− Low or minimal risk: users of these AI systems should be made aware that they are interacting with a machine so they can make informed decisions about their interaction. Most AI systems fall into this category and are outside the scope of the regulation, as they present limited or no risk to individuals’ rights or safety.

Enforcement and Governance

The Commission proposes that Member States will be required to appoint national supervisory authorities to supervise the application of the regulation. The creation of the EAIB has also been proposed to “facilitate a smooth, effective and harmonised implementation” of the regulation.
Penalties
As with the GDPR, Member States, through their supervisory authorities, will have the power to impose appropriate penalties, including administrative fines, for non-compliance with the AI Act. The regulation sets out the following thresholds:

− Up to €30m or 6% of total worldwide annual turnover for the preceding financial year for infringements of the prohibited practices or non-compliance with the data governance requirements.
− Up to €20m or 4% of total worldwide annual turnover for the preceding financial year for non-compliance with any of the other requirements or obligations of the regulation.
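For readers who want a sense of how these two ceilings interact, here is a minimal sketch in Python. It assumes the GDPR-style rule that whichever of the two amounts is higher applies; the function name and the example turnover figure are illustrative, not taken from the regulation.

```python
# Illustrative sketch of the proposed fine ceilings, assuming the
# GDPR-style "whichever is higher" rule. Figures (€30m/6%, €20m/4%)
# are from the proposal; everything else is a hypothetical example.

def max_fine_eur(worldwide_annual_turnover_eur: float,
                 prohibited_or_data_governance: bool) -> float:
    """Return the upper limit of an administrative fine under the proposal."""
    if prohibited_or_data_governance:
        fixed_cap, turnover_share = 30_000_000, 0.06  # €30m or 6% of turnover
    else:
        fixed_cap, turnover_share = 20_000_000, 0.04  # €20m or 4% of turnover
    # Assumption: the higher of the two amounts applies, as under the GDPR.
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a company with €1bn turnover infringing a prohibited practice
# would face a ceiling of €60m, since 6% of turnover exceeds €30m.
print(max_fine_eur(1_000_000_000, prohibited_or_data_governance=True))
```

Under this reading, the percentage-based cap only becomes the binding limit for larger companies, which mirrors how GDPR fines scale with turnover.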
Next Steps
The proposed regulation is currently going through the EU’s ordinary legislative procedure. In recent months the Czech Republic, in its role as holder of the rotating presidency of the Council of the EU, has presented a compromise text. The AI Act will become law once both the Council and the European Parliament agree on a common version of the text. The regulation may enter into force on a transitional basis in 2023 and be fully adopted in early 2024. It is expected that the preparation for, and impact of, the AI Act will be similar to that of the GDPR, and it is hoped that it will likewise become the global standard for the regulation of AI.

DCU graduate Neil Redmond (BEng 1997, MBA 2004) is Director of Cyber Security at PwC.