Whereas privacy and data protection have been the hot topics of the past two years, artificial intelligence (AI) looks to be the next big thing. Due to AI’s rapid development, its many forms, and its highly advanced underlying techniques, AI has captured the attention of policy makers, legal professionals, and politicians. In recent years, the call for a structured and comprehensive approach towards, and legal regulation of, AI has become more pronounced.
Within the European Union, the European Commission (EC) is leading the response to this call, as it holds the sole right to propose new EU-wide legislation. This blog post provides key insights into recent developments concerning AI regulation from the EC.
What Is AI?
There is a broad spectrum of views on the exact meaning of AI. For this post, AI refers to AI systems based on machine learning.
Machine learning, in turn, refers to software that is trained to perform a certain task based on algorithms and large amounts of data. AI systems are used to recognize patterns. As such, they have been deployed in multiple industries, including marketing, health care, and retail.
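To make the definition above concrete, the sketch below shows "pattern recognition" in its simplest form: a learner that averages labelled examples into one pattern per label, then assigns new inputs to the closest pattern. The data and labels are purely illustrative assumptions, not part of the EC's definitions or any real AI system.

```python
# Illustrative sketch only: a minimal machine-learning "pattern
# recognizer". All names and data are hypothetical examples.

def train(samples):
    """Learn one average pattern (centroid) per label from labelled data."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Assign the label whose learned pattern is closest to the input."""
    def distance(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: distance(model[label]))

# Hypothetical training data: (features, label) pairs.
model = train([([1.0, 1.0], "spam"), ([0.9, 1.1], "spam"),
               ([5.0, 5.0], "ham"), ([5.2, 4.8], "ham")])
print(predict(model, [1.1, 0.9]))  # closest to the learned "spam" pattern
```

Real-world systems use far richer models and vastly more data, but the principle is the same: behaviour is derived from training data rather than hand-written rules, which is precisely why regulators focus on the quality of that data.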
The EC has been closely watching AI. In February 2020, the EC published the ‘White Paper on Artificial Intelligence – A European Approach to Excellence and Trust’ (the White Paper), which sets out proposed measures and regulatory options to promote and support the development and use of AI. It also identifies some important challenges.
A Risk-Based Approach
The EC plans to establish two ecosystems: an ecosystem of excellence (to promote the development of, and initiatives around, AI) and an ecosystem of trust. The ecosystem of trust’s purpose is to monitor the risks involved with AI and to facilitate the development of legal initiatives. The EC notes that some relevant regulation already exists (such as the GDPR and EU consumer directives). However, since AI is unpredictable and complex, it is necessary to continuously monitor developments and to anticipate legal risks. To do so, the EC proposes a risk-based approach. An AI application is considered high-risk if both (i) the sector in which the AI application is used involves significant risks and (ii) the application itself involves a significant risk.
While some uncertainty remains, the EC has indicated that the finance, energy, automotive, logistics, and health care sectors, as well as certain public spheres, may be high-risk sectors.
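The cumulative nature of the test above can be sketched as a simple two-part check. The sector list and function names below are illustrative assumptions based on the sectors the EC has mentioned, not an official EC list or methodology.

```python
# Hypothetical sketch of the White Paper's cumulative test: an AI
# application is high-risk only when BOTH the sector and the specific
# use involve significant risks. The sector set is illustrative.

HIGH_RISK_SECTORS = {"finance", "energy", "automotive", "logistics",
                     "health care", "public sphere"}

def is_high_risk(sector, application_involves_significant_risk):
    """Both criteria must hold before the high-risk requirements apply."""
    return (sector in HIGH_RISK_SECTORS
            and application_involves_significant_risk)

# A staff scheduling tool in health care: high-risk sector, low-risk use.
print(is_high_risk("health care", False))  # False
# A diagnostic system in health care: both criteria are met.
print(is_high_risk("health care", True))   # True
```

The point of the two-step structure is proportionality: an appointment-booking tool in a hospital should not face the same obligations as a diagnostic system, even though both operate in a high-risk sector.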
Requirements for High-Risk AI Applications
When it is established that an AI application entails a high risk, the EC proposes that the following legal requirements apply:
- training data (large datasets used in training AI systems): requirements regarding the quality of training data as well as compliance with privacy and data protection rules;
- information requirements: requirements related to transparency (information about AI systems) and notifying citizens when they are interacting with an AI system rather than a human being;
- accuracy requirements: requirements on accuracy, reproducibility of outcomes, ability to react to errors and inconsistencies, and resilience against attacks and manipulation of data or algorithms;
- human oversight: the EC considers some form of human oversight necessary and suggests that an AI-related decision should be reviewed before it is implemented. Real-time human intervention is also presented as a possibility.
Public Discussion about Facial Recognition
Facial recognition technology has prompted significant public discussion. An earlier, unofficial version of the EC’s White Paper suggested a ban on the use of this technology for three to five years; this ban was not included in the official version of the White Paper. In the White Paper, the EC notes that certain forms of facial recognition are already allowed under EU data protection and privacy rules (only when proportionate and subject to additional requirements). The EC now proposes to participate in public discussions about the potential use of facial recognition in other circumstances.
Consequences for Businesses
The above-mentioned requirements would apply only to high-risk AI applications. To distinguish between the myriad AI applications, the EC proposes to design a specific assessment, which includes a process for testing AI applications. For AI applications that are not high-risk, the EC proposes a voluntary labelling scheme, under which specific labels would signal compliance with applicable rules and mark AI products as trustworthy.
The EC notes that, with regard to liability for AI applications, the extensive body of existing EU product safety and liability legislation remains relevant and ‘potentially applicable’ to a number of AI applications. However, the EC also notes that the legislative framework could be improved to address new risks and situations related to AI applications, such as the changing functionality of AI systems and the allocation of responsibilities between different economic operators in the supply chain.
The proposed requirements may well increase compliance costs for high-risk AI applications, while the labelling scheme would do the same for low-risk AI applications. However, how to determine in practice what constitutes a high-risk sector and a high-risk application remains unclear. Additional guidance is required to remove this uncertainty and to spare businesses unnecessary compliance costs.
After the White Paper’s publication, there was an online public consultation process from Feb. 19 through June 14, 2020. All European citizens, Member States, and relevant stakeholders (including civil society, industry, and academics) could participate in the consultation. Over 1,250 replies were received.
In addition to the public consultation, on June 29, 2020, the European Data Protection Supervisor (EDPS) published its ‘Opinion on the European Commission’s White Paper on Artificial Intelligence – A European approach to excellence and trust’, a sector-specific review of the White Paper.
The EDPS gives feedback on the EC’s proposed requirements and regulations from a general data protection perspective, taking into account existing regulations such as the GDPR.
With respect to the determination of high-risk sectors and high-risk AI applications, the EDPS notes that this should be “more robust and more nuanced”. In particular, the EDPS notes that the criteria for determining the level of risk should reflect the European Data Protection Board’s Guidelines on Data Protection Impact Assessments, and should therefore include (amongst other criteria):
- evaluation or scoring;
- automated decision-making with legal or similarly significant effect;
- systematic monitoring;
- data processed on a large scale; and
- datasets that have been matched or combined.
The EDPS concludes that, while it agrees with the EC that there is a need for a European approach to AI, the EC’s proposals require adjustments and clarification. If a new legal framework is published, the EDPS will provide further advice to the EC.
The online public consultation is part of a broader stakeholder consultation process. Following an in-depth analysis of the consultation results and a detailed impact assessment, the EC will present a regulatory proposal. It is not yet known when that proposal will be presented.