EU AI Act
EU lawmakers recently have come to an agreement, after years of debate, on AI policy. The bill aims to mitigate risks in the areas of health care, education, border surveillance, and public services. [1] We will briefly summarise below some of the most important points discussed in the draft, as it has been filtered through technology press including the "MIT Technology Report," "Tech Policy Press," "Tech Crunch," "The Futuristic Lawyer," and "Euractiv."
Foundation Models
One of the main features of the bill is that lawmakers have included Foundation Models as an aspect of the legislation. In her Article "Five things you need to know about the EU’s new AI Act" for the "MIT Technology Review," Melissa Heikkilä reports that the AI Act will require Foundation Models to comply with EU copyright laws and require AI companies to share information about how their models were trained. In her words, the companies will have to “draw up better documentation, comply with EU copyright law, and share more information about what data the model was trained on. ” [2]
In line with US President Joe Biden’s Executive Order from the 30th of October, the threshold to decide if a general purpose AI (GPAI) model is "high impact" is mainly based on the amount of computing power used to train the model. This is measured in floating point operations per second (FLOPS). GPT-4 would automatically be covered as a high-impact GPAI by the EU’s AI Act – currently the only foundation model in the world. [3]
For powerful models, a.k.a. “high-impact GPAI models with systemic risk, ” there are additional required compliances. Tech companies will have to share how secure and energy-efficient their AI models are, for example. These transparency obligations also include technical documentation about the model’s architecture, information about how it was trained, and a summary of the content used for training the model. [4]
According to Fortune, high-impact GPAIs will be obliged to:
- Report their energy consumption;
- Perform red-teaming, i.e. adversarial tests, either internally or externally;
- Assess and mitigate possible systemic risks, and report any incidents;
- Ensure they are using adequate cybersecurity controls;
- Report the information used to fine-tune the model, and their system architecture;
TobiasMJ, known by the handle The Futuristic Lawyer, reiterates these points and adds that, according to Bloomberg, models that are provided for free and which are open-source – such as the French AI company Mistral AI’s newest model, Mixtral-8x7B, and Meta AI’s Llama 2 – are not required to show compliance with the transparency obligations as their purpose is solely restricted to research and innovation, not involving direct commercial applications. [5]
Regulations
Melissa Heikkilä explains that companies are legally bound to notify users when they are interacting with a chatbot, or with biometric categorization, or emotion recognition systems. AI companies are compelled to design systems in such a way that AI-generated media can be detected, as well as to label deepfakes and AI-generated content. [6]
TobiasMJ argues that political consensus is crucial to develop best practices such that Big Tech lobbying power does not overwhelm public safety. [7]
A number of functions are specficially barred in the legislation . These include:
- Biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- Emotion recognition in the workplace and/or educational institutions;
- Social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent individual free will;
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
As TobiasMJ reports, non-real-time use of remote biometric ID AIs will be limited to “the targeted search of a person convicted or suspected of having committed a serious crime.” Real-time use will be limited in time and location, and can only be used for the following purposes:
- Targeted searches of victims (e.g. of abduction, trafficking, sexual exploitation);
- Prevention of a specific and present terrorist threat;
- The localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime);
Compliance
What if the firms don't comply with those rules? The law specifies that they will have to pay fines for violations. The fines for noncompliance are steep: from 1.5% to 7% of a firm’s global sales turnover, depending on the severity of the offence, and the size of the company. [8]
In order to enforce the rules and fines as necessary, a new AI office within the EU’ s executive has been created. This office will be “overseeing the most advanced AI models, including by contributing to fostering standards and testing practices. ” [9]
The panel comprising this office will be composed of independent experts and will be scientific in nature. Its main tasks will be “to advise the AI Office about GPAI models. ” [10]
Opinions
Dr. Philipp Hacker, the Research Chair for Law and Ethics of the Digital Society, European New School of Digital Studies, European University Viadrina Frankfurt (Oder), mentions some gaps that remained unaddressed in the current Act draft. [11] He considers the minimum standards presently proposed to be extremely weak, citing “mere transparency and limited copyright provisions, ” and not extending to other areas of risk.
Dr. Hacker disagrees with the 10^25 FLOPs breakpoint, and the application of certain regulations only for models operating above these floating points densities. He notes that smaller models can exhibit similar security risks, and should be made to comply with similar regulations. He advocates for compliance of the open models as well, since they could compromise public safety despite their transparency and accessibility.
Regarding self-regulation, Hacker finds the model efficient economically and advocates for “[r]ightly tailored foundation model regulation, ” and “stringent guardrails” in order to avoid cyber malware and the proliferation of biological/chemical terrorism, as well as threats related to misinformation and hate speech. His proposal of best industry practices advises red-teaming and the introduction of safety layers to guard against such abuse by malicious actors.
The French President Emmanuel Macron, however, has voiced concerns about the stringency applied to the foundational models. He thinks the legislation might hinder the EU in what he regards as the “ AI race ” to develop technologies faster than rivals including the US, UK, and China who operate under less, sometimes massisvely less, strict regulatory regimes. "The Financial Times" reports that France, “alongside Germany and Italy, are in early discussions about seeking alterations or preventing the law from being passed. ” [12]
Arthur Mensch, the CEO of Mistral AI, expresses strong reservations about the recent amendments to the EU AI Act. Mensch believes the Act should concentrate on product safety instead of expanding its regulatory reach to foundational AI models. Mensch likens this approach to the regulation of programming languages like C. He advocates for regulations to be proportional to the risk levels of different AI applications, emphasising that "the original EU AI Act found a reasonable equilibrium" in this regard. Critically, he warns against the recent shift towards addressing vague “systemic risks, ” which could create a divide between large and small companies, stifling innovation in the European AI ecosystem. [13]