The European Parliament voted today to move forward with the first comprehensive artificial intelligence legislation, known as the AI Act.
It's the first step in a process that will culminate later this year when a final version of the law is expected to pass, The New York Times reports. That would cement Europe as a global model for tech regulation, as it has been with data privacy, for example.
“We have made history today,” says Brando Benifei, an Italian member of the European Parliament, as reported by The Washington Post. Benifei says European lawmakers will "set the way" for the rest of the world on "responsible AI."
First slide in a presentation on the AI Act by the European Commission.
The European Commission began drafting the 108-page proposal in 2021, and the work has taken on new urgency since the explosive release of ChatGPT in fall 2022, along with competing chatbots from Google and Microsoft and image generators such as DALL-E and Midjourney.
The US has yet to pass similar legislation, opting instead for softer approaches such as the National Artificial Intelligence Initiative Act of 2020 and the Blueprint for an AI Bill of Rights (October 2022).
At a Senate hearing on AI last month, OpenAI CEO Sam Altman urged Congress to act and implement stricter regulations to mitigate the potential harm of these powerful, opaque systems. Microsoft President Brad Smith also called on the US and other countries to establish their own government agencies dedicated to regulating AI.
What's in the EU AI Act?
Page 1 of the 108-page AI Act proposal.
The AI Act defines artificial intelligence as software that can, "for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with," and goes on to outline the requirements that "high-risk" systems, and their users, must comply with.
It stipulates that, prior to public release, high-risk systems must undergo an assessment to ensure they comply with the regulation, and that they are designed to reduce risk through "adequate design and development." Regarding AI models that continue to "learn" after their initial release, such as ChatGPT, it requires a "post-market monitoring system" to watch, document, and report significant changes and any issues.
Regarding data, AI model creators must disclose the sources of copyrighted material their systems use to generate output. The act also limits the "use and the processing of biometric data involved in an exhaustive manner," particularly in facial-recognition technology.
Systems that comply with the regulations must display a European Commission badge to the public as a sign of their compliance. The proposal also establishes a European Artificial Intelligence Board.
While the act focuses on corporations building (and profiting off) AI models, it also includes some provisions aimed at the public to limit the spread of misinformation. For example, it requires anyone who uses AI to create a convincing deepfake of another person in image, audio, or video form to disclose "that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin."
At the same time, the act acknowledges the "wide array of economic and societal benefits" of AI. Its objective is to "foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase."
The bill is restrictive enough, however, that at one point Altman said OpenAI would cease operations in Europe if it passed, though he later reversed that statement.
American Ghostwriter?
At times, the rhetoric in the AI Act seems to invoke American ideals of personal freedom and privacy. It calls the use of AI-based facial recognition for law enforcement in public spaces "particularly intrusive to the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population [and] evoke a feeling of constant surveillance."
Freedom of thought is another focus of the bill. It bans any AI system that "deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behavior." It also requires that people be notified when they are interacting with an AI system, unless that is "obvious from the circumstances," and when they are exposed to a system that records and processes emotional or biometric data.