ChatGPT’s Riskiness Splits Biden Administration on EU’s AI Rules
2023-05-31 15:15
Biden administration officials are divided over how aggressively new artificial intelligence tools should be regulated — and their differences are playing out this week in Sweden.

Some White House and Commerce Department officials support the strong measures proposed by the European Union for AI products such as ChatGPT and Dall-E, people involved in the discussions said. Meanwhile, US national security officials and some in the State Department say aggressively regulating this nascent technology will put the nation at a competitive disadvantage, according to the people, who asked not to be identified because the information isn’t public.

This dissonance has left the US without a coherent response during this week’s US-EU Trade and Technology Council gathering in Sweden to the EU’s plan to subject generative AI to additional rules. The proposal would force developers of artificial intelligence tools to comply with a host of strong regulations, such as requiring them to document any copyrighted material used to train their products and more closely track how that information is used.

National Security Council spokesman Adam Hodge said the Biden administration is working across the government to “advance a cohesive and comprehensive approach to AI-related risks and opportunities.”

How the EU decides to regulate AI arguably matters more than the debate in Washington. With Congress unlikely to pass binding rules for AI, the European bloc will be the first to dictate how tech giants including Microsoft Corp. and Google owner Alphabet Inc. develop the foundation models that underpin the next frontier of artificial intelligence.

Main Battlefield

These models rely on training data — often large samples of language pulled from the internet — to learn how to respond in various situations, rather than being designed for one specific task. This is the technology behind generative AI, which can answer homework questions, design a PowerPoint presentation or create fantastical images from text prompts.

The question for regulators is who should bear responsibility for the risks associated with the technology, such as the spread of misinformation or privacy violations. The proposed EU rules would add reporting requirements for companies, such as OpenAI, that develop the models used in chatbots.

Michelle Giuda, director of the Krach Institute for Tech Diplomacy and a former assistant secretary of State for global public affairs in the Trump administration, said one of the fundamental tasks for the Trade and Technology Council will be to strengthen trust between allies to foster innovation and keep ahead of China’s advancements.

“The context is that innovation in AI is not happening in a vacuum — all of this is taking place in this 21st century contest between democracy and authoritarianism,” Giuda said. “And you’ve got technology as the main battlefield.”

High Risk

Until recently, the US and EU had a rough consensus to regulate uses rather than the technology itself, with a focus on high-risk areas such as critical infrastructure and law enforcement.

This approach was enshrined in the US’s non-binding framework for AI systems, as well as the European Commission’s initial proposals for the AI Act to regulate the technology. The last council meeting in December focused on end-use risk as well.

However, the release of ChatGPT made broader risks more apparent. This month an apparently AI-generated fake image of an explosion near the Pentagon spooked US markets, while the technology has already created corporate winners and losers.

This led the European Parliament to propose new rules that specifically target the foundation models used for generative AI. Lawmakers in committee agreed earlier this month that more scrutiny should be on the companies that develop these foundation models. Most of those companies, including Microsoft and Google, are based in the US.

Simmering Resentment

This added to already simmering resentment among tech executives over the EU’s antitrust and content moderation rules, which disproportionately affect US companies.

The tech industry has criticized the Biden administration for not doing more to stand up for US companies in the face of what they see as trade discrimination. With the EU’s proposed changes, they warn that the AI Act could go from a bright spot of cooperation to another example of Europe targeting US tech.

The revised AI Act could get a vote in the European Parliament in June, ahead of final negotiations with the EU’s 27 member states.

Dragos Tudorache, one of the lead authors of the bill in the European Parliament, said after meeting with US officials that “they consider our moves to also deal with generative AI a good move.”

Some US officials disagree, warning that restricting foundation models could hurt US competitiveness, according to the people involved in the discussions.

Sam Altman, the chief executive officer of OpenAI, became the public face of corporate concern over regulatory overreach when he suggested his company could pull products from the European market if the rules were too difficult to follow. EU Commissioner Thierry Breton responded with a tweet accusing Altman of “attempting blackmail.”

Altman later said he would work to comply with the EU’s rules. He will meet Commission President Ursula von der Leyen on Thursday.

European officials have resisted discussing the specifics of the AI Act with their US counterparts ahead of the TTC meeting, viewing it as inappropriate to bring Europe’s democratic process into a multilateral debate, the people with direct knowledge of the talks said.

The EU is still debating regulation and there are European officials who think the parliament has gone too far, according to some of the people.

Generative AI will be mentioned in the TTC conclusion, according to a draft obtained by Bloomberg. The document affirms the transatlantic commitment to a risk-based approach, but it also highlights “the scale of the opportunities and the need to address the associated risks” of generative AI.