6 takeaways from the OpenAI Senate hearing
2023-05-17 07:53

Apparently, one of generative AI's extraordinary capabilities is unifying politicians, the public, and the private sector in regulating it.

We saw that today in a Senate Judiciary Committee hearing about how to govern AI. OpenAI CEO Sam Altman, IBM chief privacy and trust officer Christina Montgomery, and NYU emeritus professor Gary Marcus testified in front of the privacy, technology, and law subcommittee about what to do now that generative AI has escaped Pandora's box. Altman was open and cooperative, even advocating for regulation of ChatGPT and generative AI, but that seemed to have a disarming effect on the subcommittee, whose members asked mostly softball questions.


The three-hour hearing touched on the many risks generative AI poses to society, and how the country can successfully navigate the next industrial revolution. Unfortunately, cramming so many issues into one sitting meant there was barely time to delve into prevailing concerns like job replacement, copyright law, and oh yeah, national security. Here are the highlights:

1. Senator Blumenthal's opening remarks included a deepfake

Senator Richard Blumenthal kicked off the hearing with dramatic flair by playing a deepfake recording of his voice talking about ChatGPT. The recording was created using audio from his speeches, and the remarks were generated by ChatGPT, which was asked how Blumenthal would open the hearing. Leading with a deepfake set the tone for the rest of the hearing by underscoring generative AI's impressive capabilities and how dangerous it can be if left unchecked.

2. The whole job replacement issue remains unresolved

One of the major concerns about ChatGPT and generative AI is the jobs it will replace by automating tasks. When asked if this was a concern, Altman's view was that AI might replace jobs but create new ones: "I believe that there will be far greater jobs on the other side of this and that the jobs of today will get better." Montgomery added that the most important thing we should be doing is preparing the workforce for AI-related skills through training and education.

But who that responsibility falls to was left unsaid. "I think it will require partnership between the industry and government, but mostly action by government to figure out how we want to mitigate that," said Altman. In other words, that's not OpenAI's problem.

3. Everyone agrees AI needs to be regulated

Senator Dick Durbin opened his remarks by noting the unusually cooperative conversation between the public and private sectors. "I can't recall when we've had people representing large corporations or private sector entities come before us and plead with us to regulate them." Pleading for regulation may have been an exaggeration, but Altman and Montgomery showed they were willing, and at times enthusiastic, to accept government oversight.

That went beyond general platitudes. Altman said he believed Section 230 does not apply to generative AI, meaning companies that offer this technology should be held liable, and that there needs to be an entirely new framework for content created by generative AI.

This could be interpreted as a successful example of democratic checks and balances at work, but it also emphasized just how serious the threat of AI is — and how badly companies like OpenAI feel the need to protect themselves from liability.

Regulation of this magnitude might even lead to the creation of a new federal agency like the Food and Drug Administration, which is what Marcus proposed. "My view is that we probably need a cabinet level organization within the United States in order to address this. And my reasoning for that is that the number of risks is large, the amount of information to keep up on is so much I think we need a lot of technical expertise, I think we need a lot of coordination of these efforts."

Another idea floated around was licensing for generative AI, akin to licensing for nuclear power operations.

4. Misinformation is a huge concern, especially with an election coming up

One of the underlying themes of the hearing was how to learn from the mistakes Congress made by failing to hold social media companies accountable for content moderation, which led to rampant misinformation during the 2016 and 2020 elections. Generative AI's potential to create and spread inaccurate or biased information on a large scale is real and imminent unless addressed now.

"Given that we're gonna face an election next year, and these models are getting better. I think this is a significant area of concern," said Altman. He is open to "nutrition labels" about the nature and source of generative AI content from third parties, but Marcus believes the root of the issue is transparency and access to how the algorithm works. "One of the things that I'm most concerned about with GPT. Four is that we don't know what it's trained on, I guess Sam knows, but the rest of us do not. And what it is trained on has consequences for essentially the biases of the system."

5. Senator Marsha Blackburn loves Garth Brooks and Senator Mazie Hirono loves BTS

Committee members couldn't resist the chance to add some levity to the serious nature of the hearing. Senators Cory Booker and Jon Ossoff soft-launched their bromance by calling each other handsome and brilliant. Senator Peter Welch made a self-deprecating remark about his interest in the hearing, saying, "Senators are noted for their short attention spans, but I've sat through this entire hearing and enjoyed every minute of it."

When asking Altman about OpenAI's automated music tool Jukebox and copyright law, Blackburn expressed concern about ownership of a song created in the style and voice of her favorite artist, Garth Brooks. Hirono was equally worried about deepfake songs created to sound like her favorite band, BTS. Despite how weird it was to hear the famed country music singer and the K-Pop sensation mentioned in this context, Blackburn and Hirono raised valid points about intellectual property.

6. National security is too big to cover today

The three-hour hearing covered so many risks of generative AI that Blumenthal only mentioned its threat to national security in his concluding remarks, saying, "the sources of threats to this nation in this space are very real and urgent. We're not going to deal with them today, but we do need to deal with them."

The hearing addressed a vast array of areas that generative AI could affect: employment, misinformation, intellectual property, privacy, safety, and bias and discrimination. But the three-hour session didn't leave enough time to address how the technology could reshape the economy, or the threats it poses in the hands of global adversaries.
