OpenAI just announced its latest large language model (LLM), GPT-4 Turbo. Elon Musk's xAI recently unveiled its own AI chatbot, Grok. And Samsung is jumping on the bandwagon, too, with its LLM, Gauss. On top of all this, AI-powered video and image generators are continuing to evolve. With artificial intelligence creating more and more content on the web, some social media platforms want users to know when media is created or altered by AI.
The latest company to step in with a set of policies around AI-created content is Meta, the parent company of Facebook and Instagram. Its new rules set a standard across its platforms, aimed specifically at political advertisers.
In a blog post on Wednesday, Meta stated it has a new policy that will require political advertisers to disclose when a Facebook or Instagram ad has been "digitally created or altered, including through the use of AI." This includes "any photorealistic image or video, or realistic sounding audio, that was digitally created or altered." The policy applies to all social issue, electoral, and political advertisements.
According to Meta's new policy, a disclosure will be required when an ad depicts a real, existing person "saying or doing something they did not say or do." Furthermore, if an ad contains a realistic-looking person who does not exist, it too must include a disclosure. The same goes for any political ad that uses fabricated footage of a realistic-looking event, or that manipulates footage of a real event that did happen.
There are some uses of AI or digital manipulation that will not require a disclosure, Meta says, but only when they are "inconsequential or immaterial to the claim, assertion, or issue raised in the ad." Meta gives examples of these exceptions, such as adjusting image size, cropping an image, color correction, and image sharpening. The company also notes that even these techniques would need to be disclosed if they change the claims or issues raised in the ad.
And, of course, these AI-created or altered ads are all still subject to Facebook and Instagram's rules around deceptive or dangerous content. The company's fact-checking partners can still rate these ads for misinformation or deceptive content.
AI will be a more prominent factor in next year's elections, such as the 2024 U.S. Presidential election, than it ever has been before. Even Meta now has its own large language model as well as an AI chatbot product. As these technologies continue to evolve, readers can expect more online companies to publish corporate or platform standards of their own. With those elections on the horizon, Meta seems to be setting the ground rules for political ads now.
Meta's new AI policy will roll out officially in 2024 and will pertain to advertisers around the globe.