Meta plans to ramp up labeling of AI-generated images across its platforms


Meta plans to ramp up its labeling of AI-generated images across Facebook, Instagram and Threads to help make it clear that the visuals are artificial. It’s part of a broader push to tamp down misinformation and disinformation, which is particularly significant as we grapple with the ramifications of generative AI (GAI) in a major election year in the US and other countries.

According to Meta’s president of global affairs, Nick Clegg, the company has been working with partners from across the industry to develop standards that include signifiers that an image, video or audio clip has been generated using AI. “Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads,” Clegg wrote in a Meta Newsroom post. “We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app.” Clegg added that, as it expands these capabilities over the next year, Meta expects to learn more about “how people are creating and sharing AI content, what sort of transparency people find most valuable and how these technologies evolve.” Those learnings will help inform both industry best practices and Meta’s own policies, he wrote.

Meta says the tools it’s working on will be able to detect invisible signals — namely, AI-generated metadata that aligns with the C2PA and IPTC technical standards — at scale. As such, it expects to be able to pinpoint and label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, all of which are incorporating GAI metadata into images that their products whip up.
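To make that concrete, here is a minimal Python sketch of what detecting one such signal could look like. Under the IPTC standard, compliant generators record a DigitalSourceType of "trainedAlgorithmicMedia" in an image's embedded XMP metadata, and because XMP travels as a plain-text packet inside JPEG and PNG files, even a crude byte scan can surface it. The file name and helper below are hypothetical; a production system would parse the metadata properly and check for C2PA manifests as well.

# Hedged sketch: one way a platform might check an image for the IPTC
# "trainedAlgorithmicMedia" marker. XMP metadata is stored as a plain-text
# packet inside JPEG/PNG files, so a raw byte scan works as a rough first
# pass. Illustrative only; this is not Meta's pipeline.

from pathlib import Path

# IPTC's controlled-vocabulary URI for AI-generated media.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded XMP metadata declares the
    IPTC digital source type used for AI-generated media."""
    data = Path(image_path).read_bytes()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))  # hypothetical file name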

As for GAI video and audio, Clegg points out that companies in the space haven’t started incorporating invisible signals into those formats at the same scale that they have with images. As such, Meta isn’t yet able to detect video and audio that’s generated by third-party AI tools. In the meantime, Meta expects users to label such content themselves.

“While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it,” Clegg wrote. “We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.”

That said, putting the onus on users to add disclosures and labels to AI-generated video and audio seems like a non-starter. Many of those people will be trying to intentionally deceive others. On top of that, others likely just won’t bother or won’t be aware of the GAI policies.

In addition, Meta is looking to make it harder for people to alter or remove invisible markers from GAI content. The company’s FAIR AI research lab has developed tech that “integrates the watermarking mechanism directly into the image generation process for some types of image generators, which could be valuable for open source models so the watermarking can’t be disabled,” Clegg wrote. Meta is also working on ways to automatically detect AI-generated material that doesn’t have invisible markers.
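Clegg didn’t detail the mechanism, and FAIR’s research (which bakes the mark into the generator’s weights so it can’t simply be switched off) is far more sophisticated, but a toy least-significant-bit watermark conveys the basic embed-and-detect idea. The sketch below assumes numpy and Pillow and is purely illustrative; real schemes spread a robust signal across the whole image so it survives cropping and compression.

# Toy illustration of invisible watermarking, NOT FAIR's method: this
# hides a short bit pattern in the lowest bit of a few blue-channel
# pixels after generation, whereas FAIR's approach integrates the mark
# into the image generator itself. Assumes numpy and Pillow.

import numpy as np
from PIL import Image

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical mark

def embed(img: Image.Image) -> Image.Image:
    """Overwrite the low bit of the first few blue pixels with the mark."""
    px = np.array(img.convert("RGB"))
    n = len(WATERMARK)
    px[0, :n, 2] = (px[0, :n, 2] & 0xFE) | WATERMARK
    return Image.fromarray(px)

def detect(img: Image.Image) -> bool:
    """Check whether those low-order bits match the expected mark."""
    px = np.array(img.convert("RGB"))
    n = len(WATERMARK)
    return bool(np.array_equal(px[0, :n, 2] & 1, WATERMARK))

if __name__ == "__main__":
    marked = embed(Image.new("RGB", (64, 64), "white"))
    print(detect(marked))  # True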

Meta plans to continue collaborating with industry partners and “remain in a dialogue with governments and civil society” as GAI becomes more prevalent. It believes this is the right approach to handling content that’s shared on Facebook, Instagram and Threads for the time being, though it will adjust things if necessary.

One key issue with Meta’s approach — at least while it works on ways to automatically detect GAI content that doesn’t use the industry-standard invisible markers — is that it requires buy-in from partners. For instance, C2PA has a ledger-style method of authentication. For that to work, both the tools used to create images and the platforms on which they’re hosted need to buy into C2PA.
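As a rough illustration of what “ledger-style” means here: each step in a file’s life (generation, edits, publication) appends a signed entry that references the hash of the entry before it, so tampering anywhere breaks the chain. The toy below signs JSON entries with a shared HMAC key; real C2PA manifests use X.509 certificate signatures and are embedded in the file itself.

# Simplified sketch of ledger-style provenance, not actual C2PA code.
# Each entry carries the previous entry's signature, so edits to any
# link invalidate everything after it.

import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in for a real signing certificate

def append_entry(chain: list, action: str, actor: str) -> None:
    """Sign a new provenance entry and link it to the previous one."""
    entry = {"action": action, "actor": actor,
             "prev": chain[-1]["sig"] if chain else "genesis"}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    chain.append(entry)

def verify(chain: list) -> bool:
    """Recompute every signature and confirm the links are intact."""
    prev = "genesis"
    for e in chain:
        payload = json.dumps({"action": e["action"], "actor": e["actor"],
                              "prev": e["prev"]}, sort_keys=True).encode()
        if e["prev"] != prev or e["sig"] != hmac.new(SECRET, payload, hashlib.sha256).hexdigest():
            return False
        prev = e["sig"]
    return True

if __name__ == "__main__":
    chain = []
    append_entry(chain, "generated", "image-model")  # hypothetical tools
    append_entry(chain, "resized", "photo-editor")
    print(verify(chain))                 # True
    chain[0]["actor"] = "someone-else"   # tamper with history
    print(verify(chain))                 # False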

Meta shared the update on its approach to labeling AI-generated content just a few days after CEO Mark Zuckerberg shed some more light on his company’s plans to build general artificial intelligence. He noted that training data is one major advantage Meta has. The company estimates that the photos and videos shared on Facebook and Instagram amount to a dataset larger than Common Crawl, a corpus of some 250 billion web pages that has been used to train other AI models. Meta will be able to tap into both, and it doesn’t have to share the data it has vacuumed up through Facebook and Instagram with anyone else.

The pledge to more broadly label AI-generated content also comes just one day after Meta’s Oversight Board determined that a video that was misleadingly edited to suggest that President Joe Biden repeatedly touched the chest of his granddaughter could stay on the company’s platforms. In fact, Biden simply placed an “I voted” sticker on her shirt after she voted in person for the first time. The board determined that the video was permissible under Meta’s rules on manipulated media, but it urged the company to update those community guidelines.



This story originally appeared on Engadget
