Meta to start labeling AI-generated images from companies like OpenAI, Google
The company already labels any content generated using its own AI tools
Meta Platforms will begin detecting and labeling images generated by other companies’ artificial intelligence services in the coming months, using a set of invisible markers built into the files, its top policy executive said on Tuesday.
Meta will apply the labels to any content carrying the markers that is posted to its Facebook, Instagram and Threads services, in an effort to signal to users that the images, which in many cases resemble real photos, are actually digital creations, the company's president of global affairs, Nick Clegg, wrote in a blog post.
The company already labels any content generated using its own AI tools.
Once the new system is up and running, Meta will do the same for images created on services run by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock and Alphabet's Google, Clegg said.
The announcement provides an early glimpse into an emerging system of standards technology companies are developing to mitigate the potential harms associated with generative AI technologies, which can spit out fake but realistic-seeming content in response to simple prompts.
The approach builds off a template established over the past decade by some of the same companies to coordinate the removal of banned content across platforms, including depictions of mass violence and child exploitation.
In an interview, Clegg said he felt confident the companies could label AI-generated images reliably at this point, but said tools to mark audio and video content were more complicated and still being developed.
In the interim, he added, Meta would start requiring people to label their own altered audio and video content and would apply penalties if they failed to do so. Clegg did not describe the penalties.