NEW YORK, Feb 7, 2024 – In the coming months, Meta Platforms plans to detect and label images created by artificial intelligence (AI) services from other companies. The detection will be done using a set of invisible markers embedded in the files, as announced by Nick Clegg, Meta’s President of Global Affairs, in a blog post on February 6.
The labels will be applied to any content carrying the markers that is posted on Meta’s platforms, including Facebook, Instagram, and Threads. The purpose is to inform users that the images, which often resemble real photographs, are actually digital creations. Meta already labels content generated using its own AI tools.
Once the system is implemented, Meta will extend the labeling process to images created on services operated by other companies such as OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Google, according to Clegg.
The announcement offers an early glimpse of the emerging system of standards that technology companies are developing to address the potential risks of generative AI, which can produce fake but highly realistic content in response to simple prompts.
The approach builds upon a template established over the past decade by some of the same companies to coordinate the removal of banned content across platforms, including content depicting mass violence and child exploitation.
Clegg expressed confidence that the companies can reliably label AI-generated images at this point. However, he noted that tools to mark audio and video content are more complex and still under development.
“While the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow,” Clegg stated in an interview with Reuters.
In the meantime, Meta will require individuals to label their own altered audio and video content and may impose penalties if they fail to do so, although Clegg did not specify what those penalties would be.
Clegg acknowledged that there is currently no viable mechanism for labeling written text generated by AI tools such as ChatGPT. “That ship has sailed,” he said.
It remains unclear whether Meta will apply labels to generative AI content shared on its encrypted messaging service, WhatsApp.
Meta’s independent Oversight Board recently criticized the company’s policy on misleadingly doctored videos, calling it too narrow and recommending that such content be labeled rather than removed.
Clegg agreed with the board’s assessment, saying Meta’s existing policy “is just simply not fit for purpose in an environment where you’re going to have way more synthetic content and hybrid content than before.” He cited the new labeling partnership as evidence that Meta is already moving in the direction the board proposed.