6-4-2024 (CALIFORNIA) Meta, the parent company of Facebook and Instagram, announced on Friday its plans to introduce labels for AI-generated media starting in May, aiming to address concerns surrounding deepfakes and provide greater transparency to users.
In a significant shift, the social media giant said it will no longer remove manipulated images and audio that do not violate its other rules, opting instead to label and contextualize such content in order to respect freedom of speech.
The decision follows criticism from Meta’s oversight board, which urged the company to revamp its strategy in dealing with manipulated media, especially in light of advancements in AI technology and the rising prevalence of deepfakes.
Concerns have been raised regarding the potential misuse of AI-powered applications for spreading disinformation, particularly during pivotal election periods, both in the United States and globally.
Under Meta’s new policy, content created or altered with AI, including video, audio, and images, will be labelled with a “Made with AI” tag. Additionally, content deemed at high risk of misleading the public will receive a more prominent label.
Monika Bickert, Meta’s Vice President of Content Policy, stated in a blog post that the company acknowledges the importance of transparency and additional context in addressing manipulated content.
The rollout of these labelling measures aligns with a February agreement among major tech companies and AI developers to collaborate on combating manipulated content aimed at deceiving voters.
Meta will implement the change in two phases: labelling of AI-generated content begins in May 2024, and in July the company will stop removing manipulated media solely on the basis of its previous policy.
Content manipulated with AI will only be removed from the platform if it violates other rules, such as those prohibiting hate speech or voter interference.
While these measures represent progress, concerns remain about potential loopholes, particularly around open-source tools that may not adopt the watermarking standards used by major AI players.
Recent incidents involving convincing AI deepfakes, including a manipulated video of US President Joe Biden and a robocall impersonation targeting voters, underscore the urgency of addressing the challenges posed by manipulated media.