Danish Kapoor

Meta hides labels on AI-edited images on Facebook

Meta is set to switch to a less noticeable labeling system for AI-edited images on Facebook starting next week. The change raises concerns about combating disinformation, especially as AI technology is increasingly used to produce fake images and content. AI-edited images will still carry a disclosure, but users will need to find it by clicking the three-dot menu in the top right corner of the post and then scrolling down to the “AI info” option. This extra step can make it harder to judge the authenticity of digital content.

Under Meta’s new labeling system, images fully generated with AI tools will still carry a clearly visible “AI info” label. For images that were merely edited or altered with AI, however, that disclosure becomes more subtle, tucked away in the post’s menu. Those concerned about content manipulation say this could make it easier to spread misinformation, especially during election periods, since determining whether fine details in an image have been altered by AI will take more time and attention.

Changes and concerns in Facebook’s labeling system

Meta recently implemented a more comprehensive labeling system for content created and edited with AI technologies, covering not only images but also videos and audio. However, the company drew criticism in July over a labeling error: after many photographers claimed that Meta was mistakenly tagging content that was not created with AI, the company changed the label from “Made with AI” to “AI info.” Meta stated that with this change it would be more meticulous in its labeling process and avoid unnecessary errors. Yet the fact that AI-edited content is now disclosed less prominently calls that meticulousness into question.

Meta says it developed its labeling process in collaboration with other tech companies across the industry, and that these changes are intended to better reflect the scope of AI’s use in content. While the company claims the new practices are designed to improve the user experience, they could also accelerate the spread of fake news and manipulated images. Such content demands extra scrutiny when it reaches large audiences on social media, particularly during sensitive periods such as elections and political campaigns.

Maintaining transparency is essential to prevent AI-edited images from being used to spread misinformation. Meta’s move, however, makes it harder to tell whether content is authentic or has been manipulated by AI. The change may help conceal manipulations in visual content, increasing the risk that social media users encounter misleading material. Given the speed at which disinformation spreads, especially during election periods, users are likely to rely on visual verification tools more than ever.
