Danish Kapoor

Racist videos produced with artificial intelligence reach millions of views on TikTok

Videos thought to have been produced with Veo 3, Google’s artificial intelligence video generation tool announced in May, have reached millions of views on TikTok. A report prepared by Media Matters, a media watchdog organization, revealed that much of this content is filled with racist stereotypes. Some of these videos, which target Black individuals in particular, were found to have exceeded 14 million views. The common thread across all of this content was the “Veo” watermark in the videos and the hashtags pointing to Google’s tool.

When introducing Veo 3, Google said the tool has filtering systems that prevent harmful content. These videos, however, show that the system may have problems with content control. TikTok states that its community guidelines leave no room whatsoever for hate speech and discrimination. Nevertheless, the fact that such content could reach millions has raised new questions about the effectiveness of the platform’s moderation mechanisms.

TIKTOK SAYS IT REMOVES CONTENT CONTAINING HATE SPEECH

The videos identified by Media Matters were usually limited to eight seconds in length. This matches the technical limits of Veo 3, since Google’s tool currently allows only eight-second video generation. Some content was created by stringing together multiple eight-second clips. These technical details suggest that there was no manual editorial intervention in the production process and that the videos were generated directly through AI.

TikTok officials said that content containing hate speech was swiftly removed and that many accounts posting such content were taken off the platform before the report was published. Ariane de Selliers, speaking on behalf of TikTok, said the rules are applied strictly. This statement, however, does not settle the question of whether the algorithms can detect harmful content quickly enough: the fact that the videos reached such high view counts suggests the content was systematically overlooked.

Media Matters’ findings are not limited to TikTok; similar videos have been reported on other social media platforms such as YouTube and Instagram. A separate investigation by Wired found similar racist content to be widespread on Instagram. Moreover, the examples target not only Black individuals but also include antisemitic content and videos aimed at immigrants and Asian people. This shows that the reach of AI-generated content is much wider.

Similar videos published on YouTube have drawn fewer views than on TikTok. Still, the fact that the content appears on multiple platforms calls for a holistic assessment of moderation processes. Although platforms declare that they stand against hate speech, the speed at which such content spreads exposes the weakness of existing filtering mechanisms. In this context, the oversight of artificial intelligence tools has once again come up for debate.

Veo 3 allows users to generate video and audio from written prompts alone. While this kind of production offers convenience, it can also pave the way for malicious use. Moreover, the lack of adequate visual or ethical filtering during content generation can allow the resulting videos to carry harmful material, leaving platforms to intervene only after the content has been published.

All these developments make it necessary to question not only the responsibility of technology companies but also the level of public awareness, because AI-generated content has the power to shape social perception. Especially on platforms like TikTok, which has a young user base, the spread of such content can have serious consequences. It is therefore clear that both production and distribution processes need to be revised.

Although Google states that it takes measures against harmful content, the fact that these videos made their way onto platforms shows that the system is not robust enough. Similarly, TikTok has been slow to enforce its community guidelines. That such content could gain this much reach clearly demonstrates the risk of leaving AI-assisted production unchecked. For this reason, it is evident that cooperation between social media companies and technology manufacturers should become closer.
