OpenAI’s text-to-video artificial intelligence model Sora was leaked online following a protest by a group of artists. The incident has sparked a new debate about how artificial intelligence technologies affect artistic processes and how technology companies collaborate with artists. The artist group behind the protest, “PR Puppets”, claimed that OpenAI was using its members as unpaid research and development workers. The company denied these allegations, stating that participation in the Sora testing program was voluntary and that testers were under no obligation to provide feedback.
The PR Puppets group made Sora publicly accessible by sharing a link on the Hugging Face platform on Tuesday morning. The link connected to Sora’s actual API endpoints, using authentication credentials tied to OpenAI’s videos.openai.com domain. OpenAI revoked this access within a few hours, but in that window many users were able to try Sora and shared the videos they generated on social media.
The group described the leak as “a stance against companies abusing artists’ labor.” Following the protest, PR Puppets published an open letter claiming to speak on behalf of the roughly 300 artists who participated in Sora’s testing process. The letter accused OpenAI of using artists as unpaid labor and argued that only a select few artists’ works would be widely promoted.
Tension between OpenAI and artists
PR Puppets claimed that OpenAI “uses art as a tool in their public relations strategy.” The group said it was not opposed to using artificial intelligence as a tool in art, but argued that the process should be carried out more fairly for artists. It also stated that requiring OpenAI’s approval before content produced with Sora can be shared publicly restricts artists’ freedom.
OpenAI, for its part, stated that Sora is still in a research preview phase and that it is working to strengthen safety measures and ethical standards before making the platform available to a wider audience. “We are collaborating with hundreds of artists to improve Sora, and their suggestions along the way help prioritize new features,” the company said. It also reminded testers that artists participating in the program are bound by confidentiality agreements.
Sora was first introduced by OpenAI in February 2024 and drew great attention, especially in Hollywood, for its ability to turn text prompts into video. However, interest in Sora waned somewhat as companies such as Google and Meta announced video generation tools with similar capabilities.
OpenAI’s CTO at the time, Mira Murati, said in March that Sora would be made available to everyone by the end of 2024. However, the company’s product chief, Kevin Weil, said in an AMA (Ask Me Anything) session on Reddit that the platform’s full launch might be delayed. Weil attributed the delay to the need to perfect the model, finalize safety measures, and reduce the risks of impersonation.
The post-protest leak of Sora has reignited discussions about the role artificial intelligence technologies will play in artistic processes. Artists worry that while such tools can ease their creative work, they also foster an environment in which their labor is undervalued. Technology companies, for their part, emphasize the importance of collaborating with artists in developing these tools.
The Sora incident can be seen as a reflection of the complex relationship between artificial intelligence and the art world. Ensuring the ethical use of such technologies while protecting artists’ rights is likely to require greater dialogue and collaboration in the future.