Google has asked the court to dismiss a lawsuit filed by Robby Starbuck, known for his anti-institutional, anti-diversity views, over claims that the company’s artificial intelligence systems produce defamatory results. Starbuck alleges that Google’s AI tools falsely linked him to sexual assault accusations and to a white supremacist.
Starbuck has previously sued Meta on similar grounds, claiming that the company’s artificial intelligence made it appear he had participated in the events at the US Capitol on January 6, 2021. Meta settled that case in August, and the company even hired Starbuck as a consultant to help address “ideological and political biases” in its AI chatbots, The Wall Street Journal reported.
Google calls the claims a product of “misuse”
Robby Starbuck is demanding a total of $15 million in compensation from Google. According to Google’s petition to the court, however, Starbuck’s claims rest on nothing more than AI hallucinations resulting from the “misuse” of developer tools. The company states that Starbuck did not share the prompts that produced the allegations in question and did not identify any real people affected by the content.
The case nonetheless continues, and Starbuck has not made a public statement at this stage. It remains unclear whether Google will seek a settlement as Meta did; for now, the company has chosen to fight the claims in court.
It is also noteworthy that no US court has so far awarded damages to a plaintiff in a defamation case over content produced by an artificial intelligence chatbot. This suggests that the boundaries of legal liability for AI systems remain unsettled.
At the same time, these cases are fueling broader legal and social debates about the ethical use of artificial intelligence, content safety, and developer liability. The question of when AI models involved in content generation can be held responsible is becoming increasingly critical for both technology companies and legal circles.