Google has released updates that will make explicit deepfakes harder to find in Search, part of its long-running fight against realistic manipulated images. The updates also aim to make it easier to remove fake images created without consent from Search results.
Google’s fight against deepfake images
Under Google’s policies, users have long been able to request that such images be removed. With the new update, once a person’s removal request is approved, Google will also filter out all explicit results from similar searches, and its systems will scan for duplicates of the image and remove them as well. This is meant to address a common concern of victims: that the same image keeps reappearing on other sites.
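Google hasn't published how its duplicate scanning works; the sketch below is a simplified, hypothetical illustration that fingerprints images by an exact content hash. A production system would use perceptual hashing to catch near-duplicates (resized or recompressed copies), which exact hashing misses.

```python
import hashlib

def content_hash(image_bytes: bytes) -> str:
    """Fingerprint an image by its raw bytes (exact-match only)."""
    return hashlib.sha256(image_bytes).hexdigest()

def find_duplicates(removed_image: bytes, indexed_images: dict) -> list:
    """Return URLs whose indexed image matches the removed one."""
    target = content_hash(removed_image)
    return [url for url, data in indexed_images.items()
            if content_hash(data) == target]

# Usage: the same image bytes hosted at two URLs are both flagged.
index = {
    "https://site-a.example/img1": b"fake-image-bytes",
    "https://site-b.example/img2": b"fake-image-bytes",
    "https://site-c.example/other": b"different-bytes",
}
flagged = find_duplicates(b"fake-image-bytes", index)
```

Here `flagged` would contain the first two URLs, so both copies can be removed from results in one pass.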
In addition, Google’s ranking systems have been updated. When a user searches for explicit deepfakes of a specific person by name, the results will instead show “high-quality, non-explicit content.” For example, if there are news articles about that person, those will appear in the results, which may also include content discussing the societal impact of deepfakes, Google said.
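The described behavior amounts to detecting a deepfake-seeking query about a named person and surfacing only non-explicit results. The sketch below is a toy illustration of that logic; the trigger terms, field names, and name matching are all assumptions, not Google's actual implementation.

```python
DEEPFAKE_TERMS = {"deepfake", "deepfakes"}  # illustrative trigger terms

def seeks_explicit_deepfake(query: str, known_names: set) -> bool:
    """True if the query pairs a known person's name with deepfake terms."""
    lowered = query.lower()
    has_term = any(term in lowered.split() for term in DEEPFAKE_TERMS)
    has_name = any(name.lower() in lowered for name in known_names)
    return has_term and has_name

def rerank(query: str, results: list, known_names: set) -> list:
    """For flagged queries, keep only non-explicit results."""
    if seeks_explicit_deepfake(query, known_names):
        return [r for r in results if not r["explicit"]]
    return results

# Usage: a news article survives, an explicit match is filtered out.
results = [
    {"url": "news.example/article", "explicit": False},
    {"url": "bad.example/fake", "explicit": True},
]
safe = rerank("jane doe deepfake", results, {"Jane Doe"})
```

For an ordinary query that doesn't match the trigger, `rerank` passes the result list through unchanged.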
Google doesn’t want these efforts to sweep up legitimate content, such as a nude scene from an actor’s film. The company acknowledges that it still has a lot of work to do to distinguish real explicit images from fake ones. One measure it has implemented is to demote sites that receive a high volume of removal requests for manipulated images in its search rankings. Google notes that such sites are not “high-quality sites,” adding that this approach has been successful in the past against other types of malicious content.
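The demotion signal described above can be thought of as scaling down a site's ranking score once its removal-request count crosses some threshold. The sketch below is a hypothetical illustration; the threshold and penalty values are invented for the example.

```python
REQUEST_THRESHOLD = 50   # assumed: requests before a site is treated as low quality
DEMOTION_FACTOR = 0.1    # assumed: fraction of the original score retained

def adjusted_score(base_score: float, removal_requests: int) -> float:
    """Demote a site's score if it draws many manipulated-image removal requests."""
    if removal_requests > REQUEST_THRESHOLD:
        return base_score * DEMOTION_FACTOR
    return base_score

# Usage: a heavily reported site falls below a clean one despite a higher base score.
ranked = sorted(
    [("clean.example", adjusted_score(0.9, 2)),
     ("abusive.example", adjusted_score(0.95, 120))],
    key=lambda pair: pair[1],
    reverse=True,
)
```

A volume-based signal like this sidesteps the hard problem of classifying individual images as real or fake: the site's own pattern of complaints does the work.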