Social media platform X has taken action to block searches for Taylor Swift after explicit artificial intelligence images of the singer-songwriter went viral. The deepfakes flooded various social media sites, leading to renewed calls for legislation targeting the misuse of AI for sexual harassment. Here’s what you need to know about the Swift episode and the legal landscape surrounding deepfakes.
On Wednesday, sexually explicit, AI-generated images of Taylor Swift began circulating on social media sites, gaining particular traction on X. One image of the megastar was viewed 47 million times during the approximately 17 hours it was live on X before being removed on Thursday. Reality Defender, a deepfake-detection group, said it tracked dozens of unique images that reached millions of people across the internet before they were taken down. X has since blocked searches for Swift and queries relating to the photos, while Instagram and Threads display a warning message when users specifically search for the images.
In response to the incident, X’s safety account released a statement reiterating the platform’s zero-tolerance policy on posting nonconsensual nude images. Meta also condemned the content, while OpenAI and Microsoft stated they have safeguards in place to limit the generation of harmful content on their platforms.
Deepfakes are a form of synthetic media created or manipulated with artificial intelligence, and can be misused to produce fake videos and images that appear real. Legislation addressing deepfakes varies widely: some countries have enacted laws prohibiting the distribution of harmful deepfakes, while others have hesitated to regulate more strictly out of concern about stifling technological progress and restricting free speech.
The White House expressed alarm at the images, while Swift’s fanbase mobilised against them, calling on social media companies to enforce their own rules against the spread of nonconsensual imagery of real people. US lawmakers have likewise stressed the need for safeguards to protect individuals from the misuse of deepfakes.
