Sexually explicit Taylor Swift AI images circulate online, prompt backlash
A slew of sexually explicit, AI-generated images of Taylor Swift is making the rounds on X, formerly Twitter, angering fans and highlighting the harmful implications of the technology.
In one mock photo, created with an AI-powered image generator, Swift is seen posing inappropriately at a Kansas City Chiefs game. The Grammy Award winner has appeared increasingly at the team's games in real life, supporting her football beau Travis Kelce.
While some of the images have been removed for violating X's rules, others remain online.
Swift has not commented on the images publicly.
USA TODAY has reached out to Swift's rep for comment.
AI images can be created using text prompts and generated without the subject's consent, creating privacy concerns.
AI-generated deepfakes — media manipulated with machine-learning techniques to create realistic but fake images, video and audio — have also been used increasingly to create fake celebrity endorsements.
Fans online were not happy about the images.
"whoever making those taylor swift ai pictures going to heII," one X user wrote.
"'taylor swift is a billionaire she’ll be fine' THAT DOESN’T MEAN U CAN GO AROUND POSTING SEXUAL AI PICS OF HER ..." another user wrote.
The phrase "protect Taylor Swift" began trending on X Thursday.
A wide variety of other fake images have spread online in recent years, including photos of former President Donald Trump being arrested, tackled and carried away by a group of police officers that went viral on social media last year. At the moment, it's still possible to look closely at images generated by AI and find clues they're not real. One of the Trump arrest images showed him with three legs, for example.
But experts say it's only a matter of time before there will be no way to visually differentiate between a real image and an AI-generated image.
"I'm very confident in saying that in the long run, it will be impossible to tell the difference between a generated image and a real one," James O'Brien, a computer science professor at the University of California, Berkeley, told USA TODAY. "The generated images are just going to keep getting better."
Meanwhile, a bipartisan group of U.S. senators has introduced legislation called the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act of 2024. Supporters say the measure would combat AI deepfakes, voice clones and other harmful digital impersonations.
Contributing: Chris Mueller, USA TODAY; Kimberlee Kruesi, The Associated Press