The escalating threat of deepfake pornography 

Technology is evolving relentlessly, and AI – its latest and most controversial form – is now empowering companies, platforms, and entire industries. But this transformation carries hazardous consequences that threaten our fundamental human rights; deepfake pornography is one of them.

Text: Lada Vasiliki

Although deepfake pornography has existed for years, its quantity and accessibility are growing rapidly. There are many cases of teenage girls waking up one day to discover that their face has been superimposed onto somebody else's body and featured in illegal content. This is no longer a mere concern or a looming threat, but a humiliation and a violation of the privacy and dignity of thousands of victims. The most disturbing part is that AI-generated deepfakes overwhelmingly target women – and among them, underage girls.

The evolution of deepfake pornography

The phenomenon of pornographic deepfakes was first observed in 2017 on forums like Reddit, a platform where people discuss a wide range of topics. Initially, female celebrities, influencers, and streamers were the victims. Later, however, the phenomenon turned far more dangerous, as it began targeting teenagers.

The arrival of apps like DeepNude and Telegram bots in the late 2010s marked only the beginning of the circulation of this abusive content. In the case of the Telegram bots, Sensity AI – a research company that tracks deepfake videos online – found that these platforms had affected at least 100,000 victims, including underage girls. The company also discovered that between 90% and 95% of deepfake videos were nonconsensual porn, the vast majority of it depicting women.

Another analysis, conducted by an anonymous researcher in 2023, showed that over 113,000 such videos were uploaded to 35 different websites – a major increase from 2022, when 73,000 videos were uploaded. These figures reveal how colossal the issue of deepfakes has become.

The challenge of justice

Experts fear that victims of deepfake pornography are very unlikely to achieve justice, since it is almost impossible to detect deepfakes across the internet and to identify their creators. On the one hand, the modern digital space is enormous, free AI tools are endlessly available, and anonymity is as common as breathing. On the other hand, this type of content is in demand on pornography platforms and within illicit online networks. So, is it really a dead end?

International response

The case that finally raised significant concern across the US and Europe was the circulation of AI-generated deepfakes depicting Taylor Swift in illegal, sexualized content. Access to the content was consequently blocked on X and on most search engines.

This also served as a wake-up call for Europe, sealing a deal to criminalize such content across the EU by mid-2027. Safeguarding and strengthening women's rights is now one of the European Commission's priorities. In the meantime, victims of explicit deepfakes in Europe have to rely on the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), national defamation laws, and the world's first comprehensive AI law, the European AI Act. Under the AI Act, deepfakes are regulated through transparency obligations.

Temporary solutions

Deepfake pornography is a form of digital abuse, a pressing and evolving challenge that demands attention. As legal measures are being put in place, individual users and technology giants must collaborate to stop the circulation of explicit content. 

But how can you protect yourself against deepfake pornography? First of all, some experts recommend that users educate themselves about these new technologies, the consequences of AI, and the harm it can cause – media and digital literacy is therefore essential. Secondly, tightening your privacy settings and opting for a private account rather than a public one is another way of protecting your personal content from being abused. Furthermore, users can help stop the circulation of illegal content by not supporting AI tools that edit images, such as face-swap apps.

However, action also needs to be taken by search engines like Google and Microsoft's Bing themselves, so that such content is harder to find through their ranking systems. With that in mind, there is hope for a future where deepfake pornography is relegated to a dark memory instead of remaining a vicious reality.
