Deepfake pornography is growing at an alarming rate, fueled by advances in AI and machine learning. Deepfakes make it appear that victims are part of explicit content without their knowledge or consent. Search engines like Google and Microsoft's Bing are unintentionally making it easier for people to find this type of harmful content.
A new study shared with WIRED sheds light on this grim reality. Over the past seven years, 244,625 deepfake porn videos have been uploaded to the most popular websites hosting such content. In the first nine months of 2023 alone, 113,000 videos were uploaded, a 54% increase over the total uploaded in all of the previous year. The study anticipates that by the end of 2023, more videos will have been uploaded than in all previous years combined.
But this is just part of a bigger problem. Many apps allow users to swap faces in still images or even remove clothes from pictures with just a few clicks. These apps and websites mainly target women, creating a culture of fear and violation. Sophie Maddocks, a researcher on digital rights, highlights the need to make these technologies less accessible in order to prevent potential crimes.
The study identified 35 websites dedicated to hosting deepfake porn videos, along with 300 other websites that, while not exclusively for deepfake porn, host such content. These sites are quite popular, and some videos have been viewed millions of times, showing the scale of demand for this disturbing content.
In a shocking incident in Spain, over 20 young girls came forward after AI tools were used to create explicit images of them without their knowledge. This is a stark example of how destructive and harmful deepfake porn can be.
Search engines play a significant role in directing people to these harmful websites. The study showed that most visitors found these websites through simple search queries. Google and Microsoft have taken steps to combat this issue, but their efforts are still in the early stages. Google has a form for reporting involuntary fake pornography, and Microsoft allows users to report deepfakes through its web forms.
The explosion of deepfake pornography, alongside the ease of creating such content and the lack of stringent laws, creates a dangerous mix. Experts believe that stronger laws, better public awareness, and more responsible action from tech companies are crucial to tackling this menace. In the meantime, deepfakes are powerful phishbait and open your organization to all kinds of downtime risks, such as ransomware and Business Email Compromise.
The impact on the victims is beyond measure. The WIRED report recounts the distressing experiences of female Twitch streamers who have been targeted by deepfakes, leading to further harassment and violations of privacy. Misusing someone's image in explicit content not only violates their digital identity but also causes severe emotional distress.
Deepfake pornography is a growing social engineering danger that needs immediate attention. The issue is global. Using a VPN, the researcher tested Google searches in Canada, Germany, Japan, the US, Brazil, South Africa, and Australia. In all the tests, deepfake websites were prominently displayed in search results. Celebrities, streamers, and content creators are often targeted in the videos.
As we move into a future where AI continues to blur the boundary between real and virtual, it's crucial to strengthen all defenses against such harmful and intrusive activities to ensure a safer digital environment for all. Keeping your web filters updated and training your workforce are critical steps.