Artificial intelligence (AI) is being extensively used to create fake explicit photos and videos. Unlike in the past, when producing them required talent and advanced image-editing skills, fake nudes can now be generated automatically in a matter of moments. They are then used for both bullying and sextortion. The US Federal Bureau of Investigation (FBI) has issued a warning about this cyberthreat. It can happen to anyone: celebrities, successful businesspeople, and ordinary citizens alike.
In sextortion schemes, scammers typically try to extract money or other benefits from the victim by threatening to publish falsified compromising material. As the FBI points out, a particularly disturbing aspect of this threat is that scammers often use innocent photos and videos that people share on their social media profiles. They take an image of the victim’s face and, using AI-enabled deepfake technology, generate pornographic material from it.
Victims report that criminals take their benign social media posts (photos, videos, etc.) and edit them using AI content generators. This is not a new type of attack; what has changed is that professional photo-editing tools such as Photoshop are no longer necessary. Criminals no longer need image-editing skills, experience, or hours of tedious manual work to produce such fakes. All they need now is a few photos and AI.
With the emergence of ChatGPT in November 2022, AI tools have become generally available and easily accessible to everybody, which has revolutionised this type of attack.
The FBI is receiving a growing number of complaints from victims whose photos or videos have been edited into explicit material. This material is then circulated publicly on social media or pornographic websites to harass and extort the victims. “Advances in generative content creation technologies and the easy availability of personal photos online provide a new opportunity for criminals seeking and targeting their victims,” says Peter Kocmich, Global Cyber Security Delivery Manager at Soitron.
Extortionists most often demand money and threaten to share the photos and videos with the victim’s family members and friends on social media unless the victim pays. Alternatively, they demand genuine sexually themed images or videos from the victim. This new breed of deepfake scam is becoming increasingly widespread, not only in sextortion but also, for example, in scams involving virtual calls.
No one is completely safe from these extortion techniques unless they remove all their images from the internet. Even then, however, they may not be completely safe, as someone may secretly photograph or film them in real life and then exploit the footage for sextortion or other fraudulent purposes. With deepfake technology becoming increasingly advanced and accessible, it is essential that governments and legislative bodies take appropriate measures to protect their citizens. Some jurisdictions already have laws that penalise the creation of fake images without the consent of the person depicted. The UK, for example, plans to take action against the dissemination of such material as part of the forthcoming Online Safety Bill.
“If you become a victim of sextortion, we recommend keeping calm. Definitely do not pay anyone, as this will not guarantee the removal of explicit material from the internet. Contact the Czech Police instead. As a precaution, we recommend observing standard social network safety practices, such as refusing friend requests from strangers and using all available privacy settings,” advises Kocmich.
The first AI-generated nude deepfake photos and videos began circulating online in 2017, when users on forums such as Reddit tested the potential of AI to create sexually explicit content featuring female celebrities. While there have been attempts to stop the spread of such content, online tools and websites for creating deepfake nude photos remain easily accessible to anyone.