
How AI Video Tools Can Be Exploited to Create Non-Consensual AI-Generated Intimate Videos

March 7, 2025

CAVEAT – We previously wrote an article about how AI was being used to generate non-consensual intimate images, which some teens are now weaponizing against their peers. (1) Now, AI has advanced to the point where this threat has escalated, making it possible to create highly realistic, non-consensual AI-generated short video clips. (2)

The rapid evolution of artificial intelligence has brought remarkable advancements in video generation, with tools that enable users to create high-quality AI-generated videos from simple text prompts. While these technologies hold incredible potential for creative expression and content production, they also present new concerns and dangers, particularly when misused to create questionable content or even non-consensual AI-generated intimate pictures and videos.

Recently, we wrote an article about how AI hugging and kissing video-generating apps can be weaponized to target students and even teachers.

However, AI-generated intimate videos, often called deepfake pornography, have been a longstanding concern for us as well. In the past, producing such content required advanced technical skills and complex deep learning models. Today, AI video generators such as PixVerse and Pika, to name just two, have made it alarmingly simple to create explicit material without consent, using nothing more than a single image of the target, which can then be weaponized and posted publicly online. As an example, we have seen how an AI video-generating program can take the same two pictures we used in the kissing and hugging video mentioned above and instead show me reaching over to remove Beth’s clothing and then fondling her naked breasts.

Many of these AI tools include safeguards designed to prevent the creation of sexually explicit content. However, individuals looking to bypass these restrictions have found ways to manipulate the systems by using coded language, alternative phrasing, or incremental modifications to their prompts. This means that even AI tools designed with ethical limitations can sometimes be tricked into generating inappropriate content. As mentioned in the 404 Media article (2), some of these companies “lack the most basic guardrails that prevent people from generating nonconsensual nudity and pornography, and are already widely used for that exact purpose in online communities dedicated to creating and sharing that type of sexualized content.”

We have visited several popular online forums where users share guides on how to manipulate AI video generators into producing non-consensual explicit content. After spending some time in these forums and reading the “how-to” comments, we have identified three ways this manipulation is currently being carried out:

  • Instead of directly asking for explicit content, users substitute terms with innocuous-sounding alternatives that the AI does not flag.

  • Users generate partial images or videos and refine them progressively to bypass AI content moderation systems.

  • Some bad actors manipulate the input prompts based on known biases or gaps in the AI’s moderation capabilities.

Parents, caregivers, and educators need to be aware of how rapidly advancing AI technology is intersecting with online safety risks. The creation and distribution of AI-generated explicit content can violate personal privacy and dignity, particularly when it involves youth and teens, but also unsuspecting adults. As AI tools become more sophisticated, the ability to manipulate images and videos without consent raises serious ethical and legal concerns.

Beyond privacy violations, AI-generated explicit content can also contribute to digital peer aggression (cyberbullying), harassment, and significant reputational damage. Youth and teens, in particular, may find themselves targeted by peers or malicious actors who use these falsified images and videos as weapons to humiliate and harm. This is also true for adults, especially those leaving an abusive relationship, where an ex-partner may use this technology as a form of technology-facilitated abuse.

Another alarming consequence is the potential for blackmail and sextortion, both of which are major concerns for law enforcement agencies. Criminals and online predators can exploit AI-generated content to coerce and extort victims into compliance, further endangering those who may already feel vulnerable. Last summer, we here at the White Hatter helped two teen girls to whom exactly this happened using AI technology.

Perhaps most troubling is the broader societal impact: the normalization of the production and consumption of non-consensual explicit content. As these AI tools become more accessible, there is a real risk of desensitizing individuals to the ethical violations involved, perpetuating harmful online behaviours that undermine consent and personal autonomy.

One of the most important steps parents, caregivers, and educators can take is to educate themselves and their children about AI video generation tools and their potential risks. Understanding how these technologies work and the ways they can be misused is crucial. Having open and ongoing conversations with youth and teens about these concerns can help them recognize potential dangers and make informed choices online.

Monitoring your child’s online activity is another key step in protecting them from AI-related threats. Be aware of the digital spaces they frequent and the platforms where harmful AI-generated content may be shared. Do you know what apps are on your child’s device and how they are using them? While respecting their privacy, maintaining an open dialogue about their online experiences can help them feel comfortable coming to you with concerns.

Encouraging ethical AI use is also an essential conversation to have with your child in today’s onlife world. If your child is interested in AI and digital creativity, guide them toward responsible use of these tools. Teach them about digital literacy, digital ethics, and the importance of reporting unethical or harmful AI-generated content when they encounter it.

If you come across non-consensual AI-generated content, take immediate action by reporting it to the platform hosting it. In more serious cases, such as when the content involves threats, blackmail, or minors, consider reporting it to law enforcement. Quick intervention can help mitigate harm and prevent further misuse. If you live in British Columbia, connect with the provincial “Intimate Image Protection Service,” a free service that can help guide you in getting these images and videos taken down if they are posted online. (3)

Advocating for stronger protections against the misuse of AI-generated content is a proactive way to address the broader issue. Supporting initiatives that push for stricter AI regulations and improved content moderation can contribute to a safer digital environment for everyone. Parents and caregivers can also encourage schools and policymakers to address these concerns through digital literacy education and to implement measures that prevent the spread of harmful AI-generated media when it is brought to a school’s attention; this is something we here at the White Hatter excel at.

The rise of AI-powered video generation tools like PixVerse and Pika underscores the double-edged nature of AI technological progress. While these tools offer incredible creative potential, they also introduce new risks, particularly when used to generate non-consensual explicit content. As AI capabilities continue to evolve, so too must our collective efforts to safeguard individuals, especially young people, from the dangers posed by this misuse.

Protecting against the exploitation of AI-generated content requires a multi-faceted approach. Parents, caregivers, and educators must take an active role in educating youth and teens about the ethical implications of AI, fostering responsible use, and encouraging open discussions about online safety. Platforms developing these technologies must strengthen their moderation efforts and implement more robust safeguards to prevent misuse. Additionally, policymakers and law enforcement must continue to adapt to the rapidly changing digital landscape, ensuring that regulations and legal frameworks keep pace with emerging threats.

Addressing this issue is not just about preventing harm; it is about shaping a digital future where consent, privacy, and ethical responsibility are prioritized. By staying informed, advocating for stronger protections, and fostering a culture of responsible AI use, we can help mitigate the risks posed by these powerful tools and create a safer online environment for everyone.


Digital Food For Thought

The White Hatter

Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech

References:

1/ https://thewhitehatter.ca/deepnudes-undressing-ai-generated-intimate-image-abuse-material/ 

2/ https://www.404media.co/chinese-ai-video-generators-unleash-a-flood-of-new-nonconsensual-porn-3/ 

3/ https://www2.gov.bc.ca/gov/content/safety/public-safety/intimate-images/intimate-images-support 
