Disturbing Trend – Some Teenagers Using AI to Create Child Sexual Abuse Material to Target Other Teens/Students

November 3, 2023

In today’s onlife world, the ubiquitous accessibility of technology has brought both positive opportunities and negative challenges, especially when it comes to the safety and well-being of our children. A particularly disturbing trend has now emerged – some youth are using artificial intelligence (AI) to create child sexual abuse material (CSAM) that specifically targets their peers within school environments (1)(2). This concerning development calls for urgent attention from parents, educators, and law enforcement to address and prevent these harmful and criminal AI-generated pictures and videos from being produced, weaponized, and shared with others.

The use of AI to generate explicit content is a new and deeply troubling development. By leveraging AI technologies, some teens are manipulating images and videos to create forged content targeting their classmates, exploiting trust and relationships to commit what are actual criminal offences here in Canada.

Although these AI-generated images may not depict real events, Canadian courts have determined that AI-generated content portraying, and held out to be, an individual under the age of 18 is still considered Child Sexual Abuse Material, and is illegal to create, distribute, and possess here in Canada (3). Here in British Columbia, creating, posting, or distributing non-consensual AI-generated intimate images would also be unlawful under the new Intimate Images Protection Act. Teens need to understand that although they may think the creation of these AI-generated images is “funny” or just a “joke”, it isn’t, and it can create significant legal consequences for the person who created such an image – or even for the person who possesses one.

Unlike in the past, when morphed or photoshopped images were more easily identified, today’s AI-generated CSAM images and videos are often difficult to distinguish from authentic content, amplifying the devastating consequences these images can create within school communities.

As parents, educators, and law enforcement, it’s essential that we employ a multifaceted approach to tackle this significant issue head on before it becomes rooted – sooner rather than later, in our opinion! Here are some of our thoughts:

Open Dialogue and Education: Parents, teachers, and law enforcement must engage in open, honest, and age-appropriate discussions with teenagers about the responsible use of technology. We need to create discussions surrounding the ethical implications of AI, and the severe criminal consequences of creating, sharing, or engaging with inappropriate AI content. Education is pivotal in raising awareness about the dangers and legal ramifications of such actions – something we do here at the White Hatter through our school programs (4).

Digital Literacy and Online Safety Training: Incorporating digital literacy and online safety as part of the educational curriculum is crucial. Educators, parents, and caregivers need to teach students not only how to use technology, but also about its potential risks and ethical usage. Training should focus on critical thinking, discernment of authentic content, and the ethical use of AI and other technological tools – Again something that we do here at the White Hatter through our school programs (4).

Empowerment Through Reporting Channels: Schools and communities should establish and widely publicize anonymous reporting channels where students can safely report concerning online activities, such as the distribution of AI-generated CSAM images. By encouraging a culture of responsibility and reporting, teens can play an active role in safeguarding their peers and their school community from potential harm. Here in British Columbia, we have the Erase Bullying reporting portal, where students can submit a report specific to this topic (5).

Investigate Then Litigate: Before a youth is hauled into a principal’s office or confronted by law enforcement, investigative due diligence needs to take place to determine whether the picture being shared is in fact real or is an AI-generated image. Failing to do so will only further traumatize a youth who has been targeted.

In today’s onlife world, the responsibility to create a safer online environment for youth falls upon a collective effort. By fostering a culture of responsible technology usage, advocating for education and awareness, and establishing stringent consequences for these egregious and criminal actions, we can take significant steps toward safeguarding our children and school communities.

It’s crucial that parents collaborate with educators, law enforcement, and technology companies to address this worrying trend, working as a collective to protect the safety and welfare of our children in their online interactions. Unfortunately, many technology companies are not doing their part and should therefore be legislated by government to act! Working together, we can create a safer and more responsible onlife world where our kids can flourish, learn, and engage safely.

Digital Food For Thought

The White Hatter

