
When tech giants like Apple, and more recently Google, announced tools designed to detect and warn youth and teens about nudity in messages, many parents and digital safety advocates thought, “finally.” At first glance, features like Apple’s “Communication Safety” (within iMessage) and Google’s “Sensitive Content Warning” (within Messages on Android) appear to be important steps forward in helping children avoid exposure to sexually explicit content. However, while these features are a welcome addition, they fall significantly short of addressing one of the most serious and complex issues facing youth online today: the sending and receiving of both consensual and non-consensual intimate images.
Despite the promise of AI-powered nudity detection and on-screen warnings, these tools do very little to actually prevent young people from becoming victims or participants in digital sexual harm. In this article, we’ll explore exactly why these technologies don’t go far enough, and what parents need to understand in order to truly protect their kids.
Both Apple and Google use artificial intelligence to analyze images sent through their native messaging platforms. If nudity is detected:
- Apple’s Communication Safety (1) will blur the image, provide a warning message, and offer guidance. Note that parents are not notified.
- Google’s Sensitive Content Warning (2) works similarly, offering a blurred preview and the option to “View anyway” or “Delete,” often accompanied by links to support resources. Note that parents are not notified.
On the face of it, these features sound helpful. However, here’s the reality: the final decision still rests entirely with the youth or teen; with a single tap of the screen, they can still see the image.
These tools are designed to inform, not intervene. They ask the youth or teen to pause and consider, but if the youth or teen still wants to view or send the image, they can simply tap a button and move forward. Let’s be honest, what youth or teen would not tap the “View anyway” option out of curiosity?
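To make this concrete, here is a minimal sketch, in Python, of the “warn, then defer” pattern both features follow. To be clear, this is not Apple’s or Google’s actual code; every name in it (detect_nudity, show_blurred_warning, and so on) is an illustrative assumption. It simply shows where the final decision sits.

```python
# A conceptual sketch of the advisory flow described above.
# NOT Apple's or Google's real code; all names are hypothetical.

def detect_nudity(image_bytes: bytes) -> bool:
    """Stand-in for the on-device AI classifier."""
    return True  # assume the classifier flagged this image

def show_blurred_warning() -> str:
    """Stand-in for the blurred preview plus warning screen.
    Returns whichever button the youth or teen taps."""
    return "view_anyway"  # curiosity often wins

def handle_incoming_image(image_bytes: bytes) -> bool:
    """Returns True if the image ends up on the minor's screen."""
    if not detect_nudity(image_bytes):
        return True  # nothing flagged: display normally
    choice = show_blurred_warning()
    # The critical line: the system defers to a single tap.
    # No enforced barrier stands between the warning and the image.
    return choice == "view_anyway"
```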
There’s no enforced barrier, no mandatory pause, and no system-level block that stops the behaviour outright. In emotionally charged or high-pressure situations, common scenarios for sextortion or peer manipulation, this kind of warning is easy to ignore.
For example:
- A youth or teen may feel pressured by a boyfriend, girlfriend, or online acquaintance to send an explicit photo.
- A youth or teen may receive a nude photo and feel too embarrassed or uncertain to delete or report it.
In both cases, the system may warn them, but it doesn’t stop them. And that’s the problem.
There’s No Accountability or Follow-Up
Another major flaw is the lack of a feedback loop. Even when a warning is issued:
- Parents and caregivers are not notified by default.
- Notifications are only available for younger children and require specific family sharing settings.
- There is no post-action education or follow-up for youth and teens who choose to view or send the image.
This creates the illusion that if nothing happens after the fact, then no real harm was done, a belief that can dangerously normalize risky behaviour over time.
When parents and caregivers think of harmful image sharing, many imagine strangers sending unsolicited nudes. Yes, that is a real issue, but in reality, intimate image sharing is often much more subtle and manipulative: such pictures can be sent consensually, or coerced by peers or romantic partners within a relationship.
A warning message like, “Are you sure you want to send this?” does nothing to address the potential emotional manipulation at play. In cases like these, where a youth or teen is scared or confused, such warnings may feel more like a speed bump than a stop sign.
Apple’s Communication Safety and Google’s Sensitive Content Warning are very small steps in the right direction for some youth and teens, but they are not solutions to the deeply personal and socially complex issue of non-consensual image sharing among youth.
These tools may blur a photo, but they don’t blur the manipulation, fear, confusion, or shame that often lead teens to engage in risky behaviour. They don’t offer enough support, education, or intervention to truly keep youth and teens safer in emotionally charged online situations.
Despite having the advanced on-device technology to detect nudity in images, Apple and Google have chosen NOT to implement hard restrictions that would prevent anyone under the age of 18 from sending or receiving nude images through their messaging platforms. Instead, both companies rely on warning systems: Apple blurs images and displays a caution message via its Communication Safety feature, while Google presents a Sensitive Content Warning. In both cases, the youth or teen retains full control to bypass the alert and “View” or “Send Anyway.” These choices are framed by both companies as respecting user privacy and autonomy, but they fall short of truly protecting minors from non-consensual image sharing or sextortion.
The reality is, Apple and Google likely could implement stronger protections. Their AI models are already capable of detecting nudity at a high level of accuracy. They could, for example, auto-block any image flagged as explicit from being sent or received on accounts tied to users under 18. So why don’t they? The answer likely lies in a mix of legal caution, privacy philosophy, and fear of overreach. Blocking content outright could be seen as overstepping parental rights, violating user freedoms, or misstepping in countries where age-of-consent laws and cultural views differ. However, the cost of this caution is that youth and teens, particularly those vulnerable to manipulation, pressure, or coercion, are left without a real safeguard, despite the existence of tools that could offer it.
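By way of contrast, here is what a hard restriction could look like in principle. Again, this is only a hedged sketch: the under-18 account check and the blocking policy shown here are our assumptions about what such a safeguard might do, not a real Apple or Google API.

```python
# Hypothetical extension of the earlier sketch: an enforced block for
# under-18 accounts instead of a dismissible warning. The age check and
# policy below are assumptions for illustration, not a real platform API.

def detect_nudity(image_bytes: bytes) -> bool:
    """Stand-in for the same on-device classifier."""
    return True  # assume the classifier flagged this image

def deliver_image(image_bytes: bytes, account_age: int) -> bool:
    """Returns True if the flagged image is ever shown."""
    if not detect_nudity(image_bytes):
        return True  # nothing flagged: deliver normally
    if account_age < 18:
        # Enforced barrier: the image is blocked outright.
        # There is no "View anyway" button for a minor to tap.
        return False
    return True  # adult accounts could keep the advisory flow
```

Notice that the difference between the two sketches is a single conditional, which is precisely the point: the technology is not the obstacle, the policy choice is.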
While Apple’s Communication Safety and Google’s Sensitive Content Warning represent well-intentioned efforts to address youth and teen exposure to explicit content, they ultimately fall short of meaningfully protecting them from the risks of consensual and non-consensual intimate image sharing. These features function more as polite suggestions than protective barriers, offering warnings that are easily bypassed with a single tap. In emotionally charged moments, where manipulation, coercion, or peer pressure is involved, such warnings do little to prevent harm and even less to support the youth afterward.
The absence of mandatory intervention, parent or caregiver notification, or follow-up education makes it clear that these tools prioritize user autonomy and corporate caution over the safety of minors. This hands-off approach may align with broader privacy values, but it fails to address the complex social and emotional dynamics teens face when navigating intimate exchanges online. Apple and Google have the technological capability to do more, far more in our opinion, but have chosen to tread lightly to avoid legal, cultural, and ethical backlash.
However, this cautious stance comes at a cost: the normalization of risky behaviours and the continued vulnerability of young people to online sexual exploitation. Blurring a nude image is not the same as preventing its harm. Real safety would require not just smarter AI, but braver decisions, ones that recognize the difference between surveillance and support, between restriction and responsible protection.
Until tech giants are willing to take those steps, the burden remains on parents, caregivers, educators, and advocates to fill the gap by having hard conversations, setting expectations, and fostering environments where digital literacy and safety are not just about what youth and teens can do, but about what they should do, and why it matters. Because ultimately, it’s not about limiting youth or teen freedom; it’s about empowering them with real safeguards and real support when it matters most.
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech
References:
1/ https://support.apple.com/en-ca/105069
2/ https://support.google.com/messages/answer/15724426?hl=en