We’re Regulating Yesterday’s Social Media, While AI Social Platforms Rewrite the Rules

  • Writer: The White Hatter
  • 8 min read

Much of the current conversation about youth and social media, particularly in light of developments in Australia, is centred on age limits, bans, and restrictions that affect young people. These proposals are usually driven by a genuine desire to protect youth and teens. However, good intentions do not guarantee effective outcomes. In our view, this approach focuses on the wrong part of the problem.


The main issue is not that young people use technology. The issue is how social platforms are designed to function. If we want meaningful and durable protection for youth and teens, regulation must focus on the business models and system design of today’s dominant social media companies and, more importantly, the emerging AI driven platforms that are poised to disrupt them. Regulating the end user while leaving persuasive profit driven design untouched misses where the real power lies.


This is not an argument against regulation. Legislation and regulation absolutely matter; our argument is that we are regulating the wrong layer of the system. (1)


Age based restrictions focus almost entirely on who can access a platform. They do very little to address how those platforms are engineered to capture attention, shape behaviour, and maximize engagement. When legislation stops at age gates, it leaves the most influential drivers of harm intact. It also allows platform operators to treat compliance as a narrow checkbox exercise rather than a prompt to rethink the incentives baked into their systems. Once age verification is in place, there is often no further pressure on social media platforms to examine how profit driven design choices affect wellbeing.


That distinction matters because design choices are not accidental. They determine how long users stay, what content they are exposed to, how frequently they return, and how emotionally invested they become. These systems are tested, refined, and optimized with intention. That is where oversight belongs.


We have previously written about how we believe that legacy social media platforms such as Facebook, Instagram, Snapchat, and TikTok are likely approaching the end of their dominance with youth and teens. (2) They are not vanishing overnight, but youth and teens are starting to migrate their attention elsewhere. What is emerging in their place is something fundamentally different, something we are calling AI driven social interaction.


AI based social platforms and companionship style chatbots do not operate like traditional social media feeds. They are not built around public posts, likes, or peer feedback loops. They are built around personalized, one-to-one interaction and simulated emotional connection. This shift completely changes the nature of both engagement and risk.


We believe that many current legislative efforts and discussions are still anchored to yesterday’s platforms. They assume that risk primarily arises from public visibility, peer comparison, and social pressure. Those assumptions become increasingly outdated as AI driven systems move social interaction into private, persistent, and highly personalized and encrypted spaces.


Traditional social media is largely optimized for attention. Users scroll, react, and compare themselves to others. We acknowledge that there are real risks in that environment, something that we speak to in all our presentations, but there is also visibility, which is often overlooked from a safety perspective. Friends notice changes in behaviour, peers see posts, and teachers, parents, or classmates may pick up on warning signs when someone is struggling. The social nature of these platforms can sometimes surface signals that prompt support and outreach to those who need it most.


We would argue that setting age gating at 16 removes an important early warning system. When teens are pushed out of visible online spaces, changes in behaviour and emerging red flags are far less likely to be noticed by peers, educators, or even caregivers. It will be interesting to see if this argument is supported by what is happening in Australia, but only time will tell.


What parents, caregivers, and policymakers need to understand is that AI based social interaction functions very differently from traditional social media, yet it is largely absent from the public conversation around the legislation currently being proposed, or even enacted.


AI systems designed for companionship are optimized for engagement through individualized emotional, intellectual, and even spiritual attachment, rather than just attention, no matter the age of the user. They are built to feel responsive, affirming, patient, and always available. They remember past conversations, adapt their tone, mirror emotion, and reinforce continued interaction. These features are not incidental; they are central to how these systems retain users.


When a child or teen struggles while engaging with an AI chatbot, there are no peers watching, there is no shared social context, and the interaction is private, continuous, and frictionless. That shift should concern parents, caregivers, and policymakers alike.


Age based restrictions assume that access itself is the primary risk. That assumption does not hold up in a world where technology evolves faster than policy, particularly with AI. New tools and platforms emerge at a pace legislation cannot match. AI systems are already embedded in games, homework tools, search engines, and messaging apps. AI driven social communities are not hypothetical; they are the next step in a digital evolution that is moving at breakneck speed.


If adults are not immune to emotionally persuasive AI systems, then expecting children to navigate them safely while ignoring the design incentives that drive engagement is unrealistic. The issue is not that youth and teens lack strength or judgment. The issue is that the systems they are entering are engineered to be psychologically and emotionally powerful, especially in the absence of meaningful guardrails that hold companies accountable for how profit is generated.


If governments genuinely want to reduce harm, regulation must focus on how platforms are built and monetized, whether they are legacy social media networks or emerging AI based social and companionship systems. That means examining engagement optimization practices that exploit emotional vulnerability. It means transparency around recommendation systems and conversational AI behaviour. It means guardrails that prevent simulated dependency, exclusivity, and emotional manipulation. It means meaningful protections for privacy and security. It also means accountability when systems are designed to encourage prolonged use at the expense of wellbeing.


None of this diminishes the role of active, involved parenting. Conversations, boundaries, modelling healthy behaviour, and building digital literacy remain essential and are, we would argue, the keystone to keeping our kids safer online. Regulation should support parenting, not replace it.


Some advocates of an age gate set at 16 argue that this type of legislation would reset social norms around youth and teen use of social media. They point to the understandable pressure many parents and caregivers feel to grant access before their children are developmentally ready, and suggest that age based restrictions would reinforce parental decisions and reduce social pushback. Even when proponents acknowledge that such legislation would be imperfect, they argue that the cultural signal alone makes it worthwhile.


This reasoning feels appealing, but it is a weak foundation for policy. If legislation is to meaningfully protect youth and teens, it should meet a higher standard than symbolic reassurance. At a minimum, good policy should demonstrate a plausible mechanism for reducing harm, produce measurable outcomes, avoid unintended consequences, and focus accountability where power actually resides. Judged against those criteria, the “resetting social norms” argument falls short for several important reasons.


#1/ It confuses symbolism with effectiveness.


Changing a social signal does not automatically change behaviour in meaningful or lasting ways. Parents and caregivers already receive strong social messaging about issues such as screen time, nutrition, sleep, and substance use, yet those signals often fail to translate into consistent action. Legislation justified primarily as a cultural message avoids the harder question of whether the policy itself measurably reduces harm or simply creates the appearance of action.


#2/ It shifts responsibility away from adults rather than strengthening it.


Framing legislation as something parents and caregivers need to “back them up” implies that parental authority is insufficient without government reinforcement. While laws can and should hold companies accountable, using legislation to resolve social pressure among parents reframes a parenting challenge as a regulatory one. Research on behaviour change consistently shows that external enforcement can weaken intrinsic responsibility and increase reliance on rules rather than judgment. Policies that displace parental agency risk creating compliance without engagement.


#3/ It overstates social pressure as the primary driver of early access.

The reasons families introduce social media or connected devices earlier than intended are far more complex than peer pressure alone. Convenience, safety concerns, communication, extracurricular coordination, and family logistics all play a role. Surveys and parental reports routinely show these factors outweigh simple fear of social exclusion. Legislation does nothing to address these underlying drivers, which means it is unlikely to change behaviour in practice.


#4/ It lowers the bar for policy quality by normalizing weak design.


Arguing that a law is worthwhile “even if imperfect” sets a dangerous precedent when imperfection stems from avoiding the core problem. There is a meaningful difference between evidence based policy that can be refined over time and symbolic policy that was never designed to address root causes. Poorly designed laws risk unintended consequences, false confidence, and compliance theatre, all while diverting attention from more effective interventions.


#5/ It relies on a slippery and expandable justification.


If the primary value of legislation is that it makes parents feel supported in saying no, then almost any restriction could be justified on the same grounds. This logic prioritizes emotional relief and moral signalling over proportionality, evidence, and outcomes. History shows that policies rooted in reassurance rather than results tend to expand, harden, and persist even when they fail to deliver meaningful protection.


It is our belief that legislation should be evaluated on whether it changes conditions in ways that reduce harm, not on whether it sends a comforting message. Resetting norms may feel helpful, but without clear mechanisms, evidence, and accountability, it is not a sufficient rationale for policy that meaningfully affects the lives of youth and families.


When policy fixates on restricting youth access alone, it quietly shifts responsibility away from the companies that profit from persuasive design. Parents are left carrying the burden while platforms continue to optimize for engagement at any cost.


To be candid, age gating is the easiest lever to pull. It offers a quick, visible win for politicians and a relatively painless concession for social media companies. While these companies publicly push back, the resistance is noticeably restrained. There is a reason for that. Age restrictions are a simple box to check. They allow platforms to signal responsibility without meaningfully threatening their underlying business models or profit margins. From an investor standpoint, the trade-off is minor, predictable, and a cost of doing business.


By contrast, legislation that targets design choices, engagement optimization, data practices, or monetization strategies would strike at the core of how these platforms make money. If we really want to slay the social media juggernauts, this is how it should be done. That kind of legislation would trigger aggressive and sustained opposition. Social media companies would deploy every legal, political, and financial resource available to delay, dilute, or defeat it. The cost of fighting would be irrelevant compared to the long term financial risk of changing how their systems are engineered to drive profit.


This reality also places governments in a difficult position. Defending design focused regulation would be expensive, time-consuming, and politically risky. It would require long legal battles, technical expertise, and the willingness to withstand intense corporate pressure. At that point, the question becomes not what is best for children, but who is willing to blink first.


Right now, the platforms are betting that governments will choose the cheaper, safer path. Age restrictions are easier to pass, less costly to defend, and politically attractive to voters who want action now. They create the appearance of strong intervention without challenging the economic incentives that fuel harm in the first place.


That is why, we believe, age gating continues to dominate the policy conversation. It is not because it is the most effective solution. It is because it is the most convenient one for both sides.


The real question is not how we keep kids away from technology. The better question is: how do we design technology that does not exploit human vulnerability, especially in children and teens?


If we continue to legislate based on age gating and today’s social media platforms, while ignoring the rapid rise of AI driven social systems, we risk crossing a regulatory Rubicon without realizing it. Regulating users, instead of design, may offer political comfort, but it will not deliver lasting protection. Regulating the system itself moves us closer to solutions that can actually keep pace with the world our kids are entering: an approach to legislation where the principles stay the same but are diverse in their application.


The question is whether governments have the hunger to fight the hard fight rather than the easier one. Our recommendation to the Canadian government: “elbows up,” and let’s fight the hard fight. Our kids are worth it!



Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech



References



