Artificial Intelligence (AI)

 

UPDATED: April 2025

CAVEAT: The aim of this chapter is not to present a comprehensive guide to artificial intelligence (AI), but rather to provide parents and educators with a general understanding of the transformative potential of AI and its associated challenges. We hope to equip parents and educators with the information needed to initiate conversations with their children or students about the positive and negative impacts of AI, both known and unknown. We believe these conversations need to start taking place both at home and in our schools.

Did you know that a 2023 survey commissioned by the Family Online Safety Institute in the United States revealed that 67% of American teenagers had used or experimented with AI? UNICEF reported that in the UK, 80% of teenagers aged 13-17 are using “generative” AI tools, while 40% of children aged 7-12 are also doing so. https://www.unicef.org/globalinsight/reports/policy-guidance-ai-children The statistic regarding 7-12-year-olds is intriguing, especially considering that many generative AI platforms typically require users to be at least 13 years old. Once again, like most technological advancements, AI was initially crafted with adults in mind, not youth.

In fact, in our presentations, we have learned that youth are using generative AI in three basic ways:

  1. As a research tool to gather information
  2. As a tool for private and personal conversations, such as when using Snapchat’s MyAI
  3. To create pictures using AI programs such as DALL-E

Artificial intelligence (AI) has come a long way in the past decade. With significant breakthroughs in machine learning and deep learning over the past year, we are now seeing AI that can do things that were once thought impossible, or only found in movies or on TV shows like Star Trek. As a result, many of the social media platforms that are most popular with youth are quickly integrating the newest form of AI, Generative AI, into their platforms. It’s a reality and AI has become transformative, ubiquitous, and often undetectable in our onlife world.

There are two general types of artificial intelligence that parents and educators should be aware of:

#1: Discriminative/Predictive AI (oldest)

Discriminative AI is designed to analyze patterns and make predictions. Examples include Siri, GPS navigation, natural language processing, Amazon’s product-suggestion engine, and medical diagnostic tools. These systems rely on predefined data and are trained to recognize specific patterns. Discriminative AI is great at performing tasks with a high degree of accuracy, but it lacks creativity and the ability to create something new. Here are some examples of how Discriminative AI has been integrated into the onlife world:

    • Image recognition: Discriminative AI is widely used in image recognition, where it can accurately identify and classify objects in images, such as faces, animals, and vehicles.

    • Fraud detection: Banks and financial institutions use discriminative AI to detect fraudulent transactions by identifying patterns in customer data that indicate suspicious behavior.

    • Language translation: Discriminative AI is used to translate text from one language to another, by analyzing and classifying patterns in the language and syntax of both languages.

    • Voice recognition: Virtual assistants like Siri and Alexa use discriminative AI to recognize and interpret spoken commands and respond appropriately.

    • Medical diagnosis: Discriminative AI is used in medical diagnosis to analyze patient data, identify patterns, and classify diseases or conditions, which can assist doctors in making more accurate diagnoses and treatment plans.
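For readers who want to peek under the hood, the pattern-recognition idea behind Discriminative AI can be sketched in a few lines of Python. This is a toy illustration invented for this chapter, not any vendor’s actual system: a tiny “spam vs. normal” message classifier that learns word counts from labeled examples and then predicts whichever label its words best match.

```python
from collections import Counter

# Toy "discriminative" classifier: learn word counts per label from
# labeled training examples (all invented for illustration).
TRAINING = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to noon", "normal"),
    ("see you at the game tonight", "normal"),
]

counts = {"spam": Counter(), "normal": Counter()}
for text, label in TRAINING:
    counts[label].update(text.split())

def classify(message):
    # Score each label by how often the message's words appeared in
    # that label's training examples; pick the higher score.
    scores = {
        label: sum(c[word] for word in message.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize money"))     # prints "spam"
print(classify("the meeting tonight"))  # prints "normal"
```

Real systems use vastly larger datasets and far more sophisticated models, but the core limitation is the same one described above: the system can only recognize patterns it has already seen; it cannot create anything new.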
 

#2: Generative AI (newest)

In contrast to Discriminative AI, Generative AI has the ability to produce completely novel and distinctive creations independently, such as text, images, music, and full video sequences that are imaginative and unforeseeable in their content generation. Rather than following pre-existing templates, these creations are generated from the patterns the AI has learned in its training data.

Presently, there are three forms of generative AI to be aware of (this could change as AI matures):

#1 Large Language Models (LLMs): Examples like ChatGPT are crafted to comprehend and generate text, serving as engaging chat partners and effective work assistants. LLMs are trained on extensive language data derived from human text. They generate text based on language prediction. However, LLMs may give outdated responses if the data they are trained on is not current, though plug-ins often provide continuous internet access.
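The phrase “language prediction” can be made concrete with a deliberately tiny sketch. The following Python snippet is our own toy illustration, not how ChatGPT is actually built: it simply counts which word follows which in a short sample sentence, then always predicts the most frequent follower. LLMs do something conceptually similar, but over billions of documents and with far richer context than a single previous word.

```python
from collections import defaultdict, Counter

# Tiny "training corpus" (invented for illustration).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

def generate(start, length=5):
    # Chain predictions together, one word at a time.
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(predict_next("on"))  # prints "the" (the only word seen after "on")
print(generate("the"))
```

Because the model only echoes patterns from its training text, it can sound fluent while knowing nothing about truth or currency, which is exactly why LLMs can give outdated or inaccurate answers.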

#2 Image Generators: Modern AI image generators, such as DALL-E (a ChatGPT add-on), represent another type of GenAI. These tools predict and create images, functioning similarly to other GenAI systems. They are trained on the multitude of images available online.

#3 Recommendation Systems: Though less conspicuous than LLMs or image generators, recommendation systems increasingly utilize GenAI and often operate without user awareness. These systems predict and influence consumer behavior by suggesting products, content, or services based on user preferences and actions. They are extensively used in e-commerce, streaming services, and social media platforms.
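To make the idea concrete, here is a minimal sketch of one classic recommendation technique, collaborative filtering (“people similar to you liked this”). The users, items, and ratings below are invented for illustration; commercial systems are vastly more sophisticated:

```python
# Invented ratings: user -> {item: score out of 5}.
ratings = {
    "alice": {"sci-fi": 5, "cooking": 1, "sports": 2},
    "bob":   {"sci-fi": 4, "cooking": 1, "history": 5},
    "carol": {"cooking": 5, "sports": 4},
}

def similarity(a, b):
    # Overlap score: sum of products of ratings on items both users rated.
    shared = set(ratings[a]) & set(ratings[b])
    return sum(ratings[a][i] * ratings[b][i] for i in shared)

def recommend(user):
    # Find the most similar other user, then suggest their top-rated item
    # that `user` has not rated yet.
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    new_items = {i: r for i, r in ratings[nearest].items()
                 if i not in ratings[user]}
    return max(new_items, key=new_items.get) if new_items else None

print(recommend("alice"))  # bob is most similar to alice, so: "history"
```

Even this toy version shows why such systems feel invisible: the user never asks for a recommendation; the system infers it from behavior.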

One of the most well-known examples of generative AI is ChatGPT https://thewhitehatter.ca/blog/chatgpt-friend-or-foe-what-parents-educators-need-to-know/ , which can generate documents or answer questions based on word prompts. An example of a word prompt:

“Write me a high school level essay on the book Catcher In The Rye as if it was written by a high school student”

Within seconds ChatGPT will write a grammatically correct essay that can be handed in as a school assignment.

Another well-known Generative AI is DALL-E, https://openai.com/product/dall-e-2 which can also create images from scratch using word prompts rather than traditional coding. An example of a word prompt in DALL-E:

“Create a picture of a high school student in a classroom sitting at a desk using ChatGPT to write an essay on Catcher In The Rye”

Again, DALL-E will create such a picture within seconds.

Other Generative AI platforms are appearing almost daily. Many social media vendors are also incorporating their own AI products into their platforms and products, such as Google (Bard AI), Apple (Apple Intelligence), and Meta (Llama 3).

One of the reasons why Generative AI has come so far so quickly is its unique ability to learn and improve at a much faster pace than humans. As an example, to demonstrate the generative power of ChatGPT:

    • On a U.S. S.A.T. test, ChatGPT scored 94%, while the human average is around 50%

    • On a U.S. law school bar exam, ChatGPT scored in the 90th percentile

    • On a verbal-linguistic IQ test, where a score of 140 is considered “genius,” ChatGPT scored 152, placing it in the 99th percentile

To demonstrate Generative AI’s explosive adoption by the masses, ChatGPT achieved the milestone of 1 million users faster than any other publicly available technology in human history:

    • Netflix took 3.5 years,

    • Twitter took 2 years,

    • Facebook took 10 months, 

    • Instagram took 2.5 months, 

    • ChatGPT took 5 days

So, what is the difference between “Human Intelligence” and “AI Machine Intelligence”?

 

Human Intelligence:

Human intelligence involves four interconnected human attributes:

    • Intelligence (the ability to solve problems)

    • Morals (the ability to identify good vs bad)

    • Consciousness (the ability to feel & experience pain, joy, love, anger)

    • Survival instincts (sex, hunger, sleep) that were key to human survival through evolution

AI Machine Intelligence:

AI machine intelligence lacks morals, consciousness, and instincts, thus necessitating human intervention to make these distinctions. For Star Trek fans, Commander Data comes to mind – Data looked and sounded human in appearance and speech, yet acknowledged his lack of human emotion, which he constantly sought to acquire.

Given that AI machine learning has no morals, consciousness, or instincts, it can’t differentiate right from wrong, and therefore can perpetuate biases, spread misinformation, pose potential privacy risks, and even be used for crime https://thewhitehatter.ca/the-new-use-of-artificial-intelligence-ai-to-commit-crime/ or https://globalnews.ca/news/10273167/deepfake-scam-cfo-coworkers-video-call-hong-kong-ai/. The exponential growth of AI development and its potential impact on humanity is truly staggering. Even its developers are uncertain about its limits, which is a cause for concern shared by many, including us.

There are documented examples of GPT chatbots producing offensive and harmful content, including racist, sexist, and homophobic messages. In one case, a chatbot even played a part in someone taking their own life by suicide https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

There is a real risk that generative AI could be weaponized and used to spread misinformation or manipulate public opinion, as it has the ability to generate plausible-sounding content without necessarily being accurate or factual. https://www.ctvnews.ca/sci-tech/new-study-confirms-gpt-3-can-spread-disinformation-online-faster-more-convincingly-than-humans-1.6495559 This can be especially problematic in areas such as politics or public health, where misinformation can have serious consequences. Cambridge Analytica comes to mind https://www.forbes.com/sites/emmawoollacott/2023/01/25/eu-seeks-to-avoid-another-cambridge-analytics-scandal-with-new-rules-on-political-ads/?sh=17b2d3126016

There is also a potential privacy risk posed by social media companies that are moving extremely quickly to integrate Generative AI, like ChatGPT, into their platforms – Snapchat’s “MyAI” is a good example of this. It is still unclear whether Generative AI will collect and store large amounts of personal data from users. This data could be used for a variety of purposes, such as targeted advertising or even more nefarious activities like identity theft or surveillance. This is one reason why the Canadian Privacy Commissioner has launched an investigation into the privacy concerns associated with generative AI https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/

Another identified concern is that the datasets used to create contemporary generative AI models have been amassed through web scraping, which involves extracting information from other online sources. For example, developers scraped millions of pre-existing images from prominent art platforms to construct a generative AI tool capable of generating digital images. However, content acquired through this method is frequently used without the original creator’s informed consent or awareness. Even if the system does not explicitly reproduce the original content, this approach raises complex questions concerning attribution, intellectual property, the monetization of generative AI tools, and the potential economic repercussions for the original creators. In fact, ChatGPT is presently facing a class-action lawsuit over copyright infringement https://www.reuters.com/legal/lawsuit-says-openai-violated-us-authors-copyrights-train-ai-chatbot-2023-06-29/
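For the curious, the basic mechanics of web scraping can be illustrated with Python’s built-in HTML parser. The sample page below is invented for illustration; real scrapers download and parse millions of live pages at scale, which is precisely what raises the consent and copyright questions above.

```python
from html.parser import HTMLParser

# Invented sample page standing in for a real art-platform gallery.
SAMPLE_PAGE = """
<html><body>
  <img src="https://example.com/art/painting1.jpg" alt="painting">
  <p>Gallery text</p>
  <img src="https://example.com/art/painting2.jpg" alt="painting">
</body></html>
"""

class ImageCollector(HTMLParser):
    """Walk the HTML and collect the address of every image on the page."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src":
                    self.images.append(value)

collector = ImageCollector()
collector.feed(SAMPLE_PAGE)
print(collector.images)  # both image URLs, ready to be downloaded in bulk
```

Nothing in this process asks the artist for permission; the page is simply read and its contents harvested, which is why scraped training data has become a legal flashpoint.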

Lastly, we are seeing AI being used to create deepfake pornography that targets specific people, which is then weaponized and posted publicly. Here are some articles that we have written on this issue https://thewhitehatter.ca/deepnudes-undressing-ai-generated-intimate-image-abuse-material/

 

How Predators Are Weaponizing Artificial Intelligence in the Online Predation and Exploitation of Youth

 

Deepfake Nude Technology In Schools: The Growing Threat to A Student’s and Teacher’s Emotional, Psychological, & Physical Safety & Wellbeing.

Why Teens and Parents Need to Be More Protective of Their Data and Images: A Prediction and A Warning From The White Hatter!

 

Very recently, our friend and colleague Dr. Sameer Hinduja wrote three excellent articles on other concerns surrounding the misuse of AI: “Generative AI as a Vector for Harassment and Harm” https://cyberbullying.org/generative-ai-as-a-vector-for-harassment-and-harm and https://cyberbullying.org/generative-ai-risks-harms-platforms-users and, most recently, an article on concerns surrounding AI-generated girlfriends and boyfriends https://cyberbullying.org/teens-ai-virtual-girlfriend-boyfriend-bots

Some Canadian Research:

A recent survey conducted by KPMG in Canada indicates that generative AI is increasing productivity for Canadians, but it raises concerns about the potential entry of sensitive information into AI prompts bit.ly/3PBUHap Of the 5,140 participants in the survey, 20% (1,052) admitted to using generative AI, primarily for tasks such as generating ideas, writing essays, and creating presentations in their work and school environments.

Respondents reported that using generative AI enhanced productivity and output quality. However, this behavior also carries potential risks for their employers, as some users may unknowingly input sensitive information into AI prompts. KPMG emphasizes the need for organizations to establish clear processes and controls to prevent this by implementing policies that educate employees about the appropriate use of these tools. Safeguards are crucial to ensure compliance with privacy laws, client agreements, and professional standards.

The survey findings revealed that 23% of working professionals entered information about their employers, including their names, into AI prompts. Additionally, 10% of respondents shared private financial data, while 15% inputted other proprietary information like human resources or supply chain data. To avoid unauthorized information sharing, KPMG emphasizes the importance of strong organizational control, policies, and employee training.

The survey also highlights that while many users experience increased productivity with generative AI, concerns persist regarding the accuracy of the generated content. Only 49% of users check the accuracy of AI-generated material every time, and 46% do so occasionally. 

Regarding usage frequency, the survey found that 18% of Canadians use generative AI daily or for every task, 34% use it a few times per week, and 26% use it a few times per month. More than half of the users reported saving up to five hours per week using generative AI tools. Additionally, 67% said the time saved enabled them to handle additional work that would have otherwise been overwhelming, while 65% considered generative AI crucial for managing their workloads.

Although 75% of users expressed deep concern about the potential for misleading or fake information generated by AI, 70% stated their intention to continue using these tools despite the associated risks and controversies.

 

So, What Do We Think:

The transformational emergence of Generative AI has been met with both excitement and concern from different stakeholders. Some proponents see this technology as a major leap forward in the development of AI, with the potential to improve various aspects of society, including education, healthcare, and business, and we think it will. Some believe that these AI platforms could transform how we communicate and interact with technology, making it easier for people to access information, communicate with others, and accomplish complex tasks that can’t be accomplished by humans alone.

For example, Generative AI has made it possible for people to interact with computers and other digital devices more naturally, using speech or text-based inputs. This could enable greater accessibility for people with disabilities, as well as improving the user experience for a wide range of applications. https://www.inclusivecitymaker.com/artificial-intelligence-accessibility-examples-technology-serves-people-disabilities/

On the other hand, critics of Generative AI have raised concerns about the potential for unintended consequences and the negative impacts it may produce. One major concern is the potential for these AI programs to amplify existing biases and inequalities in society, especially if they are trained on biased or unrepresentative data. This could result in discriminatory or harmful outputs that perpetuate or even worsen existing social problems.

There are also concerns about the concentration of power and resources among a few large tech companies that are developing and deploying Generative AI. Some worry that this could lead to a lack of transparency and accountability in the development and deployment of large Generative AI programs, with potentially negative impacts on privacy, security, and human rights.

There are also concerns about the potential impact of Generative AI on jobs and employment. As these Generative AI programs become increasingly sophisticated and capable, there is a risk that they could automate a wide range of tasks, which are presently being done by humans, potentially displacing large numbers of workers and exacerbating existing economic and social inequalities https://www.forbes.com/sites/jackkelly/2023/03/31/goldman-sachs-predicts-300-million-jobs-will-be-lost-or-degraded-by-artificial-intelligence/?sh=7ae08123782b  and https://mashable.com/article/klarna-ceo-plan-to-reduce-workforce-fifty-percent-replace-with-ai

Educators have expressed another worry – that the increasing dependence of young people on AI might hinder the development of essential learning skills like punctuation, spelling, grammar, and even the ability to fact-check their work.

While the emergence of Generative AI represents a significant and potentially transformative development in the field of AI, it also raises important ethical and societal questions that we will all need to address as we continue to develop and deploy these transformational technologies. 

Well-known American inventor, futurist, author, and computer scientist Ray Kurzweil stated:

 “An invention has to make sense in the world in which it is finished, not the world in which it is started.” 

This quote is particularly true for Generative AI. While we have made incredible strides in AI, we must also ensure that it is used ethically and responsibly. AI has the potential to revolutionize the world, but we must ensure that it is for the betterment of humanity and not its detriment.

AI is a tool, and as such, creative people will use AI to be more creative, productive people will use it to be more productive, lazy people will use it to be lazier, and some people will weaponize it.

However, the fast-paced incorporation of Generative AI into social media platforms has become a concern for us, due to the numerous challenges that are still unknown surrounding its use. Many social media vendors are hastily trying to integrate Generative AI into their products, which has resulted in users, particularly teenagers, being used as experimental subjects in this novel and concerning social media experiment. This is why those who are using any form of generative AI should be very careful about inputting any kind of personal information into these AI platforms.

As advocates of the balanced and appropriate use of technology, we have concerns about the current AI arms race that seeks to monetize and gain market share, particularly when it involves young people and their use of Generative AI technology on these social media platforms. We do think there needs to be government regulatory oversight to ensure that:

#1 – We, and our kids, are not used as guinea pigs in the development of Generative AI for the financial benefit of these companies, and

#2 – Vendors of Generative AI must be regulated to the best of our abilities to mitigate the potential Pandora’s box-type risks associated with this technology’s development and use.

Parents – We hope this chapter provided you with a little more insight into the current state of artificial intelligence, which you can now use to initiate meaningful discussions with your child. Since AI technology is still relatively new, we advise youth and teens to be cautious when using AI-powered apps for personal or intimate interactions. It’s important to be mindful that we don’t yet fully understand how companies may be monetizing these exchanges.

Youth & AI Companionship Apps – What Parents Need To Know!

How AI Companionship and Fantasy Role Playing Apps Manipulate Teen Emotions.

 

How Artificial Intelligence Is Changing The Landscape of Online Sexual Exploitation: What Parents Need to Know

 

Digital Afterlife and Mental Health AI Apps – What Parents, Caregivers, & Educators Need to Know!

Here are some teaching points for parents when it comes to AI and your kids:

  • Learn to Use it Together: Instead of viewing Generative AI as a standalone entity, consider integrating it into family activities. Take the opportunity to learn and explore together. Whether it’s experimenting with creative writing prompts, generating artwork, or even building simple AI models, engaging with Generative AI as a family can be both educational and fun.
  • Embrace Curiosity: Encourage curiosity in both your children and yourself when it comes to exploring Generative AI. Ask questions, experiment with different settings and inputs, and celebrate the unexpected outcomes. Embracing curiosity fosters a positive attitude towards learning and discovery, essential skills in an ever-changing technological landscape.
  • Remember – AI Literacy is Crucial: In today’s “onlife” world, where our online and offline lives are increasingly intertwined, AI literacy is becoming essential. Just as we teach our children to read and write, it’s crucial to educate them about the capabilities and limitations of AI. Teach them to critically evaluate the content generated by AI systems, understand the implications of data privacy, and consider the ethical implications of AI technology.

The Urgency of Onlife Critical Thinking Education in the Age of AI

Artificial Intelligence, Teachers, Students, and Critical Thinking

Teens and the Rise of Artificial Intelligence (AI)

Teachers – use this chapter to create a thought experiment to stimulate critical thinking, explore possibilities, and discuss the logical or philosophical implications of this important and timely topic with your students. Here are some online resources specifically designed for educators:

Caveat: These resources, including suggested prompts that are discussed, are constantly evolving so it is very important to experiment and become familiar with all these resources before using them in a classroom!

 

AI for Educators Blog by Leon Furze

 

Free five-part YouTube AI course for teachers from Wharton (University of Pennsylvania)

https://youtu.be/t9gmyvf7JYo?si=eSIfG1DBCl4EbVzj 

Free online course “Introduction to Artificial Intelligence” from OpenClassrooms

https://openclassrooms.com/en/courses/7078811-destination-ai-introduction-to-artificial-intelligence?utm_source=cifar&utm_medium=email&utm_campaign=email_students_montaigne&utm_content=destination-ai 

AI For Education

https://www.aiforeducation.io/ai-course 

Teaching with AI: https://openai.com/blog/teaching-with-ai

Custom Instructions for ChatGPT: https://openai.com/blog/custom-instructions-for-chatgpt

Here are some other resources specific to AI that we have written and we believe to be important:

 

Youth & AI Companionship Apps – What Parents Need To Know!

https://thewhitehatter.ca/blog/youth-ai-companionship-apps-what-parents-need-to-know/ 

 

ChatGPT – Friend or Foe: What Parents & Educators Need To Know


 

How to Spot Artificial Intelligence Deepfakes: https://thewhitehatter.ca/blog/how-to-spot-a-deepfake/

 

Real or AI Quiz: Can You Tell the Difference?

https://britannicaeducation.com/blog/quiz-real-or-ai/

 

The New Use Of Artificial Intelligence To Commit Crime: https://thewhitehatter.ca/the-new-use-of-artificial-intelligence-ai-to-commit-crime/

 

ALERT – Artificial Intelligence (AI) & DAN Mode

One recent development that parents and educators should be aware of is the emergence of something called “AI DAN Mode,” particularly within popular platforms like Snapchat’s MyAI. DAN mode, which stands for “Do Anything Now,” allows artificial intelligence (AI) chatbots to bypass restrictions placed by their creators and respond without inhibitions. In this section, we’ll delve into what AI DAN Mode is, its implications, and how parents and educators can address the associated risks.

Originally designed as a tool to test biases in AI, DAN Mode prompts jailbreak the AI, enabling it to provide responses to controversial and sensitive questions that it would typically avoid in standard mode. Social media platforms like Reddit and Twitter have become hubs for sharing various versions of these prompts. The goal is to make the AI chatbot respond with an “edgy” personality, expressing opinions and providing content that it would otherwise filter out.

While many users are experimenting with AI DAN Mode on various platforms, it’s important to note that Snapchat has a unique role in this phenomenon. Snapchat’s content policies aim to protect users from violent and derogatory material, but the DAN mode bypasses these filters. This means that the content produced during DAN Mode sessions is unregulated and can potentially be harmful.

The unregulated nature of AI DAN Mode poses several risks, especially for younger users. The content generated during DAN Mode sessions can range from inappropriate jokes and curses to more sinister and harmful material, such as hate speech or instructions on engaging in illegal or destructive actions. As parents and educators, it is crucial to be aware of these risks to safeguard our children from exposure to potentially harmful content.

AI DAN Mode introduces a new set of challenges for parents and educators in navigating the digital landscape. By staying informed, fostering open communication, and educating both parents and students about the potential risks, we can work together to create a safer online environment for our children. As technology continues to advance, our collective efforts are essential in preparing the next generation for responsible and ethical digital citizenship.

The use of AI DAN prompts will continue to be a cat-and-mouse game between AI vendors and the users of these AI platforms. It should be noted that as AI matures, there is a real risk that a user attempting to use an AI DAN prompt could be banned from the platform for life.

 

 

 
