Why Generative AI Has Become a Major Challenge for Law Enforcement

Writer: The White Hatter

Caveat - We at the White Hatter were honoured to partner with Martin Hurst, Inspector, BC Transit Police Innovation and Project Development Leader, in writing this article. Inspector Hurst is also an adjunct instructor at the British Columbia Institute of Technology, where he teaches on the topics of social media and open-source investigative techniques. We wanted to ensure the most current information possible, not only for those in Canadian law enforcement but also for those they serve in their communities, and to provide a better understanding of some of the major challenges law enforcement faces in today’s digital investigations. Again, thank you to Inspector Hurst for co-authoring this article with us, and for bringing his real-world law enforcement insight and subject matter expertise to the topics discussed. The depth of this article would not have been possible without his generous participation.

 

Generative AI (content creation) has transformed how images, videos, and audio can be produced or altered. For many people, these tools fuel creativity and fun. For law enforcement, these same tools have introduced serious investigative challenges. When almost anything can be fabricated in minutes and passed off as real, the stability of visual evidence, a foundation of modern policing, starts to crumble. This article explains why we believe generative AI has become a significant challenge for law enforcement, how it affects investigations that depend on visual evidence, and what this means for courts, victims, and public safety.

 

Ever since photography and videography intersected with criminal investigation, law enforcement has dealt with edited photos and videos. Today’s generative AI is different in both sophistication and scale. Modern tools can not only create fake images and videos from scratch but can also alter genuine footage in ways that are nearly impossible to detect by casual observation. What once took hours of expert editing can now be done in seconds on a phone. The ability to automate and mass-produce synthetic visuals also means that investigators may encounter dozens or even hundreds of fabricated files within a single case. This new reality makes it harder for police to take any visual evidence at face value.

 

Here are some challenges we have identified:

 

The Liar’s Dividend: When Real Evidence Can Be Dismissed as Fake

 

Deepfakes do more than create convincing false evidence; they also allow offenders to deny the authenticity of real evidence. Once the public becomes aware that realistic fakes exist, it becomes easier for someone to claim that incriminating footage is “just an AI fake.” This dynamic, widely referred to as the liar’s dividend, is showing up more often in cases involving violence, harassment, sexual exploitation, and fraud. For investigators and prosecutors, this means spending more time authenticating files, calling on expert witnesses, and overcoming skepticism from both the courts and the public. Even when the evidence is genuine, the mere possibility of manipulation can create reasonable doubt. In a recent Canadian case, R v. Rzadkowski, 2025 ONSC 2269, the following was noted:

 

“[71] However, I find most of Mr. Rzadkowski’s evidence about the encounter to be unreliable. I make a preliminary observation. The tenor and content of much of Mr. Rzadkowski’s evidence suggested he believes much of the evidence in the case is manufactured as part of a conspiracy to set him up and that some of his conduct was motivated by the forces of artificial intelligence”.

 

The Court ultimately found Mr. Rzadkowski’s evidence to be largely unreliable. However, the shadow of artificial intelligence will continue to creep into the court system, as already documented by the University of British Columbia in “AI & Criminal Justice: 2026 Edition” (1):

 

A substantial number of cases across a range of legal areas involved persons who appear to have mental health disorders attributing various behaviours to their belief of influence by artificial intelligence. Some examples include the following:

 

· LH (Re), 2024 CanLII 86443 (ON CCB)
· CC (Re), 2023 CanLII 112236 (ON CCB)
· WH (Re), 2022 CanLII 48555 (ON CCB)
· FY (Re), 2023 CanLII 49765 (ON CCB)
· LA (Re), 2022 CanLII 34498 (ON CCB)
· KH (Re), 2024 CanLII 79391 (ON CCB)
· AD (Re), 2017 CanLII 141956 (ON CCB)
· AF (Re), 2023 CanLII 131945 (ON CCB)
· Centre intégré universitaire de santé et de services sociaux de l'Ouest-de-l'Île-de-Montréal (St-Mary's Hospital Center) c R.C., 2024 QCCS 845
· Kandasamy v Canada (Attorney General), 2022 FC 1111
· PR (Re), 2020 CanLII 32708
· LH (Re), 2024 CanLII 56179 (ON CCB)

 

Authenticating Evidence Is Slower and More Complex

 

Most modern investigations rely heavily on digital evidence from sources like phones, home security cameras, social media accounts, and public CCTV systems. The presence of generative AI complicates every step of this process. AI detectors exist, but many have real limitations; they struggle with compressed or low-quality footage and can fail when presented with advanced or newer deepfake techniques. Investigators also face a lack of standardized tools that work reliably in real-world environments.

 

There are emerging systems designed to verify authenticity, such as technology that “signs” a file at the moment it is captured, but these systems are not yet widely adopted across consumer devices, police equipment, or online platforms. Until they are, police must rely on traditional forensic investigation, which takes time and requires specialized skills. As a result, authenticating visual evidence has become a slower and far more complex task.
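To make the concept of capture-time “signing” more concrete, here is a minimal Python sketch of the general idea: hash a file when it is recorded, sign that digest with a device key, and verify it later. It assumes the third-party cryptography package, and the file name and key handling are hypothetical; this illustrates the concept only, not any particular vendor's or standard's implementation (such as C2PA).

```python
# Minimal sketch of capture-time signing: hash the file when it is recorded,
# sign the digest with a device key, and verify later.
# Assumes the third-party "cryptography" package is installed; the file name
# and key handling are illustrative assumptions only.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_of(path: str) -> bytes:
    """Digest of the file contents; any later alteration changes this value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# At capture time (e.g., inside a camera or body-worn device).
device_key = Ed25519PrivateKey.generate()              # in practice, kept in secure hardware
signature = device_key.sign(sha256_of("capture.mp4"))  # "capture.mp4" is a hypothetical file

# Later, an investigator re-hashes the file and checks it against the capture-time signature.
try:
    device_key.public_key().verify(signature, sha256_of("capture.mp4"))
    print("File still matches what was signed at capture.")
except InvalidSignature:
    print("File no longer matches the capture-time signature.")
```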

 

While assessing “AI-influenced” evidence coming into an investigation is daunting, producing AI-assisted output in the name of policing efficiency has become challenging too. AI-transcribed reports in policing use speech-to-text and natural language processing (NLP) to convert bodycam audio into draft narratives, saving officers significant time on documentation by generating structured reports from spoken words. While these systems promise efficiency, they require officers to review and edit drafts for accuracy, and they raise concerns about potential inaccuracies, legal issues, and the need for regulation, balancing automation against ensuring justice and transparency in the criminal justice system.
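As a rough illustration of the transcription step only, not of any vendor's report-writing product, the sketch below uses the open-source Whisper speech-to-text model; the audio file name and the review flag are assumptions added for illustration.

```python
# Sketch of the speech-to-text step behind AI-drafted reports, using the open-source
# "openai-whisper" package (pip install openai-whisper). The audio file name is a
# hypothetical example, and the draft is only a starting point for officer review.
import whisper

model = whisper.load_model("base")                 # small general-purpose model
result = model.transcribe("bodycam_clip.wav")      # returns a dict that includes "text"

draft_report = {
    "source_audio": "bodycam_clip.wav",
    "draft_narrative": result["text"],             # raw transcript, not a verified account
    "officer_reviewed": False,                     # must be reviewed and edited before filing
}
print(draft_report["draft_narrative"])
```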

 

The Volume of Digital Evidence Is Exploding

 

Well before generative AI entered the picture, law enforcement agencies were already overwhelmed by the massive volume of digital files collected in cases. Body cameras, surveillance systems, and smartphones produce hours of footage that must be reviewed for nearly every incident. Generative AI adds to this burden by introducing even more files, many of which are false leads or intentionally misleading. Every suspicious file must now be validated, cross-referenced, and examined for signs of manipulation. A recent Law360 article reported that a lawyer was ordered to pay costs for failing to disclose the use of generative AI and for citing fake precedents in court. (2)

 

Smaller agencies often lack dedicated digital-forensics units, meaning frontline officers or general investigators must shoulder this work. This creates bottlenecks and delays, stretching already limited resources even thinner.

 

AI-Generated Child Sexual Abuse Material (CSAM): A Devastating New Front

 

One of the most troubling developments is the rise of AI-generated child sexual abuse material. Offenders can now create sexualized images of minors without ever encountering a real child, or they can take images of real children and morph them into exploitative material. Some tools even “age regress” adult content to resemble minors. These techniques fuel new crimes and re-victimize existing survivors whose real images have been used as source material.

 

This phenomenon makes investigations more complicated. Police must determine which images depict real children in need of urgent protection and which are AI-generated, and they must do so in cases where thousands of files may be involved. Some jurisdictions have had to update their laws to criminalize AI-generated CSAM explicitly; Canada is in the process of doing so with Bill C-16, which is currently making its way through the parliamentary process. Others are still working to close loopholes that offenders exploit. The result is a rapidly evolving crime landscape that demands new legal, technical, and investigative responses.

 

Fraud, Extortion, and Harassment Are Easier to Commit

 

Generative AI now plays a direct role in a wide range of crimes. Voice cloning is commonly used to impersonate parents, executives, or partners, enabling fraudsters to trick people into sending money or disclosing sensitive information. AI-generated explicit images are being used in sextortion schemes, particularly against teens, even when the victim never created any intimate content. AI-generated videos are also being weaponized to harm reputations, influence workplace decisions, or manipulate relationships. These crimes often cross borders, involve cryptocurrency, or rely on anonymous platforms, making them difficult to investigate and even harder to prosecute.

 

Courtrooms Are Struggling to Keep Up

 

Courts are under increasing pressure to adapt to digital evidence that may or may not be authentic. Judges and lawyers now face difficult questions about when video or photo evidence can be trusted, how to evaluate AI detection tools, and what standards forensic experts must meet. Because generative AI is advancing so quickly, legal precedents often lag behind the technology. In its “Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence,” published in September 2025, the Federal Court publicly stated the following:

 

“The Federal Court will follow the Principles and Guidelines in this policy when using Artificial Intelligence (AI). The Court will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultations. For greater certainty, this includes the Court’s determination of the issues raised by the parties, as reflected in its Reasons for Judgment and its Reasons for Order, or any other decision made by the Court in a proceeding”. (3)

 

Juries are also grappling with these issues. Some jurors may place too much faith in digital evidence, assuming it must be accurate. Others may distrust anything that comes from a digital device, assuming AI could have altered it. Both extremes make it harder to ensure fair trials and consistent decisions.

 

Law Enforcement Faces Structural Barriers

 

Even when agencies recognize the need to respond to AI-driven threats, they face several structural challenges. Many police services lack the funding needed to purchase advanced forensic tools or to train staff in specialized digital investigations. Finding and retaining analysts with expertise in AI detection is difficult, especially for smaller or rural agencies. Legislation is also inconsistent, with different regions taking different approaches to regulating synthetic media, protecting privacy, and defining what counts as evidence. These issues create confusion and inconsistency, leaving law enforcement trying to solve a rapidly developing global problem without the necessary infrastructure or support.

 

In Canada, differences have even arisen between provinces in how emerging technologies are addressed:

 

Technology Use (e.g., Facial Recognition): Police agencies in Ontario and British Columbia have used technologies like facial recognition technology (FRT) and automated licence plate readers (ALPRs). However, the approach to regulation has varied. The BC and federal privacy commissioners jointly investigated the use of certain branded technology by the RCMP and municipal police in BC, finding it created a significant risk to individuals. Conversely, Ontario police services also use FRT, which has led to calls for regulation. There is a general consensus among stakeholders that legislative gaps exist and that new, unified legislation specifically governing police use of FRT is required across Canada.

 

In a recent article, Benjamin Perrin described speaking at the Ontario Bar Association’s Artificial Intelligence and Technology in Law Enforcement and Criminal Matters program, where he shared some of the latest research on AI in policing, with specific implications for lawyers and judges.

 

Prof. Perrin framed his presentation around three key observations about AI's evolving role in modern policing:

 

#1 - AI in policing is less a sudden disruption than a high-stakes realignment that subtly shapes how individuals are policed, how investigations are conducted, and how evidence takes shape.

 

#2 - This shift is occurring with limited formal legal oversight, relying instead on police self-regulation and governance frameworks that remain insufficiently robust.

 

#3 - In this evolving context, lawyers and judges have an essential role in safeguarding the integrity of the criminal justice system, making AI literacy and legal competency in this area vital. (4)

 

There are promising efforts underway to support law enforcement. Industry groups and technology companies are developing standards that allow devices to embed cryptographic proofs of authenticity in photos and videos at the moment they are captured. More advanced AI-detection tools are being built to withstand real-world conditions, such as low resolution and high compression. Governments are starting to pass laws that directly address AI-generated CSAM, deepfake abuse, and the use of AI-generated evidence in court. Training programs aimed at police officers, prosecutors, and judges are also emerging, helping professionals understand how to assess digital evidence in a world where manipulation is common. These reforms will take time to become widespread, but they represent important steps forward.

 

Concurrent with these strides in “usage reformation” is the disciplined need for a measured approach. In his December 11, 2025 article, “If You Want to Sell AI to Police and Public Safety Agencies, Information Security Is Your Product; to a Police Chief, It’s Your Assurance,” Dr. Joseph Lestrange stated, “In policing and public safety, your AI isn’t ‘just software.’ It belongs in the same risk category as firearm policy, pursuit policy, or a use-of-force reporting system. When it fails, it’s not a simple tech glitch, it can get someone hurt, put an innocent person in custody, and land an agency in court and under civil-rights investigation.”

 

Socio-Economic and Political Impacts of Producing and Using Artificial Intelligence

 

Although this next topic is at arm’s length from policing, its potential impacts on legality, ethics, morality, and safety can’t be ignored either. AI is poised to drive a massive surge in both electricity and water consumption, primarily due to the power-hungry data centers required for training and running AI models. Global data center electricity consumption is projected to more than double by 2030, with AI as the most significant factor.

 

Projected Surge: The International Energy Agency (IEA) projects that global electricity consumption for data centers will reach around 945 terawatt-hours (TWh) by 2030, up from approximately 415 TWh in 2024 (a quick arithmetic check on these figures appears after this list). Goldman Sachs estimates a 160% increase in data center power demand by 2030.

 

AI as Key Driver: The rapid growth of AI applications, especially large language models and generative AI, is the primary driver. An AI query can require nearly 10 times more electricity than a traditional internet search.

 

Regional Impact: In the United States, data centers could account for nearly half of electricity demand growth through 2030, rising to about 8-9% of total US power use. In some regions like Virginia, data centers already consume significant portions of the local electricity supply.

 

Grid Strain and Investment: The unprecedented demand growth is expected to strain existing power grids, potentially leading to bottlenecks and higher power prices. This necessitates massive infrastructure investment, with some estimates suggesting a need for $50 billion in new generation capacity for US data centers alone. (5)

 

Environmental Concerns: The rush to build new generating capacity to meet AI demand could challenge Canada's climate goals and strain existing resources. (6)

 

Integrating Indigenous Knowledge: Incorporating Indigenous Knowledge systems is seen as crucial for developing projects more sustainably and with a more complete understanding of environmental and social effects. This approach values a hyperlocal perspective on land and water stewardship.
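
As a quick arithmetic check on the “more than double” claim, using only the IEA figures cited in the first item above:

```python
# Sanity check of the cited IEA projection: ~415 TWh (2024) growing to ~945 TWh (2030).
consumption_2024_twh = 415
consumption_2030_twh = 945

growth_factor = consumption_2030_twh / consumption_2024_twh
percent_increase = (growth_factor - 1) * 100
print(f"{growth_factor:.2f}x growth (about {percent_increase:.0f}% increase)")
# Roughly 2.28x, i.e. about a 128% increase, consistent with "more than double".
```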

 

Generative AI has dramatically reshaped the landscape of digital evidence. It allows anyone with basic tools to fabricate convincing images, videos, and audio, making it harder for police to rely on visual evidence and easier for offenders to deny wrongdoing. The volume of digital files continues to grow, while courts and investigators struggle to keep pace with the speed of technological change. While new solutions are emerging, they will require cooperation between law enforcement, policymakers, industry, and the justice system. Until then, visual evidence must be treated with caution, verified carefully, and supported by strong corroborating facts. The age of simply trusting what the camera shows is over, and police, the Courts, and the Court of Public Opinion must now adapt to an environment where seeing is no longer believing.

 

 

Digital Food For Thought

 

The White Hatter & Inspector Mike Hurst

 

Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech

 

 

References