Why Every AI Bill in Criminal Justice Is Missing the Same Critical Clause

CRIMINALYTICS · EDITION 1 · AI IN CRIMINAL JUSTICE


Criminalytics is a weekly newsletter at the intersection of algorithms and accountability in the justice system. Written by Pramod Kunju — AI Strategist, Author of AI in Criminal Justice, and Founder of Nakunj Inc.

📩 Subscribe Free on LinkedIn →


In the past 18 months, more than 20 U.S. states have introduced legislation aimed at regulating artificial intelligence in criminal justice — predictive policing tools, risk assessment algorithms, facial recognition systems used in investigations, and AI-assisted sentencing guidance.

Most of these bills share the same good intentions: protect civil liberties, ensure fairness, prevent algorithmic bias.

Almost all of them share the same critical flaw.

They regulate the outcome. They say nothing about the model.

The Gap Every Legislator Is Missing

Here is a sentence you will find in virtually no AI governance bill currently moving through state legislatures:

“Any AI system used in criminal adjudication must be accompanied by a model card disclosing the training data composition, validation methodology, known failure modes, and demographic performance differentials across protected classes.”

Model cards — standardized documentation of how an AI system was built, what it was trained on, and where it fails — are industry-standard practice in responsible AI development. Google publishes them. Hugging Face builds them into every model repository on its hub. The federal government’s NIST AI Risk Management Framework recommends them.

Your local DA’s office has likely never heard of them. And the legislator writing the bill that governs your DA’s AI tool almost certainly hasn’t either.

What Bills Actually Say (And What They Don’t)

Current AI bills in criminal justice tend to focus on three things:

  1. Prohibition lists — banning specific uses like real-time facial recognition in public spaces.
  2. Impact assessments — requiring agencies to study the effect of AI before deploying it.
  3. Appeals rights — giving defendants a path to challenge AI-driven decisions.

These are not wrong. They’re incomplete.

An impact assessment that doesn’t require access to the model’s training data is like auditing a restaurant’s food safety without being allowed in the kitchen. You can survey the dining room all you want — the contamination is elsewhere.

An appeals process that doesn’t require the vendor to disclose how the algorithm reached its conclusion is theater. You cannot mount a legal challenge against a black box.

You cannot audit what you don’t understand. And right now, most legislators don’t understand what they’re regulating.

The Three Clauses Every Bill Needs

Based on years of work at the intersection of AI deployment and high-stakes decision-making, and on the research behind AI in Criminal Justice, these are the three provisions missing from nearly every bill I have reviewed:

1. Mandatory Model Documentation

Any AI system used in criminal proceedings must publish a model card. This is non-negotiable. It should disclose:

  • what data the model was trained on,
  • performance metrics broken down by race, gender, and age,
  • known edge cases where the model fails, and
  • who validated it and how.

Vendors publish model cards for commercial products every day. There’s no legitimate reason they shouldn’t be required to do so for tools that inform whether a person goes to prison.
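As a concrete illustration, here is a minimal sketch of what that disclosure could look like as structured data. Everything here is hypothetical: the schema, the field names, and all of the values are invented for this example, not drawn from any statute or vendor.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative disclosure record for an AI tool used in criminal
    proceedings. All fields and values below are hypothetical."""
    model_name: str
    intended_use: str
    training_data_sources: list[str]      # provenance of the training data
    validation_method: str                # how performance was measured
    overall_accuracy: float
    accuracy_by_group: dict[str, float]   # race/gender/age breakdowns
    known_failure_modes: list[str]        # documented edge cases
    validated_by: str                     # independent validator, if any

card = ModelCard(
    model_name="RiskScore v2.1",
    intended_use="Pretrial release risk assessment",
    training_data_sources=["County arrest records, 2012-2019"],
    validation_method="Temporal holdout on 2018-2019 cases",
    overall_accuracy=0.94,
    accuracy_by_group={"white": 0.95, "black": 0.78, "hispanic": 0.88},
    known_failure_modes=["Sparse records for defendants under 21"],
    validated_by="None (vendor self-certified)",
)
```

If a vendor cannot produce even this much, the tool has no business informing a bail or sentencing decision.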

2. Independent Algorithmic Auditing

Self-certification by vendors is not auditing. A requirement for an “impact assessment” conducted by the agency purchasing the tool is not auditing. Real auditing means an independent third party — with full access to the model, the training data, and the deployment logs — issues a public report. Several jurisdictions have moved in this direction. It needs to become the floor, not the ceiling.
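To make the distinction concrete, here is a hedged sketch of one basic check an independent auditor with deployment-log access might run: recompute the vendor’s claimed accuracy from the logs themselves. The file name, column names, claimed figure, and tolerance are all assumptions for illustration, not a real audit protocol.

```python
import csv

def audit_accuracy(log_path: str, claimed: float, tolerance: float = 0.02):
    """Recompute accuracy from deployment logs and flag any gap between
    the observed figure and the vendor's claim. Columns are assumed."""
    hits = total = 0
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            hits += row["predicted_outcome"] == row["observed_outcome"]
    observed = hits / total
    return observed, abs(observed - claimed) > tolerance

observed, flagged = audit_accuracy("deployment_logs.csv", claimed=0.94)
print(f"Observed accuracy: {observed:.1%}; discrepancy flagged: {flagged}")
```

A vendor grading its own homework has no reason to run this loop. An independent auditor has no reason not to.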

3. AI Literacy as a Prerequisite for Procurement

No law enforcement agency should be permitted to procure an AI system for use in criminal adjudication without at least one staff member who has completed a certified AI literacy curriculum. Understanding what a training dataset is, what overfitting means, why a model that’s 94% accurate overall can be 78% accurate for Black defendants specifically — this is table stakes.
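That 94-versus-78 gap is exactly what a per-group breakdown surfaces, and computing one takes a few lines. A minimal sketch, assuming a hypothetical CSV of validation results with predicted, actual, and race columns:

```python
import pandas as pd

# Hypothetical validation results: one row per defendant, with the
# model's prediction, the observed outcome, and a demographic column.
df = pd.read_csv("validation_results.csv")
df["correct"] = df["predicted"] == df["actual"]

print(f"Overall accuracy: {df['correct'].mean():.1%}")
print(df.groupby("race")["correct"].mean())  # a strong average can hide a weak subgroup
```

Anyone who has completed a basic AI literacy curriculum can read this output. That is the bar the clause sets.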

This Is a Solvable Problem

AI, deployed responsibly, represents one of the most significant opportunities to reduce wrongful convictions, identify systemic bias in charging decisions, and allocate limited public defender resources more effectively.

But “deployed responsibly” requires more than good intentions at the bill-signing ceremony. It requires that the people writing the rules understand what they are governing well enough to write rules that actually work.

The good news: this is not complicated. The AI research community has already solved the documentation problem. The auditing frameworks exist. The literacy curricula exist.

What’s missing is the political will to require them — and enough informed voices in the room when these bills are being written.

The window to get this right is narrow. Once these tools are embedded in court systems, dislodging them will take a generation.

What You Can Do

If you work in criminal justice — as a DA, public defender, judge, or administrator — ask your vendor for a model card. If they don’t know what that is, that tells you everything.

If you work in technology policy, add “mandatory model documentation” to your standard legislative checklist.

And if you’re a legislator or legislative staffer reading this: a briefing is available at no charge. This matters too much to gatekeep.


What I’m Watching

  • Several state legislatures are moving AI governance bills this session with no requirement for training data disclosure.
  • The Vera Institute continues to publish essential research on algorithmic accountability in criminal courts.
  • NIST’s AI Risk Management Framework is the closest thing to a federal standard — and it’s voluntary.

CRIMINALYTICS is a weekly newsletter covering AI in criminal justice — where algorithms meet accountability and data meets due process.

📩 Subscribe Free — 1,600+ readers →


Pramod Kunju is the Founder & CEO of Nakunj Inc., an AI strategy and data analytics consulting firm based in Irvine, California. He is the author of AI in Criminal Justice and the creator of Criminalytics. Contact: pramod@nakunj.com | nakunj.com
