Artificial intelligence is poised to reshape our world in countless ways and in almost every field, including the criminal justice system. Algorithm-based, data-driven decision-making is increasingly being used in pre-trial risk assessments in the United States as a tool to calculate a defendant's risk of reoffending. Proponents argue that it removes the bias inherent in human actors in the criminal justice system, such as police, judges and prosecutors.
However, a new paper by Concordia Ph.D. student and criminal defense lawyer Neha Chugh calls that assertion into question. Chugh argues that AI risk assessments, while not yet in use in Canadian courts, raise multiple red flags that the justice system needs to address. Indigenous defendants, who are already overrepresented in the criminal justice system, are especially vulnerable to these tools' deficiencies, she says.
Writing in IEEE Technology and Society Magazine, Chugh points to the landmark case Ewert v. Canada as an example of the problems posed by risk assessment tools in general. Jeffrey Ewert is a Métis man serving a life sentence for murder and attempted murder. He successfully argued before the Supreme Court of Canada that tests used by Correctional Service Canada are culturally biased against Indigenous inmates, keeping them in prison longer and in more restrictive conditions than non-Indigenous inmates.
“Ewert tells us that data-driven decision-making needs an analysis of the information going in—and of the social science contributing to the information going in—and how biases are affecting information coming out,” Chugh says.
“If we know that systemic discrimination is plaguing our communities and misinforming our police data, then how can we be sure that the data informing these algorithms is going to produce the right outcomes?”
Subjectivity is needed
Using AI to drive risk assessments would, she says, simply transfer biases from humans to the algorithms humans create: bad data in leads to bad data out. "Proponents of using AI in this way are shifting responsibility to the designers of the algorithm."
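To make the "bad data in, bad data out" point concrete, here is a minimal, hypothetical sketch in Python. It is not drawn from Chugh's paper; all names and numbers are illustrative assumptions. It simulates two groups with identical true reoffending rates but unequal detection rates, a stand-in for over-policing, and shows that a model trained on the recorded labels reports the policing bias back as a difference in "risk."

```python
# Hypothetical illustration (not from Chugh's paper) of "bad data in,
# bad data out." Two groups have the SAME true 20% reoffending rate,
# but group B's offences are recorded more often (over-policing).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
true_reoffend = rng.random(n) < 0.20      # identical base rate for both groups

# Assumed bias: group B's reoffences are detected twice as often,
# so the recorded labels are skewed even though behaviour is identical.
detect_prob = np.where(group == 1, 0.9, 0.45)
recorded = true_reoffend & (rng.random(n) < detect_prob)

# The lone feature is group membership, standing in for real-world
# proxies (postal code, prior police contacts) that such tools rely on.
X = group.reshape(-1, 1)
model = LogisticRegression().fit(X, recorded)

print("Predicted risk, group A:", model.predict_proba([[0]])[0, 1].round(3))
print("Predicted risk, group B:", model.predict_proba([[1]])[0, 1].round(3))
# Typical output: ~0.09 vs ~0.18 -- the algorithm "learns" the policing
# bias in the labels and reports it as a genuine difference in risk.
```

The model is not malfunctioning here; it is faithfully reproducing a skew already present in its training data, which is precisely the concern about data drawn from systemically biased policing.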
Chugh points out that AI is already being considered for use in some Canadian courts. As a member of the Board of Governors of the Law Commission of Ontario, she admits to reservations about how the commission has considered using AI in matters such as administrative court proceedings or as an investigative tool for police.
One of the principal issues Chugh identifies with overreliance on AI for risk assessments and other considerations is the absence of subjective discretion and deference, which she notes are key pillars of an independent judiciary. Laws and statutes set the parameters within which judges operate, leaving them leeway to weigh relevant factors such as an individual's history and circumstances.
“I firmly believe in the guidance from our courts, that sentencing and bail are community-driven, individualized processes,” she says.
“We appoint our judges and our decision-makers based on their knowledge of the community. Do we need to outsource that decision-making to a broad and generalized system? Or do we want to rely on a system where we are having individualized conversations with offenders? I prefer the latter because I believe that courts can have a great impact on individuals.”
Chugh insists she is not completely opposed to using AI in the court system; she simply believes more research is needed.
“Are we there yet? In my opinion, no. But if I can be proven wrong, I would be the first to change my mind.”
More information: Neha Chugh, Risk Assessment Tools on Trial: AI Systems Go?, IEEE Technology and Society Magazine (2022). DOI: 10.1109/MTS.2022.3197123
Provided by Concordia University