New Sentencing Algorithms Could Reduce the Length of Prison Sentences

US prisons and jails currently hold about 2 million people – many of them awaiting trial, and others serving extremely long sentences. New research by Professor Christopher Slobogin, Milton R. Underwood Chair in Law at Vanderbilt Law School, indicates that risk prediction algorithms could help dramatically reduce these numbers.

According to Slobogin, incarceration in the US is a serious problem that current remedies have failed to solve. To address it, he suggests using algorithms to help determine who would actually pose a danger to the community if released.

The United States currently incarcerates 0.6 percent of its population—a rate six times higher than in European countries.

“Research shows that measures like decriminalization and the elimination of mandatory minimum sentences have barely made a dent in the incarceration rate,” Slobogin said. “That said, the public will not buy any reform unless you can assure them of their safety.”

An ideal algorithm would indicate the probability that a particular individual will commit a serious crime within a given period of time in the absence of intervention.
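
To make that idea concrete, the sketch below shows one very simple way such an estimate could be produced: a toy logistic model that converts a handful of hypothetical risk factors into a probability of a new serious offense within a fixed follow-up window. The factor names, weights, and intercept are invented for illustration and are not taken from Slobogin's work or from any validated instrument.

```python
# Illustrative sketch only: a toy logistic model mapping a few
# hypothetical risk factors to a probability of committing a new
# serious offense within a fixed follow-up period, absent intervention.
# Feature names, weights, and intercept are invented for demonstration.
import math

WEIGHTS = {
    "prior_convictions": 0.30,   # log-odds added per prior conviction
    "age_at_release": -0.05,     # log-odds added per year of age
    "employed": -0.60,           # 1 if employed at arrest, else 0
}
INTERCEPT = -1.20

def risk_probability(person: dict) -> float:
    """Estimated probability of a new serious offense during the
    follow-up window, in the absence of intervention."""
    log_odds = INTERCEPT + sum(WEIGHTS[k] * person[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-log_odds))

if __name__ == "__main__":
    example = {"prior_convictions": 2, "age_at_release": 35, "employed": 1}
    print(f"Estimated risk: {risk_probability(example):.1%}")
```

A real instrument would be fitted to large historical datasets and reported with calibration and error rates rather than hand-set weights; this example only illustrates the kind of output an "ideal algorithm" is expected to provide.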

In a recently published book, Slobogin explains that by making criminal punishment decisions more transparent, algorithms could force a long-overdue re-examination of the purposes and objectives of the criminal justice system. He argues that risk assessment algorithms can:

  • help reduce pre-trial detention and the length of prison sentences without increasing the risk to the public – a pressing goal as COVID-19 spreads through correctional facilities;
  • mitigate excessively punitive bail amounts and sentences, which disproportionately affect low-income people;
  • allocate correctional resources more efficiently and consistently;
  • provide a springboard for evidence-based rehabilitation programs that aim to reduce recidivism by diverting from prison those candidates who are most likely to succeed.

Even with these advantages, the use of algorithms to decide the fate of a human life is controversial. Critics say the algorithms are not accurate enough at identifying who will offend and who will respond to rehabilitation efforts, and argue that they can be racially biased, dehumanizing, and antithetical to criminal justice principles.

Slobogin said that while the criticisms have merit, the current methods of predicting risk can be even worse. “At least algorithms consistently structure the analysis.”

He added that unstructured decision-making by judges, probation officers, and mental health professionals is demonstrably biased and reflexive, often resting on stereotypes and generalizations that ignore the justice system’s goals. Algorithms can do better, he said, if only to a limited extent, especially when they are designed to offset the influence of racialized policing and prosecutorial practices.

If the algorithms are validated in advance and used during the pre-trial process, most people who are arrested “can keep their jobs, keep their families intact and help their lawyer with their defense, helping to track witnesses,” Slobogin said. “Using algorithms to inform sentencing, we can release people sooner, which could help them become productive rather than languishing in prison, where they lose all hope and learn how to be a better criminal.”
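
As a rough illustration of what validating such a tool in advance might involve, the sketch below scores a small, invented held-out sample and computes the AUC (concordance statistic): the probability that a randomly chosen person who reoffended received a higher risk score than a randomly chosen person who did not. The data and the choice of metric are assumptions for demonstration, not a description of any actual validation study.

```python
# Illustrative sketch only: checking how well a risk score separates
# people who did and did not reoffend in a held-out sample, using the
# AUC (concordance) statistic. The scores and outcomes are invented.

def auc(scores, outcomes):
    """Probability that a randomly chosen reoffender received a higher
    risk score than a randomly chosen non-reoffender (ties count 0.5)."""
    positives = [s for s, y in zip(scores, outcomes) if y == 1]
    negatives = [s for s, y in zip(scores, outcomes) if y == 0]
    if not positives or not negatives:
        raise ValueError("need both outcome classes to compute AUC")
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

if __name__ == "__main__":
    # Hypothetical held-out sample: predicted risks and observed outcomes.
    predicted_risk = [0.05, 0.40, 0.10, 0.70, 0.45, 0.55]
    reoffended = [0, 1, 0, 1, 0, 1]
    print(f"AUC on held-out sample: {auc(predicted_risk, reoffended):.2f}")
```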


This post was first published on Phys.org. Read the original article.

More information: Christopher Slobogin, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk, www.cambridge.org/us/academic/ … dence-risk?format=PB
