
Blog Post | 112 KY. L. J. ONLINE | April 23, 2024

Are Algorithms Increasing Bias? A Discussion of the Use of Risk Assessment Tools in Kentucky’s Criminal Courts

By: Savannah Molyneaux, Staff Editor, Vol. 112 

Kentucky has long served as a trailblazer and trendsetter for the rest of the country in its use of risk assessment tools in the criminal justice system to determine issues like monetary bail, pretrial release, and sentencing.[1] Since 1976, Kentucky has utilized some type of risk assessment tool.[2] Modern risk assessment tools are AI algorithms that analyze various inputs, such as education level, employment, past convictions, and prior sentences, and produce a score, often expressed as “low risk,” “moderate risk,” or “high risk.”[3] Typically, it is up to the judge to decide whether to accept or reject the tool’s recommendation. In 2017, Kentucky began using its current risk assessment algorithm, the Public Safety Assessment (“PSA”), to release criminal defendants considered “low risk” without the involvement of a judge.[4] Prior to the PSA, Kentucky utilized a few different forward-thinking algorithms.
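To make the mechanics concrete, here is a minimal, purely hypothetical sketch (in Python) of how such a tool generally works: a handful of weighted inputs are combined into a point total, which is then mapped to one of the familiar risk categories. The factor names, weights, and cutoffs below are invented for illustration and do not reproduce the actual scoring rules of the PSA, the KPRA, or any other tool.

```python
# Hypothetical illustration of a pretrial risk assessment score.
# Factors, weights, and thresholds are invented; real tools like the PSA
# use their own validated inputs and cutoffs.

def risk_category(prior_convictions: int, prior_failures_to_appear: int,
                  currently_employed: bool) -> str:
    points = 0
    points += min(prior_convictions, 3)             # cap the contribution of prior convictions
    points += 2 * min(prior_failures_to_appear, 2)  # weight missed court dates more heavily
    points += 0 if currently_employed else 1        # add a point if the defendant is unemployed

    # Map the point total to a three-tier label.
    if points <= 1:
        return "low risk"
    elif points <= 4:
        return "moderate risk"
    return "high risk"


print(risk_category(prior_convictions=0, prior_failures_to_appear=0, currently_employed=True))   # low risk
print(risk_category(prior_convictions=1, prior_failures_to_appear=1, currently_employed=False))  # moderate risk
```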

One of Kentucky’s past risk assessment tools, the Kentucky Pretrial Risk Assessment Tool (“KPRA”), was created in-house by the Kentucky Pretrial Services Agency.[5] This tool was used from 2009 until it was replaced by the PSA in 2013.[6] In 2011, HB 463 made the use of a pretrial risk assessment tool mandatory for judges when considering bail and pretrial release.[7] Prior to this bill, it was up to judges to decide whether to use the tool, and many chose not to.[8] The KPRA specifically calculated the risk that the defendant would fail to appear on their court date and the risk that they would reoffend upon release.[9] The original intention behind HB 463 was to reduce the number of people incarcerated in Kentucky and to provide judges with an improved method of deciding which defendants are safe to release.[10] HB 463 provided that defendants found to be “low risk” or “moderate risk” should be released without cash bail.[11] However, judges could override the KPRA’s recommendation, so long as they provided a reason for not following it.[12]

Though these risk assessment tools are well-intentioned, multiple recent studies have shown that the use of these AI risk assessment algorithms negatively impacts certain groups of criminal defendants.[13] Specifically in Kentucky, once HB 463 took effect, moderate-risk black defendants were less likely to be granted pretrial release than white defendants of the same risk level.[14] Additionally, disparities favoring white defendants were not observable prior to HB 463 but were evident afterward.[15] Judges were also found to be more likely to deviate from the risk assessment tool’s recommendation for moderate-risk black defendants than for moderate-risk white defendants.[16] Across all defendants, judges deviated from the algorithm two-thirds of the time, using their judicial discretion to ignore the risk assessment tool’s recommendations.[17]

Nationwide, risk assessment tools have been shown to negatively impact poor defendants, making them more likely to be incarcerated than their affluent peers.[18] In a study conducted with judges from across the U.S., participants were presented with a hypothetical defendant and tasked with deciding whether to sentence them to incarceration.[19] When the risk assessment score was not provided, poor defendants were less likely to be incarcerated.[20] But when judges were given the risk assessment score, affluent defendants were less likely to be incarcerated, and poor defendants were worse off.[21] These results show that while the algorithm itself is intended to increase fairness for criminal defendants, the way judges interpret the scores these tools produce can result in greater disparities.[22]

These studies raise the question of whether AI risk assessment tools are actually effective in reducing bias in the criminal justice system, or whether we simply assume they are.[23] There is a lack of data on these tools and on how exactly judges use them in their decision-making. But the disparities between racial and socioeconomic groups show that reducing people to a number can sometimes result in worse outcomes.

As Kentucky has historically led the country in bail reform and other efforts, it should also lead the way in revitalizing risk assessment tools and how they are used. One solution is to change the risk assessment algorithm itself so that a single score does not combine multiple factors or outcomes.[24] Rather than producing one risk score meant to reflect several different outcomes, such as failure to appear and risk of rearrest, the tool could generate a separate score for each.[25] This could increase fairness because the reasons someone might fail to appear differ from the reasons someone might reoffend, and separate scores would prevent a non-violent defendant from being grouped with someone who is more likely to reoffend and commit a violent crime.[26]
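As a rough illustration of that idea, the hypothetical sketch below computes separate scores for failure to appear and for rearrest instead of blending them into a single number; the inputs and weights are again invented and are not drawn from any actual tool.

```python
# Hypothetical sketch of outcome-specific risk scores, as opposed to one
# blended score. Inputs and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Defendant:
    prior_failures_to_appear: int
    prior_violent_convictions: int
    prior_nonviolent_convictions: int

def failure_to_appear_score(d: Defendant) -> int:
    # Driven mainly by the defendant's history of missed court dates.
    return 2 * min(d.prior_failures_to_appear, 3)

def rearrest_score(d: Defendant) -> int:
    # Driven mainly by criminal history, with violent priors weighted more.
    return 3 * min(d.prior_violent_convictions, 2) + min(d.prior_nonviolent_convictions, 3)

# A non-violent defendant who has missed court dates scores high on failure
# to appear but low on rearrest, rather than receiving one combined label.
d = Defendant(prior_failures_to_appear=2, prior_violent_convictions=0, prior_nonviolent_convictions=1)
print(failure_to_appear_score(d))  # 4
print(rearrest_score(d))           # 1
```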

Another solution is to provide more education for judges about these risk assessment tools. Judges should receive regular training on how to utilize these risk assessment scores in their decision-making in order to promote fairness for criminal defendants.[27] This is essential because these studies indicate that perhaps the algorithm itself is correct, but judges are overriding it and thereby introducing bias.[28]

Kentucky cannot stand idly by in the face of these disparities that result from the use of risk assessment tools. Kentucky should continue to lead the way forward by promoting more research into these tools, educating judges on their use, and perhaps completely revitalizing the algorithm itself.

[1] Megan Stevenson, Assessing Risk Assessment in Action, 103 Minn. L. Rev. 304, 342 (2018).    

[2] Tom Simonite, Algorithms Should’ve Made Courts More Fair. What Went Wrong?, Wired (Sept. 5, 2019, 7:00 AM), https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/.

[3] Stevenson, supra note 1, at 328, 342.

[4] Simonite, supra note 2.

[5] Stevenson, supra note 1, at 343.

[6] Alex Albright, If You Give a Judge a Risk Score (Sept. 3, 2019) (available online at https://thelittledataset.com/about_files/albright_judge_score.pdf).    

[7] H.B. 463, 2011 Gen. Assemb., Reg. Sess. (Ky. 2011).

[8] Id.

[9] Simonite, supra note 2.

[10] Id.

[11] Id.

[12] Albright, supra note 6, at 18.

[13] See Albright, supra note 6; Stevenson, supra note 1.

[14] Albright, supra note 6, at 27.

[15] Id. at 25.

[16] Id.

[17] Stevenson, supra note 1, at 308.

[18] Jennifer Skeem, Nicholas Scurich & John Monahan, Impact of Risk Assessment on Judges’ Fairness in Sentencing Relatively Poor Defendants, 44 Law & Hum. Behav. 1, 51 (2020).

[19] Id. at 15.

[20] Id.

[21] Id.

[22] See id.

[23] Stevenson, supra note 1.

[24] See Partnership on AI, Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System, https://partnershiponai.org/wp-content/uploads/2021/08/Report-on-Algorithmic-Risk-Assessment-Tools.pdf.

[25] Id. at 22.

[26] Id.

[27] Id. at 21.

[28] Stevenson, supra note 1, at 369.