Govt-Trusted Criminal Justice AI Programs Turn Out Racist

The application of AI has brought tremendous efficiency to operations in every industry we can think of. The future Alan Turing imagined his thinking machines would bring is now here, and it will only spread further. Even the most human work, once thought impossible for programs to accomplish, such as driving and performing surgery, is already possible with Artificial Intelligence.

Artificial Intelligence in criminal justice, on the other hand, leaves you wondering whether a program can really reason about law and order. What does it consider, beyond the principles in the book, when it delivers “fair” sentencing to defendants based on their history, their motives, and the likelihood that their criminal activity will recur?

AI is already doing it. It is not clear (yet) whether you should be worried or excited about this news, but Artificial Intelligence has already been adopted in California, Wisconsin, New York, Florida’s Broward County, and many other jurisdictions.

Is Artificial Intelligence Effective at Assessing Criminal Risk?

Depending on how AI is used in criminal justice, nearly every process a defendant goes through ends up depending in some way on these computer algorithms. Helping to maximize the efficiency of a human-run system is one thing; making human decisions that directly impact people’s lives is something we need to debug thoroughly before applying it to even one defendant, let alone statewide or nationwide.

But since it is already being used in courtrooms today, we at least have data that lets us compare the inputs fed to these algorithms against the accuracy of their predictions.

A 2016 ProPublica study revealed something unexpected in the results of these AI programs as implemented in jurisdictions across the country. Since they are machines built on code, you would hardly expect them to be biased by a defendant’s race, but the patterns say otherwise.

According to ProPublica, the risk assessment programs used to predict the future behavior of defendants appearing in court flagged 44.9% of Black defendants as likely to reoffend when they in fact did not, compared with 23.5% of white defendants. And this was only the start of exposing the shortcomings of risk assessment software: among defendants who did go on to reoffend, 47.7% of white defendants had been labeled low risk, compared with 28.1% of Black defendants.
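
To make those figures concrete, here is a minimal sketch in Python of how such per-group error rates are computed from a confusion matrix. The counts below are hypothetical placeholders (not ProPublica’s actual tallies), chosen only so the resulting rates roughly echo the disparity described above.

```python
# Minimal sketch: per-group error rates from confusion-matrix counts.
# The counts are hypothetical placeholders, not ProPublica's data.

def error_rates(fp, fn, tp, tn):
    """Return (false positive rate, false negative rate)."""
    fpr = fp / (fp + tn)  # flagged as likely to reoffend, but did not
    fnr = fn / (fn + tp)  # labeled low risk, but did reoffend
    return fpr, fnr

# group: (false positives, false negatives, true positives, true negatives)
groups = {
    "group_a": (450, 280, 720, 550),
    "group_b": (230, 480, 520, 750),
}

for name, counts in groups.items():
    fpr, fnr = error_rates(*counts)
    print(f"{name}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```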

These numbers only hardened the doubts of skeptical data scientists such as Hany Farid and his students, who began questioning the developers of these risk assessment algorithms. Unfortunately, algorithms such as COMPAS and PredPol are kept secret, and even the courts have no access to the developer side of these AI programs.

That hurdle did not hold Hany Farid back, though. He reverse-engineered the system, building simpler models of the larger programs in use and feeding the same data through the “classifiers” he had built in order to test their results. But before weighing the metrics of his own system, he opted to run a survey first, to establish a human baseline of accuracy to compare against.

He surveyed 400 random strangers on the internet, each of whom was shown a short paragraph, about ten seconds of reading, describing a defendant and was asked to reach a verdict on that description. Rest assured, no details about the defendant’s race were provided. The results?

These 400 random strangers on the internet were accurate 67% of the time, compared with the program’s 65.2%. How can a verdict from an AI program be as racist as a human’s when no details about the defendant’s race are provided? These tests were never expected to reflect prejudice against any particular group, yet they explicitly demonstrate that algorithms which were never programmed to even comprehend race somehow ended up biased because of the metrics they were built on.

Those same metrics depend on racially skewed information: the product of heavily targeted patrolling of Black and Hispanic neighborhoods, increased police violence, and harsher sentencing of the Black community. Algorithms built on racist metrics produce racially biased results.
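
To illustrate that mechanism, here is a small, purely synthetic Python simulation (none of this is COMPAS or PredPol code or data). Two groups offend at the same rate, but one group’s offenses are recorded far more often because it is patrolled more heavily, and a model that never sees race still ends up scoring that group as higher risk.

```python
# Synthetic illustration only: a model that never sees race can still produce
# racially skewed scores when the training data reflects uneven policing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)          # group label; never shown to the model
true_offending = rng.random(n) < 0.30       # identical base rate in both groups

# Heavier patrolling of group 1 means its offenses get recorded more often,
# so "recorded priors" quietly becomes a proxy for group membership.
detection = np.where(group == 1, 0.9, 0.4)
recorded_priors = (true_offending & (rng.random(n) < detection)).astype(int)
recorded_priors = recorded_priors + rng.poisson(np.where(group == 1, 1.0, 0.4))

X = recorded_priors.reshape(-1, 1)          # the model only ever sees priors
clf = LogisticRegression().fit(X, true_offending)
risk = clf.predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk[group == g].mean():.2f}")
# Despite equal true offending rates, group 1 comes out "riskier" on average,
# purely because its behavior was recorded more aggressively.
```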

Soon after the results exposed Artificial Intelligence as incompetent at forming just verdicts and as producing racially biased risk assessment scores, Hany Farid uncovered the deceit behind the prominent companies that develop these AIs. These complex-looking systems, sugar-coated as being “built on the latest analytics,” rest on ridiculously simple classifiers, which only shows that AI is still in its infancy.
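
For a rough sense of what “ridiculously simple” can mean, the sketch below trains a logistic regression on just two features, age and number of prior offenses, in the spirit of the simple models Farid’s work describes. The dataset is synthetic and the feature choices are illustrative assumptions, not the actual COMPAS inputs.

```python
# Sketch of a "ridiculously simple" two-feature risk classifier.
# Synthetic data only; the features and coefficients are illustrative guesses.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2, size=n)

# Synthetic ground truth: younger defendants with more priors reoffend more often.
logit = 0.35 * priors - 0.05 * (age - 35)
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, priors])
X_train, X_test, y_train, y_test = train_test_split(X, reoffended, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.1%}")
# Two inputs and a linear model: nothing about this requires a proprietary,
# "latest analytics" black box.
```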

Can Artificial Intelligence Overcome This Error?

To put it simply: yes. Can it be done with current technology? No, because the erroneous judgments stem from the fact that the metrics themselves are data histories marked by racism against Black people. Even though there is no explicit field for race, the numbers carry that history within themselves, and it later emerges as patterns when the system is applied to hundreds of thousands of people.

These problems could be solved with a set of data foreign to our current justice system, a history alien to our own, because ours is corrupted by racism. Developers and mathematicians cannot conjure that up yet, and neither can they solve our second most important problem: making AI smart enough to be aware of its own choices and to recognize when it is developing a pattern.

Should AI Be Removed from Every Judicial Process?

Removing AI from every process would be unfair, and it would squander advances in machine learning that have helped many counties decrease the rate of criminal activity nationwide.

Terry Sees at the Modesto Police Department reports that since the department implemented PredPol in 2014, burglaries have dropped from an average of 1,600 in 2013 to 600 by 2018. Officers further stated that “this has acted as a force multiplier for their patrolling activities.”

In other words, the Modesto Police Department has reported a decrease in criminal activity since 2013-14, when the AI software was first put to use. However, there is still no way to prove that the adoption of AI was the reason for the drop in crime.

Northpointe’s contradicting research, meanwhile, states that the AI programs used by the courts are just and fair, as opposed to ProPublica’s finding that they are not. In a sense, both are right. The programs are fair and just in that they are not programmed to discriminate between suspects. The problem truly lies in the information fed to the programs, which helps them decide what “type” of people pose more danger, and that information carries a history of racist courts and policing.

Author Bio: Here goes Ahsun bashing the internet and ranting about tech (again). Doesn’t he just love criticizing the blind spots in tech, science, and business? Read more from him on Hitechwork.