An algorithm will help decide whether a suspect should be detained while awaiting trial.
The new system takes effect in California in October 2019. Cash bail [*] will be replaced by an algorithm that assigns people to one of three groups: low, medium or high risk of committing a crime. Based on the algorithm's recommendation, a judge will decide whether a person should be detained for the duration of the legal case.
The aim of the new tool is to reduce the bias and personal prejudice of the judges who pass the verdicts. Decisions should be driven by data-driven recommendations rather than by a person's gut feeling.
A second argument in favour is that algorithms and databases can help allocate resources in the future. The government could decide which districts or groups of people should be watched more closely by the police in order to prevent future crimes.
How does it work?
The algorithm uses historical crime data and statistics to find correlations and patterns. Using machine learning, it predicts the likelihood that a given individual will commit a crime.
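To make the idea concrete, here is a minimal sketch of such a risk tool. The features, training data, model choice and the low/medium/high cut-offs are all assumptions made for illustration; they are not the actual California or COMPAS implementation.

```python
# Minimal, illustrative sketch of a pretrial risk tool; everything here
# (features, data, thresholds) is assumed, not the real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, prior_arrests, months_since_last_arrest]
X_train = np.array([
    [23, 4, 3],
    [45, 0, 120],
    [31, 2, 14],
    [19, 6, 1],
    [52, 1, 60],
    [28, 3, 6],
])
y_train = np.array([1, 0, 0, 1, 0, 1])  # 1 = reoffended within two years

model = LogisticRegression().fit(X_train, y_train)

def risk_category(features, low=0.33, high=0.66):
    """Map the predicted reoffence probability onto the three groups."""
    p = model.predict_proba([features])[0, 1]
    if p < low:
        return "low", round(p, 2)
    if p < high:
        return "medium", round(p, 2)
    return "high", round(p, 2)

print(risk_category([27, 3, 5]))  # likely "high" on this toy data
```

The judge would then see the category (and perhaps the score) as a recommendation, with the detention decision still made by a human.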
But…
As we know, algorithms are not perfect and some of them make mistakes. In this case people noticed that the algorithm is much harsher on dark-skinned individuals. According to an analysis by the reporter Julia Angwin, "Blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend," and "[the algorithm] makes the opposite mistake among whites: they are much more likely than blacks to be labeled lower-risk but go on to commit other crimes." In July more than 100 organisations (such as the ACLU and the NAACP) protested against using such tools in court and signed a statement against it.

One of the cited analyses captions its chart: "Distribution of defendants across risk categories by race. Black defendants reoffended at a higher rate than whites, and accordingly, a higher proportion of black defendants are deemed medium or high risk. As a result, blacks who do not reoffend are also more likely to be classified higher risk than whites who do not reoffend."
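The pattern in that caption can be reproduced with a toy simulation. Below, both groups are scored by the very same calibrated rule, with no group label used anywhere, yet the group with the (assumed) higher base rate of reoffending ends up with far more non-reoffenders flagged as risky. All numbers are made up for illustration.

```python
# Toy model of a perfectly calibrated risk score; base rates are assumed.
import numpy as np

rng = np.random.default_rng(0)

def pdf(x, mu):
    """Normal density with standard deviation 1."""
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

def fpr(base_rate, n=200_000, threshold=0.5):
    """False-positive rate: non-reoffenders labelled medium/high risk."""
    reoffend = rng.random(n) < base_rate
    # One noisy "riskiness" signal; reoffenders sit higher on average.
    x = rng.normal(0.0, 1.0, n) + reoffend
    # Calibrated score: Bayes' rule posterior P(reoffend | x).
    # No race (or any group label) enters the score.
    post = base_rate * pdf(x, 1) / (base_rate * pdf(x, 1) + (1 - base_rate) * pdf(x, 0))
    flagged = post > threshold
    return flagged[~reoffend].mean()

# Hypothetical base rates, chosen only to show the mechanism.
print("FPR at base rate 0.3:", round(fpr(0.3), 3))  # roughly 0.09
print("FPR at base rate 0.5:", round(fpr(0.5), 3))  # roughly 0.31
```

This is a known tension in the field: when base rates differ between groups, a score cannot simultaneously be calibrated for both groups and give them equal false-positive rates.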
However, the algorithm takes into consideration around 100 factors (such as age, sex and criminal history). It is worth mentioning that race is not among them. Another factor which increases the probability of being assigned to a higher-risk group (almost as much as race would) is being in a low income bracket.
As one of the cited articles puts it: "If an algorithm found, for example, that low income was correlated with high recidivism, it would leave you none the wiser about whether low income actually caused crime. But this is precisely what risk assessment tools do: they turn correlative insights into causal scoring mechanisms."
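A short synthetic sketch of what that means in practice: even with the race column removed, a feature that happens to be correlated with group membership (here income, correlated by construction) carries the group signal straight into the scores. The data, the strength of the correlation and the model are all assumptions.

```python
# Toy illustration of the "proxy variable" problem; all numbers assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000
group = rng.random(n) < 0.5                 # protected attribute, never shown to the model
income = rng.normal(30, 8, n) - 8 * group   # in $1,000s; correlated with group
age = rng.normal(35, 10, n)

# The outcome depends on income only (the correlation the quote describes).
p_reoffend = 1 / (1 + np.exp((income - 26) / 6))
reoffend = rng.random(n) < p_reoffend

X = np.column_stack([income, age])          # note: no group column
scores = LogisticRegression().fit(X, reoffend).predict_proba(X)[:, 1]
print("mean risk score, higher-income group:", round(scores[~group].mean(), 3))
print("mean risk score, lower-income group :", round(scores[group].mean(), 3))
```

Dropping the sensitive column is therefore not enough: the model recovers the group difference through whatever correlated features remain.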
In my opinion, the history of the USA has a huge impact on its citizens, which makes them very sensitive to racial discrimination. The algorithm works on data shaped by long-lasting unequal access to education, work and the social system, and by all the consequences of that inequality. Social inequality, lack of education and a bad financial situation are factors that cause higher crime rates all around the world. Summarising, I think that the result of the algorithm's work is a consequence of US history rather than a mistake of the algorithm or of current racial discrimination in this country.
[*] cash bail – a payment of money or pledge of property to the court, which may be refunded if the suspect returns to court for trial; bail practices vary in the USA from state to state (source: Wikipedia)
https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/
https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/
https://www.statista.com/topics/1750/violent-crime-in-the-us/
https://en.wikipedia.org/wiki/Bail_in_the_United_States
Wow, that’s actually a change that I didn’t see coming. AI, algorithms and tech in general are being injected into our lives more and more every day. I would love to see the results it will give us; I’m really intrigued. On the other hand, I’m not 100% certain whether it will suggest the best calls to the judges every time. Although I think that in the long run it should be more effective than humans. Looking forward to seeing how it works irl.
This is another example of how algorithms and databases are slowly being adopted into our lives, and of how their scope will change and affect how we function. It seems evident that technology is beginning to be legally implemented in order to facilitate many fields of our lives. A while ago I wrote about facial recognition of criminals based on artificial intelligence, which used their DNA to reconstruct the probable facial features of an offender. Even though it seemed like a huge step forward and a very useful tool for identifying criminals, there were obviously many opponents of such a solution. Because they argued that there was a big chance of arresting innocent people, the solution was reduced to a small reference point for further investigation: it works as a tip, not as final proof. I believe the same should apply in this case: the results of algorithms assigning people a specific risk of reoffending ought to be only an aid in decision-making. Taking the results into account will undoubtedly increase the number of appropriate decisions, but the final one belongs to the human, not the machine.
I absolutely agree with you. In this case the results of the AI’s work should be treated only as a tip, not as a final decision. In the future, I think technologies such as AI will quite often be used as an aid in the workplace.