Computer program that calculates prison sentences is even more racist than humans, study finds


A computer program used to calculate the risk of committing future crimes is less accurate and more racially biased than random humans assigned to the same task, according to a new study from Dartmouth College.

Before being sentenced, defendants in certain US states must answer a 137-question quiz. The questions, which range from a person’s criminal history to their parents’ substance use to “Do you sometimes feel discouraged?”, are part of a piece of software called Correctional Offender Management Profiling for Alternative Sanctions, or COMPAS. Using a proprietary algorithm, COMPAS is meant to score a person’s answers, estimate their risk of reoffending, and help a judge determine a sentence based on that risk assessment.
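COMPAS’s actual model is a trade secret, so nothing below reflects how it really works. Purely as an illustration of how a questionnaire-driven risk score could aggregate weighted answers into the low/medium/high bands a judge sees, here is a minimal sketch in which every feature name, weight, and cutoff is hypothetical:

```python
# Purely illustrative: COMPAS's real model is a trade secret, so every
# feature name, weight, and cutoff here is hypothetical.

HYPOTHETICAL_WEIGHTS = {
    "prior_convictions": 0.35,       # criminal-history items
    "parental_arrests": 0.20,        # family-history items
    "neighborhood_crime": 0.25,      # environment items
    "feels_discouraged": 0.10,       # attitude items from the questionnaire
    "employment_instability": 0.10,
}

def risk_score(answers: dict) -> float:
    """Weighted sum of normalized answers (each in [0, 1]) -> score in [0, 1]."""
    return sum(weight * answers.get(item, 0.0)
               for item, weight in HYPOTHETICAL_WEIGHTS.items())

def risk_band(score: float) -> str:
    """Map a numeric score onto the low/medium/high bands a judge sees."""
    if score >= 0.7:
        return "high"
    return "medium" if score >= 0.4 else "low"

# A defendant whose answers skew toward the weighted items lands in a
# high band even though race is never an input.
answers = {"prior_convictions": 0.4, "parental_arrests": 1.0,
           "neighborhood_crime": 0.9, "feels_discouraged": 1.0,
           "employment_instability": 0.8}
print(risk_band(risk_score(answers)))  # -> "high" (score 0.745)
```

Even in this toy version, the bias concern is visible: race is never an input, but items like parental arrests and neighborhood crime can act as proxies for it.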

Rather than making objective decisions, COMPAS in fact reproduces the racial bias of the criminal justice system, activists say. And a study released last week by researchers at Dartmouth found that random, untrained people on the internet could make more accurate predictions about a person’s criminal future than the expensive software.

As proprietary software, COMPAS’s algorithms are a trade secret. Its conclusions confuse some of the people it assesses. Take Eric Loomis, a Wisconsin man arrested in 2013, who pleaded guilty to attempting to flee a police officer and to driving a vehicle without its owner’s consent.

While neither offense was violent, COMPAS assessed Loomis’s background and flagged him as a “high risk of violence, high risk of recidivism, high pretrial risk.” Loomis was sentenced to six years in prison based on the finding.

COMPAS came to its conclusion via its 137-question quiz, which covers a person’s criminal history, family history, social life, and opinions. The questionnaire does not ask about a person’s race. But the questions – including those about parental arrest history, neighborhood crime, and economic stability – appear to be biased against black defendants, who are disproportionately impoverished and incarcerated in the United States.

A 2016 ProPublica investigation analyzed the software’s results in 7,000 cases in Broward County, Florida, and found that COMPAS often overestimated a person’s risk of committing future crimes. These incorrect assessments were nearly twice as frequent for black defendants, who often received higher risk scores than white defendants who had committed more serious crimes.
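The disparity ProPublica measured is a difference in false positive rates: the share of people who did not go on to reoffend but were nevertheless flagged as high risk, computed separately for each group. A minimal sketch of that calculation, using invented data in place of the Broward County records:

```python
# ProPublica's key measurement, sketched with invented data: the false
# positive rate is the share of people who did NOT reoffend within two
# years but were still flagged high risk, computed per group.

def false_positive_rate(flagged: list, reoffended: list) -> float:
    """flagged[i] = 1 if labeled high risk; reoffended[i] = 1 if they reoffended."""
    flags_for_non_reoffenders = [f for f, r in zip(flagged, reoffended) if r == 0]
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Toy per-group data (NOT the actual Broward County numbers).
black_flagged  = [1, 1, 0, 1, 0, 1, 0, 1]
black_reoffend = [1, 0, 0, 1, 0, 0, 0, 1]
white_flagged  = [1, 0, 0, 1, 0, 0, 0, 1]
white_reoffend = [1, 0, 0, 1, 0, 0, 1, 0]

print(false_positive_rate(black_flagged, black_reoffend))  # 0.4 on this toy data
print(false_positive_rate(white_flagged, white_reoffend))  # 0.2 -- half the rate
```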

But COMPAS isn’t just often wrong, according to the new Dartmouth study: random humans can do a better job, with less information.

The Dartmouth research group recruited 462 participants through Mechanical Turk, a crowdsourcing platform. The participants, who had no experience or training in criminal justice, were given a brief description of a real defendant’s age and gender, along with the crime they committed and their criminal history. The person’s race was not given.

“Do you think this person will commit another crime within 2 years?” the researchers asked participants.

The untrained group correctly predicted whether a person would commit another crime with 68.2% accuracy for black defendants and 67.6% for white defendants. That is slightly better than COMPAS, which reports 64.9% accuracy for black defendants and 65.7% for white defendants.
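Accuracy here simply means the fraction of predictions that matched the real two-year outcome. A minimal sketch of that comparison, with invented prediction vectors standing in for the study’s crowd answers and COMPAS outputs:

```python
# Accuracy = fraction of predictions that match the two-year outcome.
# The vectors below are invented; they stand in for the MTurk answers
# and COMPAS outputs that the study actually compared.

def accuracy(predicted: list, actual: list) -> float:
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

actual = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]  # 1 = committed another crime
crowd  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]  # untrained humans' guesses
compas = [1, 1, 1, 1, 0, 0, 1, 0, 1, 0]  # software's predictions

print(f"crowd:  {accuracy(crowd, actual):.1%}")   # 80.0% on this toy data
print(f"COMPAS: {accuracy(compas, actual):.1%}")  # 60.0% on this toy data
```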

In a statement, COMPAS parent company Equivant argued that Dartmouth’s findings were in fact good news.

“Instead of criticizing the COMPAS assessment, [the study] actually adds to a growing number of independent studies that have confirmed that COMPAS achieves good predictability and conforms to the increasingly accepted AUC standard of 0.70 for well-designed risk assessment tools used in criminal justice,” Equivant said in the statement.
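AUC, or area under the ROC curve, is the probability that a model assigns a higher risk score to a randomly chosen reoffender than to a randomly chosen non-reoffender: 0.5 is coin-flipping, 1.0 is a perfect ranking. A minimal sketch of checking a model against that 0.70 benchmark, using scikit-learn and invented scores:

```python
# AUC = probability the model ranks a random reoffender above a random
# non-reoffender; 0.5 is chance, 1.0 is perfect. Labels and scores are
# invented for illustration.
from sklearn.metrics import roc_auc_score

reoffended  = [1, 0, 0, 1, 1, 0, 1, 0]                  # two-year outcomes
risk_scores = [0.9, 0.3, 0.6, 0.7, 0.4, 0.2, 0.8, 0.5]  # model's risk estimates

auc = roc_auc_score(reoffended, risk_scores)
print(f"AUC = {auc:.2f}, meets the 0.70 benchmark: {auc >= 0.70}")
# -> AUC = 0.88 on this toy data
```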

What it didn’t add was that the humans who slightly outperformed COMPAS were untrained – whereas COMPAS is an extremely expensive and secretive program.

In 2015, Wisconsin signed a contract for COMPAS worth $1,765,334, documents obtained by the Electronic Privacy Information Center show. Most of that money – $776,475 – went toward licensing and maintenance fees for the software company. By contrast, the Dartmouth researchers paid each study participant $1 for completing the task, plus a $5 bonus if they answered correctly more than 65% of the time.

And for all that money, defendants still can’t be sure COMPAS is doing its job.

After COMPAS helped sentence him to six years in prison, Loomis attempted to overturn the ruling, arguing that the algorithmic judgment violated his due process rights. The secretive nature of the software meant it couldn’t be trusted, he claimed.

His bid failed last summer when the US Supreme Court declined to take his case, leaving the COMPAS-based sentence in place.

Instead of throwing himself on the mercy of the court, Loomis was at the mercy of the machine.

He might have had better luck in the hands of random internet users.

Gordon K. Morehouse