ICE changed an algorithm to prevent the release of detained immigrants

A child holds a sign during a rally in New York.

Momentarily put aside your positions on immigration policy, if you will, and consider this case of alleged algorithmic rigging. Whatever your feelings on people seeking legal status in the US, you may find cause for concern about humanity’s growing reliance on machines to determine liberty.

Last week, the Bronx Defenders and the New York Civil Liberties Union filed a complaint in New York federal district court against local Immigration and Customs Enforcement (ICE) authorities. They allege that the agency adjusted the algorithm it uses to decide when someone should be released on bond. Now, detainees held on civil immigration offenses overwhelmingly remain in custody even when they pose no flight risk or threat to public safety, and regardless of their medical conditions. The advocates explain in a statement:

Federal law requires ICE officers to make individualized custody determinations based on whether the person poses a flight risk or threat to public safety. Since 2013, the agency has used a risk assessment tool that considers factors like a person’s family ties, connections to community, time in the country and community, and criminal history. However, the data shows that this tool, which ICE offices use nationwide, was manipulated, most recently in mid-2017, to remove its ability to recommend anyone be released. The tool can now only make one substantive recommendation: detention without bond.
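The complaint doesn't disclose the tool's internals, but the behavior it describes is easy to picture in code. The sketch below is purely hypothetical: the factor names, weights, and thresholds are invented for illustration. The only documented facts are the inputs listed above and that the release recommendation was removed, leaving detention (or a referral that officers treat as detention) as the only output.

```python
# Hypothetical illustration only: ICE has not published the tool's logic.
# Factor names, weights, and thresholds here are invented for explanation.
from dataclasses import dataclass

@dataclass
class Detainee:
    family_ties: bool       # immediate family in the US
    community_ties: bool    # employment, residence, community involvement
    years_in_country: int
    criminal_history: bool

def assess(d: Detainee) -> str:
    """Score the factors the advocates say the tool considers."""
    score = 0
    score += 2 if d.family_ties else 0
    score += 2 if d.community_ties else 0
    score += min(d.years_in_country, 5)
    score -= 5 if d.criminal_history else 0

    # Pre-mid-2017 behavior, per the complaint: low-risk people could be
    # recommended for release on bond.
    # if score >= 6:
    #     return "release on bond"

    # Post-mid-2017 behavior: the release branch is gone. The tool's only
    # substantive recommendation is detention; a referral path remains,
    # but officers depart from the tool in under 1% of cases.
    if score >= 6:
        return "refer to supervisor"  # effectively detention in practice
    return "detain without bond"

# A long-term resident with family, community ties, and no criminal record
# still cannot receive a release recommendation.
print(assess(Detainee(True, True, 10, False)))  # -> "refer to supervisor"
```

The point of the sketch is structural: whatever the real scoring logic, removing the release branch guarantees the pattern in the data, which is no release recommendations at all, no matter how low-risk a detainee scores.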

Prior to this adjustment, about 47% of detainees were released on bond while their immigration cases unfolded. Between June 2017 and September 2019, however, only 3% were let out, according to the complaint. The adjusted tool almost invariably recommends detention even as immigration-related arrests rise, which means that more and more New Yorkers are being held in custody for civil offenses without a proper risk assessment and with nowhere to turn for help.

Although the algorithm can refer a detainee’s case to the New York Field Office for review, the complaint alleges that local ICE officials effectively treat the algorithmic recommendation as binding, departing from the machine’s determination in less than 1% of cases. As a result, people are held in custody for weeks or months before they see an immigration judge and can make their case to a person rather than a rigged machine.

The humans, it turns out, are a little more nuanced in their decision-making. About 40% of those held without bond by ICE are ultimately released by a judge, the complaint says.
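Putting the complaint’s figures side by side makes the gap stark. The back-of-the-envelope comparison below uses only numbers from the filing; the error-rate framing is our own illustration, not a claim the complaint makes.

```python
# All percentages come from the complaint; the error-rate framing is an
# illustration, not a figure from the filing.
release_rate_before = 0.47  # released on bond by ICE before the mid-2017 change
release_rate_after = 0.03   # released between June 2017 and September 2019
judge_release_rate = 0.40   # no-bond detainees later released by a judge

# Every detainee a judge ultimately releases is someone the tool flagged for
# detention, so by the judges' standard at least 40% of the tool's detention
# recommendations were unnecessary.
print(f"ICE release rate: {release_rate_before:.0%} -> {release_rate_after:.0%}")
print(f"Judges release at least {judge_release_rate:.0%} of those the tool held")
```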

That gap is one indication that the algorithm isn’t doing the job it was designed for. Theoretically, if the tool were genuinely assessing risk, its results shouldn’t be so uniformly extreme, nor should they diverge so sharply from the judges’ assessments.

Realistically, however, the New York ICE office’s algorithm seems to have been adjusted to advance a “no-release policy” that the complainants say violates federal immigration law and the US Constitution’s Due Process Clause. The advocates seek declaratory and injunctive relief on behalf of a class of detainees being held without bond on the basis of these contrived mechanical assessments.

The limits of machine learning?

Their case highlights a growing debate in the US and beyond about reliance on algorithms in justice systems, whether they’re used for the delicate and awkward work of assessing human risk or for nearly anything else of consequence.

On the same day that immigration advocates filed this complaint, the Washington Post ran a piece by Stanford University computational policy researchers arguing that machines are better at making bond determinations in the criminal justice context. Notably, their work shows that humans tend to significantly overestimate risk and can’t assimilate new information as quickly as machines can.

In other words, a machine assessment could be better than a human one insofar as it accurately predicts risk. But for that to hold, the tool would have to be used by people operating in good faith, not repurposed to advance a deliberately restrictive no-release policy.

The Stanford researchers concede that algorithms do have limitations, though, as they can perpetuate the biases of the people who designed them. And they don’t suggest that all decisions be submitted to machines, concluding, “When, whether and how to consider algorithms, however accurate, remain larger issues of policy and ethics—and those decisions are firmly in human hands.”

France, for example, last year banned machine learning for “judicial analytics,” barring the use of statistics to predict and study judges’ behavior. But it seems highly unlikely that a similar resistance to algorithms will prevail in the US, which is why cases like the one brought against ICE in New York are so important.

As people learn more about how machine learning works, for better and for worse, activists and advocates now have to monitor the bots that are making critical decisions about the freedom of human beings.

 
