
Bias and secrecy among pitfalls of NYC’s algorithm use, experts say

May 3, 2019 Mary Frost
The city's Automated Decision Systems Task Force had its first hearing on Tuesday. AP Photo/Gerry Broome

Computer algorithms are increasingly making decisions affecting New Yorkers’ lives, but the city is struggling to get a handle on how fair those systems are and who — if anyone — is being held accountable.

In New York City, the NYPD and other city agencies use advanced algorithms called Automated Decision Systems to predict human behavior. Police use the systems to anticipate future crimes and perpetrators, social workers to evaluate potential child abuse risks, and courts to compute “risk scores” to inform decisions affecting sentencing, parole or probation, and more.

Mayor Bill de Blasio set up the Automated Decision Systems Task Force a year ago with a mandate to issue a report within 18 months. A year in, however, the group appears to have gotten off to a slow start, with its members not yet sure how to define Automated Decision Systems.


Even without a working definition, city workers, police and judges are already using Automated Decision Systems to make life-changing decisions.

At the task force’s first public meeting Tuesday at New York Law School, technology and civil rights experts urged the city to take a number of concrete steps to rid the systems of built-in bias, make underlying algorithms transparent and keep humans in the decision-making process.

According to users, these algorithms at their best speed up and bring consistency to complex processes. Last year, for example, the city’s Administration for Children’s Services tested technology in Brooklyn that helps caseworkers investigate family situations and prevent abuse.

“Carrying tablets with these apps and software helps us prioritize our work and complete investigations faster and more efficiently,” Eric Blackwood, a child protective specialist who was part of the Brooklyn pilot phase, said in a statement. Tablets and software have since been distributed throughout the system.

However, experts warned, the algorithms used in automated decision making also have the potential to hurt people and misdirect resources. A ProPublica study concluded, for example, that software used across the country to predict future criminals is biased against black people.
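ProPublica’s finding was, in essence, about unequal error rates: people who never went on to reoffend were flagged as high risk far more often in one group than in another. A rough sketch of that kind of audit, run on invented data rather than the study’s actual records, might look like this:

```python
# Hypothetical audit on invented data: how often were people who did NOT
# reoffend still flagged "high risk," broken down by group?

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("A", False, True),
    ("B", True,  True),  ("B", False, False), ("B", False, False),
    ("B", True,  False),
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were nonetheless flagged high risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return 0.0
    return sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: false positive rate = {false_positive_rate(rows):.2f}")
```

On this made-up sample, the tool looks equally “accurate” overall, yet group A’s non-reoffenders are flagged twice as often as group B’s, which is the kind of disparity the study described.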

At Tuesday’s hearing, task force Chairperson Jeff Thamkittikasem, director of the Mayor’s Office of Operations, said the theme for the night’s forum was “fairness and accountability.” (A second public forum will be held May 30 on the theme of transparency.)

But Thamkittikasem said the group had a hard time defining Automated Decision Systems because the official definition was so broad, “it could even include a pocket calculator.”

The task force does realize, Thamkittikasem acknowledged, that the term applies to more complex processes, like the system the city’s Department of Education uses to match students to schools when making placements.

These systems have the potential to perpetuate bias, he said. “In fact, they can hurt people and they can misalign resources.” They can also “improve benefits and make decisions more fair and equitable for the people they were meant to serve.”
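The school match he cited is one of the better-documented examples: New York’s high school admissions process has been widely described as a student-proposing “deferred acceptance” match, in which students are tentatively placed at their highest-ranked school with room and bumped only if someone the school ranks higher applies. A minimal sketch of that general technique, using invented students, schools and preferences rather than the DOE’s actual system, looks like this:

```python
# Minimal student-proposing deferred acceptance, with made-up preferences.
# An illustration of the general matching technique, not the DOE's code.

student_prefs = {
    "ana":   ["north", "south"],
    "ben":   ["north", "south"],
    "chloe": ["south", "north"],
}
school_rank = {                      # lower number = more preferred by the school
    "north": {"ana": 1, "ben": 2, "chloe": 3},
    "south": {"chloe": 1, "ana": 2, "ben": 3},
}
capacity = {"north": 1, "south": 2}

def deferred_acceptance(student_prefs, school_rank, capacity):
    free = list(student_prefs)                  # students not yet tentatively placed
    next_choice = {s: 0 for s in student_prefs}
    held = {school: [] for school in capacity}  # tentative acceptances

    while free:
        student = free.pop(0)
        if next_choice[student] >= len(student_prefs[student]):
            continue                            # student exhausted their list
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        held[school].append(student)
        held[school].sort(key=lambda s: school_rank[school][s])
        while len(held[school]) > capacity[school]:
            free.append(held[school].pop())     # bump the lowest-ranked student

    return held

print(deferred_acceptance(student_prefs, school_rank, capacity))
```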

Peter Koo, chairperson of the City Council’s Committee on Technology, told the task force members that he has been met with many questions about their progress — and the complaint that progress is slow.

Ultimately, the goal is to “bring transparency to an overlooked process that has existed behind closed doors, but one that has tremendous power in city government,” Koo said. Algorithms are now responsible for everything from school allocations to rezonings, he added.

At a City Council committee hearing on April 4, Koo said that there have been studies “that detail situations in which algorithms produce biased outcomes. In addition, algorithms remain hidden from the public view, making it unclear when and why agencies use algorithms.”

“The developers do not disclose their predictive models or algorithms, nor do they publish the source code for their software, leaving little transparency for the public,” Koo said at that meeting.

Last year, the city lost a lawsuit, brought by the Brennan Center, in which the state Supreme Court ordered the NYPD to produce records about the testing, development and use of predictive policing tools.

According to the Brennan Center, these analytic tools rely on historical policing data to generate their predictions. Without transparency, the organization said, the software may simply “recreate and obscure the origins of racially biased policing.”
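The concern amounts to a feedback loop: a tool trained on where arrests were recorded sends more officers to those same places, where they then record still more arrests. A toy simulation, with invented precincts and numbers and no connection to any real NYPD system, shows how the loop concentrates attention even when the underlying crime rate is identical everywhere:

```python
# Toy feedback loop: historical arrest counts drive patrol allocation, and
# patrols drive new recorded arrests. Precincts and numbers are invented.
import random

random.seed(0)
true_crime_rate = {"precinct_1": 0.10, "precinct_2": 0.10, "precinct_3": 0.10}
recorded = {"precinct_1": 50, "precinct_2": 20, "precinct_3": 20}  # skewed history

for year in range(5):
    # "Prediction": send the most patrols wherever the most arrests were recorded.
    total = sum(recorded.values())
    patrols = {p: int(100 * recorded[p] / total) for p in recorded}
    # New arrests get recorded only where officers are present to record them,
    # even though the true crime rate is the same in every precinct.
    for p in recorded:
        recorded[p] += sum(1 for _ in range(patrols[p])
                           if random.random() < true_crime_rate[p])
    print(f"year {year}: patrols={patrols}")
```

Because the skewed history determines where patrols go, the precinct that started with the most recorded arrests keeps attracting the most enforcement, and the data never gets a chance to correct itself.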

Concrete suggestions from the public

New York City is the first city to establish such an oversight task force. The inaugural panel is therefore charged with hammering out definitions and priorities, point by point.

But those who testified had very immediate concerns and specific recommendations.

Janai Nelson, associate director-counsel of the NAACP Legal Defense Fund, said her organization was worried about the “discrete, durable and racial impact Automated Decision Systems threaten to impose in the area of policing and law enforcement.”

The NAACP LDF has “deep concerns about NYPD’s increasing reliance on machine-learning algorithms based on biased data, which threatens to exacerbate inequity in NYC,” she said.

Nelson suggested the city adopt a uniform definition of Automated Decision Systems. She referred to an Aug. 17, 2018, letter in which a group of experts recommended the following definition:

“An ‘automated decision system’ is any software, system, or process that aims to aid or replace human decision making. Automated decision systems can include analyzing complex datasets to generate scores, predictions, classifications, or some recommended action(s), which are used by agencies to make decisions that impact human welfare.”
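By that definition, even a short scoring script would count as an Automated Decision System. A hypothetical illustration, with invented field names, weights and thresholds that do not correspond to any actual city tool:

```python
# Hypothetical example of software that would meet the proposed definition:
# it turns case data into a score and a recommended action. All field names,
# weights and thresholds here are invented for illustration.

def risk_score(case: dict) -> float:
    """Weighted sum of a few case attributes."""
    return (2.0 * case["prior_reports"]
            + 1.5 * case["missed_appointments"]
            - 1.0 * case["support_services_enrolled"])

def recommended_action(case: dict) -> str:
    score = risk_score(case)
    return "escalate for review" if score >= 4.0 else "routine follow-up"

print(recommended_action({"prior_reports": 2, "missed_appointments": 1,
                          "support_services_enrolled": 1}))
```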

Nelson said that biased policing was already incorporated into the city’s algorithms. For example, “The NYPD conducts investigations relying on a secret database that inaccurately designates thousands of New Yorkers as members of gangs or local street crews, often without informing the individual, or offering any due process protections.”

“The NYPD data sets are infected,” Nelson added. The algorithms used in predictive policing “will reinforce these disparities.”

Human intervention

Andrew Nicklin, futurist-at-large at Johns Hopkins University’s Center for Civic Impact, threw a number of considerations at the panel.

“Is there human interaction to review decisions?” Nicklin asked. “In the criminal justice system, for example, judges may follow the recommendations of the Automated Decision System without understanding what went into making the decision.”

He suggested that the panel “operationalize” human intervention on a regular basis, prioritizing interventions by looking at outcomes and those affected by algorithmic decisions. Since the task force can’t do everything at once, he suggested attacking issues in tiers.

“Will people lose access to housing or will it affect their incarceration? Will they be kept in prison longer?” he asked. “We might want to focus on criminal justice first.”

Another issue, Nicklin said, is how to address systemic discrimination built into the algorithms. Contractors hired by the city should cooperate in algorithm evaluations as part of their contracts, he suggested. Intellectual property considerations are important, but source code transparency is also necessary. “It’s important to know how decisions are made over time,” he said.

And to keep politics out of it, Nicklin suggested setting up a framework, separate from the Mayor’s Office, through which the public and the media could raise concerns and questions about automated decisions.

Correction (4 p.m.) — This article has been updated with the correct acronym for the NAACP Legal Defense Fund.  

