NYC Algorithm Bill

LEGAL • TECHNOLOGY 02.02.18

Last month the New York City Council passed the first bill in the country to investigate data bias in automated systems used for targeting services and criminal justice.

James Vacca, the bill's sponsor, told epic.org, "If we are going to be governed by machines and algorithms and data, well, they better be transparent."

Algorithms may always contain bias, but their makers don't have to explain how AIs make determinations.

ALGORITHMS USED BY automated decision systems can and do produce bad decisions. Ryan Calo, Associate Professor of Law at the University of Washington, who will present at the Future of Privacy Forum's (FPF) Privacy Papers for Policymakers event on Capitol Hill this February 27th, writes in his winning paper, Artificial Intelligence Policy: A Primer and Roadmap, that the "prospect of bias" can arise from "inequality in application," like a translation engine that matches engineers to men and nurses to women, and from "consequential decision-making" systems that determine material decisions about rights and liberties. But the creators of these algorithms "rarely disclose any detailed information on how AIs have made particular decisions," reports The Guardian.
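As a toy illustration of the "inequality in application" Calo describes, the sketch below uses hypothetical word vectors (invented for this example, not drawn from any real translation engine) and cosine similarity to show how learned associations can quietly pair professions with genders:

```python
import numpy as np

# Hypothetical 3-dimensional word vectors, invented for illustration only.
# Real systems learn vectors like these from large text corpora, which is
# where gendered associations can creep in.
vectors = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.3]),
    "nurse":    np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for profession in ("engineer", "nurse"):
    he_score = cosine(vectors[profession], vectors["he"])
    she_score = cosine(vectors[profession], vectors["she"])
    closer = "he" if he_score > she_score else "she"
    print(f"{profession}: he={he_score:.2f}, she={she_score:.2f} -> resolves to '{closer}'")
```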

Automated decision systems are used in courts all over the country for setting bonds, sentencing, and predicting recidivism. In Wisconsin v. Loomis (2016), the Wisconsin Supreme Court held that a state judge's use of a private risk assessment algorithm to sentence a defendant didn't violate the defendant's rights.

The defendant argued on due process grounds that the state judge used a private company's proprietary software to "help determine his fate," yet he wasn't able to challenge the validity or scientific accuracy of the proprietary algorithm. The supreme court upheld the sentence, deciding that the state judge did not rely only on the software to make the decision but used it along with other tools and resources, the State Bar of Wisconsin reported.

Debate continues over their reliability and value: some say they're no better than non-expert human predictions; others say they can improve decisions but are not a complete fix.

The automated decision system in the Wisconsin case is a risk assessment algorithm also used in other jurisdictions. The program, known as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) and sold by Equivant, formerly Northpointe, has been under scrutiny. MIT Technology Review reports that an analysis conducted by ProPublica showed "significant racial disparities," with the program biased to "falsely flag black defendants." The debate over it continues: researchers at Dartmouth College and Duke University have shown the program is no better at predicting recidivism than people with minimal or no command of criminal justice, reports The Atlantic. On the other hand, Sharad Goel's research at Stanford University finds algorithms can improve decisions, but warns they're not a "complete fix," reports Stanford Engineering. Equivant maintains the Dartmouth research contains errors, questions its testing procedure, and believes that the research "actually adds to a growing number of independent studies that have confirmed that COMPAS achieves good predictability."
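To make the "falsely flag" claim concrete, here is a minimal sketch of the kind of comparison at issue: the false positive rate per group, meaning the share of people who did not reoffend but were still labeled high risk. The records below are hypothetical, not ProPublica's data or Equivant's scoring logic.

```python
from collections import defaultdict

# Hypothetical records: (group, flagged_high_risk, reoffended) -- illustrative only.
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False), ("A", True,  False),
    ("B", True,  True),  ("B", False, False), ("B", False, True),  ("B", True,  False),
]

# False positive rate per group: flagged high risk among those who did NOT reoffend.
false_pos = defaultdict(int)
negatives = defaultdict(int)
for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(negatives):
    fpr = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate = {fpr:.2f}")
```

If the rates differ substantially between groups, one group bears more of the cost of the tool's mistakes even when overall accuracy looks reasonable.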

Data transparency is necessary to make sure we are not building biased systems.

John Giannandrea, Google's AI chief, says, "If we give these systems biased data, they will be biased; it's important to be transparent about the training data and look for hidden biases, otherwise we are building biased systems," reports MIT Technology Review.

Analytic models may always contain subtle errors that go unnoticed and actively discriminate against certain groups.

"Data sets are not objective."  Hidden biases "are as important to the big data equation  as the numbers themselves," says Kate Crawford, co-founder of the AI Now Institute at New York University (NYU) in the Harvard Business Review.  There can be biases, errors that are often “subtle or go unnoticed,”  leading to biased models that “limit credibility” and “actively discriminate against certain groups of people,”  finds Elder Research. Machine learning algorithms only know the data they're given, many things about the world are not explicitly represented in data such as complex human behavior and personal motivations, so "analytic models may always contain bias," finds Elder Research.

Deep Neural Networks, which are driving the AI explosion, are becoming a central concern in AI research.

Neural networks are sets of algorithms, loosely modeled after the human brain, designed to recognize patterns and to cluster and classify data. They classify data when they are given "labeled datasets to train on." Deep Neural Networks (DNNs) are "stacked neural networks" that learn by themselves and from each other in a "hierarchy of increasing complexity and abstraction." They differ from "traditional machine-learning algorithms" because they can perform automatic feature extraction without human intervention. This reduces the amount of time and resources required to describe a large set of data, making data science teams more productive, explains Eclipse Deeplearning4j.org.
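For readers who want to see "training on a labeled dataset" in code, here is a minimal sketch using a small multi-layer perceptron from scikit-learn on a synthetic labeled dataset. It is a generic illustration of the idea, assuming scikit-learn is available; it is not the Deeplearning4j library cited above or any deployed decision system.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic labeled dataset: each point has input features X and a class label y.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: the hidden layers learn intermediate features
# from the raw inputs rather than requiring hand-engineered features.
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

print(f"held-out accuracy: {net.score(X_test, y_test):.2f}")
```

The network learns its internal features on its own, which is exactly why its individual decisions can be hard to explain after the fact.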

They are used for smart photo albums, customer-relationship management (CRM), and personalized entertainment picks. That's all fun and useful, but when it comes to serious aspects of our lives and liberties, "the inability to discern exactly what machines are doing when they're teaching themselves novel skills" has "become a central concern in artificial-intelligence research," says psychologist and data scientist Michal Kosinski, Assistant Professor of Organizational Behavior at the Stanford Graduate School of Business, reports The New York Times.

Elaine Sarduy is a freelance writer and content developer @Listing Debuts