Somebody's going to have to hold the kids accountable.


Artificial Intelligence (AI) is being asked to comply with fairness and due process.

Algorithmic Impact Assessments.

LAST WEEK, the AI Now Institute proposed an early-stage framework structured around Algorithmic Impact Assessments (AIAs) for consideration by NYC's planned Automated Decision Systems task force. It will be the first task force of its kind in the country to recommend "how each city agency should be accountable for algorithms and other advanced computing techniques to make important decisions." Its mission: to instill trust in government agencies. The AIA process will challenge agencies to adequately address concerns, rectify practices, and give a voice to the public.

The "broad approach complements similar domain-specific proposals," like Andrew Selbst's work on Algorithmic Impact Statements for predictive policing systems.

AI Now calls for an end to black box systems in core public agencies.

Automated decision systems are integrated into core institutions and affect serious aspects of our lives, yet there is no framework to hold them accountable. "Even many simple systems operate as 'black boxes' as they are outside the scope of meaningful scrutiny and accountability." Effectively, they obscure government decisions from the public — a dangerous precedent that keeps public agencies from their affirmative duty to "protect basic democratic values, such as fairness and due process, and to guard against threats like illegal discrimination or deprivation of rights."

AIAs will shed light on systems before deployment and while in use. They will have four initial goals.

Provide the public with information about systems that decide their fate.

1. To preserve fundamental government accountability and due process by requiring agencies to make public "all existing and proposed" automated decision systems that "play a significant role in government decisions;" describe their "purpose, reach, and potential impacts on identifiable groups or individuals;" and provide a "practical and appropriate" definition of "automated decision making" that comprises software capabilities, "human factors," and "any input and training data."

Give meaningful access to external researchers to audit systems.

2. To provide ongoing access and monitoring opportunities for external researchers from a variety of disciplines to audit systems, assess how they work, and use methods that allow them to identify problems. Affected communities and researchers can develop access programs through a notice-and-comment process. The lightning speed of advancing technologies, evolving research around accountability, and mutable "social and political contexts" in which these systems operate will require that external researchers have the flexibility to "adapt to new methods of accountability as new technologies drive new forms of automating decisions."

Further the ability of public agencies to assess fairness, due process, and disparate impact in their systems.

3. To increase the internal ability of public agencies to self-assess their systems. If they are to ensure public trust, agencies must be experts on their own systems, ascertain how "a system might impact the public, and show how they plan to address any issues, should they arise." This requires a commitment to fairness, transparency, and accountability that starts internally and extends to vendors and outside companies. "Companies that are best equipped to help agencies and researchers study their systems would have a competitive advantage over others."

Affirm due process by giving the public a turn and a voice.

4. To give the public opportunities to engage with the AIA process before, during, and after assessment. Oversight will be required to ensure accountability is not watered down by simply checking a compliance box and moving on. AIAs should give the public opportunities to question and dispute how an agency is implementing algorithmic accountability by providing a path to pursue cases with an oversight body and, if necessary, a court of law.

The research around algorithmic accountability is young and has great promise.

Although the research in this area is young, the Algorithmic Impact Assessment framework can serve as a foundation for defining meaningful algorithmic accountability — and as a great opportunity for the public and city agencies to come together to make New York the "fairest big city in America." In the coming months, NYC Mayor Bill de Blasio will announce the task force. AI Now will publish further research on AIAs and welcomes feedback.

Elaine Sarduy is a freelance writer and content developer @Listing Debuts