Algorithm accountability through multi-layered impact assessments and collaborative governance, using dignitary, justificatory and instrumental rationales.

LEGAL • TECHNOLOGY 02.24.20

Algorithm Impact Assessments under the General Data Protection Regulation as a "link" connecting risk mitigation to outward-facing rights, forming the substance of explanations.

Future of Privacy Forum held its 2019 Policy Papers event this month. One of the award-winning papers, Algorithm Impact Assessments under the GDPR: Producing Multi-layered Explanations, by Margot E. Kaminski and Gianclaudio Malgieri, focuses on "the unexplored question" of how the two prongs of the GDPR's dual governance approach, individual rights and systemic governance, "interact and overlap."

The paper explores how Algorithm Impact Assessments (AIAs) can function as an accountability tool alongside "GDPR's array" of systemic tools. It asserts that the AIA is "crucial" for "connecting" internal fact-finding and risk mitigation to "outward-facing rights" and for "forming the substance" of explanations.

GDPR's Data Protection Impact Assessment plays a "special role."

Data Protection Impact Assessment (DPIA) acts as a "version" of AIAs in the automated decision-making (ADM) context and plays a "special role" in GDPR's dual system of governance.

Article 35(3)(a) requires a DPIA when there is:

“a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person.”

The Guidelines on ADM mandate DPIAs for all automated decision-making, creating a categorical requirement that applies to decisions even if they are "not wholly automated, as well as solely automated."

GDPR describes a DPIA as "an assessment of the impact of the envisaged processing operations on the protection of personal data."

Article 35(7) (and Recitals 84 and 90) requires the following:

  1. a description of the "processing operations" (the algorithm) and the purpose of the processing;
  2. an assessment of the necessity of processing in relation to the purpose;
  3. an assessment of the risks to individual rights and freedoms;
  4. and "importantly," the measures a company will use to address these risks and demonstrate GDPR compliance, including security measures.

A continual process running "multiple times."

Article 22 provides "both" systemic governance and a "suitable safeguard of individual rights." However, the current focus on the right to an explanation is "too narrow." Algorithm explanations shouldn't be treated as "static statements, but as a circular and multi-layered process."

The assessment must happen before implementation of the ADM and provide risk mitigation before the system is launched. However, GDPR "also envisions iteration" when the "risk posed by a system changes."

The DPIA Guidelines suggest, "as a matter of good practice," that DPIAs be updated throughout the project lifecycle and re-assessed or revised at least every three years.

It's a "continual process, not a one-time exercise,” that involves assessing risk, deploying risk-mitigation measures, documenting their efficacy through monitoring, and feeding that information back into the risk assessment and ongoing process. The DPIA Guidelines envision this process as running “multiple times."

The DPIA is seen as lacking due process, but its procedures call for input from impacted persons.

Compared to other impact assessments found in legal literature, the GDPR text and DPIA Guidelines provide "little specific guidance" on what companies have to "put in a DPIA report in the context of algorithmic impact assessments."

GDPR requires consultation with a Data Protection Officer (DPO). And, instead of public or formal stakeholder consultation, it requires consultation "where appropriate" with impacted individuals.

This creates a way to obtain external input from impacted persons instead of experts and the public.

A company is left to decide for itself if it should submit to regulatory oversight during the assessment. Consultation with independent experts or oversight by a public authority isn't required, unless there is a "high risk."

Companies don't have to make impact assessment results public, but they are advised that doing so is a "good practice" when the public is impacted. Nor do they have to make complete findings public; summaries are sufficient.

Although this has led to discussion that the DPIA lacks "individual due process mechanisms," the authors say that "is not entirely correct" in the ADM context.

DPIA as a form of "meta-regulation" can be seen as a "link" between individual rights and systemic collaborative governance.

DPIA is "both" a tool in systemic governance, and an element of individual rights protection. It links collaborative governance with individual rights "through the imposition of systematic accountability measures" like audits and external review.

Understanding the DPIA as a "nexus" between the two governance approaches "clarifies content" and leads to further observations about the DPIA's potential, implementation and improvement.

As part of GDPR's collaborative governance of algorithms, DPIA is a form of "monitored self-regulation" or "meta-regulation" organized to "change internal company processes."

Understanding DPIA as "source material."

It can also be understood as a documentation requirement or "precursor" to reporting requirements. This establishes records that can be inspected under GDPR's "extensive information-forcing capabilities."

DPIA's "unexplored role" in GDPR's system of individual rights is that it can provide "source material" for individual notification and access rights.

Guidelines frame oversight as necessary for risk-mitigation and "expand suitable safeguards."

The interpreting Guidelines on ADM envision DPIAs as "a form of commitment-making" to protect and "even enable" individual algorithmic due process rights by characterizing them as risk mitigation measures.

Although the general DPIA Guidelines suggest but don't require external consultation, "in the context of algorithm decision-making" external expert involvement and oversight "is more like a requirement," framed as a "necessary risk-mitigation measure for algorithmic decision-making."

Recital 71 requires "technical and organisational measures appropriate to ensure, in particular, that factors which result in inaccuracies in personal data are corrected and the risk of errors is minimised…and that prevents, inter alia, discriminatory effects.”

This expands "suitable safeguards" from those "due-process-like protections" listed in the text to a "broader set of systemic accountability measures, including third party auditing."

The greater implication is that the Guidelines "link" individual rights protection with collaborative governance techniques, by "characterizing third-party and expert oversight" as "a form of 'suitable safeguard' or 'suitable measure' to protect individual rights."

Companies have an obligation to prevent harms to individuals' rights. External oversight on how to address this obligation "is imposed," and this "external oversight itself" becomes a "crucial" aspect, "standing in" for persons to protect them from the harms of an "erroneous system."

"Expert oversight in the DPIA process serves two, or even three, roles: it watches the companies as they come up with ways of addressing problems with algorithmic decision-making, and it reassures individuals that their dignity and other rights are being respected by a fair system. And it also "provides legitimation, or justification."

Shortcomings of the DPIA.

The "biggest shortcoming" is that it doesn't "implicate" a mechanism for mandatory public disclosure, widely recognized as an "essential element" of the tool.

This eliminates a person's ability to choose to "avoid companies with bad policies" and to "elect representatives" who will put laws in place to prevent such practices.

By "failing to mandate public disclosure," it "fails to trigger" public feedback and regulatory feedback, both "essential components of a functioning collaborative governance regime."

It fails to "involve serious stakeholder input, unless companies understand the Guidelines on ADM's emphasis on expert boards and third-party audits to be mandatory."

A "more attenuated way of getting at the same outcome as public disclosure" would be individual notification and access rights. If people knew the logic that was used, it would lead them trust a system.

If someone feels they've been discriminated against, they can disclose the information they received about the system's decision to a civil society group to bring attention to the issue, "triggering market mechanisms or regulatory feedback."

This approach, however, could fail if companies "disaggregate the DPIA process from individual disclosure rights."

Impact assessments as one tool in the ecosystem.

Impact Assessments are not "best understood" as stand-alone mechanisms, but as "one tool" in the "ecosystem" that is "not as effective when deployed alone"; rather, they should be "instead understood as entwined" with other tools.

They "serve as a connection" between collaborative governance and individual rights because the data produced during the process "feed into" what is made known to the public. This "dual role" doesn't only take on error, bias and discrimination. It "legitimizes a system" and respects "an individual's dignity within it."

"As part of a larger system of governance, there are unexplored connections between the GDPR's DPIA and its underlying substantive individual rights and substantive principles."

It's true that many of GDPR's individual rights are "articulated in broad" and "aspirational" terms, but there is a "substantive backdrop" in Recital 71, advising data controllers to "minimize the risk of error and prevent discriminatory effects."

"The oddity is the GDPR’s circularity: the AIA helps not just to implement but to constitute both these substantive backstops and the GDPR’s individual rights. Thus there is a substantive backstop to company self-regulation through impact assessments—but it is a moving target, in part given meaning by affected companies themselves."

Since it "links individual and systemic governance," it can be understood that GDPR's version of the AIA is both "the potential source of," and the "mediator between...multi-layered explanations."

The "collective dimensions of surveillance and data processing."

GDPR's "system of individual rights threatens by itself to miss the impact of surveillance, or, in this case, automated decision-making, on groups, locations, and society at large."

A recent AI Now report gives an example of this problem: providing an "individualized explanation for a single 'stop and frisk' incident in New York City would not have shown that 80% of those subjected to stop and frisk by the NYPD were Black or Latino men."

The Impact Assessment's systemic approach to risk assessment and risk mitigation "requires data controllers" to analyze how individuals and groups are impacted. Therefore, systemic and group-based explanations "uncovered during an AIA can and should be communicated to outside stakeholders." And, "a case can be made that such release is required under the GDPR."
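The stop-and-frisk figure shows why aggregate analysis matters: the pattern only appears when individual decisions are tabulated by group. As a rough sketch of that idea, with entirely hypothetical field names and data rather than anything drawn from the paper, a system-wide review might summarize adverse outcomes by group from a controller's decision log, something no single individualized explanation could reveal.

```python
from collections import Counter

# Hypothetical decision log: (group_label, adverse_outcome) pairs that, in a
# real assessment, would come from the controller's own monitoring records.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in decisions)
adverse = Counter(group for group, was_adverse in decisions if was_adverse)

# Adverse-outcome rate within each group: the system-level view.
for group in totals:
    rate = adverse[group] / totals[group]
    print(f"{group}: {adverse[group]}/{totals[group]} adverse decisions ({rate:.0%})")

# Share of all adverse outcomes borne by each group (the "80%"-style statistic).
total_adverse = sum(adverse.values())
for group, count in adverse.items():
    print(f"{group}: {count / total_adverse:.0%} of all adverse outcomes")
```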

A GDPR-specific model assessment with multi-layered explanations would "not stretch" GDPR's purpose and would "fill a current gap."

"Deliberately widening the lens from algorithms as a technology in isolation," to a "systems embedded in human systems."

The authors call for a model establishing Impact Assessments "specific" to the GDPR that would serve as a "basis" for "multi-layered explanations" of ADM.

It should involve an "interdisciplinary" approach, including technologists, lawyers and ethicists to help define and frame discussions about discrimination or bias.

There needs to be a "deeper exploration" of the link between the corpus created during the DPIA process, and the individual disclosure requirements of the GDPR.

"There is a growing awareness" that contemplating bias and unfairness in the abstract will be "inadequate" in practice. Risks are not only created by the technology itself and the humans who embed their values into it during construction and training.

Risks also arise from "how humans using the algorithm are trained and constrained, or not constrained, in their use of it."

Assessment should be a "continuous" process and ongoing performance evaluation, "especially for those algorithms that change quickly over time."

A model assessment conducted on a system-wide basis to mitigate social harms that go beyond individuals, could "root out discrimination" not only against certain persons, but against "marginalized communities, identifying discrimination patterns that would be impossible to find through individual disclosures alone."

A model that would "explicitly" require assessing performance metrics on an ongoing and system-wide basis, and require metrics be disclosed to external experts, "would not stretch" its purpose, and would "fill a current gap" in algorithmic accountability.

"Striking similarities" between the GDPR text on DPIAs and Guidelines on ADM.

The "required content" of DPIA's could be used as "the basis for disclosures controllers are required to make to individuals."

According to Article 35(7), a DPIA should contain:

  1. a systematic description of the envisaged processing operations and the purposes of the processing; (…)
  2. an assessment of the necessity and proportionality of the processing operations in relation to the purposes;
  3. an assessment of the risks to the rights and freedoms of data subjects (…); and
  4. the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned.

There are "striking" similarities between GDPR's algorithm accountability requirements and the requirements for DPIAs.

1. The requirement for a systematic description of the processing operations in a DPIA is similar to the algorithm transparency duty to clarify the categories of personal data used and how the algorithm profiling is built.

2. The controller's duty to assess the necessity and proportionality of the processing operations in the DPIA is similar to the algorithmic transparency duty to explain the pertinency of personal data used and the relevance of the profiling.

3. The controller's duty to assess the data processing risks and impacts on individuals is similar to the transparency duty to explain the impact of the profiling use in automated decision-making.

4. The controller's duty to establish safeguards of individual rights in case of automated decision-making (under Article 22(3) and (4) GDPR) is similar to the duty to find and describe measures envisaged to address the risks in the DPIA.

The authors conclude that AIAs are "crucial" for establishing algorithm accountability and creating a model for "multi-layered explanations". They suggest that since the GDPR requires "several layers of algorithmic explanation," it follows that data controllers should disclose relevant summaries, using the DPIA process "as a first layer" of explanation, which can be "followed by group explanations and more granular, individualized explanations."

However, they point out more research is needed on how the different layers of explanations: systemic, group and individual "can interact" as tools to develop an AIA "that might be re-used" towards "GDPR-complying explanations and disclosures."
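One way to picture the layering the authors describe, offered here purely as an illustrative sketch rather than any format the paper or the GDPR prescribes, is as a record that bundles a systemic, DPIA-derived summary with optional group-level findings and an individualized explanation. All of the class and field names below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative container for the three explanation layers discussed above.


@dataclass
class SystemicExplanation:
    """First layer: a summary drawn from the DPIA itself."""
    purpose: str
    data_categories: List[str]
    risks_identified: List[str]
    safeguards: List[str]


@dataclass
class GroupExplanation:
    """Second layer: findings about how a group or community is affected."""
    group: str
    finding: str


@dataclass
class IndividualExplanation:
    """Third layer: the more granular, decision-specific explanation."""
    decision: str
    main_factors: List[str]


@dataclass
class MultiLayeredExplanation:
    systemic: SystemicExplanation
    group_level: List[GroupExplanation] = field(default_factory=list)
    individual: Optional[IndividualExplanation] = None
```

On this picture, a controller could release the systemic layer as a public summary, attach group-level findings where the assessment surfaces them, and reserve the individual layer for the affected data subject.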

Binary Governance: Lessons from the GDPR's approach to algorithm accountability.

In a separate paper, "Binary Governance: Lessons from the GDPR's approach to algorithm accountability," Kaminski explores the "quickly growing" divide around individual rights and public accountability.

She examines why both are "not just goods in and of themselves but crucial components of effective governance," looking at the Dignitary, Justificatory and Instrumental rationales for regulation.

"Only individual rights can fully address dignitary and justificatory concerns." Without some form of public and stakeholder accountability, collaborative public-private approaches to systemic governance of algorithms will fail."

The Dignitary or autonomous rationale "resonates" with U.S. law and privacy literature.

It's concerned with respecting the rights of persons to exercise freedom, to operate with autonomy and not to be treated as "exchangeable with someone else."

Algorithmic decisions based on the categories a person falls into, derived from correlations with other individuals, "can fail to treat that individual as an individual." This "violates their dignity and objectifies" them and "their traits, rather than treating an individual as a whole person."

"The subject of an automated decision should be able to explain why an algorithm’s framing is not the full picture and to introduce individualizing, sometimes mitigating, factors an algorithm has not considered."

Privacy literature raises concerns about the notion of the "data double: a shadow self" made up of data points collected without permission, creating "an objectified version of the self."

This dignitary concern "resonates" with U.S. law through the right to sue to protect one's likeness, public disclosure of private facts, and defamation. The Privacy Act and Health Insurance Portability and Accountability Act (HIPAA) "also reflect this dignitary concern."

Americans relate to the dignitary concern of autonomy. ADM based on individual profiling limits choices and freedom and leads to manipulation.

Secret profiling and decision-making can lead to predatory lending targeted at ethnic groups.

Concerns about autonomy and manipulation fueled the "indignation" caused by Cambridge Analytica's manipulation of voters in the U.S. 2016 election and "motivated" California to enact the California Consumer Privacy Act in 2018.

The Justificatory rationale leads to individual process and systemic accountability.

The Justificatory rationale calls for "legitimacy of a decisional system" and "resonates strongly with the rule of law." It requires legitimate justifications, not just explanations. It's not only about fixing errors or bias: the fix is a "byproduct" of "ensuring" that a system is fair, valid and legitimate.

When people "play a meaningful role in the process," they might trust them more.

ADM "trigger a particular set of justificatory concerns" because they "potentially eliminate" the work individuals do to "fill in and circumscribe decisional context."

Human beings contextualize decisions around cultural knowledge and a sense of what's fair or appropriate.

Although some human decisions about context may be included in the design of ADM, they are absent at the end point when the algorithm is applied to a particular individual.

This rationale can "lead to both calls for individual process and calls for systemic oversight and accountability." Both approaches apply because ADM is encapsulates both. It implicates many people and individual decisions.

Due process and explanations say, "Let me show you how and why this decision was made about you and let you contest it."

Third-party accountability says, "Let me assure you that neutral experts are providing oversight to make sure this decision was made fairly and for fair reasons."

Different kinds of ADM may require different forms of process.

U.S. laws contain many examples of justificatory obligations, such as warrant requirements, the Daubert standard for expert evidence, and open government laws like the Freedom of Information Act and the Federal Advisory Committee Act.

The Instrumental rationale is the "dominant" logic but lacks individualized transparency.

The "dominant rationale" for regulation Instrumental. Its concern is to rule out "baked-in" bias, discrimination and errors. This approach emphasizes using systemic accountability mechanisms like audits, ex ante technical requirements, or oversight boards, and "tends to discount the value of individualized transparency or process."

A binary approach combining individual due-process rights and systemic regulation addresses all three rationales.

Dignitary, Justificatory and Instrumental concerns "can overlap" but they also lead to "divergent regulatory solutions" and a single regulatory approach would not "effectively address all three."

Instead, Kaminski proposes a "two-pronged approach" to algorithm governance: "individual due-process rights combined with systemic regulation achieved through collaborative governance (the use of private-public partnerships)."

This approach "goes beyond porting" individual due process and systemic oversight from existing law and data privacy principles to "discuss overall regulatory design."

"In a binary system, systemic accountability measures also serve more than one purpose: they may bolster individual rights by providing oversight in the name of protecting individuals, and provide the accountability necessary to oversee collaborative governance, as companies create and implement rules."

An "individual rights regime" would address dignitary and individualized justificatory concerns. Providing transparency, explanations and participation to address justificatory concerns about legitimacy.

In a system such as this, a fired teacher would receive notice that she was subject to an ADM, an explanation as to why the decision was made, and a right to challenge it.

U.S. Courts have upheld "due process-like requirements" for employment decisions, membership decisions, and decisions to terminate residence in a nursing home.

Congress gives individuals the right to access and correct information private companies have about them.

The Fair Credit Reporting Act (FCRA) requires credit and consumer information disclosure and creates a right to dispute it and to know if it has been used against you.

This "suggests an intuition" by courts and law makers that "significant decisions made by private parties can be made subject to individual process rights.'"

The Individual rights regime, however, is a "limited tool" for "finding and fixing system-wide problems" and "not a particularly good way to correct a complex, opaque, and evolving system."

The role of law is not only instrumental. Law seeks to legitimize and delegitimize, validate and invalidate other decisional systems. It seeks to "protect individual rights even against private actors."

A systemic approach like collaborative governance would affect the design of the algorithms and the organizational design of the human system around them.

"Binary governance is the scaffolding on which" to build the "right balance between hard law and soft, between flexibility and accountability, between bounded rights and room for private innovation."

"The future of good algorithmic governance is a binary system" that is responsive to subsector-specific systems and decisions. And that addresses "dignity and autonomy, systemic and individual legitimacy concerns and error and bias in algorithmic decision-making."

Elaine Sarduy is a freelance writer and content developer @Listing Debuts