National AI Initiative


Administration announces national AI initiative. Researchers, legal scholars call for just, sector-specific frameworks supported by robust accountability.

This month, the U.S. joined the 18 other countries that have national AI initiatives when an executive order created the American AI Initiative. It "orders the federal government to direct existing funds, programs, and data in support of AI research and commercialization," Wired reports.

Although the U.S. "leads the world" in AI technology, it has no "high-level strategy to guide American investment and prepare for the technology's effects." Lynne Parker of the White House Office of Science and Technology Policy told Wired that "there are a number of actions that are needed to help us harness AI for the good of the American people." Parker also worked with the Obama administration on the societal implications of AI.

A senior administration official told reporters the initiative will have five "key pillars": research and development, infrastructure, governance, workforce, and international engagement. There are no specific details yet, but the administration "expects to release more information over the next 6 months."

"What we have to watch out for," University of Washington law professor Ryan Calo told Wired, is whether officials are "aware enough of its social impacts" and "how to address the problems" AI creates in the public domains of criminal justice and civil liberties.

What is needed, he added, is "a model for industry, by requiring AI algorithms be tested for bias and to be open to external auditing."

Although the executive order "correctly highlights AI as a major priority for U.S. policymaking," said Kate Crawford, co-director of AI Now, there is concern that it appears to lean toward industry and that, so far, it doesn't include researchers or civic leaders. "Passing mentions of privacy and civil liberties don't dispel worries about the Trump administration's 'troubling track record' on these issues."

The past year was a "dramatic" one for AI, as examined in AI Now's 2018 Report, from revelations of Cambridge Analytica "seeking to manipulate national elections" to reporting that Immigration and Customs Enforcement (ICE) modified its own risk assessment algorithm "to only produce one result": to "detain" 100% of immigrants in custody.

Notably, Forbes reports, "the initiative missed an opportunity to address two additional issues: immigration and data collection."

The AI Accountability Gap.

AI Now's 2018 Report outlines strategies for "moving forward," based on the "latest academic research."

It examines how the "stark cultural divide" between the "highly concentrated AI sector" and the "vastly diverse populations" where AI systems are unfolding is "growing larger." There is "growing concern about bias, discrimination, due process, liability, and overall responsibility for harm."

The technical community has offered many definitions and strategies for "algorithm fairness," leading to "new algorithms and statistical techniques that aim to diagnose and mitigate bias." But these techniques don't address bias's "deep social and historical roots."

There needs to be collaboration between technical and non-technical disciplines, and a "tight connection to real world impact" that examines "social impacts at every stage," backed by "robust accountability frameworks," not merely "privacy certification programs."

Systems are being tested on live populations with little oversight.

Silicon Valley's "move fast and break things mentality" has led to "rampant testing of AI systems 'in the wild' on human populations."

Recently, two people died in connection with autopilot technology. An oncology algorithm tested in hospitals worldwide was found to have been recommending "unsafe and incorrect cancer treatment." An AI-education vendor included "social-psychological interventions" in commercial learning software, tracking student response to "growth-mindset" messages "without the consent or knowledge of students, parents, or teachers."

AI amplifies mass surveillance and raises civil rights concerns.

AI "raises the stakes" in mass surveillance in three areas: "automation, scale of analysis, and predictive capacity." Automated surveillance capabilities exceed "the limits of human review and hand-coded analytics"; they "make connections and inferences" rarely made "before their introduction"; and they make "determinations" about "individual character."

Facial recognition "raises particular civil liberties concerns." Systems detect individual faces in images or video and are able to conduct "sophisticated forms of surveillance" like "automated lip-reading" that can "observe and interpret speech from a distance." They can link faces with other personal data like credit scores, criminal records and social graphs.

Need for standards.

No federal legislation seeks to provide standards, restrictions, requirements, or guidance regarding the development or use of facial recognition technology.

Microsoft President Brad Smith has called for a "principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment."

There are "piecemeal" laws that "don't specifically address" it. Illinois passed the Biometric Information Privacy Act (BIPA) in 2008, requiring companies to obtain a written release and notify individuals of the specific purpose and length of biometric data collection. Facial recognition was not widely available in 2008, but many of BIPA's "requirements" are "reasonably interpreted to apply."

This month, in the "landmark" case Rosenbach v. Six Flags Entertainment Corp. (2019), the Illinois Supreme Court held that BIPA requirements apply even when no separate harm occurs, such as a data breach, hack, or physical or psychological injury, because "harm to privacy meets the legal requirements for 'harm.'" Civil rights activists have said the ruling "should serve as a basis" for similar laws, reports CPO Magazine.

The tech isn't reliable enough yet.

Rick Smith, CEO of Axon, a law enforcement tech company, said “accuracy thresholds” are not “where they need to be to be making operational decisions off the facial recognition.”

The American Civil Liberties Union (ACLU) and UC Berkeley tested Amazon's facial recognition tool, finding the system "falsely matched 28 members of Congress with mugshots." The false positive rate for non-white members of Congress was almost 40 percent, compared with only 5 percent for white members. Amazon admitted that results "can be significantly skewed by using a facial database that is not appropriately representative."

The New York Police Department and IBM developed a system with a "custom feature" that included "ethnicity search." New Yorkers were never told of "the potential use of their physical data for a private company’s development of surveillance technology" even though New York was the "primary testing area."

The sketchy science of affect recognition.

Affect recognition is a subset of facial recognition. It "promises a type of emotional weather forecasting" that can detect inner emotions or "hidden intentions" by analyzing huge amounts of facial images to detect “micro-expressions,” and then map them to “true feelings.”

Machine learning can "intensify classification and discrimination," even when theories behind it "remain controversial among psychologists."

Affect detection raises "troubling ethical questions about locating the arbiter of someone's 'real' character and emotions outside of the individual."

Psychologist Paul Ekman grouped emotions into basic classes such as "anger, disgust, fear, happiness, sadness and surprise," which he said are fixed, universal, identical across people, and observable "regardless of cultural context."

However, "considering emotions in such rigid categories and simplistic physiological causes is no longer tenable," argues psychologist Lisa Feldman Barrett.

Researchers raised concerns about a "reemergence of physiognomic ideas in affect recognition applications." The pseudoscience of physiognomy, which fell out of favor following "its association with Nazi race science," claims "facial features can reveal innate aspects of our character and personality."

Although Ekman's theories have been found "not to hold up under sustained scrutiny," AI researchers take them "as fact," using them as a "basis for automating emotion detection."

Structural and systemic problems in automated decision systems (ADS).

Governments are "routinely adopting" untested ADS to increase productivity and decrease costs, but they cannot "explain automated decisions, correct errors, or audit the results of its determination" because the systems are "shielded" by trade secret law, according to AI Now's Litigating Algorithms report.

Although the "playbook is still being written," a majority of judges have ruled that "the right to assert constitutional or civil rights protection outweighs any risk of intellectual property misappropriation," and that failing to provide the "opportunity for public notice and comment" was "potentially unconstitutional."

"Robust and meaningful community engagement" and the ability to "address structural and systemic problems" need to happen before a system is designed and implemented. AI Now's Algorithmic Impact Assessment (AIA) framework gives affected communities the "opportunity to assess and potentially reject" systems that are not acceptable.

Few ADS allow those "trained to understand the context and nuance of a particular situation" to exercise "human discretion" or to "intervene or override" a determination. Harms are often "only recognizable as a pattern across many individual cases."

These systems have been behind "dramatic reductions" in home care for the disabled. Legal Aid of Arkansas attorney Kevin De Liban sued the state, "eventually winning a ruling that the new algorithmic allocation program was erroneous and unconstitutional. Yet by then, much of the damage to the lives of those affected had been done."

The Houston Independent School District used a third-party ADS to decide matters of teacher employment, including promotions and terminations.

No one in the district could explain or replicate the determinations, even though they had access to all the underlying data. Teachers who contested determinations were told "the system was simply to be believed and could not be questioned."

The Houston Federation of Teachers filed suit alleging civil rights and labor law violations. The judge ruled that use of the ADS in "public employee cases could run afoul of constitutional due-process protections, especially when trade secrecy blocked employees' ability to understand how decisions were made." The case settled, and the district abandoned the ADS.

What next?

"Fairness formulas, debiasing toolkits, and ethical guidelines" were "rare" only three years ago, but today they are "commonplace," showing that there has been progress, but more needs to be done.

Fairness must include structural, historical, and political contexts; consider systems on local and global scales; examine hidden labor; promote interdisciplinary perspectives; address race, gender, and power in workplaces and classrooms; provide greater support for lawyers and civil activists who represent affected individuals; build broad coalitions that support oversight; and protect conscientious objectors.

Elaine Sarduy is a freelance writer and content developer @Listing Debuts