Somebody is minding the kids

TECHNOLOGY 02.13.18

UPDATED 01.21; 01.24

Artificial Intelligence (AI) is being asked to explain itself.

The Explainable AI Project.

MACHINE LEARNING ALGORITHMS tell us what's likely to happen, but they can't say why. Researchers want answers, so they're asking AI to explain its decisions to people. Questions about the "disconnect" between how humans and machines make decisions have given rise to a call for data transparency and a novel field of research: Explainable AI (XAI), reports The New York Times.

AI systems and their Deep Neural Networks (DNNs) are increasingly involved in vital aspects of our lives, including housing, employment, education, finance, medicine, justice and government. Experts, researchers and AI stakeholders in government and the public and private sectors are focusing on AI's explainability, trustworthiness and impact on core institutions. They are looking for ways to make AI systems explain to humans the reasons for their determinations.
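What might such an explanation look like in practice? The sketch below is not drawn from any of the projects covered here; it illustrates one common XAI technique, the global surrogate, in which a simple, readable model is trained to mimic a black-box model's predictions so its logic can be audited as plain if/then rules. The data and feature names (income, debt and so on) are hypothetical stand-ins for something like a lending decision.

```python
# A minimal sketch of a "global surrogate" explanation, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical data standing in for, e.g., a loan-approval decision.
X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["income", "debt", "credit_age", "inquiries"]

# The "black box": accurate, but its internal reasoning is opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree's splits read as plain if/then rules a person can audit.
print(export_text(surrogate, feature_names=feature_names))
```

The point of the surrogate is the trade-off it makes visible: the shallow tree is less accurate than the forest it mimics, but every rule it prints is a reason a person can inspect and contest.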

DARPA and PARC work on a common ground learning system.

Last July, the U.S. Defense Advanced Research Projects Agency (DARPA) selected the Palo Alto Research Center (PARC) to develop a "highly interactive sense-making system," called Common Ground Learning and Explanation (COGLE), to explain the "learned performance capabilities of autonomous systems."

DARPA announced the XAI program in 2016, concluding that technology is going to "produce autonomous systems that will perceive, learn, decide, and act on their own." That prospect led the agency to look for ways to "enable an end-user who depends on decisions, recommendations, or actions produced by an AI system to understand the rationale for the system's decisions."

The program brings together "the world's top expertise in machine learning, human cognition and user experience," reports Nasdaq Globe Newswire.

Apple — explainability is at the center of the relationship between man and machine.

The Siri team at Apple says explainability is "a key consideration" as it continues to develop Siri. Ruslan Salakhutdinov, Apple's AI research director and an associate professor at Carnegie Mellon University, believes we must be able to explain interactions between people and intelligent machines. "It's going to introduce trust," he tells MIT Technology Review.

AI Now Institute — accountability in rights, labor and safety.

The AI Now Institute at New York University (NYU) was founded last year by Kate Crawford and Meredith Whittaker to "ensure that AI systems are sensitive and responsive to the complex social domains" of "rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure." Partnering with the American Civil Liberties Union (ACLU), the institute is working with "advocates and front-line communities" to address the "concerns of the most vulnerable" as cities grapple "with the opportunities and challenges presented by the use of automated decision systems."

Partnership on AI — implications of AI in law, economics, sociology, government.

The Partnership on AI to Benefit People and Society, whose members include Amazon, Apple, DeepMind, Google, Facebook, IBM, Microsoft and McKinsey, is looking into the "intentional" and "inadvertent" effects of AI on people and society, as well as "aspirational efforts in AI for socially benevolent applications." It is working to facilitate inclusivity and collaboration across an array of fields - law, policy, government, philosophy, economics, sociology, civil liberties - to foster AI best practices addressing safety, ethics and transparency, reports McKinsey.

Mustafa Suleyman, one of its co-chairs and a co-founder of DeepMind, who started out as a social activist, predicts that "the study of the ethics, safety and societal impact of AI is going to become one of the most pressing areas of enquiry over the coming year," reports Wired UK.

The promise of AI — humanistic transparency.

Tom Gruber, Siri's co-creator, said in a TED Talk that combining the abilities of machines and humans makes it possible to create a partnership with superhuman performance. He calls it "Humanistic AI." Instead of asking "how smart can we make the machines," he proposes we ask "how smart can our machines make us."

Tolga Kurtoglu, PARC's CEO, says "the promise of AI is to design and build systems where humans and machines can understand, trust and collaborate together in complicated, unstructured environments," adding that AI's future is "less about automation" and more about a "deep, transparent understanding" between us and machines.

Elaine Sarduy is a freelance writer and content developer @Listing Debuts