Black Mirror

“If a worker falls behind [their productivity target], they are subject to disciplinary action. In many warehouses, termination is an automated process… irrespective of the personal circumstances that led to their “mistakes.””

The AI Now Institute’s 2019 Report reads like a series of IRL Black Mirror episodes. Citing examples ranging from “AI-enabled management of workers, to algorithmic determinations of benefits and social services, to surveillance and tracking of immigrants and unrepresented communities”, the report finds that AI technologies are “widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don’t.” Its headline concern is that “AI systems continue to be deployed rapidly across domains of considerable social significance — in healthcare, education, employment, criminal justice, and many others — without appropriate safeguards or accountability structures in place.”

The broad ethical challenges posed by AI technologies concern government, public interest organizations, NGOs, academia, and industry (e.g. Microsoft and Google) alike. In an effort to address them, the last few years have seen a slew of principles documents developed across the stakeholder spectrum (including by Australia). A PwC study of some 59 such documents found a general consensus emerging around accountability, safety, human control, reliability, fairness, inclusion, sustainability, transparency, explainability, stakeholder engagement, compliance and privacy.

But while AI ethics seems a preoccupation elsewhere, the US Government has, to date, been rather quiet on this front (though it did sign up to the OECD’s AI Principles).

Rather, the clear focus of American AI policy has been sustaining and enhancing the nation’s “scientific, technological, and economic leadership position” in the field. The American AI Initiative, for example, had five goals: (1) promote sustained AI R&D investment; (2) unleash Federal AI resources; (3) remove barriers to AI innovation; (4) empower the American worker with AI-focused education and training opportunities; and (5) promote an international environment that is supportive of American AI innovation and its responsible use.

That’s changed. In a draft memo launched at the CES tech trade show in Las Vegas, the White House acknowledged that “[t]he deployment of AI holds the promise to improve safety, fairness, welfare, transparency, and other social goals” (emphasis added) and that “developing and deploying AI requires a regulatory approach that fosters innovation, growth, and engenders trust”. Heads of US Executive Departments and Agencies were given 10 principles to consider when formulating approaches to the design, development, deployment, and operation of AI. These principles have three main goals: “to ensure public engagement, limit regulatory overreach, and, most important, promote trustworthy AI that is fair, transparent, and safe.”

The principles are:

· Public trust in AI
· Public participation
· Based on science
· Risk management
· Benefits and costs
· Flexibility
· Fairness and non-discrimination
· Disclosure and transparency
· Safety and security
· Interagency coordination

These are not too dissimilar from Australia’s AI Ethics Principles or the OECD’s Principles on Artificial Intelligence. The US memo’s particular emphasis on robustness and consultation is probably a function of its intended audience (these are principles to govern the US Government’s use of AI); it is interesting, however, that an explicit principle regarding human or American values is missing.

The next phase of policy development, globally, will be to move from principles to implementation. (This will be the real fight.) Here the US has moved somewhat ahead of others: Federal agencies will be required to submit a memorandum to the White House’s Office of Science and Technology Policy explaining how any proposed AI-related regulation satisfies the principles. While the OSTP lacks red-light/green-light powers, the procedure could still provide the pressure and coordination needed to uphold a certain standard.
