Photographic memory

AI technology has been around for a surprisingly long time. In the 1980s, the US Army attempted to use neural networks to automatically detect camouflaged enemy tanks. Researchers trained an algorithm to distinguish between photos of camouflaged tanks in trees and photos of trees without tanks. But when the researchers handed the work over to the Pentagon, the results could not be replicated. It turned out that in the researchers’ data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had simply learned to distinguish cloudy days from sunny days.

Machine learning has come a long way since. At MIT’s recent AI Policy Congress, participants were treated to a demonstration of an algorithm that didn’t tell you what a picture was of, but rather which Instagram user had taken it!

AI raises all kinds of probing questions. When an algorithm makes choices about your ability to repay a loan, your likelihood of developing a medical condition, your chance of recidivism, or your suitability for a job, who is accountable? What right does a citizen have to understand how that decision is made? How do we know the decision was fair?

One of the most fascinating, and most confounding, things about machine learning is that we don’t really understand how algorithms make the choices they do. (To really grasp that point, it’s worth having a play with Google’s hands-on machine learning site, teachablemachine.withgoogle.com.)

Suppose, for instance, you want to write an algorithm that can identify whether it’s looking at a photo of a car, a dog, or a hamburger. Machines learn the difference through an iterative process. Think of the algorithm as a collection of thousands and thousands of units, each with a vote on what it thinks the picture is. When you show the algorithm a picture, some of those units will get it right and some will get it wrong. Increase the weight given to the units that get it right (making their votes count more), and turn down the weight given to those that get it wrong (making their votes count less). Now show the algorithm a new picture and repeat the process. Do that thousands and thousands of times, and eventually you’ll end up with an algorithm that’s pretty good at telling the car from the dog from the hamburger. A toy version of that loop is sketched below.
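Here’s a minimal sketch of that weighted-voting loop in Python. Everything in it is invented for illustration: the “units” are crude single-pixel tests, the “photos” are synthetic lists of brightness values with just two classes, and the update rule is a simple multiplicative reweighting, not anything a production system would use.

```python
import random

random.seed(0)

NUM_UNITS = 1000
NUM_PIXELS = 64  # pretend each photo is a flat list of 64 brightness values


def make_unit():
    """Each unit casts its vote by checking one random pixel against a threshold."""
    pixel = random.randrange(NUM_PIXELS)
    threshold = random.random()
    sign = random.choice([1, -1])

    def vote(image):
        return sign if image[pixel] > threshold else -sign

    return vote


def make_image(label):
    """Toy 'photos': class +1 (dog) images are brighter on average than class -1 (car)."""
    base = 0.65 if label == 1 else 0.35
    return [min(1.0, max(0.0, base + random.gauss(0, 0.15))) for _ in range(NUM_PIXELS)]


units = [make_unit() for _ in range(NUM_UNITS)]
weights = [1.0] * NUM_UNITS

# Training loop: show a picture, turn up the units that voted correctly,
# turn down the ones that voted incorrectly, repeat thousands of times.
for _ in range(2000):
    label = random.choice([1, -1])
    image = make_image(label)
    for i, unit in enumerate(units):
        if unit(image) == label:
            weights[i] *= 1.01  # this unit's vote now counts a little more
        else:
            weights[i] *= 0.99  # this unit's vote now counts a little less


def predict(image):
    """The final answer is simply the weighted vote across all units."""
    total = sum(w * unit(image) for w, unit in zip(weights, units))
    return 1 if total > 0 else -1


fresh_labels = [random.choice([1, -1]) for _ in range(200)]
correct = sum(predict(make_image(label)) == label for label in fresh_labels)
print(f"accuracy on 200 fresh toy photos: {correct / 200:.0%}")
```

No single unit knows anything useful on its own; whatever accuracy emerges comes entirely from how the training loop redistributes the weights.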

It’s through this iterative process that the machine learns to tell pictures apart. At no point has a programmer or engineer told a unit how to cast its vote. Units vote independently of one another, each latching on to some piece of information that cannot necessarily be observed, understood or explained. Further, any biases the algorithm produces (towards, say, race, income or gender) are not the consequence of prejudice in the programmer, but simply what the algorithm has deemed relevant. That makes the data used to train the algorithm enormously important, because the data can itself be biased. (An early version of Amazon’s recruitment algorithm, for example, penalised female candidates, because it went looking for candidates who resembled the company’s existing, mostly male, workforce.)
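The tank story at the top of this post is easy to reproduce in miniature. In the hypothetical sketch below, the training photos leak the label through a spurious feature (think cloudiness), so even a sensible learner latches onto the shortcut and then falls apart on test data where the shortcut no longer holds. All data and feature names are made up.

```python
import random

random.seed(1)


def make_photo(has_tank, cloudy):
    """Toy 'photo' reduced to two numbers the learner can see:
    texture weakly reflects the tank, brightness reflects the weather."""
    texture = 0.5 + (0.1 if has_tank else -0.1) + random.gauss(0, 0.3)
    brightness = (0.2 if cloudy else 0.8) + random.gauss(0, 0.05)
    return (texture, brightness)


def apply_rule(rule, photo):
    feature, threshold, flipped = rule
    return (photo[feature] < threshold) != flipped


def fit(data):
    """A deliberately simple learner: keep the single threshold rule
    that makes the fewest mistakes on the training data."""
    best_rule, best_errors = None, None
    for feature in (0, 1):
        for threshold in [i / 20 for i in range(21)]:
            for flipped in (False, True):
                rule = (feature, threshold, flipped)
                errors = sum(apply_rule(rule, x) != y for x, y in data)
                if best_errors is None or errors < best_errors:
                    best_rule, best_errors = rule, errors
    return best_rule


# Biased training set: every tank photo happened to be taken on a cloudy day.
train = [(make_photo(tank, cloudy=tank), tank) for tank in [True, False] * 500]
rule = fit(train)
print("learned rule looks at feature", rule[0], "(0 = texture, 1 = brightness)")

# Fair test set: the weather is now independent of the tanks.
test = [(make_photo(tank, cloudy=random.random() < 0.5), tank)
        for tank in [True, False] * 500]
accuracy = sum(apply_rule(rule, x) == y for x, y in test) / len(test)
print(f"accuracy on unbiased test photos: {accuracy:.0%}")
```

On the biased training set the brightness shortcut is essentially perfect, so the learner picks it; on the fair test set the same rule is no better than a coin flip, exactly as in the tank anecdote.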

The technology and the ethics of AI go hand in hand. Trying to lead the technology without thinking through the ramifications of the operating environment seems short-sighted, and writing the rules without understanding where the technology is heading is likely to be ineffective.

The international community is scrambling to get ahead of AI ethics, but the US has been notably quiet. There has been some work done by NIST and the GAO on problem definition, but to date the Trump Administration has leaned in favour of market solutions. This week, however, President Trump signed an Executive Order on Maintaining American Leadership in Artificial Intelligence, declaring America’s continued leadership in AI to be of “paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with [America’s] values, policies and priorities.” The Initiative contains six strategic objectives, one of which is the development of “reliable, robust and trustworthy systems built on technical standards that minimise vulnerabilities”. The executive order never uses the word “ethics”, but these terms are clearly related to, and shaped by, ethical considerations.

Post script: Just FYI, the presentation we saw about the Instagram algorithm? It was produced in six weeks by a third-year undergraduate, as part of an assignment…
