Do you like apples?

Technology — writ large — poses a range of ethical issues. The current tech-lash being waged against the Facebooks and Googles of the world suggests that perhaps those issues haven’t been handled appropriately in recent times.

We live in a time not just of rapid technological progress, but of multi-faceted technological progress as well. And while each technology brings with it its own string of ethical questions that need to be addressed, it’s only the ethics of artificial intelligence that seems to attract attention. The OECD last week adopted its Principles on AI. Microsoft, Amazon, Facebook and Google have each embraced the AI ethics platform as well (for better or worse).

What’s special about AI? Perhaps it’s because, up until now, we’ve felt in control? Or perhaps, up until now, we weren’t so aware of the broader implications? Whatever the reason, AI technologies are all at once ubiquitous, scalable, opaque, complex and material. That’s probably sufficient to warrant close attention. (If you need more, read Risse’s account of how AI will impact every aspect of the UN’s Universal Declaration of Human Rights.)

We’re entering (entered?) Life 3.0, a world in which technology-augmented humans share spaces with very sophisticated machines, and surveillance capitalism is becoming an increasingly important economic driver. As we do so, we need to ask: how do we want technology to enter our lives?

Putting aside the more nefarious uses, probably the main ethical concern about AI is bias. Algorithmic bias comes from a number of sources. The data used to train the machine is, by definition, historical, and may be based on prejudiced practices. There may be whole cohorts of people missing from the sample. And even if there aren’t, the selection of features fed into the algorithm will contain implicit assumptions about what matters. Class labels like “creditworthiness”, for example, make implicit and necessary assumptions about lifestyle choices.
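To make that concrete, here is a minimal synthetic sketch (the data, feature names and numbers are all invented) of how a model trained on historically prejudiced decisions can reproduce the bias even when the protected attribute itself is excluded, because a correlated proxy feature carries it in anyway:

```python
# Invented synthetic example: historical bias leaking through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # 0 = majority, 1 = minority (hypothetical)
ability = rng.normal(0, 1, n)              # the thing we actually care about
postcode = group + rng.normal(0, 0.3, n)   # a proxy strongly correlated with group

# Historical decisions: same underlying ability, but the minority group
# was approved less often.
approved = (ability - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train on the historical labels using only the "legitimate" features
# (the group column itself is never shown to the model).
X = np.column_stack([ability, postcode])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical approval {approved[group == g].mean():.2f}, "
          f"model approval {pred[group == g].mean():.2f}")
# The model reproduces the historical gap, because the postcode proxy encodes group.
```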

With the right data (both quantity and quality) and some critical thinking, some of these issues can be controlled. But probably the easiest workaround for artificial unintelligence is to look for an alternative solution. Indeed, the question of whether AI can be an effective solution is not nearly as important as the question of whether AI can be a cost-effective solution.

We need to remember that algorithms don’t think — in the same way that submarines don’t swim. AI is more like your most brilliant, yet literal, friend. They’re wonderful on details, but will often miss the bigger picture. They require a clearly defined task, with a clear understanding of the problem, and loads of relevant data.

Needless to say, vigilant evaluation and assessment of AI systems is crucial.

But even then, once you detect bias, it’s not clear what to do about it. Who is to say what is fair? Consider ProPublica’s oft-cited article on bias in the algorithms used to predict the likelihood of recidivism. Their analysis of some 7,000 arrestees in Broward County, Florida led them to believe there was systematic bias against persons of colour.

On the whole the algorithm was relatively accurate — 61 per cent of those deemed likely to re-offend were indeed re-arrested within two years. Furthermore, the algorithm got it wrong for whites just as frequently as for blacks. The bias came from the fact that the nature of the error was very different — whites were more likely to be mislabelled as low risk (false negatives), and blacks more likely to be mislabelled as high risk (false positives).
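A quick sketch of that arithmetic, using invented confusion-matrix counts (not the actual Broward County figures) that mimic the pattern ProPublica described:

```python
# Invented per-group confusion-matrix counts, for illustration only.
counts = {
    "white": {"tp": 200, "fp": 125, "fn": 180, "tn": 520},
    "black": {"tp": 500, "fp": 315, "fn": 170, "tn": 430},
}

def rates(c):
    fpr = c["fp"] / (c["fp"] + c["tn"])  # non-reoffenders wrongly flagged as high risk
    fnr = c["fn"] / (c["fn"] + c["tp"])  # reoffenders wrongly labelled as low risk
    ppv = c["tp"] / (c["tp"] + c["fp"])  # how often a 'high risk' label was correct
    return fpr, fnr, ppv

for group, c in counts.items():
    fpr, fnr, ppv = rates(c)
    print(f"{group:5s}  FPR={fpr:.2f}  FNR={fnr:.2f}  PPV={ppv:.2f}")
# Similar PPV for both groups, but the false-positive and false-negative
# rates point in opposite directions.
```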

Northpointe, who wrote the offending COMPAS software, acknowledged this discrepancy, but argued that their algorithm was “fair” because the rate of true positives was the same for both groups.

The complicating factor is that all the cells in the confusion matrix are correlated. Achieving more equality in the rates of false positives and negatives will necessarily mean less equality in the rates of true positives and negatives. In other words, getting it wrong better would mean getting it right worse. And that doesn’t seem fair either. You can’t have it both ways — COMPAS would have been flawed no matter what.
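A rough way to see why, in code. From the definitions of PPV and FPR it follows that FPR = p/(1-p) * (1-PPV)/PPV * TPR, where p is a group’s base rate of re-offending. Plugging in hypothetical numbers shows that two groups with equal PPV (Northpointe’s notion of fairness) but different base rates must end up with different false-positive rates:

```python
def implied_fpr(base_rate, ppv, tpr):
    # False-positive rate implied by a group's base rate, given a shared PPV and TPR.
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

ppv, tpr = 0.67, 0.65  # hypothetical values, assumed equal across both groups
for name, p in [("group A", 0.35), ("group B", 0.55)]:
    print(f"{name}: base rate {p:.2f} -> implied FPR {implied_fpr(p, ppv, tpr):.2f}")
# Equalising the false-positive rates instead would force PPV or TPR apart,
# so some notion of fairness has to give.
```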

Something like 20 notions of fairness have been proposed over the last few decades. All are statistically logical; none are mutually compatible.

It’s possible that we simply know fairness when we see it. In 2017, the first few pages of a Google image search on the term CEO turned up nothing but white males in suits. Google has since “corrected” their algorithm, and now the same search for CEO shows much more racial and gender diversity. (A search on the term assistant still shows nothing but young females.) But this has essentially meant telling the algorithm what to say, and that can’t be relied upon as a bias-free strategy either.

There are many other questions of course. Many, many more. Those questions are complex, and their answers are not obvious. The reason we need to pay attention to AI ethics is that we’re living in a Harry Potter world, full of Wizards and Muggles. If the Muggles don’t write the rules, the Wizards will.

Earlier this month I attended a day-long forum on the Ethics of AI, hosted by the Harvard Kennedy School. I’ve tried to summarise the day as best I could, and apologise if this week’s edition reads a little more Socratic than usual.
