Record of meeting: Stark, Tony

It’s probably not the best idea to let 3000 machine learning specialists loose on a casino floor, but nonetheless that was the setting for Amazon’s AI conference in Las Vegas, Nevada.

re:MARS is Amazon’s showcase of all their cutting-edge AI, space and robotics technologies. It was a flashy event that lived up to the promised hype, and included an opening keynote by Robert Downey Jr, who was indistinguishable from his Tony Stark character. (RDJ even launched a Stark-esque initiative to clean up the environment using AI, robotics and nanotechnology.)

The conference is a conga line of science fiction come reality. Machine vision for autonomous vehicles. Space junk recycling centres for the moon. Cashier-less shopping. The most sophisticated drones ever. Earth observations with millimetre precision. Co-bots. Moon landers.

All very exciting. Made more so by the fact that many of these innovations are well beyond the whiteboard stage, and have turned into business functions.

Then this happened…

“If you have ever wanted your public policy decisions to be optimised by artificial intelligence, if you have ever wanted your public policy to be driven by historical economic data, personalised to your specific preferences and made transparent and highly collaborative, then this is the session for you.”

Amazon are using machine learning and reinforcement learning techniques to support automated state policy creation. Yep.

The idea is that, with enough data, AI technologies can be deployed to identify Pareto improvements: policy changes that make some outcomes better without making any others worse.
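To make the Pareto idea concrete, here is a minimal sketch of how candidate policies could be screened against a status quo. The policy names and outcome scores are entirely made up for illustration; higher is better on both axes.

```python
# Hypothetical candidates, each scored on two outcomes (higher is better).
candidates = {
    "policy_a": (0.6, 0.4),
    "policy_b": (0.5, 0.7),
    "policy_c": (0.4, 0.3),
}

def dominates(p, q):
    """p Pareto-dominates q: at least as good on every outcome,
    strictly better on at least one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

status_quo = (0.45, 0.35)

# Keep only candidates that improve on the status quo with no trade-off.
improvements = {name: scores for name, scores in candidates.items()
                if dominates(scores, status_quo)}
print(sorted(improvements))  # → ['policy_a', 'policy_b']
```

With realistic data the candidate set would be large and the outcomes many, but the dominance check itself stays this simple.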

Using a case study of law enforcement and gun control, Amazon researchers combined FBI and Census data with data on the types of policies instituted in different states, drawn from a database developed by Dartmouth University. (It’s not perfect, but it is quite extensive — on gun control alone the database covers over 40 policy dimensions: does the state have concealed carry laws? Are assault weapons banned? How long is the minimum waiting period?)
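A database like that lends itself to a straightforward encoding: each state’s laws become a fixed-order numeric vector that can be joined with the FBI and Census features. The field names below are illustrative, not the database’s actual schema.

```python
# Hypothetical policy records for two states (names and fields invented).
state_policies = {
    "StateA": {"concealed_carry": 1, "assault_weapon_ban": 0, "waiting_period_days": 3},
    "StateB": {"concealed_carry": 0, "assault_weapon_ban": 1, "waiting_period_days": 0},
}

# Fixed field order so every state maps to a comparable vector.
FIELDS = ["concealed_carry", "assault_weapon_ban", "waiting_period_days"]

def to_features(policies):
    """Flatten one state's policy record into a numeric feature vector."""
    return [policies[f] for f in FIELDS]

print(to_features(state_policies["StateA"]))  # → [1, 0, 3]
```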

Their model uses deep learning to map economic variables to policy suggestions, then iterates to identify the suggestion that generates the biggest improvement in a targeted outcome.
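The iterate-and-score loop can be sketched in a few lines. The scoring function below is a stand-in for a learned model (Amazon’s actual model is a deep network trained on historical state data, and its weights are unknown here); the search simply tries each single-policy change and keeps the one with the largest predicted gain.

```python
def predicted_outcome(policy):
    """Stand-in for a learned model: weighted sum of binary policy flags.
    These weights are invented for illustration."""
    weights = [0.3, -0.1, 0.5, 0.2]
    return sum(w * p for w, p in zip(weights, policy))

def best_single_change(policy):
    """Flip each policy flag in turn; return (index, gain) for the flip
    with the largest predicted improvement over the current policy."""
    baseline = predicted_outcome(policy)
    best = (None, 0.0)
    for i in range(len(policy)):
        trial = list(policy)
        trial[i] = 1 - trial[i]
        gain = predicted_outcome(trial) - baseline
        if gain > best[1]:
            best = (i, gain)
    return best

current = [1, 1, 0, 0]  # hypothetical state: first two policies enacted
idx, gain = best_single_change(current)
print(idx)  # flipping index 2 gives the largest predicted gain here
```

A real system would search over many simultaneous changes and validate against held-out states, but the core loop — score, perturb, re-score — is exactly this shape.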

Sure, it’s a fascinating thought experiment, and there are more than a couple of complexities that would need to be addressed. But I’m sure 19th-century textile workers were skeptical of the loom in its alpha phase as well.

But even if the algorithm falls (well) short of automating democracy (and it does), there are a number of benefits to this approach. For instance, automatic real-time data analysis would be really valuable. So too would a clear, objective baseline recommendation. Broadly, to the extent that AI is able to create policies that are at least as intelligent as the available data, this would be a good thing.
