Machine Learning and Policy Analysis

Machine learning, despite its notable constraints, has found incredibly useful applications in medicine, image recognition, and many other areas; it allows computers to perform certain “domain-specific” tasks faster, more efficiently, and more accurately than human beings.

In the years ahead, advancements in brain imaging, quantum computing, and biologically enhanced intelligence will only propel machine learning further toward “true” AI capability, reshaping the globe in the process.

How it Works

Modern machine learning is largely achieved through “neural networks,” algorithms loosely modeled on the brain’s structure. By taking in some number of inputs and then systematically building weighted “neural” connections between those inputs, these algorithms can achieve a complex understanding of data. This allows them to produce some number of outputs as a result: reading in the pixels of an image, for example, making deductions from those pixels, and then identifying the object in the photo.
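For the curious, here’s a minimal sketch of that pixels-in, label-out pipeline, using scikit-learn’s built-in handwritten-digit set. The hidden-layer size and iteration count here are illustrative choices of mine, not settings from any particular system.

```python
# A tiny neural network: 64 pixel inputs -> one hidden layer -> 10 digit outputs.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 images, flattened to 64 pixel inputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 32 "neurons" sits between the pixels and the 10 digits;
# training adjusts the connection weights between them.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(net.predict(X_test[:5]))    # the digits the network "sees"
print(net.score(X_test, y_test))  # overall accuracy on held-out images
```

Even in this toy setting, the learned connections are just large arrays of numbers, a point that will matter shortly.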

These connections can be built with the “supervision” of a human being, meaning we say, “I have x which yields y, so figure out what connects them,” or without it, meaning we say, “I have x, y, and z, but can’t figure out what connects them, so derive that for me.” The sketch below contrasts the two setups. Either way, the algorithm derives the connections itself, and that, as we’ll see, is precisely why machine learning cannot be effectively used for policy analysis.
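A minimal sketch, assuming scikit-learn and toy two-dimensional data fabricated purely for illustration:

```python
# Supervised vs. unsupervised learning on the same toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 2)),   # one cloud of points...
               rng.normal(4, 1, (50, 2))])  # ...and a second, shifted cloud
y = np.array([0] * 50 + [1] * 50)           # labels we may or may not reveal

# Supervised: "I have x which yields y, so figure out what connects them."
classifier = LogisticRegression().fit(x, y)

# Unsupervised: "I have x but no labels, so derive the structure for me."
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
```

In both cases the fitted parameters, not a human, end up encoding whatever “connects” the data.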

The “Black Box” Problem

As I’ve discussed in numerous other technology-, ethics-, and policy-focused articles, this underlying behavior makes these algorithms more or less a “black box”: we know the inputs and the outputs, but not exactly what happens in between.

This can cause everything from extreme vulnerability (e.g. manipulating input data to disrupt a network’s functionality) to dangerous algorithmic bias (e.g. recommending disproportionately longer prison sentences for Black defendants). Many research groups, such as a collaboration between Cornell, Microsoft, and Airbnb, have worked on getting these algorithms to better display their internal processes, but most of that work is still in its infancy.
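To give a flavor of what “displaying internal processes” can look like, here is one widely used post-hoc probe, permutation importance. To be clear, this is my own illustrative choice, not the method of the group mentioned above: it shuffles one input feature at a time and measures how much the model’s accuracy drops, hinting at what the black box relies on without actually opening it.

```python
# Permutation importance: a post-hoc peek at a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times; a large accuracy drop means the model leans on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:3]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```

Note what this doesn’t tell us: whether the model’s reliance on a feature reflects causation or mere correlation.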

Until those transparency goals are achieved, however, these algorithms will remain a mystery. We can’t distinguish between correlation and causation. We can’t be sure whether the model’s statistical processes are in some sense flawed. And we can’t be sure how to fix the flaws even if we could identify them in the first place.
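That correlation/causation failure can be shown in a few lines. In this contrived example of mine, a hidden confounder drives both the feature the model sees and the outcome, and the model dutifully reports a strong “effect” of the feature even though intervening on it would change nothing.

```python
# A spurious "effect": the model cannot tell correlation from causation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
confounder = rng.normal(size=1000)                       # the true driver (unobserved)
feature = confounder + rng.normal(scale=0.1, size=1000)  # what the model sees
outcome = confounder + rng.normal(scale=0.1, size=1000)  # what we care about

model = LinearRegression().fit(feature.reshape(-1, 1), outcome)
print(model.coef_)  # near 1.0, yet the feature has no causal effect on the outcome
```

A human can spot the confounder in ten lines of toy code; buried inside a deep network trained on real policy data, nobody can.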

Policy Constraints

This is a fundamental problem for policy analysis, as scholars like Stanford economist Susan Athey have discussed. A field that depends on clear distinctions between correlation and causation simply cannot rely on techniques that blur or even conflate those properties. But in a world increasingly dependent on technology, this seems like a ridiculous constraint.

We could imagine myriad uses for machine learning in policy-making, from analyzing national security intelligence to gaining new insight into welfare economics. It seems that policymakers both at home and abroad should be regularly using machine learning to inform their decisions for the better. And, logically, this makes sense, but it is blocked by the technical constraints described above.

So until the policy and technology communities push for change on this front, policy decision-making will be unable to leverage this powerful technology for the better. It’s time we advocated for that change.