How Machines Make Decisions Is A Human Rights Issue

Aurum Linh
3 min read · Jan 18, 2021

We’ve launched a project at the intersection of machine learning and human rights: Atlas Lab, a platform that helps legal professionals better understand how machine learning and algorithmic decision-making affect human rights. Here’s why you should care.

Technology is not just an industry; it is a tool that has transformed every industry. What does that actually mean for our relationship to technology, as the people who are building it?

Let’s take a step back. All tools get their meaning from the context in which they are used. If you’re building a house, a hammer is essential; it’s hard to imagine building one without it. But if your goal is to cut a mirror into a circular shape, a hammer is not only useless, it is destructive.

Of course, a hammer is an oversimplified analogy; it doesn’t capture the real power that technology holds. The damage a hammer can do is limited by its power, scale, and use. The damage technology can do is not. With that in mind, let’s ask: what is the context in which all technologies exist?

Technology exists in a social and political context from which it cannot be meaningfully separated.

People outside the technology bubble often assume that machine decision-making is a higher intelligence, free from bias. The reality is that biased people do not make neutral technology. Systems of oppression, and the anti-Blackness, racism, and colonialism that fuel them, do not go away when technologies are designed, developed, and deployed.

Humanity’s relationship to free will has shifted before: from carrying out a divine entity’s will to finding power in making our own decisions. Another shift is happening today, this time from humans to algorithms.

This shift from human to machine decision-making was an attractive idea at first. However, we have seen time and time again that machine decision-making is not neutral and is inherently biased.

Law has not kept pace with the rapid growth of technology. Lawmakers struggle to grasp how machine learning works and which problems need to be addressed. This inaction reflects not a lack of will so much as a lack of shared knowledge between industries.

How machines make decisions is a human rights issue. That’s why we built Atlas Lab: an effort to help close this knowledge gap by educating and connecting lawyers working on the front lines of AI and human rights litigation.

As part of closing this knowledge gap, Atlas Lab has resources for technologists as well. We feature explainer articles on the litigation process and human rights law, along with a small collection of summaries of court decisions at the intersection of these two worlds.

In the coming months, outside my work with DFF and Mozilla, I’ll be organizing an event series focused on the stories of people affected by algorithmic decision-making. The series will revolve around the criminal justice system, immigration agencies, and child welfare services. To stay updated, RSVP for the event series on Atlas Lab and follow along on Twitter and LinkedIn! Please contact me at aurum@atlaslab.org if you notice anything incorrect or missing from our explanations.

