Holding algorithms accountable

Every day, more of our lives seem to fall under the control of computer algorithms. Think about the influence that Facebook’s news feed has on your daily life, for example. Do you have any idea why you see one friend’s posts a lot, but not another’s? It’s impossible to know for sure, because Mark Zuckerberg and co won’t explain how their algorithms work. In other words, your Facebook news feed is a black box. Even if it weren’t, chances are the algorithms behind your feed are too complex for anyone but a software engineer to understand anyway.

It’s not just online algorithms that are influencing our daily lives. Algorithms are increasingly being used – sometimes unfairly – by the government and our employers.

In 2015 the Ministry of Social Development (MSD) got into trouble with its new predictive risk modelling tool, which aims to predict future abuse, welfare dependency and the likelihood of a child turning to a life of crime in adulthood. When MSD proposed testing the tool with an “observational study” of 60,000 children born in 2015, it was blocked by Social Development Minister Anne Tolley. Her objection was that the study would observe but not intervene to actually stop abuse during the two-year study period. “Not on my watch,” she scribbled in the margins of a Ministry report. “These are children, not lab rats.”

It’s just as well Tolley intervened, because MSD’s next recommendation might have been to set up a “pre-crime” unit in our police force (as depicted in the dystopian science fiction movie Minority Report). If we’re so sure we’re identifying high-risk individuals with predictive risk modelling, why not lock them up before they commit the crimes?

Of course I’m joking, but there are legitimate concerns with these tools. In a July 2015 blog post, the Office of the Privacy Commissioner pointed out that “if PRM data is disaggregated to an individually identifiable level, then people may be flagged as ‘high-risk’ in public service systems forever more.” It also warned about the “potential for an ‘at-risk’ label becoming a self-fulfilling prophecy.”

Employers can also get into trouble with algorithms. Take Uber, for example. According to a former Uber driver named Mansour, his job could be terminated without notice “if a few passengers gave him one-star reviews, since that could drag his average below 4.7.”
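To get a feel for how fast that can happen, here’s a rough back-of-the-envelope sketch in Python. The numbers are entirely hypothetical – this isn’t Uber’s actual formula, just a simple average over a driver’s recent trips – but it shows how little it takes for a strong rating to slip under a 4.7 cutoff.

```python
# Hypothetical illustration only -- not Uber's actual rating formula.
# Assumes a driver's rating is a simple mean over their recent rated trips.

THRESHOLD = 4.7  # the cutoff Mansour described

def mean(ratings):
    return sum(ratings) / len(ratings)

# A driver with 50 recent trips: 40 five-star and 10 four-star reviews.
recent = [5] * 40 + [4] * 10
print(f"Starting average: {mean(recent):.2f}")  # 4.80

# Watch what a handful of one-star reviews does to that average.
for i in range(1, 4):
    recent.append(1)
    avg = mean(recent)
    status = "below the threshold" if avg < THRESHOLD else "still above the threshold"
    print(f"After {i} one-star rating(s): {avg:.2f} ({status})")
```

On these made-up numbers, just two one-star reviews drag a 4.80 average down to about 4.65 – which, if Mansour’s account is accurate, is enough to put a driver’s job at risk.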

That’s straight out of an episode of Black Mirror, a TV series that takes a dark view of new technologies. The opening episode of series 3 portrayed a society where people routinely rate each other on a five-star popularity scale. The main character is a young woman who starts the episode with a respectable 4.2 average rating. However, things start to go wrong when she tries to boost her rating in order to qualify for a better apartment. Over the course of the episode she receives a string of poor ratings from random strangers – including a one-star rating from a cab driver – and her popularity tumbles, making her life a misery.

While that Black Mirror episode was obviously an exaggerated satire, it’s not far from reality for an Uber driver. Especially if, as Mansour claimed, a few poor ratings could result in immediate termination.

So what can we do to roll back the power of algorithms in our lives? Part of the solution is to call attention to any inequalities that result from algorithm changes. Uber has certainly been on the receiving end of that kind of attention this year – most infamously when a video surfaced of its (now former) CEO Travis Kalanick arguing with an Uber driver. The driver, Fawzi Kamel, complained to Kalanick about how much money he’d lost because of Uber dropping its prices – seemingly at random.

At one point Kamel says this: “But people are not trusting you anymore. … I lost $97,000 because of you. I’m bankrupt because of you. Yes, yes, yes. You keep changing every day. You keep changing every day.”

Uber drivers like Kamel don’t trust the company because of these sudden changes, which are reflected in the algorithms that power the Uber app. To drivers like Kamel, the algorithm is both unpredictable and unfair. (In an ironic twist at the end of that story, Kamel gave Kalanick a one-star passenger rating.)

So, calling attention to algorithmic inequalities is one way to hold companies like Uber accountable. Another way is to program more transparency into the systems from the get-go.

An interesting example of this is a branch of Artificial Intelligence called “Explainable AI.” The algorithms that power AI are among the most powerful – and inscrutable – in the world. Often even the engineers who build AI systems can’t explain why the software produced a particular result. It’s the ultimate black box. But there’s a growing call for AI systems to expose their decision-making process and be ‘explainable’ to non-experts.
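To make the idea concrete, here’s a minimal sketch of what an ‘explanation’ might look like in the simplest possible case – a linear scoring model with named features. Everything here is hypothetical (the features, the weights, the credit-scoring scenario); real AI systems are vastly more complex, which is exactly why explainability is hard. The point is simply that a decision can be reported together with the factors that drove it.

```python
# A toy "explainable" decision: report the verdict alongside the signed
# contribution of each input. All features and weights are hypothetical.
import math

# Hypothetical credit-scoring model: a logistic regression with named features.
weights = {"income": 0.8, "years_at_job": 0.5, "missed_payments": -1.6}
bias = -0.2

def decide_and_explain(applicant):
    # Each feature's contribution is just its value times its weight.
    contributions = {name: weights[name] * value for name, value in applicant.items()}
    score = bias + sum(contributions.values())
    approved = 1 / (1 + math.exp(-score)) >= 0.5
    # The "explanation": features ranked by how strongly they swayed the decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return approved, ranked

applicant = {"income": 1.2, "years_at_job": 0.5, "missed_payments": 1.0}
approved, reasons = decide_and_explain(applicant)
print("Approved" if approved else "Declined")
for feature, contribution in reasons:
    print(f"  {feature}: {contribution:+.2f}")
```

In this toy example the applicant is declined, and the readout makes it obvious why: the missed-payments factor outweighs everything else. Producing that kind of plain-language account for a deep neural network is far harder, but that’s the goal Explainable AI is chasing.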

Whether it’s even possible to make AI fully explainable is debatable. But the European Union is already on board with the idea. The EU’s General Data Protection Regulation, which will come into force in May 2018, places restrictions on automated decisions which “significantly affect” users. It is also widely interpreted as creating a “right to explanation,” so that anyone can ask for an explanation of an algorithmic decision that was made about them.

These kinds of measures will become increasingly important. Just as you’re innocent until proven guilty in a court of law, you should have the right to defend yourself if a computer algorithm makes a decision about you that may adversely affect your life.

It’s up to all of us, as users of these systems, to enforce this. If algorithms are holding us accountable – with their five-star rating systems and the like – then let’s hold them accountable too. Or more specifically, let’s hold the organisations which use these algorithms accountable. It’s only fair.