Do algorithms make better decisions?

By Professor Kai Riemer, Unit Coordinator, Managing with Technology, The University of Sydney Business School MBA program

Recently, we have heard a lot about algorithms, machine learning and artificial “intelligence” (AI), and the promise these technologies hold for improving decision-making. The argument for the power of algorithmic decision-making rests on two assertions: Algorithms are said to be able to digest greater amounts of data than humans, and thus make more informed decisions. And algorithms are said to be unbiased, and thus make objective decisions. But how realistic are these assertions?

Are algorithms better informed?

First, let us look at the claims about data. Recent research has indeed demonstrated the efficacy of algorithms that implement an advanced form of machine learning, so-called deep learning, for making decisions that rely heavily on pattern recognition. For example, algorithms have been shown to find cancerous cells in vast numbers of images from CT scans faster and with greater precision than human diagnosticians. Microsoft sales agents use deep learning algorithms to decide which lead to contact next, and self-driving cars rely on similar technology for navigating in traffic.

However, in order to judge what such algorithms can (or can’t) do for business decision-making, it is important to gain some understanding of how they work. Unlike traditional algorithms, where the decision logic is implemented as explicit if-then rules, self-learning algorithms have to be trained with existing data, from which they learn to infer relevant patterns when later presented with new data of the same kind. There are no rules. The algorithm learns by adjusting a complex, layered network of “neurons” to respond to patterns.
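To make the contrast concrete, here is a minimal Python sketch, with invented data and a hypothetical loan-approval task (assuming scikit-learn is available). The rule, thresholds and figures are all made up for illustration; the point is only the difference in where the decision logic comes from.

```python
# Minimal sketch: explicit rules vs a trained model (all data invented).
from sklearn.neural_network import MLPClassifier

# Traditional algorithm: the decision logic is a hand-written if-then rule.
def approve_loan_by_rule(income, debt):
    return income - debt > 20_000  # hypothetical threshold

# Self-learning algorithm: no rule is written down. A layered network of
# "neurons" is fitted to past examples and infers the pattern itself.
past_cases = [[50_000, 10_000], [30_000, 25_000],
              [80_000, 5_000], [20_000, 18_000]]   # [income, debt]
past_outcomes = [1, 0, 1, 0]                        # 1 = was approved

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(past_cases, past_outcomes)

# The trained model can only echo whatever pattern the past data contained.
print(model.predict([[40_000, 12_000]]))
```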

What then are the implications? First, such algorithms only work where the problem domain is well-understood and training data is available. Second, they require a stable environment where future patterns are similar to past ones.

It is easy to see, however, that many business decisions are not like this, in particular not those that matter for the future of a business. Once we realise that the future is rarely an extrapolation of the past, but actively created, we can see that algorithms that lock us into the past are not appropriate when it comes to forward-looking decision-making.

Consider hiring decisions: an algorithm would have to be trained with data on which past hires were successful. The trained algorithm could then identify candidates with the same traits as those who were successful previously. Yet hiring more of the same is often not what is best for the company going forward. It will also compromise diversity and be detrimental whenever the organisation has to respond to change or wants to venture into new areas. And let’s not forget that such training data relies heavily on hindsight and is by its nature incomplete, because it includes no data on the candidates who were not hired.

Are algorithms unbiased?

Machine-learning algorithms are only as unbiased as the data with which they were trained. The above example shows that if simply trained with past hiring data, the algorithm would merely perpetuate past biases. Of course, we could ‘clean up’ the training data to remove biases. But who would we entrust this task to? Whoever gets to decide on the training data will embed their biases in the algorithm.
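As an illustration, the following sketch (invented records, assuming scikit-learn) trains a classifier on hypothetical past hiring data in which managers favoured candidates from one school regardless of test scores. The fitted model faithfully reproduces that preference.

```python
# Illustrative sketch of how past bias survives training (data is invented).
# The "school" feature stands in for any biased proxy in the historical record.
from sklearn.tree import DecisionTreeClassifier

# Past hiring records: [test_score, attended_school_A]. Suppose past managers
# favoured school A regardless of score, so the labels encode that bias.
X_train = [[60, 1], [55, 1], [90, 0], [85, 0], [70, 1], [95, 0]]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = hired

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A strong candidate from school B vs a weaker one from school A:
print(model.predict([[92, 0], [58, 1]]))  # prints [0 1] - the bias persists
```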

And crucially, no-one knows, not even the creators of these algorithms, how exactly they reach their decisions. They resemble black boxes that cannot explain themselves or answer the all-important ‘why?’ question in justifying decisions. What we are left with is: “the computer says no!” Entrusting decisions to such algorithms would mean transferring accountability for decisions to those in charge of training them, effectively outsourcing our ethics.
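A short, self-contained sketch of this point (toy data, assuming scikit-learn): once trained, the network’s entire “logic” is a stack of numeric weight matrices with no explanation attached to any individual answer.

```python
# The 'black box' in miniature: inspecting a trained network (toy data).
from sklearn.neural_network import MLPClassifier

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 0])

# The decision "logic" is only layers of learned numbers; there is no
# human-readable rule to point to when asked why a given answer came out.
for i, w in enumerate(model.coefs_):
    print(f"layer {i}: {w.shape[0]}x{w.shape[1]} weights, no 'why' attached")
```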

So, where does this leave us?

To be clear, machine learning works for operational pattern-recognition problems, in particular those involving high volumes of unstructured data. But these algorithms require conditions of ‘business as usual’. Ironically, because of their grounding in past data, this supposedly disruptive technology cannot cope well with disruptive change.

As for decision-making, most situations that matter do not present clearly delineated options to be weighed up. What is needed rather is human judgement and expertise. Decision-makers need to commit to a particular course of action, guided by a clear purpose and a shared story of what we want the future to look like, and to motivate and convince others to follow, rather than mechanistically entrusting algorithms to make decisions based on the past.

And regarding bias, let’s remember that every decision, by definition, involves enacting preferences, valuing some criteria over others. A decision is always biased in some sense; we might not hire on the basis of gender or race, but we might value some personal traits, degree programs or educational institutions more highly than others. Rather than black-boxing decisions in an entity that cannot be held accountable, we should seek to have an open and transparent conversation about which distinctions are in play in making decisions.
