Critical thinking is a shield against ‘being hacked’ by algorithms, say authors Simon Mueller and Julia Dhar
Our built-in decision-making capacity isn’t adapted to many of the environments we find ourselves in today. The human brain evolved over millions of years to perceive and prioritise certain pieces of information, identify relationships and make decisions. Our education and experience teach us methods to enhance these built-in processes in some cases, and to overcome them in others. Yet our own common sense, and an ever-growing body of social psychology and decision-science research, shows that we still often go awry. Why? While our evolutionary programming and past experience go a long way towards helping us make good decisions, our professional and personal environments are changing too fast for our education (our software), let alone our brains (our hardware), to keep pace.
Algorithms and digital systems have come to the rescue, helping us make choices in the thicket of options. Amazon, for example, uses our past orders and the revealed preferences of people similar to us to suggest other products we may like. Waze and Google Maps find the fastest way from A to B, taking routes and traffic patterns into account. And Netflix splits its users into more than 2,000 taste groups, based on past viewing habits.
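The “people similar to us” idea behind such recommendations can be illustrated with a toy sketch. This is not Amazon’s or Netflix’s actual method, just a minimal, hypothetical version: score each item the user hasn’t bought by the purchase overlap between the user and the other customers who bought it.

```python
def recommend(user, histories, top_n=2):
    """Suggest items `user` hasn't bought, weighted by how many
    purchases they share with each other customer (a crude
    similarity measure). `histories` maps names to sets of items."""
    own = histories[user]
    scores = {}
    for other, items in histories.items():
        if other == user:
            continue
        similarity = len(own & items)   # shared purchases
        for item in items - own:        # items the user lacks
            scores[item] = scores.get(item, 0) + similarity
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical purchase histories
histories = {
    "alice": {"book", "kettle", "headphones"},
    "bob":   {"book", "kettle", "plant"},
    "carol": {"book", "lamp"},
}
print(recommend("alice", histories))  # ['plant', 'lamp']
```

Because Alice shares two purchases with Bob but only one with Carol, Bob’s plant outranks Carol’s lamp; real systems replace this overlap count with far richer similarity models, but the logic is the same.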
Algorithms have created an immense amount of private and public value. They can help set priorities by analysing emergency calls in real time, detect depression, locate potholes, customise teaching and flag instances of human trafficking. But the potentially dangerous aspects of the technology need to be addressed – such as technological dependence, safety concerns, bias and lack of transparency.
Soon the world will be dominated by algorithms that predict fairly accurately what we want, prefer and will do next. The more data these algorithms gather, the better their predictions will become. It’s not science fiction to suggest that, in the near future, algorithms will know which of our mental (and emotional) buttons to press in order to make us believe, want or do things. In itself, this is not necessarily bad. After all, we use these algorithms voluntarily because they offer us some kind of benefit. They take the load of sifting through options off our shoulders and present us with the most suitable ones – or they present the pieces of information we are most likely to latch on to. And they don’t have to do that perfectly. It’s enough for them simply to be better than what we can come up with ourselves. Google Maps’ routing feature occasionally leads us to a blocked street and we need to take a detour. But in the overwhelming majority of cases, it leads us along the most time-efficient paths, saving us hours, days or even weeks over a lifetime.
From the perspective of their owners (or developers), algorithms are tools to accomplish certain goals, such as selling services or keeping us ‘engaged’ on a website. In pursuit of these goals, algorithms serve us suggestions. Take video-sharing platforms, for example. After you’ve watched a video, YouTube’s default setting (dubbed ‘autoplay’) serves up an endless repertoire of similar videos. You don’t even have to click ‘play next’ – YouTube does that for you. Of course, you can always decide to close the browser window – but that’s harder than it sounds.
This raises two problems: first, we get ‘hooked’ and keep on watching. Second, algorithms tend to feed us increasingly polarising material, because that is the content that glues us to the screen.
The more we use digital systems, the more information we reveal about ourselves. The more information we reveal, the more accurate algorithmic predictions become and the more powerful the algorithms grow. The more powerful these algorithms become, the more we delegate our decision-making authority to them. In doing so, we give up part of our autonomy and become dependent on algorithms. Having learned that Google Maps is better than a taxi driver at selecting the fastest route, we surrender much more important decisions to algorithms: what we read, whom we date, or whom we vote for.
The upshot? These times call for a renewed focus on critical thinking. Mental tactics serve not only as effective tools, but also as a means to reflect on our preconceptions, beliefs and biases. Taking this meta view can be an antidote, a shield against ‘being hacked’ by algorithms.
Human thinking and decision-making have serious competition: machine algorithms. Your individual set of mental tactics serves as a check, back-up and corrective, so that you can maintain your independence and critical thinking.
The Decision Maker’s Playbook by Simon Mueller and Julia Dhar (pictured below) is out now from FT Publishing.