How computers and humans can work together most effectively: the psychology
Many efforts to improve the quality of decision-making have focused on replacing human with machine intelligence. But there are flaws in this approach, says Mark Fenton-O'Creevy, professor of organisational behaviour at the Open University Business School
Back in the mid-20th century, cognition re-emerged as a key field of study in psychology. In a reaction against the then dominant behaviourist perspective, psychologists once again began to speculate about, and research, processes going on within the person, rather than simply relying on observation of their external behaviour. This development in psychological thinking was strongly influenced by the simultaneously emerging fields of information and computer science. As a result, a dominant metaphor in cognitive science became the brain-as-computer.
This analogy left little room for the role of emotion, except as a disturbance of optimal cognitive function, or, at best, as a signalling system to indicate the gap between goals and outcomes. Given this history, it is therefore unsurprising that many efforts to improve the quality of financial decision-making have focused on replacing human with machine intelligence.
However, this approach may have significant limitations: there is growing evidence that humans and computers think in very different ways, and have very different strengths.
As technology and algorithms improve, computers are becoming highly effective tools for tackling small world problems, as they are less subject to failures of probabilistic reasoning.
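A standard illustration of the kind of probabilistic reasoning that machines handle reliably, and humans often get wrong, is the base-rate problem: given a positive result from an imperfect diagnostic test, how likely is the condition? The numbers below are illustrative assumptions, not figures from this article:

```python
# Bayes' theorem applied to a diagnostic-test puzzle: a 'small world'
# problem, because every probability is known in advance.
# Illustrative numbers (assumptions, not from the article):
prevalence = 0.01        # P(condition)
sensitivity = 0.95       # P(positive | condition)
false_positive = 0.05    # P(positive | no condition)

# P(condition | positive) via Bayes' theorem
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {posterior:.1%}")  # about 16%
```

A computer applies Bayes' theorem mechanically and returns roughly 16%; people, neglecting the low base rate, typically guess a figure closer to the test's 95% accuracy.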
However, computers are poorly suited to solving large world problems. This is a domain where humans seem to have a distinct advantage.
Small world problems can be characterised as having the following attributes:
- A well-defined task and goal
- A known set of choices and potential outcomes
- Highly replicable processes
- Known (or at the very least knowable) probability distributions associated with outcomes given any choice.
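Taken together, these attributes mean a small world decision can be written down exhaustively and solved mechanically, by computing the expected value of each known choice over its known outcome distribution. A minimal sketch, with invented choices and probabilities purely for illustration:

```python
# A small world decision: every choice, outcome and probability is known,
# so the problem reduces to maximising expected value.
# The options and numbers below are invented for illustration.
choices = {
    "bond":  [(0.9, 30), (0.1, 20)],      # (probability, payoff) pairs
    "stock": [(0.5, 100), (0.5, -40)],
}

def expected_value(outcomes):
    """Sum of payoff weighted by probability over all known outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

best = max(choices, key=lambda c: expected_value(choices[c]))
print(best, expected_value(choices[best]))
```

Nothing here requires judgement: once the task, choices, outcomes and probabilities are fully specified, the optimal answer follows by calculation. It is exactly this closure that large world problems lack.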
Because small world problems are very tractable to study in the laboratory, they have dominated judgement and decision-making research. However, other fields of study have been more interested in what can be described as ‘large world problems’. These are characterised by:
- Ill-structured problems
- Deep uncertainty (unknown and sometimes unknowable probability distributions)
- Complex and dynamic environments
- Little replicability
- Non-ergodicity (that is, the past is a poor guide to the future)
Placing narrative and meaning-making at the heart of understanding human action gives us a route to understanding the particular advantages that humans have over machines – especially in the conditions of deep uncertainty (ie ‘large world problems’) which humans have evolved to confront. So how can computers and humans work together most effectively?
Computers can be used to ensure consistent, rapid, accurate and bias-free comparison of different action-options in domains that approximate small world problems. Humans, in turn, are responsible for stepping back from these small world models: questioning how well the 'big approximation' of a model fits the real world (including the biases that algorithms or machine learning may build into decisions) and judging the likelihood of a 'big surprise'.
For large world problems, computers can play a role in supporting and enhancing the human capacity to engage in resilient approaches to decision-making (which we know are well-suited to managing complexity, ambiguity, and rapidly changing conditions). A particular role here is to support humans in spotting when they are getting stuck in fixed narratives about the world which are causing them to disregard a changing context.
Will AI change the world? Of course it will. Is the future one in which machines replace human thinking? I don’t believe so, but it is one in which we increasingly understand the differences between how machines think and how people think, and how they may be used to complement each other.