
Science Without Anguish
Michael Coleman's Blog


Heuristics: making sense of a complex world

10/6/2025

Nobel Prize winner Daniel Kahneman and his colleague Amos Tversky changed how psychologists, economists and politicians think about how we think. In 1974, they published a landmark article on flaws in human decision-making, titled ‘Judgment under Uncertainty: Heuristics and Biases’. Using familiar examples from everyday life, they communicated what psychologists had recognised for decades and quantified it with simple but ingenious experiments. Their work helped spark the rise of the self-help industry and, along with their later work on loss aversion, earned Kahneman the Nobel Memorial Prize in Economic Sciences (after Tversky’s death). He also disseminated these findings in many easily accessible forms, such as his books ‘Thinking, Fast and Slow’ and ‘Noise’ and many podcast interviews.
Crucially, they warned, “Experienced researchers are also prone to the same biases”. Indeed, Kahneman describes how they developed their hypotheses by looking at, and laughing at, the flaws in their own thinking, an inspirational example of how humility strengthens research.
 
So, half a century on, what can we learn from Kahneman and Tversky about the flaws in our own thinking? By becoming more aware of them, can we strengthen not only our research but also our wellbeing as we carry out that research? Can we think more critically about our own thinking?

The value and cost of simplification
You may have already fallen into a classic thinking trap right at the start of this article. To a scientist, the words ‘Nobel Prize winner’ scream ‘prestige’, ‘eminence’, the pinnacle of human thinking. They spark our interest in learning from the winner. In a world with more information than we could ever take in, let alone understand, labels like this are one of the ways we simplify it to make any kind of sense. ‘Nobel Prize winners are smart people’, we tell ourselves. But some also espouse unfounded and profoundly immoral ideas, so perhaps it’s not quite that straightforward. If we become dazzled by accolades, and lose sight of critical thinking, the world of certainty that we seek becomes one of chaos and confusion instead.
 
All models are wrong, but some are useful 
Our understanding of the world is based around patterns that we observe and our prediction that they will recur. We simplify its true complexity using heuristics that work most of the time. For example, most Nobel Prize winners are indeed smart people, possibly all, even if some have dumb ideas too. We use heuristics constantly, often without realising. Just like models in research, they are essential for understanding the world, but all models have limitations. The danger comes if and when we believe them to represent the world as it truly is, known as naïve realism. Thankfully, few people share James Watson’s racist views, but his example illustrates perfectly the risks of becoming blinded by accolades and endorsements. The same applies to ‘big’ journals, prestigious institutions, the ‘big names’ in any research field. As soon as we rely on prestige alone, we risk losing the plot.
 
In their 1974 article, Tversky and Kahneman highlighted three types of heuristics: representativeness, availability and anchoring. So how does each of these play out in the scientific workplace? This article discusses representativeness and the next will cover availability and anchoring.

What’s the evidence?
Representativeness is our tendency to base our understanding and predictions on limited evidence. We cannot possibly know everything we need for a perfect decision. We have to choose between using our intuition to infer a common pattern from limited data and ‘paralysis by analysis’, a futile attempt to understand everything before we take action.

Hypothesis or conclusion?

Intuition is ultimately educated guesswork. A hypothesis! As scientists we know the dangers of confusing hypothesis with conclusion, opinion with fact, but if we can never have all the data we need, we have no choice but to follow our gut sometimes. The problem is when we don’t know we’re doing that, and we slip into ‘black-and-white thinking’ about a greyscale world. We misjudge the level of uncertainty, risk and noise that exist, leading us to focus on the wrong solutions. We wrongly attribute causality when things go wrong, or even when they go right. We unnecessarily beat ourselves up or look for someone to blame, sparking needless conflict.

The ’Reviewer 2’ phenomenon
Tversky and Kahneman also showed that we pay disproportionate attention to how we perceive people’s characters, while discounting other data. We don’t know, for example, who our reviewers are, and we lack the nonverbal cues so vital for understanding intention. But as soon as they suggest an additional experiment that cannot feasibly be done, we so often head straight down the rabbit hole of thinking they are trying to block our paper! A simple smile or frown might have enabled us to judge this better, but we don’t have that. All we have is a page of disembodied words sent via a third party, yet we think we know! Many of us have done this, including me, only to realise on re-reading their comments the next morning that the reviewer may actually just be enthusiastic and want to know more.
It gets even worse if we begin to think about ‘the reviewers’ as if they were a separate species, forgetting that we are also reviewers, as is the colleague we enjoyed chatting with over coffee at a conference. As in any area of life, a tiny minority of reviewers are genuinely obstructive, a phenomenon I’ve seen maybe just two or three times in 30 years, and which hasn’t always prevented a successful outcome if handled calmly. The vast majority are doing their best under a lot of pressure, just like us. Similarly, we often form unfavourable opinions about ‘administrators’, ‘funders’, ‘the competition’ or ‘editors’ based on anecdotal experience alone, with little or no personal interaction, and little understanding of the pressures they work under. The siloed structure of our institutions does not help, but so often we surrender our thinking to the barriers this creates instead of working to overcome them with a Zoom call or a chat over a cup of tea. Apparently, we think this way is easier!

Too judgemental
Our overconfidence in simple explanations leads us to expect individual characters to be consistent and unwavering. We excuse our own day-to-day fluctuations with “I didn’t sleep well” or “I have a lot going on right now” but expect others to be always on top of their game, an example of the self-serving bias discussed further in a later article. Similarly, we label people as ‘having what it takes’, or not, at a young age, despite our own stuttering progress at the same age and how much we have developed since. Why can they not also grow?

Guesswork
Similar uncertainty applies in other areas of our research life: which lab to work in, who to recruit, where to send our paper, what to write in our grant application, what to prioritise this week, or whether to develop that new idea into a project or stay focussed on what we are doing. The only correct answer is: “I cannot know for sure, but I’ll take an educated guess”. The idea that we do not know for sure scares us, so we try to ignore it, thinking that we do know, until reality comes along and upsets us by contradicting our expectations.

Thinking like a scientist to reduce anguish
The great irony here is that, as scientists, we know what to do. We just don’t apply it to our research lives in the way we do to our experiments. In our science, we do not draw conclusions from insufficient data, yet we do exactly that when guessing the intention of a reviewer. We recognise the concept of biological or technical variability, accepting that multiple replicates are needed to get a true picture, yet we see a single grant or paper rejection as questioning our worth as a scientist. We know what it means to have a working model that we modify as new data emerge, a form of Bayesian thinking, but we make snap judgements about other people and stick to them. Part of the solution to our tangled lives could lie in applying the ordered, rational thinking we use every day in our science to how we approach research life.
This is not a perfect solution: the uncertainty in life is usually greater than in our science and harder to quantify, and the stakes are sometimes higher. But in formulating our thoughts, we can be open to all sources of data and wary of confirmation bias. When we are job hunting, for example, we can proactively seek both the positive and negative experiences of current and former employees. When writing a grant or a paper, we can ask for feedback from colleagues, putting aside our natural aversion to criticism. And we can improve how we plan our time by reviewing each week how last week’s plans actually worked out and learning from it.
 
And when things don’t work out, we can avoid rushing to judgement and being too quick to blame either others or ourselves. We can remember that we started from a hypothesis, based around considerable uncertainty, not from a conclusion. Upsetting events will still happen, sometimes with important consequences, but if we acknowledge the roles of intuition and luck, and the inherent risks in extrapolating from limited data, at least the outcome doesn’t have to be so bewildering, even when it is not the one we hoped for.

    Author

    Professor Michael Coleman (University of Cambridge), Neuroscientist and Academic Coach: discovering stuff and improving research culture

    Illustrated by Dr Alice White (freelance science illustrator)
