
Science Without Anguish
Michael Coleman's Blog


I’m good, we’re good

11/3/2025


 
Here’s a particularly perceptive comment that has stuck in my mind, from a fellow cyclist on a group ride one Sunday morning: “Have you noticed the wind is either against you or you’re feeling really good today?” If a ride feels easy, it’s never because a howling tailwind is blowing you along; no, you’ve just miraculously become super-fit. A deep comment indeed for a bleary-eyed Sunday morning!
 
Of course, as scientists, we would never do this, would we? We would never see a funded grant as a sign that we wrote a good proposal, yet question the judgement of the committee when our next application is rejected. And if we arrive late for a meeting, we would never blame the traffic rather than admit we left it too late to set off. Of course not!
 
Self-serving bias is everywhere, and that includes scientists – all of us! We don’t usually see it, at least not in ourselves, because like everyone else we have a ‘bias blind spot’. Only ‘other people’ are irrational, not us!
Self-serving bias has a close cousin: in-group bias. We may tell ourselves our group does better science than our competitors, that our generation has dealt with more hardship than others, that ‘all’ people in other job types (administrators, journal editors, funders, support staff, etc.) are not as committed as we are, or are less reasonable people.
 
As supposedly rational scientists, shouldn’t we make sure we have all the information we need before we form these beliefs? What makes us think we have sufficient knowledge of those other people’s lives, or indeed that we have an objective view of our own life? But every day we find ourselves slipping into these ways of thinking.
 
We are human after all, and these are common thinking errors that occur over and over in human thinking. But here’s the big question: how much anguish do we create for ourselves, and for our colleagues, by thinking this way?

Why it feels natural

‘I’m good, we’re good’ thinking probably goes back to our origins as a tribal species – roots so deep they are hard to escape. If our ancestors had not believed in the qualities of their group, they would probably have been thrown out and eaten, along with anyone wallowing in self-pity. Evolution would have selected those who believed in themselves and who favoured their in-groups. We inherited their genes and, to some extent, their culture. Psychologists tell us emotions are faster and stronger than rational thought. Tens of thousands of years of natural selection cannot be overturned by the slow and energy-hungry circuits of our rational brain regions, or by a few years of scientific training. We slip back into those patterns without even realising.

The root of all thinking flaws?
The belief that “I am a good person” has been suggested to be the primordial bias underlying most or all others. Even disreputable or evil people appear to justify their actions to themselves with some perverted narrative that they are doing the world a service, or simply “levelling the playing field” after some perceived wrongdoing to them. 

Most hypotheses are wrong
Confirmation bias – favouring anything that reinforces what we already believe – is an obvious trap that we all fall into regularly in everyday life: politics, social media, thinking in stereotypes, and so on. But for scientists it can be particularly harmful. It is an inescapable fact that most hypotheses are wrong. No matter how brilliant we are, the odds of getting it right first time are heavily stacked against us. Yet self-serving bias makes it extraordinarily difficult to be even-handed in our thinking. Data supporting our hypothesis easily grab our attention and stick in our memory, while conflicting data remind us that we can be wrong – an uncomfortable thought which, at worst, seems to threaten our sense of worth as a scientist. In fact, the opposite is true: questioning beliefs, even our own, is precisely the job. That is how we earn our place in this particular profession. And this is why we must never lose humility.

Cognitive dissonance
When we find evidence contradicting our beliefs, we feel uneasy. This is cognitive dissonance – a topic for a forthcoming article. We resist changing our mind, as far as we can, sometimes using ever more improbable explanations for the stuff that doesn’t fit, before finally accepting that we have to change the model. “That reviewer criticising my paper is unreasonable”, we say. Until we meet them at a conference and find out over a beer that they, like us, are just ordinary people doing their best under pressure. So maybe we should listen after all – find the common ground where the real progress lies. The irony is that once we do this, it actually feels good – a paradigm shift, or epiphany. 
David McRaney’s book ‘How Minds Change’ puts it best: “Mindless self-justification blocks our ability to even see our errors, let alone correct them”. But if we questioned ourselves all the time we would live in ‘paralysis by analysis’, a world of constant uncertainty where nothing gets done. We have to find the right balance between having confidence in our models and questioning them.

Control or luck?
The illusion of control is another consequence, and a problematic one. If we don’t recognise that luck, as well as ability, plays a role when a grant is funded, then it follows that our rejected grants (the majority, for pretty much everyone!) must have been rubbish. Throw in a dose of negativity bias – another forthcoming topic – and we quickly enter a completely unnecessary and unproductive doom loop, catastrophising about how useless we are.
 
Where are all the ‘villains’?
The Karpman drama triangle is another outcome, with victim, villain and rescuer all locking themselves into a futile cycle of self-justification that produces endless heat but no light, and a lot of unnecessary upset and stress. Peer review in science is full of this. It’s tough to have our hard work criticised, so the author’s perspective is always going to be very different from a reviewer’s. How much of the apocryphal ‘Reviewer 2’ phenomenon can be explained not by an army of perceived ‘villains’ out there but by regular scientists failing to consider this difference in perspective? As the Twitter saying goes: “We are all Reviewer 2 in someone’s academic story.”
 
And yet we live in fear of them. We don’t need to do this to ourselves!
 
Am I good enough?
The opposite of self-serving bias may be even more damaging: self-doubt. Many of us flip between believing we are a ‘good person’ and successful, and questioning whether we are ‘good enough’ – confident one day, anxious the next. Or we switch from blaming others for adverse events to blaming ourselves. What’s that about, and where does the truth lie?
 
This is where attribution error combines with heuristics, putting our emotions firmly in the driving seat. We swing constantly from one oversimplification of reality to its opposite depending on anything from what else is going on in our lives to the weather. We look for simple explanations for complex events, attributing them all to ourselves or all to the world around us when in reality nearly everything is a combination of both. If we struggle to understand a paper or seminar, for example, this could reflect a limitation in our knowledge or a poor writing or presentation style. We tend to tell ourselves it’s all one or all the other but usually there is at least a small element of each.  
 
Confident humility
All that shifting of interpretations is draining and confusing. The Dunning-Kruger effect is our tendency to overestimate our competence, and imposter syndrome is its counterpart. We like to label people as having one or the other, but in reality we all have elements of each and fluctuate between them.
 
Adam Grant, in his book Think Again, describes how we can use ‘confident humility’ to overcome this: combining confidence in moving towards our goals in the future with the humility to question whether we have the right skills right now. What matters is not how much confidence we have but what we are confident in – not our knowledge or skills now, but our ability to learn and grow, moving constantly towards our goals even if we never fully reach them. It is well worth applying this thinking to the old chestnut of whether someone has ‘what it takes’ to become a PI. ‘What it takes’ may be much more about their ability to grow than about what they are capable of today.

Finding the middle ground
So whether we have a headwind or a tailwind, we can recognise that our mind will always play tricks on us. We live in a complex world where we only partially understand even our little corner of it, which we interpret emotionally long before our rational mind catches up. This is why having a break from our work in evenings, weekends and holidays matters. It brings us the perspective to spot the flaws in our thinking. Coaching can also help, as can the ability to laugh at ourselves. As scientists, questioning beliefs is our job. But occasionally the beliefs we need to question are our own.

    Author

    Professor Michael Coleman (University of Cambridge) Neuroscientist and Academic Coach: discovering stuff and improving research culture

    Illustrated by Dr Alice White (freelance science illustrator)
