Are We Biased in Our Attitude to Bias?

We hear a lot these days about “bias” in human judgements and the problems this creates for the realisation of a fair and just society. But how often do we stop to think what we mean by the word? And is our failure to do so not in itself a form of bias, where we succumb, for example, to the “framing effect” or the “default effect,” whereby it is easier to work with what we are presented with than to start out by exploring and challenging premises? So let us think for a moment about what we do mean by bias.

The idea of bias is premised on the notion that there exists an unbiased decision which can be arrived at objectively; bias denotes a failure to arrive at such a conclusion, or at something closely approximating it. This makes sense where there are established objective means by which decisions or measurements can be made, that is, where there are objective measuring systems which can be used interchangeably with a reasonable expectation of the same result: for example, speed cameras which are kept accurately calibrated. But of course the kind of bias we are interested in is not the kind where there is an objectively measurable right result against which a putatively biased decision can be compared post hoc.

Rather we are concerned with the situation where the process either of making decisions or of assessing bias involves a significant element of human judgement. This naturally raises the question whether there is such a thing as an objective standard for human judgements. While the assumption that such a standard exists lies behind much of the contemporary discussion of bias, I would argue that the idea is problematic at a fundamental level.

As psychologist Daniel Kahneman argued in Thinking, Fast and Slow (2011), it is by now well established that bias is part of the normal functioning of the human brain: we often perform poorly at making objective judgements, resorting most of the time to convenient heuristics and analogies, mapping a question we are unable to answer satisfactorily onto one we believe we can and making do with that. This he calls System 1 thinking. Only when our decisions are challenged, or we are otherwise called to account, does any real attempt at first-principles rational analysis tend to kick in; and even this usually takes the form of post hoc “rationalising” rather than the application of a formally rational approach. This he calls System 2 thinking.

If asked for an explanation…you will search your memory for presentable reasons and will certainly find some. Moreover you will believe the story you make up…

Intuitive answers come to mind quickly whether they originate from skills or from heuristics. There is no simple way for System 2 to distinguish between a skilled and a heuristic response.

Ibid., pp. 415-6.

A useful summary of some of the important types of “bias” which are known from psychology research to affect our judgements is also provided on the How People Decide website.

The inevitable consequence of all this for our decision-making is that we generalise about, or stereotype, people and situations based on pre-existing prototypes generated from our (necessarily limited) experience. And, far from being an occasional lapse of judgement on our part, this is the default setting of all human beings, and it is furthermore what enables us to function effectively in society. It is often found, somewhat counterintuitively, that the belief that such thinking is wrong does little to change the way our minds work and is only likely to lead to our denying our bias. Conversely, if we accept Kahneman’s evidence and acknowledge the limitations of our thinking, this does little either to change the degree to which we rely on heuristic and analogy without being aware of it.

What can be done about biases? How can we improve judgements and decisions, both our own and those of the institutions that we serve and that serve us? The short answer is that little can be achieved without a considerable investment of effort. As I know from experience, System 1 is not readily educable. Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues. I have improved only in my ability to recognize situations in which errors are likely.

Ibid., p. 417.

One might think, given that the above conclusions of one of the world’s most distinguished psychologists were widely publicised over a decade ago and are not considered controversial, that policy-makers would be somewhat sceptical about the prospects of success of strategies to eliminate bias in human decision-making. But this has not prevented the proliferation, at great expense, of courses on unconscious bias training, marketed as a tool to help eliminate bias in workplace decisions, in particular recruitment and promotion. Somewhat ironically, these courses often cite Daniel Kahneman’s ideas as providing support for their arguments! Eventually the UK government undertook a review of the cost-effectiveness of its own unconscious bias training programmes. Its conclusions were in line with what Kahneman would lead us to expect:

A systematic review of unconscious bias training examining 492 studies (involving more than 87,000 participants), found changes to unconscious bias measures were not associated with changes in behaviour [and that] Instructions to suppress stereotypes may not only activate and reinforce unhelpful stereotypes, they may provoke negative reactions and actually make people exacerbate their biases.

Written Ministerial Statement on Unconscious Bias Training, Dec. 2020

On this basis a decision was made that “unconscious bias training would be phased out in departments.”

One might ask at this point why, as we noted at the outset, we continue to hear so much these days about “bias” in human judgements and the injustices that are perceived as resulting from it. Is not this insistence that bias can and should be eliminated, in the face of expert opinion and evidence that this is likely not an achievable goal, itself evidence of a form of bias? We could do worse here than to make use of Kahneman’s framework of analysis.

Framing Effect

An important determinant in our decision-making is the context in which a question is posed. The framing of discussions about bias in the workplace these days is invariably in pursuit of the policy agenda of an Equity, Diversity and Inclusion (EDI) department or function, which typically justifies itself by claiming to be attaining, inter alia, social justice. But perceptions of justice are in turn strongly influenced by framing and reference points, which is of course why we currently find ourselves engaged in “culture wars.” The very idea of social justice embeds within itself an implicit concept of a fair and just society, something about which there is no consensus.

Increasingly we see “injustices” defined in terms of statistical measurements of greater opportunity being afforded to this group or that, where previously there would have been more focus on the fairness of processes and on whether the conduct of individuals was aligned with those processes and their stipulations. It is clear that the choice to prioritise one definition of fairness over another (as inevitably happens) is itself a form of bias, so any attempt to pursue policies justified on the basis of that choice must be construed as contaminated by bias. But of course this bias is invisible in practice, because it is not an explicit part of the framing of the consequent policy agenda.

Causes Trump Statistics

Another problem with the use of statistical measurements as a tool for eliminating or reducing bias is that they can give rise to the causation fallacy, whereby observed correlations between events are readily given causal explanations by the audiences to whom they are pointed out. Not only are people likely to infer that discrimination has taken place in a recruitment process on the basis of, say, fewer employees being female than male; such a conclusion is often actively encouraged by those seeking to address the problem of unconscious bias or to counter bias in other ways.
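To illustrate the point with purely hypothetical numbers (none of these figures appear in any study cited here), a few lines of Python show how readily an imbalance can arise by chance alone, even when the selection process is entirely unbiased:

```python
from math import comb

# Hypothetical scenario: 20 hires drawn from an applicant pool that is
# 50% female, with a selection process that is entirely unbiased.
n_hires = 20
p_female = 0.5

# Probability of ending up with at most 7 female hires (35% or fewer),
# purely by chance, under the unbiased binomial model.
p_at_most_7 = sum(
    comb(n_hires, k) * p_female**k * (1 - p_female)**(n_hires - k)
    for k in range(8)
)

print(f"P(7 or fewer women out of 20) = {p_at_most_7:.3f}")
# prints P(7 or fewer women out of 20) = 0.132
```

In other words, an observer shown only that 7 of 20 hires were women might well infer discrimination, yet a perfectly unbiased process produces an outcome at least that skewed roughly 13% of the time.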

Emotional Arousal

A related problem is the propensity of the emotional charge surrounding an issue to affect decision-making. In particular, if fear is associated with one of two possible choices, the other option is much more likely to be selected, even if the feared outcome has only a very small probability of arising. For example, if, in the context of the previous example, you risk being accused of bias (or worse) for preferring a male candidate over a female one but face no risk of censure for making the opposite choice, you will naturally lean toward the latter, irrespective of your assessment of the relative merits of the two candidates. Clearly the presence of such a fear must be seen as a likely source of bias.

Further, if the intention is to achieve parity between male and female employees, this is unlikely to be achieved other than over a long period of time unless choices are explicitly biased; and there is no guarantee that such bias, once introduced, will end if/when parity is achieved. For example, if there are currently 40% female employees and the employee attrition rate is 50% over a five-year period, 50% more females than males will need to be employed over that period to achieve parity; and if the same policy continues beyond the point where parity is reached, there will be 55% females by the end of another five years, eventually increasing to 60%. It is hard to see how such a policy can be characterised as the removal of bias, but it is a consequence of the thinking which is currently driving recruitment policies across most major organisations in the Anglosphere.
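The arithmetic above can be checked with a short simulation, using the same illustrative figures as the text: a workforce starting at 40% female, losing 50% of employees every five-year period, with leavers replaced at a fixed 60/40 female/male hiring ratio (i.e. 50% more women than men hired per period):

```python
def simulate(female=40.0, male=60.0, hire_ratio_female=0.6,
             attrition=0.5, periods=10):
    """Return the % female share of the workforce after each
    five-year period, holding total headcount constant."""
    history = []
    for _ in range(periods):
        total = female + male
        hires = total * attrition                     # replace all leavers
        # Attrition is assumed proportional across both groups.
        female = female * (1 - attrition) + hires * hire_ratio_female
        male = male * (1 - attrition) + hires * (1 - hire_ratio_female)
        history.append(100 * female / (female + male))
    return history

shares = simulate()
print([round(s, 1) for s in shares])
# After one period the share is 50.0% (parity); after two, 55.0%;
# it then converges towards 60.0% if the policy is never withdrawn.
```

This confirms the figures in the text: parity after five years, 55% after ten, and an eventual steady state of 60% female, the fixed point of the hiring rule rather than any notion of balance.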


We could carry on in this vein but let us look at just one more angle to conclude. Kahneman cites his experience of the fickleness of people’s subjective confidence in the face of a powerful counter-narrative:

The main obstacle is that subjective confidence is determined by the coherence of the story one has constructed, not by the quality and amount of the information that supports it…

As a team converges on a decision…public doubts about the wisdom of the planned move are gradually suppressed and eventually come to be treated as evidence of flawed loyalty to the team and its leaders. The suppression of doubt contributes to overconfidence in a group where only the supporters of the decision have a voice.

Ibid., pp. 264-5

So when the UK Financial Conduct Authority announced last year that all listed financial institutions must monitor and report on diversity and inclusion on a “comply or explain” basis, claiming (with virtually no supporting evidence) that:

This would advance our objectives of consumer protection, enhancing the integrity of the financial system and promoting effective competition. It supports healthy cultures within firms and in-turn delivers high standards of conduct, reduces groupthink and supports effective decision-making. It also promotes innovation and competition in products that meet diverse customer needs and unlocks talent.

Diversity and inclusion on company boards and executive management, Policy Statement PS22/3

it is perhaps not surprising that only 6% of respondents from within the finance industry took the risk of expressing dissenting views. Nor is it likely, against the backdrop of this narrative, that many inside the targeted institutions would be willing to offer any overt resistance to the new policy, thereby running the risk of being perceived or portrayed as unsupportive of its purported multifarious benefits.


It should by now be clear to the reader that the notion of removing bias from human decision-making is an unattainable chimera. Those imposing policies aimed at achieving it will necessarily bring biases of their own to bear. In practice these will tend to be hidden by embedding the policy in a narrative which leads those it targets to support the agenda being advanced, and often also to experience emotional arousal, positive or negative, in relation to one or other choice.

In the end, each human being makes decisions based on the heuristics they personally have evolved as the optimal means of dealing with challenges. We should certainly be willing to review these heuristics and refine them when appropriate. But beware those who seek to have you abandon your judgement by accusing you of “bias.” Rather, let him who is without bias cast the first stone…

By Colin Turfus

Colin Turfus is a quantitative risk manager with 16 years’ experience in investment banking. He has a PhD in applied mathematics from Cambridge University and has published research in fluid dynamics, astronomy and quantitative finance.
