
Generic risks and biases: Cognitive bias types


About this sub-guideline

This sub-guideline is part of the guideline Generic risks and biases. Refer to the main guideline for context and an overview. For a discussion of risks that relate more specifically to the unique work of parliaments, refer to the guideline Risks and challenges for parliaments.

This sub-guideline focuses on cognitive biases: systematic errors in judgement or decision-making that are common to human beings, arising from cognitive limitations, motivational factors and adaptations accumulated throughout life. Actions that reveal cognitive biases are often unconscious.

Automation bias

Automation bias occurs when conclusions drawn from algorithms are valued more highly than human analyses. For example, people often blindly follow satellite navigation systems, arriving at the wrong place or crossing dangerous streets and putting their lives at risk.

Group attribution bias

Group attribution bias refers to the tendency to assume that what is true of one individual is true of every member of the group to which they belong. An example is when people stereotype professions with statements such as “all lawyers are manipulative” or “all artists are eccentric”.

Implicit bias

Implicit bias refers to the unconscious associations people make between a situation and their own mental model of that situation. For example, people often assume that a younger colleague cannot be experienced enough to be a good manager, or that an older employee is not able to learn new skills.

In-group favouritism

In-group favouritism occurs when someone shows partiality towards the group to which they belong. For example, people may systematically recommend someone from their “group” for a job, while sports fans will always view their team as the best.

Out-group favouritism

Out-group favouritism refers to the favouring of groups outside the group to which a person belongs. For example, a manager who does not recognize the talent available in their own team will always turn to someone from another team for advice or support.

Affinity bias

Affinity bias happens when someone prefers individuals who are similar to them in terms of ideology, attitudes, appearance or religion. For example, a hiring manager might prefer a candidate who went to the same university as they did, overlooking other qualified applicants.

Social bias

Social bias occurs when many individuals in a society or community share the same bias. The simplest examples relate to religion and politics. Some people are so entrenched in a belief system that they are incapable of seeing both sides of an argument: they seek only information that supports their belief and dismiss anything that counters it, and this bias shows in their every action.

Rules and systems bias

Rules and systems bias occurs when developers who are accustomed to particular rules embedded in systems try to reproduce the same rules to represent other situations. For example, developers sometimes choose solutions based on examples they readily remember. Controlled laboratory studies have identified the harmful effects of specific cognitive biases on several aspects of software development, such as defect density, requirements specification, originality of design and feature design.

Requirement bias

Requirement bias refers to the assumption that all people or situations are capable of meeting, or meet, the same technical requirements (hardware and/or software). It is a subset of “rules and systems bias”.

Anchoring bias

Anchoring bias occurs when people rely too heavily on pre-existing information, or on the first information they find, when making decisions. For example, if someone sees a computer that costs $5,000 and then sees a second one that costs $2,000, they are likely to view the second computer as cheap. This type of bias can affect procurement decisions.

Availability bias

Availability bias is a mental shortcut whereby people ascribe excessive weight to what is readily “available” – i.e. what comes easily or quickly to mind – when making judgements and decisions. For example, people remember vivid events such as plane crashes more readily than everyday incidents such as car crashes, even though the latter are far more common. As a result, they often overestimate the likelihood that a plane will crash and might even choose to drive rather than fly, even though they are much more likely to be involved in a road traffic accident. This type of bias can occur when business staff are describing business rules to developers.

Confirmation bias

Confirmation bias refers to the fact that people tend to prefer information that confirms their existing beliefs. It affects how people design and conduct surveys, interviews or focus groups, and analyse competition. Essentially, people construct questions in a way that will produce the answers they want. For example, if someone types the question “Are dogs better than cats?” into an online search engine, articles that argue in favour of dogs will appear first. Conversely, the question “Are cats better than dogs?” will produce results in support of cats. This applies to any two variables: the search engine “assumes” that the person thinks variable A is better than variable B, showing them results that agree with their opinion first.

User interaction bias

User interaction bias occurs when a user imposes their own self-selected biases and behaviour when interacting with data, output or results. For example, when a system is trained using streaming data from a live group discussion, it instils the bias that exists in that group.

Groupthink

Groupthink refers to the fact that people in a group tend to make non-optimal decisions out of a desire to conform to the group or a fear of dissenting. For example, when the leader of a group tells everyone that they need to ban all members of a particular ethnic group from joining them, the members of the group accept that decision without questioning it.

Funding bias

Funding bias occurs when biased results are reported in order to support or satisfy the organization funding a piece of research. For example, a study published in a scientific journal found that drinks containing high-fructose corn syrup did not increase liver fat or ectopic fat deposition in muscles. However, the “acknowledgements” section showed that one of the researchers had received funding from a major soft-drinks company. The results may therefore have been skewed to paint the funding organization in a positive light.

Sunk cost fallacy

The sunk cost fallacy is the human tendency to continue with an endeavour or behaviour because resources such as money, time or effort have already been invested, regardless of whether the costs outweigh the benefits. For example, in AI, an organization that has already invested significant time and money in a particular AI application may pursue it to market rather than cancel the project, even in the face of significant technical or ethical debt.

Rashomon effect

The Rashomon effect takes its name from the classic 1950 Japanese film Rashomon, which explores subjective reality and the nature of truth by presenting differing accounts of a single event from the perspectives of multiple characters. This bias occurs when multiple witnesses to the same event differ in their perspective, memory and recall, interpretation, and reporting of it. For example, people who attended a legislative committee meeting might have different perceptions of the debate and, therefore, provide differing summaries of the event.

Streetlight effect

The streetlight effect refers to the fact that people tend to search only where it is easiest to look, such as when data scientists develop an AI algorithm using only a small data set (i.e. only the data they already have access to) rather than seeking more complete data from other organizations.

Ranking bias

Ranking bias is a form of anchoring bias. It refers to the fact that, in a list of search engine results, people believe that the highest-ranked results are the most relevant and important. People tend to click on the top result more than on any other, even when the results are ranked randomly.

Ecological fallacy

The ecological fallacy refers to the drawing of conclusions about individuals based on group-level data. For example, if a specific neighbourhood has a high crime rate, people might assume that any resident of that area is more likely to commit a crime.

Survivorship bias

Survivorship bias occurs when people focus on the items, observations or people that “survive” (i.e. make it past a selection process), while overlooking those that do not. For example, by assessing only “surviving” businesses and mutual funds, analysts record positively biased financial and investment information, omitting the many companies that failed despite having similar characteristics to the successful ones.


The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].