Generic risks and biases

Audience

This high-level guideline is intended for senior parliamentary managers, as well as for parliamentary staff and MPs interested in gaining a broad understanding of the generic risks and biases associated with AI.

About this guideline

This guideline describes a range of generic risks and biases related to the implementation of AI technologies, which parliaments will need to understand before embarking on AI projects and initiatives.

For a discussion of risks that relate more specifically to the unique work of parliaments, refer to the guideline Risks and challenges for parliaments.

Why inappropriate AI use is a risk

Inappropriate AI use can entail risks at various levels, from the individual to the global:

  • Unintended consequences, such as reinforcing existing biases through biased systems, resulting in unfair treatment of individuals or groups

  • A lack of accountability and transparency – which are crucial for building user trust – owing to poor understanding of the complexity of AI systems and the underlying decision-making processes

  • The manipulation of public opinion through deepfakes, misinformation and automated propaganda

  • The creation of echo chambers, which amplify biased views and extremism

  • Psychological profiling, which allows for the targeted manipulation of individuals

  • Fake content, which can potentially contribute to eroding trust in genuine information

  • Behavioural nudging, which can subtly influence opinions and actions, often without people’s full awareness, thus potentially undermining democratic processes and informed discourse

  • Physical and psychological harm through the use of AI systems in health care, autonomous vehicles and industrial automation, which can lead to accidents or malfunctions

  • Issues such as addiction, anxiety and depression, caused by AI-driven social media algorithms that promote harmful content or create unrealistic social comparisons

  • Stress, privacy invasion and discrimination through the use of AI systems for surveillance and profiling, exacerbating mental health problems and social tensions

Categories of risk

The integration of AI introduces new types of risk that may not be familiar to parliaments. These can include the following:

  • Lack of AI literacy

  • Bias and discrimination

  • Privacy invasion

  • Security vulnerabilities

  • Lack of accountability

  • Job displacement

  • Ethical dilemmas

  • Shadow AI

  • Lack of data sovereignty

  • Lack of trust

For further discussion of these categories, refer to the sub-guideline Generic risks and biases: Categories of risk.

Identifying biases in a parliament

Bias is a systematic difference in the treatment of certain objects, people or groups in comparison with others, leading to an imbalance in the distribution of data.

Biases are part of people’s lives. They usually start as habits or unconscious actions (cognitive biases) which, over time, materialize as technical biases (data biases and processing biases). This progression creates or heightens risks that can result in untrustworthy AI systems.

Biases in AI systems arise from human cognitive biases, the characteristics of the data used or the algorithms themselves. Where AI systems are trained on real-world data, there is the possibility that models can learn from, or even amplify, existing biases.

In a statistical context, the error of a predictive system is the difference between the value predicted as model output and the real value of the variable considered in the sample. When this error occurs systematically in one direction, or for a particular subset of the data, bias can be identified in the data treatment.
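
To illustrate, the minimal sketch below (in Python; the group labels, real values and predicted values are purely illustrative assumptions, not figures from this guideline) shows how a mean residual that is far from zero for one subgroup can reveal error occurring systematically in one direction:

```python
# Minimal sketch: flag systematic prediction error (bias) per subgroup.
# All records below are illustrative values, not real parliamentary data.
from statistics import mean

# Each record: (group label, real value, value predicted by the model)
records = [
    ("group_a", 10.0, 10.4), ("group_a", 12.0, 12.3), ("group_a", 9.0, 9.5),
    ("group_b", 11.0, 10.1), ("group_b", 13.0, 12.0), ("group_b", 10.0, 9.2),
]

# Residual = predicted value minus real value. A mean residual far from zero
# for one subgroup signals error occurring systematically in one direction.
residuals_by_group = {}
for group, real, predicted in records:
    residuals_by_group.setdefault(group, []).append(predicted - real)

for group, residuals in residuals_by_group.items():
    print(f"{group}: mean residual = {mean(residuals):+.2f}")
```

In this invented example, one group is consistently over-predicted and the other consistently under-predicted – the kind of pattern that points to bias in the data treatment rather than to random error.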


Cognitive biases

Cognitive biases are systematic errors in judgements or decisions common to human beings owing to cognitive limitations, motivational factors and adaptations accumulated throughout life. Sometimes, actions that reveal cognitive biases are unconscious. 

For a list of cognitive biases, refer to the sub-guideline Generic risks and biases: Cognitive bias types.


Data biases

Data biases are a type of error in which certain elements of a data set are more heavily weighted or represented than others, painting an inaccurate picture of the population. A biased data set does not accurately represent a model’s use case, resulting in skewed outcomes, low accuracy levels and analytical errors.
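
As a simple illustration, the sketch below (in Python, using invented figures; the group names, population shares and sample counts are assumptions made for the example) compares how groups are represented in a training sample against a reference population, revealing the kind of imbalance described above:

```python
# Minimal sketch: compare group representation in a training sample with a
# reference population. All figures here are illustrative assumptions.
population_share = {"group_x": 0.52, "group_y": 0.48}  # assumed reference shares
sample_counts = {"group_x": 830, "group_y": 170}       # assumed sample counts

total = sum(sample_counts.values())
for group, count in sample_counts.items():
    sample_share = count / total
    gap = sample_share - population_share[group]
    print(f"{group}: sample {sample_share:.0%} vs population "
          f"{population_share[group]:.0%} (gap {gap:+.0%})")
```

A large gap for any group means the sample over- or under-represents that group, so a model trained on it would paint an inaccurate picture of the population.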

For a list of data biases, refer to the sub-guideline Generic risks and biases: Data bias types.


Processing and validation biases

Processing and validation biases arise from systematic actions and can occur in the absence of prejudice, partiality or discriminatory intent. In AI systems, these biases are present in algorithmic processes used in the development of AI applications.

For a list of processing and validation biases, refer to the sub-guideline Generic risks and biases: Processing and validation bias types.

Interrelationship between biases

Cognitive biases are part of the culture of many societies and organizations. They are often present, unconsciously, in the work processes and decisions that underpin the functioning of institutions. Over the years, cognitive biases are transformed – often in combination – into data biases and processing biases.

The underrepresentation or omission of a particular type of data in a data sample can therefore be the result of one or more of the following factors (among others):

  • Systems were built by teams that unconsciously failed to involve other organizational units, owing to incorrect judgements about whether their participation was needed.

  • Important stakeholders were not involved in the design of data-entry systems because their views differed from those of the project managers.

  • System interfaces favoured individual points of view or confirmed preconceived ideas.

  • Irrelevant or incomplete databases were used to train AI systems simply because they were easy to obtain and avoided the need for negotiation between managers from different departments.

  • AI system projects were launched anyway, even though they were found to base decisions on inappropriate variables, in order to justify the costs already incurred.

  • AI system developers were so used to working with certain models that they used them in situations where they were inappropriate.

Figure 1: Bias path from the unconscious to untrustworthy AI systems

Source: Adapted from NIST Special Publication 1270 and the Oxford Catalogue of Bias.


Some biases can multiply the impact of others

Below are some examples of how cognitive biases can influence and, in some cases, even compound data or processing biases in parliamentary settings:

  • Parliament feeds data sets with information from surveys and questionnaires completed only by people who share the same political party ideology. Here, there is a high likelihood of affinity bias. Moreover, if this data set contains data such as “opinion regarding a specific theme” and is used to train an AI algorithm, there is a high possibility that such biases will be reproduced in that AI system.

  • Parliament uses only data sets from a very small number of committee meetings to train an AI algorithm. In this case, there is a likelihood of interpretation biases because some terms may have different meanings or importance to different committees.

  • Parliament has spent its entire innovation budget but the project team has failed to find the best AI algorithm to solve the original problem. The team implements an AI system anyway, launching it as a successful innovation, in an attempt to justify the costs. This is a funding bias that results in an AI system that is not reliable.

As the examples below show, with generative AI tools, the cognitive biases contained in a vast data set can be combined and exposed directly to the user:

  • A generative AI tool replicates bias against female job applicants when asked to draft letters of recommendation. Letters for male applicants often use terms like “expert” and “integrity” while female candidates are described using terms such as “beauty” and “delight”.

  • Male researchers using a generative AI tool to create avatars receive diverse, empowering images portraying them as astronauts and inventors. However, a female researcher receives sexualized avatars – including topless versions, reminiscent of anime or video-game characters – that she did not request or consent to.

  • A generative AI system fails to create appropriate images of people with disabilities.

Sub-Guidelines

Generic risks and biases: Categories of risk

Generic risks and biases: Cognitive bias types

Generic risks and biases: Processing and validation bias types