Generic risks and biases: Processing and validation bias types

About this sub-guideline
This sub-guideline is part of the guideline Generic risks and biases. Refer to the main guideline for context and an overview. For a discussion of risks that relate more specifically to the unique work of parliaments, refer to the guideline Risks and challenges for parliaments.
This sub-guideline focuses on processing and validation biases, which arise from systematic actions and can occur in the absence of prejudice, partiality or discriminatory intent. In AI systems, these biases are present in algorithmic processes used in the development of AI applications.
Aggregation bias
Aggregation bias arises when a model assumes a one-size-fits-all approach for different demographic groups that, in reality, may have different characteristics or behaviours.
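As an illustrative sketch only (with entirely hypothetical data), the following example shows how a single pooled model can misrepresent two groups whose underlying behaviours differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical groups with different relationships between a
# feature x and an outcome y.
x_a = rng.uniform(0, 1, 500)
y_a = 2.0 * x_a + rng.normal(0, 0.1, 500)   # group A: slope 2
x_b = rng.uniform(0, 1, 500)
y_b = -1.0 * x_b + rng.normal(0, 0.1, 500)  # group B: slope -1

# One-size-fits-all model: a single line fitted to the pooled data.
x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])
slope, intercept = np.polyfit(x_all, y_all, 1)

def mse(x, y):
    """Error of the pooled model on one group's data."""
    return np.mean((y - (slope * x + intercept)) ** 2)

# The pooled slope is a compromise (around 0.5) that represents
# neither group's true behaviour, so both groups are fitted poorly.
print(f"pooled slope: {slope:.2f}")
print(f"group A error: {mse(x_a, y_a):.3f}, group B error: {mse(x_b, y_b):.3f}")
```

Fitting separate models per group, or including group-aware features, would avoid this particular failure; the point is that the pooled model's error is large for both groups even though it minimizes error overall.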
Amplification bias
Amplification bias occurs when several AI systems, each with separate biases influenced by their training data and programming, interact and mutually reinforce each other’s biases, leading to a more pronounced and persistent bias than what any single system might display.
For instance, a system trained on historical hiring data, in which male candidates have been predominantly selected, unintentionally favours male candidates during CV screening. Another AI system, tasked with performance evaluation, has been trained on data where female employees were often given lower scores owing to latent human biases. As these two systems interact, the hiring AI system may propose a larger number of male candidates, while the performance-evaluation AI system continues to judge female employees more harshly.
Deployment bias
Deployment bias – perhaps more of an operational failing than a bias – occurs when a system that works well in a test environment performs poorly when deployed in the real world owing to differences between the two environments.
Evaluation bias
Evaluation bias occurs when the methods used to evaluate an AI system’s performance are themselves biased, leading to inaccurate assessments of how well the system is working.
Exclusion or sampling bias
Exclusion or sampling bias occurs when specific groups within the user population are excluded from testing and subsequent analyses.
Feedback loop bias
Feedback loop bias arises when the output of an AI system influences future inputs, potentially reinforcing and amplifying existing biases over time.
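A minimal simulation, using an invented recommender scenario, shows how a trivial initial difference can be locked in when a system's outputs shape its future inputs:

```python
import numpy as np

# Hypothetical scenario: two items start almost equally popular.
# Each round the system promotes the current leader, and the
# promoted item then attracts nearly all new clicks, which feed
# back into the next round's ranking.
clicks = np.array([51.0, 49.0])

for _ in range(20):
    top = np.argmax(clicks)   # system output: promote the leader
    clicks[top] += 100        # output becomes future input

share = clicks / clicks.sum()
print(f"final click shares: {share.round(3)}")
```

A near-50/50 starting position ends in near-total dominance for the item that happened to lead at the outset: the loop amplifies the initial 2-click gap rather than correcting it.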
Model selection bias
Model selection bias arises from conflating exploratory and hypothesis-testing statistical analyses. If data is used to select the best-fitting model from a set of candidates, that same data cannot then be used to test hypotheses about the values of the estimated parameters of the selected model.
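This effect can be demonstrated with a purely synthetic example: when the outcome is pure noise, the candidate feature that happens to correlate best with it still looks meaningful on the data used to select it.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 100, 50

# The outcome y is pure noise; none of the k candidate features
# has any true relationship with it.
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# "Model selection": pick the feature most correlated with y.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
best = int(np.argmax(np.abs(corrs)))

# On the data used for selection, the winning feature's correlation
# looks substantial, purely because it was the maximum of many
# chance correlations.
print(f"selected in-sample correlation: {abs(corrs[best]):.2f}")

# On fresh data, the same feature's correlation typically shrinks
# back towards zero.
X_new = rng.normal(size=(n, k))
y_new = rng.normal(size=n)
fresh = np.corrcoef(X_new[:, best], y_new)[0, 1]
print(f"same feature on fresh data: {abs(fresh):.2f}")
```

The remedy is the one the text describes: use one dataset (or data split) for selection and a separate one for hypothesis testing.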
Optimization bias
Optimization bias occurs when the objective function of an AI system is defined in a way that leads to unintended consequences or unfair outcomes.
Overfitting or underfitting bias
Overfitting bias refers to a situation where a model is too complex and fits too closely to the training data, potentially incorporating noise or outliers that do not represent the true patterns in the data. Conversely, underfitting bias occurs when the model is too simple to capture the true patterns in the data, leading to poor performance and potentially biased results.
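Both failure modes can be seen in a short sketch (hypothetical data, true relationship assumed quadratic) that compares polynomial models of different complexity on held-out points:

```python
import numpy as np

rng = np.random.default_rng(3)

# True relationship is quadratic; training observations carry noise.
def true_f(x):
    return x ** 2

x_train = np.linspace(-1, 1, 20)
y_train = true_f(x_train) + rng.normal(0, 0.1, 20)
x_test = np.linspace(-0.95, 0.95, 200)
y_test = true_f(x_test)

def fit_error(degree):
    """Test error of a polynomial of the given degree fitted to the
    noisy training data."""
    coefs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coefs, x_test)
    return np.mean((pred - y_test) ** 2)

# Degree 0 underfits (too simple to capture the curve), degree 15
# overfits (flexible enough to chase the noise), and degree 2
# matches the true pattern.
for d in (0, 2, 15):
    print(f"degree {d}: test error {fit_error(d):.4f}")
```

The well-matched model (degree 2) generalizes best; the too-simple and too-complex models both perform worse on data they were not trained on.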
Proxy bias
Proxy bias occurs when variables used as proxies for protected attributes (such as race or gender) introduce bias into the model.
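A small synthetic example (the "postcode" feature and all rates below are invented for illustration) shows why simply dropping the protected attribute does not remove the bias when a correlated proxy remains:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Hypothetical data: a protected attribute (0/1), a postcode that
# matches it 90% of the time, and a historically biased outcome.
group = rng.integers(0, 2, n)
postcode = np.where(rng.random(n) < 0.9, group, 1 - group)
outcome = np.where(rng.random(n) < 0.3 + 0.4 * group, 1, 0)

# "Fairness through unawareness": the protected attribute is dropped,
# and a trivial rate-based predictor uses the postcode alone.
rate_by_postcode = [outcome[postcode == p].mean() for p in (0, 1)]
pred = np.array([rate_by_postcode[p] for p in postcode])

# The disparity between groups survives, because the postcode acts
# as a proxy for the attribute the model was never shown.
gap = pred[group == 1].mean() - pred[group == 0].mean()
print(f"predicted-outcome gap between groups: {gap:.2f}")
```

The stronger the correlation between proxy and protected attribute, the more completely the original disparity is reproduced.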
Temporal bias
Temporal bias occurs when training data becomes outdated and no longer represents current realities, leading to biased predictions. While this might be considered a data bias, it is also a processing/validation bias because it often occurs when systems fail to consider temporal aspects of the data validation process, or when the process of updating and validating models fails to adequately account for changes in the underlying data distribution over time.
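A sketch of this failure, assuming a deliberately simple form of drift in which the relationship between a feature and the outcome changes between training and deployment:

```python
import numpy as np

rng = np.random.default_rng(5)

# Historical period: outcome depends on the feature with slope 1.0.
x_old = rng.normal(size=1000)
y_old = 1.0 * x_old + rng.normal(0, 0.1, 1000)

# The model is fitted once on historical data and then frozen.
slope, intercept = np.polyfit(x_old, y_old, 1)

# Current period: the underlying relationship has shifted (slope 2.0),
# but the deployed model has never been revalidated or retrained.
x_now = rng.normal(size=1000)
y_now = 2.0 * x_now + rng.normal(0, 0.1, 1000)

err = np.mean((np.polyval([slope, intercept], x_now) - y_now) ** 2)
print(f"frozen model's error on current data: {err:.2f}")
```

Routine revalidation against recent data would detect this drift; the error is large not because the model was ever wrong, but because the world it was trained on no longer exists.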

The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].