Ethical principles: Fairness and non-discrimination

About this sub-guideline

This sub-guideline is part of the guideline Ethical principles. Refer to the main guideline for context and an overview.

This sub-guideline explores the principle of fairness and non-discrimination in AI governance for parliaments, including minimizing biases in legislative processes and citizen interactions. It emphasizes the importance of trust and provides specific recommendations for dealing with potential biases.

Overall, this sub-guideline provides a framework for parliaments to develop and maintain AI systems that are fair, non-discriminatory and free from biases.

Why fairness and non-discrimination matter

Fairness can be defined at the individual level (such as ensuring that similar individuals are treated consistently) or at the group level. In the latter case, people are grouped into categories and these groups must be treated equitably.

Fairness, in the context of AI, is the ability of AI systems to not discriminate or reinforce biases against any individual or group. This principle is based on impartiality and inclusion. Fair, non-discriminatory decisions therefore presuppose bias-free data and algorithms.
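The group-level notion of fairness described above can be made measurable. As a purely illustrative sketch (the function names, group labels and outcomes below are hypothetical, not drawn from this guideline), one common group-fairness measure compares the rate of favourable outcomes an AI system produces across groups:

```python
# Illustrative sketch: a group-level fairness check via demographic parity.
# All data, names and thresholds here are hypothetical examples.
from collections import defaultdict

def positive_rate_by_group(outcomes, groups):
    """Return the share of favourable (1) outcomes for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes, groups):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = positive_rate_by_group(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions: 1 = favourable, 0 = unfavourable
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(outcomes, groups))  # 0.75 - 0.25 = 0.5
```

A large gap between groups would signal that the system's decisions are not impartial and that the underlying data or model should be examined for bias.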

Practising fairness and non-discrimination

In order to ensure that AI systems are fair and non-discriminatory, parliaments should adopt a comprehensive approach to data management and bias mitigation. The components of this approach are detailed below:

  • Data quality management: Establish robust processes to manage data quality, particularly for data sets likely to be used in AI systems. Implement practices to ensure that there are no biases in the data and in the models that will be used to train the algorithms. Such practices should consider not only data biases and processing biases, but also cognitive biases (for further guidance on this topic, refer to the guideline Generic risks and biases and its associated sub-guidelines).
  • Staff training: Provide staff with training in data ethics, focusing on identifying and minimizing biases throughout the AI development process.
  • Data governance: Implement a data governance process, with a clear delineation of responsibilities between data owners and data stewards.
  • Collaboration: Have IT and business units work closely together. Such collaboration is vital for predicting, minimizing and monitoring biases throughout the AI system life cycle.
  • Data ethics committee or team: Establish a data ethics committee or a multi-skilled team capable of analysing potential biases and communicating them to both managers and IT teams for each AI project.
  • Diversity and inclusivity: Prioritize diversity and inclusivity when forming project teams and data ethics committees. By bringing together individuals of different ages, genders, ethnicities and skill sets, parliaments can ensure that a broad range of perspectives are heard, reducing the risk that potential biases are overlooked and enhancing the overall fairness of AI systems.
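The data quality management practice described above can include simple automated audits run before a data set is used for training. The sketch below is a hypothetical example of one such audit (the field name, threshold and records are assumptions for illustration): it flags groups that are under-represented in a data set, a common source of downstream model bias.

```python
# Hypothetical sketch of a pre-training data audit: flag under-represented
# groups in a data set before it is used to train a model. The field name,
# threshold and records are illustrative assumptions, not a prescribed tool.
from collections import Counter

def audit_representation(records, group_field, min_share=0.2):
    """Return groups whose share of the data set falls below min_share."""
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Example data set: nine records from one region, one from another
records = [{"region": "north"}] * 9 + [{"region": "south"}]
print(audit_representation(records, "region"))  # {'south': 0.1}
```

Flagged groups would then be reviewed by the data ethics committee or team, which can decide whether to collect more data, re-weight the sample, or document the limitation.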

Minimizing biases in parliamentary processes

When planning and developing AI systems for use in legislative processes, parliaments should:

  • Ensure that the data does not contain biases regarding political-party ideology and previous value judgements
  • Be aware of possible historical biases in data relating to committee meetings and plenary sessions
  • Establish partnerships with public organizations from which they regularly source external data for AI-powered bill-drafting systems, in order to maintain data quality
  • Be aware of biases in text translation and speech-to-text transcription
  • Verify that the information produced by generative AI systems is free from biases before considering its use

When planning and developing AI systems for use in government oversight processes, parliaments should:

  • Identify data quality problems in government data and alert the government agency in charge of the data
  • Establish partnerships with government agencies in charge of the data in order to improve data quality and minimize biases

When planning and developing AI systems for use in citizen interaction processes, parliaments should:

  • Identify biases coming from citizens
  • Avoid internalizing biases presented by citizens
  • Avoid exposing any biases when interacting with citizens

The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].