Ethical principles: Transparency

About this sub-guideline

This sub-guideline is part of the guideline Ethical principles. Refer to the main guideline for context and an overview.

This sub-guideline explores the principle of transparency in AI governance for parliaments. It defines transparency as the communication of appropriate information about AI systems in an understandable and accessible format. The sub-guideline addresses three key aspects of transparency: traceability, explainability and communication.

Highlighting the importance of documenting the entire life cycle of AI systems, from planning to decommissioning, it provides practical recommendations for implementing transparency. These include risk assessment documentation, standardized methods for explaining AI decisions, and clear communication about AI system capabilities and limitations. The sub-guideline also offers specific guidance on ensuring transparency in generative AI applications, acknowledging the unique challenges they present.

Overall, this sub-guideline provides a framework for parliaments to develop and maintain AI systems that are transparent, accountable and aligned with democratic values.

Why transparency matters

Transparency involves communicating appropriate information about AI systems to the right people, in a free, understandable and easily accessible format.

Transparency – throughout the entire life cycle of an AI system – encompasses three key aspects: traceability, explainability and communication. These are discussed below.

Traceability

Traceability implies the ability to follow and monitor the entire life cycle of an AI system, from the definition of its purpose, through planning, development and use, to ultimate decommissioning.

Architects, developers, decision makers and even users involved in the development and evolution of AI systems are advised to use a combination of tools and documentation to support traceability.
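By way of illustration, the sketch below shows one possible shape for a machine-readable traceability record, written in Python. The field names and life-cycle stages are assumptions chosen for this example, not a schema prescribed by these guidelines.

    # Illustrative sketch only: a minimal, machine-readable traceability
    # record. Field names and stages are assumptions, not a prescribed schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class LifecycleStage(Enum):
        PURPOSE_DEFINITION = "purpose definition"
        PLANNING = "planning"
        DEVELOPMENT = "development"
        USE = "use"
        DECOMMISSIONING = "decommissioning"

    @dataclass
    class TraceabilityEvent:
        system_name: str        # e.g. "Bill-summarization assistant"
        stage: LifecycleStage   # life-cycle stage the event belongs to
        description: str        # what happened, and why
        responsible_party: str  # who made or approved the decision
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    # Example: recording the decision that defined the system's purpose.
    event = TraceabilityEvent(
        system_name="Bill-summarization assistant",
        stage=LifecycleStage.PURPOSE_DEFINITION,
        description="Approved to summarize tabled bills for members' offices.",
        responsible_party="ICT governance board",
    )
    print(event)

Appending such a record at every significant decision point gives auditors a chronological trail from purpose definition through to decommissioning.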

Explainability

Explainability is the ability for humans to understand and trust each decision, recommendation or prediction made by an AI system.

As the complexity of an AI system increases, its explainability declines. Consequently, an initially simple AI system becomes less explainable as new layers of functionality are added over time.

Since different AI system stakeholders require different types of explanations, parliaments must generate documentation aimed at decision makers and those responsible for AI governance, in addition to the documents produced by the development team.

Communication

Communication is important for transparency: humans must always know that they are interacting with an AI system. As such, any AI system that interacts with humans must identify itself unambiguously. It must be explained to users and practitioners, in a clear and accessible manner, how the system functions and what its limitations are.
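As a minimal sketch of this requirement, the example below shows a chat service that presents an unambiguous self-identification before any exchange begins. The wording of the disclosure is an assumption, not prescribed text.

    # Illustrative sketch only: disclosing the system's artificial nature
    # at the start of every interaction. The wording is an assumption.
    DISCLOSURE = ("You are interacting with an AI assistant operated by "
                  "the parliament. It may make mistakes, and its answers "
                  "are not official statements.")

    def start_session() -> list[str]:
        """Open a conversation with an unambiguous AI self-identification."""
        return [DISCLOSURE]

    transcript = start_session()
    print(transcript[0])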

Practising transparency in AI systems

In order to ensure that AI systems are transparent, parliaments should adopt a comprehensive, life cycle-wide approach. The components of this approach are detailed below:

  • Risk assessment: Produce a comprehensive risk assessment to guide project authorization, development and maintenance. This assessment should consider all stakeholders and inform decisions from initiation to potential decommissioning.

  • Standardization: As part of the AI systems development process, adopt a standardized transparency method, such as Explainable AI (XAI), to document key aspects including problem definition, data selection criteria, personal data usage, technical specifications, user feedback and oversight results. Capture the rationale behind all significant decisions (a minimal example of one such method is sketched after this list).

  • Reporting: Maintain transparency through regular behaviour reports and continuous data storage for auditing. Clearly communicate system expectations, limitations and potential abnormalities to all relevant parties.

  • Communication: Ensure that AI applications interacting with humans disclose their artificial nature. Inform business managers about AI usage in their areas of responsibility.

  • Documentation: Tailor transparency documentation to the intended audience, whether internal or external. For outsourced AI systems, clearly communicate and enforce transparency requirements with external providers.
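As one illustration of what a standardized explanation method can produce, the sketch below uses permutation feature importance from the open-source scikit-learn library to record which inputs most influence a model's decisions. The dataset and model are placeholders; a real parliamentary system would pair such output with narrative documentation for non-technical stakeholders.

    # Illustrative sketch only: documenting model behaviour with permutation
    # feature importance. The dataset and model here are placeholders.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Measure how much shuffling each feature degrades accuracy: a simple,
    # repeatable record of which inputs drive the model's decisions.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")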

Practising transparency in generative AI systems

Parliaments using generative AI must prioritize transparency and responsibility, recognizing that the learning process and data used by AI systems may not be transparent:

  • Label AI-assisted documents, specifying the tool and version used (an illustrative label format is sketched after this list).

  • Establish clear guidelines for permissible AI use in document creation.

  • Document AI processes from design to deployment, including mechanisms for ensuring trust in AI systems.

  • Prioritize commercial AI tools aligned with human rights frameworks.

  • Justify and document any use of personal or third-party data.

  • Clearly communicate when AI outputs are probabilistic rather than factual.
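To illustrate the labelling recommendation above, the sketch below generates a simple machine-readable label recording the tool and version used. The field names and the tool name "ExampleLLM" are hypothetical, chosen only for illustration.

    # Illustrative sketch only: embedding an AI-assistance label in document
    # metadata. Field names and values are assumptions, not a standard.
    import json
    from datetime import date

    def make_ai_label(tool: str, version: str, extent: str) -> str:
        """Return a JSON label recording the generative AI tool and version
        used to assist in producing a document."""
        return json.dumps({
            "ai_assisted": True,
            "tool": tool,             # name of the generative AI tool
            "tool_version": version,  # exact version used
            "extent_of_use": extent,  # e.g. "drafting", "summarization"
            "labelled_on": date.today().isoformat(),
        }, indent=2)

    # Hypothetical tool name and version, for illustration only.
    print(make_ai_label("ExampleLLM", "2.1", "first-draft summarization"))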


The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].