
Ethical principles: Privacy


About this sub-guideline

This sub-guideline is part of the guideline Ethical principles. Refer to the main guideline for context and an overview. 

This sub-guideline explores the principle of privacy in AI governance for parliaments, with a focus on personal data protection. It outlines specific privacy concerns in various parliamentary work processes, including legislative, administrative and citizen interaction contexts. It emphasizes the importance of justifying and limiting the use of personal data in AI systems, and provides guidance on handling sensitive information. Special attention is given to the challenges posed by generative AI in processing personal and sensitive data.

Overall, this sub-guideline provides a framework for parliaments to develop and maintain AI systems that respect privacy and protect personal data.

Why privacy matters

In the context of digital transformation, it is important – and often a legal requirement – to protect personal data. AI systems are no exception to this rule, with parliaments needing to comply with both legislation and internal standards on this subject.

Before embarking on an AI project, parliaments will therefore need to examine the problem to be solved in order to identify whether any personal data will be collected, processed or potentially shared, clearly recording the classification of that data and its precise meaning.

The following sections address specific privacy concerns raised by the diverse work processes of parliaments.


Use of personal data 

It is important to exercise caution and restraint when using personal data, and only to do so where absolutely necessary. Parliaments should adhere to the following principles:

  • Where personal data is deemed essential to an AI system’s functionality, a clear and compelling justification for its use must be made. This justification should be subject to rigorous scrutiny and approval by a data protection officer (or equivalent person) and by key decision makers within the AI governance framework.
  • If approval for the use of personal data is given, strict practices must be put in place to safeguard privacy and to prevent misuse. Such practices must protect individuals from exposure, even indirectly, especially when dealing with biometric data or when combining information from multiple sources.
  • AI systems must not profile individuals according to their behaviour or use personal data in ways that could lead to discrimination, the manipulation of opinions, or any form of harm, whether psychological, physical or financial.
  • Explicit authorization should be required for the use of sensitive data, adding an extra layer of protection and accountability.
  • Special conditions may be required for the use of personal data for research purposes or to support bills going through parliament, especially if parliament already has internal regulations regarding the use of personal data.


Administrative processes

Where parliament is adopting or developing AI systems, it should identify, understand and document what data is being used – both internal data and externally sourced or hosted data – and identify who owns that data.


Citizens’ data

When interacting with citizens, parliaments must take special care to manage and protect the personal data they collect, whether through an online digital service or a manual data-collection process. They must also carefully consider what data is stored in any system that is exposed to AI, and ensure that only essential data is retained. More generally, when designing an AI system, parliaments need to understand the parameters of data privacy, knowing what is admissible for release into the public domain, what must be anonymized, and what is protected.


Sensitive data and generative AI

Parliaments must exercise extreme caution and appropriate scrutiny when feeding personal and sensitive data into generative AI systems, as these systems will process and use any data given to them. The institution should have in place mechanisms to protect its personal and sensitive data from inadvertent or inappropriate access by such tools. This is especially important if this data is processed externally, as is the case with most generative AI systems. 

Where a parliament does authorize the use of personal data by generative AI systems, it should actively anonymize this data and apply any other safeguards established by internal rules before submitting it to such tools. This practice minimizes the risk of personal data breaches and misuse.

Practising privacy

In order to ensure that AI systems respect and protect privacy, parliaments should adopt a comprehensive approach. The components of this approach are detailed below:

  • Conduct a thorough assessment of AI systems to identify any use of personal data, clearly documenting data classification and purpose.
  • Implement strict data protection practices, including obtaining approval from the data protection officer for any use of personal data in AI systems.
  • Establish clear protocols for managing citizens’ data in AI-driven interactions, ensuring that only essential data is collected and stored.
  • Develop and enforce stringent safeguards for handling sensitive data, particularly when using generative AI tools.
  • Create a comprehensive data ownership and management system, documenting both internal and external data sources used in AI processes.
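The inventory, classification and approval steps listed above could be captured in a simple record structure. This is a minimal sketch with hypothetical field names, not a prescribed schema: each parliament would define its own classification levels and approval workflow.

```python
from dataclasses import dataclass

# Hypothetical record for a parliamentary data inventory; the field
# names and classification labels are illustrative assumptions.
@dataclass
class DataAsset:
    name: str
    owner: str                    # accountable business owner
    source: str                   # "internal" or "external"
    classification: str           # e.g. "public", "internal", "personal"
    contains_personal_data: bool
    dpo_approval: bool            # approval by the data protection officer

    def may_feed_ai_system(self) -> bool:
        """Personal data may only reach an AI system once the data
        protection officer (or equivalent) has approved its use."""
        return (not self.contains_personal_data) or self.dpo_approval

petitions = DataAsset(
    name="citizen_petitions",
    owner="Committee Secretariat",
    source="internal",
    classification="personal",
    contains_personal_data=True,
    dpo_approval=False,
)
print(petitions.may_feed_ai_system())  # prints: False
```

Encoding the rule as a check like this makes the approval gate auditable: an AI project cannot silently consume a dataset whose record lacks the required sign-off.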

The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].