
Ethical principles: Human autonomy and oversight


About this sub-guideline

This sub-guideline is part of the guideline Ethical principles. Refer to the main guideline for context and an overview. 

This sub-guideline explores the principle of human autonomy and oversight in AI governance for parliaments. This principle concerns the way in which AI systems interact with humans, as well as the way in which information is stored, transmitted and secured. The sub-guideline stresses that parliaments, as enablers of a democratic, flourishing and equitable society, must support users' agency and uphold fundamental rights, and that, in an AI context, this requires human oversight.

In this sub-guideline, special attention is given to the challenges posed by generative AI, emphasizing the need for robust feedback channels and frequent human checks.

Overall, this sub-guideline provides a framework for parliaments to develop and maintain AI systems that promote human autonomy and allow for human oversight. 

 

Human autonomy and protection of citizens’ rights

Humans should be free to express their opinions and make decisions about their lives without interference, coercion or manipulation. In order to ensure that AI systems do not negatively affect citizens’ rights, it is important for parliaments to understand how AI systems interact with humans and how information is stored, transmitted and secured. 

Maintaining human autonomy is particularly challenging in areas such as web searches, content curation, content moderation, browsing activities, and email and text communications stored in the cloud. When parliaments rely on AI systems to execute core public functions, they should ensure that the design and operation of these systems comply with international human rights standards, as part of their duty to promote freedom of expression.

Moreover, in order to preserve human agency, direct interaction between human end users and an AI system should be established in a way that avoids simulating social relationships or stimulating potentially negative or addictive behaviour.

 

Human oversight

In the day-to-day running of organizations, human oversight is exercised through human supervision of an AI system’s outputs. Users and managers responsible for AI systems analyse this output to ascertain whether undesirable behaviours have occurred, whether the rules established at the development stage need to be modified, or whether there are any data biases that went unnoticed during the development of the system.
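
As a hedged illustration of what such analysis might look like in practice, the short Python sketch below compares an AI system's outcome rates across groups to surface possible data bias for human review. The record format, group labels and the 0.2 threshold are assumptions made purely for this example, not part of any prescribed method.

```python
# A minimal sketch of a periodic output review that compares an AI system's
# outcome rates across groups to surface possible data bias. The record
# format, group labels and threshold are assumptions for illustration.

from collections import defaultdict

def outcome_rates(records: list[dict]) -> dict[str, float]:
    """records is a list like [{"group": "A", "positive": True}, ...]."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        positives[record["group"]] += int(record["positive"])
    return {group: positives[group] / totals[group] for group in totals}

def flag_for_human_review(records: list[dict], threshold: float = 0.2) -> bool:
    """Flags the system for review if outcome rates diverge widely by group."""
    rates = outcome_rates(records)
    return max(rates.values()) - min(rates.values()) > threshold
```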

The nature of human oversight depends on the type of AI application. Supervision can occur during development, on an ongoing basis, or once the system is in production, in order to gather feedback on the system’s output.

There are three types of human supervision that can be applied to AI systems:

  • Human-in-the-loop (HITL): Under this model, a human mediates all decisions made by the AI system. While this approach offers the highest level of human control, it is not always desirable or feasible, particularly for systems designed for rapid decision-making or high-volume data processing.
  • Human-on-the-loop (HOTL): This approach allows for human intervention during the project development phase. Once the system is operational, the human’s role shifts to monitoring the system’s operation and decisions. This approach balances automation with human oversight, allowing for intervention when necessary.
  • Human-in-command (HIC): This is the most comprehensive form of oversight, extending beyond the AI system’s immediate functioning to consider broader economic, social, legal and ethical impacts. Under this model, oversight can even extend to society at large, with public feedback gathered on the AI system’s behaviour providing a broader perspective on its effects and implications.

The distinction between these three approaches – HITL, HOTL and HIC – lies primarily in the level of autonomy granted to the AI system and the extent of human oversight. These are summarized in Table 1 below:

Table 1: Models of human–AI interaction

Model                    | Autonomy granted to the AI system                              | Extent of human oversight
Human-in-the-loop (HITL) | Lowest: a human mediates every decision                        | Continuous, decision by decision
Human-on-the-loop (HOTL) | Moderate: the system decides autonomously once operational     | Human intervention during development; monitoring of operation and decisions thereafter
Human-in-command (HIC)   | Operational autonomy remains subject to overall human command  | Most comprehensive: extends to broader economic, social, legal and ethical impacts, potentially including feedback from society at large
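
To make the first two models concrete, the following is a minimal Python sketch contrasting a HITL gate, where every decision awaits human approval, with a HOTL flow, where decisions execute autonomously and are logged for later review. The classify() function and all other names here are hypothetical illustrations, not part of any specific parliamentary system.

```python
# Hedged sketch of HITL vs HOTL supervision. classify() stands in for any
# AI system's output; all names are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    input_text: str
    ai_output: str
    approved: bool = False

def classify(text: str) -> str:
    """Hypothetical stand-in for an AI system's decision."""
    return "route-to-committee" if "petition" in text.lower() else "archive"

# Human-in-the-loop: a person must approve every decision before it takes effect.
def hitl_decide(text: str, human_approves: Callable[[Decision], bool]) -> Decision:
    decision = Decision(text, classify(text))
    decision.approved = human_approves(decision)  # blocks on human judgement
    return decision

# Human-on-the-loop: decisions execute autonomously; humans monitor the log.
audit_log: list[Decision] = []

def hotl_decide(text: str) -> Decision:
    decision = Decision(text, classify(text), approved=True)
    audit_log.append(decision)  # supervisors review this log on an ongoing basis
    return decision
```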

By implementing these oversight approaches, parliaments can harness the benefits of AI while maintaining essential human control and accountability, thus upholding democratic principles and public trust.

 

Practising human autonomy and oversight

In order to safeguard human autonomy and maintain proper oversight of AI systems, parliaments should adopt a comprehensive approach. The components of this approach are detailed below:

  • Risk assessment: Conduct a thorough risk assessment of each AI system, paying particular attention to those that interact directly with human end users. For these systems, identify any potential for confusion about who or what is engaging in the interaction.
  • Rules: Establish clear rules for AI-human interactions in order to prevent any manipulation or the formation of inappropriate social relationships. The type of oversight required for each AI system should be determined according to its specific risk profile.
  • Standards and testing: Develop a set of clear, measurable criteria for acceptable and unacceptable AI behaviours (a minimal sketch of such criteria follows this list), and draw up an extensive testing plan to explore the full range of system behaviours. Parliament’s AI policy should designate a specific position or organizational body with the authority to withdraw an AI system from operation if it cannot meet these behavioural standards.
  • Training: Provide thorough training on the assessment process to both technical staff and managers, including on the use of any specific tools or functionalities built into the AI system itself.
  • Reporting: Produce regular oversight reports, appropriately tailored for both technical staff and managers.
  • Review: Establish a timely and efficient process for reviewing these human oversight reports to ensure that any issues or concerns are addressed promptly, thus maintaining the integrity and trustworthiness of the AI systems in use.
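
As one way of making the "Standards and testing" item concrete, the following hedged Python sketch expresses behavioural criteria as automated checks over an AI system's text outputs. The criteria, terms and function names are hypothetical examples, not prescribed parliamentary standards.

```python
# Hedged sketch: behavioural standards expressed as automated checks over
# AI text outputs. All terms and criteria are illustrative assumptions.

UNACCEPTABLE_TERMS = {"placeholder-prohibited-term"}  # policy-defined terms

def check_no_prohibited_content(output: str) -> bool:
    # Naive substring matching, purely for illustration.
    return not any(term in output.lower() for term in UNACCEPTABLE_TERMS)

def check_cites_source(output: str) -> bool:
    # Example criterion: generated summaries must reference a document ID.
    return "doc:" in output

BEHAVIOURAL_CRITERIA = [check_no_prohibited_content, check_cites_source]

def passes_standards(output: str) -> bool:
    """True only if every criterion holds; a failure would trigger the
    escalation path, up to withdrawing the system from operation."""
    return all(criterion(output) for criterion in BEHAVIOURAL_CRITERIA)
```
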
By implementing these practices, parliaments can leverage the capabilities of AI systems while ensuring that these systems remain under appropriate human control and respect human autonomy.

 

Human oversight of generative AI 

Parliaments should establish robust oversight mechanisms for generative AI:

  • Create a digital channel for user feedback on AI outputs (a minimal sketch of such a feedback record follows this list).
  • Invest in staff training for effective AI oversight.
  • Conduct more frequent human checks on AI-generated content, given the rapid advancements in this technology.
  • Carefully select generative AI tools and ensure that all users understand their specific limitations.
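
As a hedged sketch of what the digital feedback channel's records might look like, the following Python fragment defines a minimal structured feedback entry and a submission function. All field and function names are assumptions made for illustration, not a specification.

```python
# Minimal sketch of a structured feedback record for AI-generated outputs,
# so that human reviewers can triage flagged content. Names are assumptions.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIFeedback:
    output_id: str          # identifier of the AI-generated content
    user_id: str            # who submitted the feedback
    rating: int             # e.g. 1 (unacceptable) to 5 (fully acceptable)
    comment: str
    submitted_at: datetime

feedback_queue: list[AIFeedback] = []

def submit_feedback(output_id: str, user_id: str, rating: int, comment: str) -> None:
    feedback_queue.append(AIFeedback(
        output_id, user_id, rating, comment,
        submitted_at=datetime.now(timezone.utc),
    ))
    # Low ratings could be routed immediately to a human reviewer.
```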

These measures allow parliaments to leverage generative AI while maintaining essential human control and ethical standards.


The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].