
Ethical principles: Robustness and safety


About this sub-guideline

This sub-guideline is part of the guideline Ethical principles. Refer to the main guideline for context and an overview.

This sub-guideline explores the principle of robustness and safety in AI governance for parliaments, emphasizing that, to be trustworthy, AI systems should be robust to adverse conditions and to changes in the environment for which they were designed.

The sub-guideline presents the principle of robustness and safety through two lenses: resilience to cyberattacks, and resilience to failures that could cause damage to people, organizations or the environment, or that could prevent traceability.

Overall, this sub-guideline provides a framework for parliaments to develop and maintain AI systems that are robust and safe.

Resilience to cyberattacks

Cyberattacks against AI systems exploit both algorithmic opacity and the strong dependence of algorithms on data. Such attacks can be difficult to detect in a timely manner, so security management practices for AI systems must encompass advanced prevention techniques and focus on restoring the system, and the wider environment, to normal operating conditions.

Resilience to failure

Failures can occur in AI systems when variables take on unexpected or invalid values that the developer did not anticipate and did not programmatically guard against. While AI systems are generally expected to be robust, if such failures do occur, there must be a mechanism to restore the system to its normal state in a timely and responsible manner, with minimal loss of data or impact on parliament.
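
By way of illustration, the following Python sketch shows one shape such a mechanism could take: a model call is validated, bounded in time, and wrapped so that any failure is logged for traceability while the system degrades to a known-safe response. All names here (run_model, SAFE_FALLBACK, the timeout value) are hypothetical assumptions, not features of any particular parliamentary system.

```python
import logging
from concurrent.futures import ThreadPoolExecutor, TimeoutError

logger = logging.getLogger("ai_resilience")

# Illustrative known-safe response used when the model cannot answer reliably.
SAFE_FALLBACK = "The system could not produce a reliable answer; a staff member will follow up."

def run_model(prompt: str) -> str:
    """Placeholder for the real model call, assumed to exist elsewhere."""
    raise NotImplementedError

def resilient_answer(prompt: str, timeout_s: float = 10.0) -> str:
    # Guard against unexpected or invalid input values up front.
    if not prompt or not prompt.strip():
        logger.warning("Empty prompt received; returning safe fallback.")
        return SAFE_FALLBACK

    # Bound the model call in time so a hung component degrades gracefully.
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(run_model, prompt).result(timeout=timeout_s)
    except TimeoutError:
        logger.error("Model call timed out; returning safe fallback.")
    except Exception:
        # Log the full failure so it remains traceable, then degrade safely.
        logger.exception("Model call failed; returning safe fallback.")
    finally:
        pool.shutdown(wait=False, cancel_futures=True)
    return SAFE_FALLBACK
```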

Practising robustness and safety

In order to ensure that AI systems are robust and safe, parliaments should adopt a comprehensive, dynamic risk management approach that adapts to the ever-changing environment in which these systems operate. The components of this approach are detailed below:

Comprehensive testing: Identify and mitigate cyber threats specifically targeting AI systems, while not neglecting other potential vulnerabilities.

Security practices: Tailor security practices to address the unique challenges posed by AI systems and the threats they face, including through close and rapid communication between data teams and information security experts. When developing AI systems, cybersecurity should be a primary consideration, integrated from the outset, rather than added as an afterthought.

Training: Invest in continuous training for developers and information security staff. These staff should be well-versed in techniques to prevent cyberattacks on AI systems and equipped with disaster recovery strategies specific to these technologies.

Internal collaboration: Ensure that internal business units responsible for AI systems work closely with IT departments to establish clear parameters for monitoring system behaviour and defining thresholds for alerts regarding suspicious activity (a minimal monitoring sketch follows this list).

External partnerships: Forge partnerships with other public institutions. These alliances facilitate swift and effective communication about emerging threats and new attack categories. They also provide a platform for sharing experiences – both successes and failures – in implementing various security techniques and technologies.
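
As a concrete illustration of the internal collaboration point above, the following Python sketch implements simple threshold-based alerting for an AI service. The thresholds, metric names and alerting hook are hypothetical assumptions; in practice, the business unit and the IT department would agree the real parameters together.

```python
from collections import Counter, deque
from time import time

# Illustrative thresholds; real values would be agreed between business and IT teams.
MAX_REQUESTS_PER_MINUTE = 120  # unusually high traffic from a single client
MAX_BLOCKED_OUTPUTS = 10       # repeated safety-filter hits may signal probing

class SuspiciousActivityMonitor:
    """Tracks per-client activity against agreed alert thresholds."""

    def __init__(self) -> None:
        self._requests: dict[str, deque] = {}  # client_id -> recent request timestamps
        self._blocked: Counter = Counter()     # client_id -> count of blocked outputs

    def record_request(self, client_id: str) -> None:
        q = self._requests.setdefault(client_id, deque())
        now = time()
        q.append(now)
        # Keep only the last 60 seconds of timestamps.
        while q and now - q[0] > 60:
            q.popleft()
        if len(q) > MAX_REQUESTS_PER_MINUTE:
            self._alert(client_id, f"{len(q)} requests in the last minute")

    def record_blocked_output(self, client_id: str) -> None:
        self._blocked[client_id] += 1
        if self._blocked[client_id] > MAX_BLOCKED_OUTPUTS:
            self._alert(client_id, "repeated safety-filter hits")

    def _alert(self, client_id: str, reason: str) -> None:
        # Hypothetical hook: in practice this would notify the security team.
        print(f"ALERT: suspicious activity from {client_id}: {reason}")
```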

By adopting this holistic approach, parliaments can create a resilient framework for AI systems that can withstand threats, adapt to changes, and continue to serve their intended purpose effectively and safely.

Maintaining safety when using generative AI

When implementing generative AI in parliamentary contexts, safety considerations are paramount:

  • Maintain strict control over data access, ensuring that AI systems and tools only interact with data specifically authorized for their intended purpose. This approach safeguards sensitive information and maintains the integrity of parliamentary processes (a minimal access-control sketch follows this list).
  • Where data transfer to external cloud services raises security concerns or presents other risks, explore alternative solutions. One viable option is to employ open-source generative AI models that can run locally on a parliament’s own systems (see the second sketch below). This strategy provides the benefits of generative AI while offering full control over security, data management and integrity.
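
As a minimal sketch of the first measure, the following Python fragment denies data access by default and permits an AI tool to read only the sources explicitly authorized for its stated purpose. The purpose labels, source names and fetch_documents helper are hypothetical.

```python
# Hypothetical allowlist mapping each approved AI use case to the only data
# sources it may read; anything not listed is denied by default.
AUTHORIZED_SOURCES = {
    "plenary_summaries": {"hansard_public", "agenda_public"},
    "citizen_faq_bot": {"faq_public"},
}

def fetch_documents(source: str, query: str) -> list[str]:
    """Placeholder for the real data-access layer, assumed to exist elsewhere."""
    raise NotImplementedError

def retrieve_for_purpose(purpose: str, source: str, query: str) -> list[str]:
    allowed = AUTHORIZED_SOURCES.get(purpose, set())
    if source not in allowed:
        # Deny by default, leaving an auditable trace of the refusal.
        raise PermissionError(
            f"Source '{source}' is not authorized for purpose '{purpose}'"
        )
    return fetch_documents(source, query)
```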
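For the second measure, the sketch below assumes the open-source Hugging Face transformers library and locally stored model weights; once the weights are in place, prompts and outputs never leave the parliament’s own infrastructure. The small gpt2 model is used purely for demonstration; a parliament would choose an open model that fits its hardware, licensing and quality requirements.

```python
# A minimal sketch, assuming the `transformers` library is installed and model
# weights have been downloaded in advance to parliament-controlled storage.
from transformers import pipeline

# Small open model used purely for demonstration.
generator = pipeline("text-generation", model="gpt2")

# Inference runs entirely on local infrastructure: no prompt or document text
# is sent to an external cloud service.
result = generator(
    "Summarize the purpose of committee stage in the legislative process:",
    max_new_tokens=80,
    do_sample=False,
)
print(result[0]["generated_text"])
```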

By adopting these measures, parliaments can harness the power of generative AI while upholding the highest standards of data protection and operational safety.


The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].