Risk management

Audience
This guideline is intended for senior parliamentary managers and parliamentary staff involved in the development and implementation of AI-based systems. It also provides a more detailed technical discussion for those involved in developing and implementing AI-related projects.
About this guideline
This guideline provides guidance on AI risk management for parliaments. It emphasizes the importance of continuous risk assessment throughout an AI system’s life cycle, from project proposal to decommissioning, with a particular focus on three stages: initial project authorization, the development phase and the operational phase. The associated sub-guideline includes questionnaires to support the identification, analysis and mitigation of AI-related risks.
Relevance of risk management to AI governance in parliaments
There are certain risks associated with the use and development of AI systems in parliamentary settings. Further discussion of these risks can be found in the guidelines Generic risks and biases and Risks and challenges for parliaments.
Managing these risks is essential to ensure that AI systems are safe, ethical, fair, private, trustworthy, transparent and compliant with regulations, as well as to ensure respect for human autonomy and intellectual property rights. In summary, effective AI risk management practices help parliaments to:
protect their data and AI systems
maintain business continuity
safeguard their reputation
prevent costly errors
support responsible innovation
Planning to implement an AI risk management process
An AI risk management process needs to be aligned with parliament’s culture and structure. As stakeholders are at different hierarchical levels, appropriate language must be used in assessment questionnaires and periodic reports in order to prevent delays or incorrect interpretations.
During the development phase, adjustments to traditional risk management processes, embedded in project management, can simplify and speed up the implementation of AI risk management. In this case, the process will involve not only AI project managers, but also IT committees, corporate committees and senior decision makers.
During the operational phase, a partnership between business and IT teams facilitates the integration of oversight practices and the collection of user feedback as input for AI risk management. In this case, it is crucial to involve a risk management team composed of people with AI skills, together with the business staff in charge of AI-enabled digital services.
Applying risk management processes to the AI life cycle
Risk management is a continuous cycle that permeates all phases of an AI system’s life cycle, ensuring that risks are consistently identified, assessed, and managed or mitigated – from inception to deployment and beyond.
A typical AI risk management process includes the following phases:
AI risk assessment
AI risk analysis
AI risk treatment
AI risk communication
AI risk monitoring and review
These phases are discussed in turn below.
AI risk assessment
The aim of this first step is to identify what risks exist, to understand the operational and other implications of these risks, and to decide how they should be managed or mitigated.
This assessment exercise will typically be carried out using questionnaires for key stakeholders – which, in itself, can be a useful mitigation method because it exposes potential risks and increases awareness of them.
These questionnaires can be used in the following phases:
Initial project authorization
The development phase (prior to commissioning)
The operational phase (through to decommissioning)

Risk assessment for initial project authorization
The AI system life cycle starts with the presentation of a proposal for an AI project to the relevant governance body (council, committee or unit), along with a completed AI risk assessment questionnaire (Q1), which is used to gather information about the project’s purpose, stakeholders, compliance, data, agreements, potential biases and other factors.
Governance staff use the responses in the questionnaire to estimate the project’s risks and benefits, and to determine, on that basis, whether authorization should be given to add the AI project to parliament’s portfolio.
Risk assessment in the development phase
During the development phase, the Q2 questionnaire will inform the risk management process, with the aim of reducing and mitigating AI risks such that the resulting AI system is considered trustworthy for deployment. Unacceptable AI risks can, however, lead to the project being interrupted at this stage.

Risk assessment in the operational phase
After the AI system has been deployed, it is necessary to keep monitoring the system’s behaviour as well as changes in the variables considered in the AI system life cycle, such as data characteristics, business rules and social considerations.
The third risk assessment questionnaire (Q3) can be used during this phase. As with the Q2 questionnaire, the risk score resulting from this third questionnaire will inform the risk management process, which at this point aims to reduce and mitigate AI risks in order to ensure that the system remains trustworthy.
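The guideline does not prescribe a particular scoring method, but the idea of deriving a risk score from questionnaire responses can be sketched as follows. The question identifiers, weights, rating scale and threshold below are all hypothetical examples for illustration only; an actual parliament would define its own in line with its AI policy.

```python
# Illustrative sketch only: turning questionnaire answers into a risk score.
# Question IDs, weights, the 0-3 rating scale and the threshold are
# hypothetical assumptions, not part of the guideline itself.

QUESTION_WEIGHTS = {
    "data_sensitivity": 3,      # e.g. personal or sensitive parliamentary data
    "bias_potential": 2,        # known sources of bias in the data
    "regulatory_exposure": 3,   # compliance obligations affected
    "vendor_dependency": 1,     # reliance on third-party providers
}

def risk_score(answers: dict[str, int]) -> int:
    """Weighted sum of answers, each rated 0 (no risk) to 3 (high risk)."""
    return sum(QUESTION_WEIGHTS[q] * rating for q, rating in answers.items())

def is_trustworthy(answers: dict[str, int], threshold: int = 12) -> bool:
    """Deem the system trustworthy while the score stays below the threshold."""
    return risk_score(answers) < threshold

answers = {
    "data_sensitivity": 2,
    "bias_potential": 1,
    "regulatory_exposure": 1,
    "vendor_dependency": 0,
}
print(risk_score(answers), is_trustworthy(answers))  # prints: 11 True
```

The same scoring routine could serve Q1, Q2 and Q3, with the weights and threshold tuned to the phase in which the questionnaire is applied.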
AI risk analysis
Once the risks have been identified and assessed, the next step is to analyse these risks in light of parliament’s AI policy and its risk appetite – often based on its regulatory requirements and strategic objectives – in order to determine which risk(s) require(s) treatment. All identified risks should then be ranked in order to identify which require immediate attention and which should be monitored over time. All such decisions should be made with the close involvement of relevant stakeholders.
Trade-offs between different risk treatment options also need to be evaluated at this stage. For instance, eliminating one identified risk might also eliminate the option of using the AI system in another, important way, thereby putting other strategic goals at risk.
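The ranking step described above is commonly done by ordering risks on a severity measure such as likelihood multiplied by impact. The risks, scales and scores below are hypothetical examples, not taken from the guideline:

```python
# Illustrative sketch: ranking identified risks so that those requiring
# immediate attention surface first. Risk names and the 1-5 likelihood
# and impact scales are hypothetical assumptions.

risks = [
    {"name": "training-data bias", "likelihood": 3, "impact": 4},
    {"name": "model drift after deployment", "likelihood": 4, "impact": 2},
    {"name": "vendor lock-in", "likelihood": 2, "impact": 2},
]

# Severity = likelihood x impact; sort highest-severity first
ranked = sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

for r in ranked:
    print(r["name"], r["likelihood"] * r["impact"])
```

In this example, training-data bias (severity 12) would require immediate attention, while vendor lock-in (severity 4) would simply be monitored over time.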
AI risk treatment
Once the identified risks have been analysed and prioritized, parliament should develop and implement plans to manage them, possibly using one or more of the following strategies:
Avoid: Eliminate the activity that gives rise to the risk. For example, parliament may decide not to implement an AI system, or even to abandon an AI project, if the associated risks are deemed too high.
Reduce: Take steps to reduce the likelihood of the risk occurring, or to mitigate its impact if it does occur.
Transfer: Transfer the risk to a third party, such as through insurance or by outsourcing certain services to a company better equipped to manage the risk.
Accept: Accept the risk without taking any action to alter its likelihood or impact. This is typically done when the cost of mitigating the risk exceeds the potential damage, and when the risk is considered low enough to be acceptable.
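The choice between the four strategies above can be sketched as a simple decision rule. The thresholds and decision logic below are hypothetical examples; an actual parliament would derive them from its own AI policy and risk appetite.

```python
# Illustrative sketch: selecting a treatment strategy for an analysed risk.
# The severity scale, the risk-appetite threshold and the decision order
# are hypothetical assumptions, not prescribed by the guideline.

def choose_treatment(severity: int, mitigable: bool, transferable: bool,
                     appetite: int = 6) -> str:
    """Map an analysed risk to one of the four treatment strategies."""
    if severity <= appetite:
        return "accept"      # within risk appetite; no action taken
    if mitigable:
        return "reduce"      # lower the likelihood or mitigate the impact
    if transferable:
        return "transfer"    # e.g. insurance or outsourcing to a third party
    return "avoid"           # eliminate the activity giving rise to the risk

print(choose_treatment(severity=4, mitigable=True, transferable=False))   # accept
print(choose_treatment(severity=9, mitigable=True, transferable=False))   # reduce
print(choose_treatment(severity=9, mitigable=False, transferable=False))  # avoid
```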

AI risk communication
Identified AI risks and associated management measures should be communicated to relevant stakeholders throughout the AI system’s life cycle. During the development phase, project managers and project office staff will provide regular updates on risk status and treatment effectiveness as part of their usual remit. Communication is equally important in the operational phase.
AI risk monitoring and review
During the operational phase, it is essential to continuously monitor and review AI risks.
Oversight and feedback mechanisms, coupled with training programmes, help to build a risk-aware culture and keep stakeholders informed.
Periodic audits and reviews should also be conducted to ensure compliance with AI policy and regulations. All identified incidents and near-misses should be analysed in order to identify root causes and improve risk management practices, with lessons learned documented and policies updated accordingly.
Should any unacceptable AI risks arise, it may be necessary to remove the AI system from operation.
Find out more
- Cheatham, B., Javanmardian, K., and Samandari, H. (Undated). Confronting the risks of artificial intelligence, available at [Confronting AI risks | McKinsey]
- National Institute of Standards and Technology (NIST) Special Publication (SP) 800-30, Guide for Conducting Risk Assessments, available at [NIST SP 800-30 | NIST]
- Online Browsing Platform (OBP), ISO/IEC 27005:2022(en) Information security, cybersecurity and privacy protection — Guidance on managing information security risks, available at [ISO/IEC 27005:2022(en), Information security, cybersecurity and privacy protection — Guidance on managing information security risks]
- Tucker, B.A. (Undated). Advancing Risk Management Capability Using the OCTAVE FORTE Process, Carnegie Mellon University, available at [Advancing Risk Management Capability Using the OCTAVE FORTE Process (cmu.edu)]
The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].
Related content
About the guidelines | The role of AI in parliaments | Introducing AI applications | Inter-parliamentary cooperation for AI | Strategic actions towards AI governance | Risks and challenges for parliaments | Generic risks and biases | Ethical principles | Risk management | Alignment with national and international AI frameworks and standards | Project portfolio management | Data governance | Systems development | Security management | Training for Data Literacy and AI Literacy | Glossary of terms