Skip to main content

Security management

Image

Audience

This guideline is intended for senior technical staff involved in the development and implementation of AI systems. Some of the material it contains may also be relevant to senior parliamentary managers looking to gain a better understanding of technical issues relating to security. 

About this guideline

This guideline addresses the protection of AI systems from a wide range of threats and risks, outlining a set of practices for ensuring the confidentiality, integrity and availability of AI systems and data in parliaments.

AI systems are being deployed in many different ways, always with the aim of helping professionals increase their productivity. Parliaments are undoubtedly going to follow this trend, aiming for faster processes that accelerate democracy without harming or hurrying debate. Processes are also expected to be also safer, since parliaments are potentials target for national and international interest groups.

Why security management matters

As the use of AI increases, so does the risk to organizations using this technology. 

AI systems are associated with a range of security issues, such the inference of data – sometimes sensitive data – used in the training process, the alteration of such data, and the use of a particular prompt – wilfully or not – that could lead the AI system to reach a wrong or unexpected conclusion. All of these issues and more must be addressed before the AI system can be handed over to users.

Moreover, some AI behaviours could have a significant negative impact on an organization’s public reputation. This means that AI systems can only be deployed after – at the very least – a basic risk assessment demonstrating that risks are low or controlled, and that the benefits outweigh these risks.

The deeper the knowledge someone has about AI, the easier it will be for this person to come up with a possible way of misleading the system and turning a breakthrough technology into a personal weapon to threaten different actors.

Moreover, even organizations that do not use AI models and systems are at risk, because criminals are already using AI in an attempt to increase the success rate of their attacks. However, security considerations are especially important for organizations that do use AI in their own systems, since these models are prone to new types of attacks.

Considering the rise in cyberattacks, which surged after the COVID-19 pandemic, and the increasing use of AI models, which are the new “holy grail” of technology, overcoming AI threats is an important part of an organization’s cybersecurity plan.

Cybersecurity management in a parliamentary context

Cyberattacks are a growing concern as parliaments increase their reliance on internet-enabled connectivity – whether cloud-based servers, external systems or for users. Effective cybersecurity management is therefore critical for avoiding such attacks or minimizing their impact.

For further discussion of this subject, refer to the sub-guideline Security management: Parliamentary context.

Cybersecurity threats to AI systems

AI systems learn from the data they are fed and then apply models to help them make decisions, generate new content or do anything else they are programmed to do.

Just as parliaments must ensure that data is valid and of high quality, they must also ensure that there are no opportunities for attackers to exploit inputs into AI systems in order to corrupt and manipulate the data, modelling and outputs.

Attacks can occur in any phase, from data preparation through to AI system development, deployment and operation (for further discussion of this subject, refer to the guideline Systems development). As a result, the entire AI system life cycle should be properly supervised in order to minimize unexpected behaviours.

For further discussion of this subject, refer to the sub-guideline Security management: Threats.

Good practices for implementing AI-focused cybersecurity

Most types of attacks can be avoided or minimized by implementing good practices. Nonetheless, some attacks specifically targeting AI systems require specific countermeasures.

For further discussion of this subject, refer to the sub-guideline Security management: Good practices.

Main considerations when implementing AI-focused cybersecurity controls

Based on the main security frameworks, parliaments should gradually implement controls in the following four areas, according to their specific structure, needs and threat risks:

  • Technical controls
  • Organizational controls
  • Human controls
  • Physical controls

Together, measures across these four areas enable parliaments to enhance the protection of their AI systems.

For further discussion of this subject, refer to the sub-guideline Security management: Implementing cybersecurity controls.

Find out more


The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].

Sub-Guidelines

Security management: Implementing cybersecurity controls