Security management

Audience
This guideline is intended for senior technical staff involved in the development and implementation of AI systems. Some of the material it contains may also be relevant to senior parliamentary managers looking to gain a better understanding of technical issues relating to security.
About this guideline
This guideline addresses the protection of AI systems from a wide range of threats and risks, outlining a set of practices for ensuring the confidentiality, integrity and availability of AI systems and data in parliaments.
AI systems are being deployed in many different ways, always with the aim of helping professionals increase their productivity. Parliaments are undoubtedly going to follow this trend, aiming for faster processes that accelerate democracy without harming or hurrying debate. Processes are also expected to be also safer, since parliaments are potentials target for national and international interest groups.
Why security management matters
As the use of AI increases, so does the risk to organizations using this technology.
AI systems are associated with a range of security issues, such the inference of data – sometimes sensitive data – used in the training process, the alteration of such data, and the use of a particular prompt – wilfully or not – that could lead the AI system to reach a wrong or unexpected conclusion. All of these issues and more must be addressed before the AI system can be handed over to users.
Moreover, some AI behaviours could have a significant negative impact on an organization’s public reputation. This means that AI systems can only be deployed after – at the very least – a basic risk assessment demonstrating that risks are low or controlled, and that the benefits outweigh these risks.
The deeper the knowledge someone has about AI, the easier it will be for this person to come up with a possible way of misleading the system and turning a breakthrough technology into a personal weapon to threaten different actors.
Moreover, even organizations that do not use AI models and systems are at risk, because criminals are already using AI in an attempt to increase the success rate of their attacks. However, security considerations are especially important for organizations that do use AI in their own systems, since these models are prone to new types of attacks.
Considering the rise in cyberattacks, which surged after the COVID-19 pandemic, and the increasing use of AI models, which are the new “holy grail” of technology, overcoming AI threats is an important part of an organization’s cybersecurity plan.
Cybersecurity management in a parliamentary context
Cyberattacks are a growing concern as parliaments increase their reliance on internet-enabled connectivity – whether cloud-based servers, external systems or for users. Effective cybersecurity management is therefore critical for avoiding such attacks or minimizing their impact.
For further discussion of this subject, refer to the sub-guideline Security management: Parliamentary context.
Cybersecurity threats to AI systems
AI systems learn from the data they are fed and then apply models to help them make decisions, generate new content or do anything else they are programmed to do.
Just as parliaments must ensure that data is valid and of high quality, they must also ensure that there are no opportunities for attackers to exploit inputs into AI systems in order to corrupt and manipulate the data, modelling and outputs.
Attacks can occur in any phase, from data preparation through to AI system development, deployment and operation (for further discussion of this subject, refer to the guideline Systems development). As a result, the entire AI system life cycle should be properly supervised in order to minimize unexpected behaviours.
For further discussion of this subject, refer to the sub-guideline Security management: Threats.
Good practices for implementing AI-focused cybersecurity
Most types of attacks can be avoided or minimized by implementing good practices. Nonetheless, some attacks specifically targeting AI systems require specific countermeasures.
For further discussion of this subject, refer to the sub-guideline Security management: Good practices.
Main considerations when implementing AI-focused cybersecurity controls
Based on the main security frameworks, parliaments should gradually implement controls in the following four areas, according to their specific structure, needs and threat risks:
- Technical controls
- Organizational controls
- Human controls
- Physical controls
Together, measures across these four areas enable parliaments to enhance the protection of their AI systems.
For further discussion of this subject, refer to the sub-guideline Security management: Implementing cybersecurity controls.
Find out more
- Athalye A., N. Carlini and D. Wagner: Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
- Deloitte: Impact of COVID-19 on Cybersecurity
- Federal Bureau of Investigation (FBI): FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence
- Hurst A.: NCSC releases guidelines for secure AI development
- International Monetary Fund (IMF): Rising Cyber Threats Pose Serious Concerns for Financial Stability
- Kurakin A., I. Goodfellow and S. Bengio: Adversarial Examples in the Physical World
- Kurakin A. and others: 9 Common Types of Attacks on AI Systems
- Norwegian National Security Authority (NSM): NSM ICT Security Principles
- Open Worldwide Application Security Project (OWASP): OWASP Machine Learning Security Top Ten
- Oseni A. and others: Security and Privacy for Artificial Intelligence: Opportunities and Challenges
- Saleous A. and others: COVID-19 pandemic and the cyberthreat landscape: Research challenges and opportunities
- Schneider S. and others: Designing Secure AI-based Systems: a Multi-Vocal Literature Review
The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].
Related content
About the guidelines | The role of AI in parliaments | Introducing AI applications | Inter-parliamentary cooperation for AI | Strategic actions towards AI governance | Risks and challenges for parliaments | Generic risks and biases | Ethical principles | Risk management | Alignment with national and international AI frameworks and standards | Project portfolio management | Data governance | Systems development | Security management | Training for Data Literacy and AI Literacy | Glossary of terms