
Systems development


Audience

This guideline is intended for IT managers and staff, software engineers, data scientists and technical project managers involved in designing, deploying and maintaining AI systems in parliaments.

About this guideline

This guideline focuses on AI systems development within parliamentary contexts, addressing the crucial intersection of technology and governance. It covers essential aspects such as the AI system life cycle, external development frameworks, deployment strategies and planning considerations. By emphasizing ethical principles, risk mitigation and best practices, this guideline aims to support parliaments in implementing AI solutions that enhance efficiency, transparency and decision-making while maintaining integrity and public trust in the legislative process.

Why AI systems development matters

In the context of AI governance, an AI systems development process is a set of practices designed to ensure that every AI project solves the problem it was planned to address and adheres to AI ethical principles. As an inherently operational process, AI systems development should adhere to parliament’s AI policy and follow the institution’s data management and security management procedures.

In a parliamentary context, an AI systems development process is relevant to AI governance as a way of reducing ethical and operational risks. Details of how it does this are given below.

 

Preserving the privacy of data subjects

Personal data should be protected not only in the development phases, but also in the AI system’s outputs.

 

Ensuring transparency throughout the development and maintenance phases 

AI systems are often complex, making their internal decision-making processes opaque. For this reason, development practices should be scrutinized in order to improve transparency throughout the AI system life cycle – from initial project planning until the AI system is withdrawn from operation. This approach makes it easier to explain the AI system’s outcomes and to ensure that the development phases respect parliament’s rules and comply with regulations.

 

Reducing biases and discrimination

Techniques to identify groups that require protection from bias should be applied throughout the process, from planning to deployment. While AI systems are in operation, continuous monitoring helps to detect and minimize new biases not seen during the development phase.
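Continuous bias monitoring can be as simple as routinely comparing outcome rates across groups. The sketch below is one minimal, illustrative approach – the group labels, sample predictions and the 0.8 "four-fifths" threshold are assumptions for illustration, not part of this guideline.

```python
# Illustrative sketch: compare a model's positive-outcome rate across
# groups and flag any group whose rate falls well below the best-served
# group's rate. Threshold and data are assumptions for illustration.

def selection_rates(predictions, groups):
    """Positive-outcome rate per group (predictions are 0/1)."""
    rates = {}
    for group in set(groups):
        members = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(members) / len(members)
    return rates

def disparity_alerts(predictions, groups, min_ratio=0.8):
    """Return groups whose selection rate is below min_ratio of the
    best-served group's rate (a "four-fifths"-style check)."""
    rates = selection_rates(predictions, groups)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < min_ratio}

# Example: binary predictions (1 = positive outcome) for two groups
preds = [1, 1, 0, 1, 0, 0, 0, 1]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparity_alerts(preds, grps))  # group B is flagged
```

In practice, such a check would run on a schedule against live outputs, with alerts feeding into the monitoring process described above.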

 

Creating accountability

Through systematic steps for planning, implementing, testing and improving, responsibilities are assigned to, and approved by, key stakeholders, with individual roles clearly defined. The AI system’s functionality should be documented in such a way that it can be audited.

 

Improving robustness and safety

The systems development process should focus on improving robustness and safety, through a system architecture that prioritizes cybersecurity and through extensive testing.

 

Maintaining human autonomy

Humans should play a continuous verification role in order to ensure that the AI system’s outputs are reliable, both during development and following deployment in a live environment. This human oversight will ensure that the system continues to adhere to the ethical principles considered in the project phase, and allows for new ethical risks to be identified. 
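One common way to operationalize this verification role is to route low-confidence outputs to a human reviewer before they are used. The sketch below is illustrative only – the confidence threshold and record fields are assumptions, not prescribed by this guideline.

```python
# Illustrative sketch: split AI outputs into an auto-approved queue and a
# human-review queue based on the model's reported confidence. The 0.9
# threshold and the record structure are assumptions for illustration.

CONFIDENCE_THRESHOLD = 0.9

def triage(outputs):
    """Return (auto_approved, needs_human_review) queues."""
    auto, review = [], []
    for item in outputs:
        if item["confidence"] >= CONFIDENCE_THRESHOLD:
            auto.append(item)
        else:
            review.append(item)  # a person verifies before release
    return auto, review

outputs = [
    {"id": 1, "summary": "Bill summary ...", "confidence": 0.97},
    {"id": 2, "summary": "Amendment note ...", "confidence": 0.62},
]
auto, review = triage(outputs)
print([i["id"] for i in auto], [i["id"] for i in review])
```

The threshold itself should be set and periodically revisited by the stakeholders accountable for the system, not fixed by developers alone.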

 

Guaranteeing regulatory compliance

AI systems are required to comply with various legal and regulatory requirements, established both internally and within parliament’s country or region.

Systems life cycle and development frameworks

The AI systems life cycle is a sequential list of steps, practices and decisions that drive the development and deployment of AI-based solutions. Having a well-defined life cycle is vital for parliaments that are developing their own AI-based systems and tools, as it provides a structured and systematic approach to building, deploying and maintaining ethical AI technologies. 
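The "sequential steps with decisions" idea can be pictured as a pipeline of stages, each closed by a sign-off gate. The stage names below are one possible breakdown, chosen for illustration; the guideline itself only specifies that the life cycle runs from initial planning to withdrawal from operation.

```python
# Illustrative sketch: the life cycle as an ordered sequence of stages,
# where work advances to the next stage only after the current gate is
# approved. Stage names are an assumption for illustration.

STAGES = [
    "planning", "data preparation", "development",
    "validation", "deployment", "monitoring", "withdrawal",
]

def advance(current_stage, gate_approved):
    """Move to the next stage only when the current gate is signed off."""
    if not gate_approved:
        return current_stage  # blocked until stakeholders approve
    index = STAGES.index(current_stage)
    return STAGES[min(index + 1, len(STAGES) - 1)]
```

Modelling the gates explicitly is what makes the process auditable: every transition has an identifiable approval behind it.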

Within this context, there are an increasing number of external AI development frameworks that parliaments can use. These consist of building blocks and integrated software libraries that make it easier to develop, train, validate and deploy AI solutions through a high-level programming interface.

For further discussion of systems life cycle and development frameworks, refer to the sub-guideline Systems development: Systems life cycle and development frameworks.

Deployment and implementation

When deploying and implementing AI systems and tools, parliaments need to understand key aspects such as deployment strategies, common deployment cases and critical planning recommendations, including topics such as stakeholder engagement, pilot project initiation and the use of agile methods. Consideration should also be given to context – including parliamentary workflows, internal expertise and opportunities for leveraging responsible AI tools.
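To make "deployment strategies" concrete, the sketch below shows one common pattern – a canary release – in which a small, fixed fraction of requests is routed to a new AI model while the rest continue to use the current one. The 10% fraction and the hashing scheme are assumptions for illustration.

```python
# Illustrative sketch of a canary-style rollout: each request is
# deterministically assigned to the new or the current model, so the same
# request always gets the same route. Fraction and scheme are assumptions.
import hashlib

def route(request_id, canary_fraction=0.10):
    """Assign a request to 'new_model' or 'current_model'."""
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = digest[0] / 255.0  # stable pseudo-random value in [0, 1]
    return "new_model" if bucket < canary_fraction else "current_model"
```

A pilot project might start with a small canary fraction, monitor outcomes against the criteria set during planning, and widen the rollout only once stakeholders sign off.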

For further discussion of the deployment and implementation of AI systems, refer to the sub-guideline Systems development: Deployment and implementation.

For further discussion of software deployment patterns, refer to the sub-guideline Systems development: Deployment patterns.

Find out more


The Guidelines for AI in parliaments are published by the IPU in collaboration with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence. It may be freely shared and reused with acknowledgement of the IPU. For more information about the IPU’s work on artificial intelligence, please visit www.ipu.org/AI or contact [email protected].

Sub-Guidelines

Systems development: Systems life cycle and development frameworks

Systems development: Deployment and implementation