Dr Andy Williamson
Senior researcher, Centre for Innovation in Parliament
There is nothing new about AI. In the 13th century, Ramon Llull proposed a paper‑based method that could generate new knowledge by combining concepts. By the early 1950s, British scientists were playing checkers against their computers.
Fast forward to 2023, and AI has become the latest technology buzzword. There is plenty of exaggeration, as with previous digital trends such as the internet, social media and blockchain. But is the excitement justified? Will AI really change our lives?
We have certainly embarked on a new level of AI integration. AI can now engage with us directly, generate new content and predict outcomes. This could transform a wide variety of settings, including parliaments.
AI is developing with astonishing speed. Our current level of AI innovation is unprecedented, which brings opportunities but also challenges. A number of parliaments are already using and experimenting with AI. It has the potential to support legislative drafting, summarize parliamentary documents, help citizens ask questions about parliamentary activities, and more. Today, AI can generate, analyse and improve large volumes of text with high levels of efficiency. And the reliability of the models will only improve.
We can learn from the early adopters, so we are pleased to share examples of their work in this and previous issues of the Innovation Tracker (see articles on Bahrain, Italy and the ECPRD). Others are experimenting before committing, and yet more are still to make a move.
Implementing AI means transforming culture, procedures and processes – not just adopting a new technology. To understand what AI means for parliaments, we must first focus on areas such as regulation and governance: AI must be used responsibly and ethically. We should also think about the levels of digital and information literacy that we need if we are to benefit from AI. And we must make sure AI is properly managed in parliaments, whether through a top-down framework or a more use-case-driven approach (there are advocates for both). To manage AI effectively, parliaments must work together and share experiences.
AI has its challenges – particularly generative AI and the large language models on which it is based. How are the algorithms constructed? How do we ensure there is no inherent bias in the large language models, and no confirmation bias in the algorithms? ‘Hallucinations’ (where AI tools generate fictional or spurious results) must be dealt with, and the risks from limited or biased source material must also be considered. Parliaments must explore the unintended consequences of AI, including unintended bias, disinformation risk (such as intentional deep fakes), and the perpetuation of stereotypes that embed inequalities. There are challenges for cybersecurity too, not least in ensuring that AI-based systems cannot be manipulated by unscrupulous actors.
Parliaments must develop strong guidelines for their use of AI. They must consider auditing and transparency procedures, guidance on when and where AI can be used, and how to make its use transparent to the public. As well as recognizing the challenges, parliaments should also highlight AI’s positive potential. They need to foster a culture of innovation and experimentation as new applications emerge and technologies improve rapidly. All this can be achieved more effectively when parliaments share their experiences and learn from each other.