The rapid evolution of artificial intelligence (AI) has prompted many experts to warn about its impacts on democracy. To explore this issue further, the IPU is preparing a series of five articles on the topic. In this first piece, IPU Secretary General Martin Chungong discusses the risks and opportunities for democracy and argues that we need more than regulation.
AI offers us many opportunities, but it has troublesome aspects too. By enabling the proliferation of falsehoods and extremist views, it becomes a very real threat to our democracies. These dangers are not necessarily new: even the ancient Greeks wrote about the impacts of false information on democracy. But AI threatens to accelerate these risks exponentially. We could be in for a bumpy ride.
Still in its early stages but developing fast, AI has already become part of our daily lives. Generally involving some level of autonomy, self-learning and adaptability, AI-enabled technologies are used in finance, medicine, customer service and much more. Every time we go online, we are greeted with highly targeted ads. AI may be a powerful tool, but its unregulated spread reminds many of social media.
Why? By allowing disinformation to flourish, social media has disrupted efforts to slow climate change, promote vaccination or even build trust between communities. Unscrupulous people use social media to build false narratives, spreading fear and hatred online, often directed at women. So far, these stories have mostly been written by humans, but we can expect artificial intelligence to increase the volume and velocity of such disinformation.
Meanwhile, AI will probably also accelerate the use and manipulation of private data in ways reminiscent of the Cambridge Analytica scandal. For anybody with the desire and money to exploit it, AI could turn that data into a massive and perhaps unbeatable electoral advantage.
I can imagine many other ways in which – in the wrong hands – AI could impact democracy. It could replace humans at work, increase unemployment, and create the conditions for fascism. By creating high-quality fakes, it could provoke political scandals on the eve of important elections. Or by monitoring and controlling restless populations, it could be used to clamp down on democratic liberties.
Some experts worry that AI could eventually take on a life of its own. They fear that it could outwit, control or destroy us simple humans.
NOT ALL DOOM AND GLOOM
We should not fall prey to our fears, however: nothing is inevitable.
The nefarious use of social media has been an early stress test, a sign of the challenges that AI could yet bring. But democratic processes and institutions, such as parliaments, are agile and adapting fast. Democracy is under more strain today than ever before, but all around the world people continue to renew their parliamentary representatives through elections.
AI has positive aspects too. It can enhance the democratic process, for example, by analysing large datasets, identifying patterns, and providing vital insights to policy makers. It can cut through routine or time-consuming tasks, like sorting through voter data or creating political ads. It can also enhance transparency. The IPU’s Centre for Innovation in Parliament sees more and more examples of how parliaments are successfully harnessing new technologies such as AI and becoming stronger institutions as a result.
And why should we not use AI as a force for good, solving some of our most complicated challenges, such as climate change, environmental destruction, declining trust and the growth of inequality? That would also help our democracies, enabling them to deliver more and better for their people.
REGULATION
Regulation, however, feels inevitable, and parliaments must play a key role. It is not just that business leaders, academics and experts are pleading for it; they warn that AI systems with human-competitive intelligence could one day threaten humanity. AI is changing fast, and the worst possible outcomes are unpleasant to imagine. We are right to err on the side of caution.
New technologies are hard to regulate, however, especially when they are evolving fast. Regulators will need to balance the risks and the opportunities. Do we try to ban AI altogether on the grounds that the risks, however improbable, could be catastrophic? Or do we try to impose some guidelines and reap the opportunities?
The European Union is moving ahead with AI regulation, but others are not far behind. I believe, however, that this issue requires more than regulation.
Regulation doesn’t necessarily mean restriction. Instead, we can usefully manage AI and protect our democracies if we educate people to identify disinformation, regulate the use of private data, and protect our parliaments, media and other democratic institutions. We will be stronger together if we can agree on and share guidelines and frameworks, collaborate with business and civil society, and enable continued dialogue between countries. The IPU, as the global organization of parliaments, is already facilitating that exchange and sharing of knowledge.
AI will certainly present further challenges, but I remain cautiously upbeat that democracy will continue to be resilient. Like many other new technologies, AI needs careful management, but history teaches us that democracies are more than capable of rising to the challenge.