
AI-3/5: On peace and security, parliaments must keep AI in check


Christophe Lacroix, Belgium MP and co-rapporteur of the IPU Standing Committee on Peace and International Security

The rapid evolution of artificial intelligence (AI) has prompted many experts to warn about its impacts on democracy. To explore this issue further, the IPU is preparing a series of five articles on the topic. In this third piece, Belgian MP and co-rapporteur of the IPU Standing Committee on Peace and International Security, Christophe Lacroix, talks about the military applications of AI as well as the associated risks.

In 1983, as relations between the superpowers reached new lows, Soviet early warning systems reported the launch of an intercontinental ballistic missile from the United States. A nuclear response from the Soviet Union would have ended humanity as we know it, but a quick-thinking Soviet officer guessed correctly that the system was malfunctioning. By dismissing the information as a false alarm, he prevented Armageddon.

For Christophe Lacroix, the lesson is clear: we must never rely on machines for our peace and security.

AI may offer advantages for peace and security, in intelligence gathering and transparency for example, but it would become a major threat if it ever escaped human control, making life-or-death decisions without regard for ethics or international law.

“The potential consequences of artificial intelligence have been underestimated,” Mr. Lacroix says.

“Like Icarus who got too close to the sun, we need to make sure that the development of AI does not get out of control.”

Military officers insist that humans will always control AI-powered weapons systems. But not every actor is a disciplined military officer committed to democratic ideals. There is every reason to believe that non-state actors and rogue states might use AI in ways that are less predictable or respectful of human rights, Mr. Lacroix says.

Besides, he says, a human would need to be very strong, intellectually and psychologically, to disobey or ignore a machine whose information and decision-making power are presented as infallible.

These perceptions of infallibility make AI destabilizing. If a nation fears that a competitor is about to gain a decisive AI advantage, it may launch a pre-emptive attack.

“AI revolutionizes the available military options,” Mr. Lacroix says. “Perceptions of the whole strategic environment will be turned upside down and this makes a renewed arms race very likely.”

MULTILATERAL SYSTEM

Many parliamentarians are reluctant to discuss security issues, believing that they are matters for government alone. But parliamentarians are in fact responsible for scrutinizing government spending, including on the military, and the IPU was founded more than 130 years ago on the premise that international dialogue can help protect international peace and security.

“In a world that is increasingly characterized by a rise in exacerbated nationalism, we can see that multilateralism is in some danger,” Mr. Lacroix says.

“And it is precisely the role of the IPU to promote this multilateralism, which has provided us until now with an opportunity for negotiation and diplomacy, the tools of peace which can sometimes be fragile but which must continue to exist.”

Unilateral action can help to establish norms and standards, Mr. Lacroix says, which is why he has co-signed a parliamentary bill in Belgium to ban lethal autonomous weapons systems (LAWS) entirely. But real progress will come through international agreements that carry legal weight.

At the multilateral level, the United Nations has been discussing AI within the context of the Convention on Certain Conventional Weapons, but discussions are stuck on definitions.

“When Member States want to develop such autonomous weapons systems, they find ways to delay the discussions,” he says. “Agreeing on a common definition is not the right way forward.”

“In any case, we need to go much further.”

When they meet in October 2023 for the IPU’s 147th Assembly in Angola, parliamentarians will have an opportunity to discuss LAWS and AI’s military applications in hearings at the Standing Committee on Peace and International Security. Led by Mr. Lacroix and Argentinian MP Margarita Stolbizer, those hearings will inform the outlines of an IPU resolution to be voted on by IPU Members in 2024.

“We don’t need unanimity and we will not get it, but most IPU Member Parliaments can be expected to vote for this resolution,” he says, noting the strength of support in South America.

“That would send a strong signal to the United Nations,” he says.

Mr. Lacroix says that the IPU plays an important role by discussing such technologies, adding a vital layer of accountability.

“Questions of responsibility, transparency, fallibility and security, as well as the human element – these are all eminently political and ethical issues,” Mr. Lacroix says.

“They must not be left solely in the hands of scientists, technicians and companies investing in this area,” he adds.

The views and opinions expressed by the parliamentarians in the IPU's voices section are their own and do not necessarily reflect the IPU’s overall position.

Read more from the IPU series on AI

AI-1/5: Democracy is resilient, but AI needs regulation and careful management

Martin Chungong, IPU Secretary General, discusses the risks and opportunities for democracy and argues that we need more than regulation.

AI-2/5: MPs need to engage with scientists, says Denis Naughten

The IPU asks Denis Naughten, Chair of the IPU’s Working Group on Science and Technology, what advice he would have for parliamentarians around the world.