
First Session of the 2025 Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS)

  • 03.03.2025
  • Disarmament
Statement of H.E. Archbishop Ettore Balestrero, Permanent Observer of the Holy See to the United Nations and other International Organizations in Geneva to the First Session of the 2025 Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS)

Geneva, 3-7 March 2025

Mr. Chair,

At the outset, allow me to thank you for all the preparatory work that you have done in advance of this first session of the 2025 Group of Governmental Experts (GGE), together with the “Friends of the Chair” (Australia, Brazil, the Philippines, and Switzerland).

Mr. Chair,

Autonomous Weapon Systems, which are capable of identifying and attacking targets without direct human intervention, are a “cause for grave ethical concern”[1] because they lack the “unique human capacity for moral judgment and ethical decision-making.”[2] Without adequate, meaningful and consistent human control,[3] the weaponization of AI could also become highly problematic and pose an existential risk. For these reasons, the Holy See has called for a reconsideration of the development of these weapons and a ban on their use, because “no machine should ever choose to take the life of a human being.”[4]

Mr. Chair,

With regard to the revised “rolling text” that you have provided, my Delegation supports your approach of identifying those systems that are wholly or partially incompatible with international humanitarian law (IHL) and other existing international obligations. This could indeed contribute to an appropriate characterization of the systems under consideration and help to establish prohibitions and restrictions accordingly.

While we seek to accomplish these tasks, it is of paramount importance to take broader ethical considerations into account. For my Delegation, ensuring that weapons systems are never permitted to decide autonomously whether to strike a human being is one of the fundamental, overarching ethical issues at the heart of this GGE.

Therefore, building on the various implications for IHL that you have identified in the rolling text, the Holy See deems it of fundamental importance to keep human dignity and activity, together with ethical considerations, at the center of our discussions. At a time when new and emerging technologies hold great promise for human development, it is precisely the ethos of understanding the value and dignity of the human person and human activity that is most at risk.

Mr. Chair,

The targeting of human beings cannot be entrusted to statistical approximations performed by algorithms. In order to protect the dignity and sacredness of human life, it is necessary “to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: human dignity itself depends on it.”[5] It should be recalled that these machines would only operate in a simulation of reality and of human behavior, without truly capturing all its aspects. In fact, reality is multidimensional and only human agency and acumen can take into account its incalculable dimensions and fully grasp its real implications. Similarly, “only the human ‘heart’ can reveal the meaning of our existence”.[6]

In order to address the aforementioned ethical concerns and to avoid legal ambiguity regarding accountability, my Delegation would like to see in the rolling text a clear and specific prohibition on the use of autonomous weapons systems that target human beings or whose use can be expected to cause accidental human casualties.

Mr. Chair,

The Holy See appreciates the references in the rolling text to the “principles of humanity”, “appropriate control”, and “human judgement”. We are confident that greater clarity and a common understanding of these terms will emerge during our discussion. Given the rapid pace of technological advancement and the massive investment and research devoted to weaponizing artificial intelligence, it is of the utmost urgency that this GGE deliver concrete results in the form of a robust, legally binding instrument and, in the meantime, establish an immediate moratorium on the development and use of such weapons.

Over the next few days, my Delegation stands ready to engage in constructive discussions guided by a sense of urgency. The rolling text provides a skeleton structure and a number of elements that could form a solid basis for the negotiation of a legally binding instrument. These negotiations could refine the prohibitions and regulations that you have outlined, inter alia, by incorporating ethical concerns that should be universally shared and are fundamental to humanity, such as the prohibition of anti-personnel autonomous weapons.

Thank you, Mr. Chair.



[1] Pope Francis, Message for the 57th World Day of Peace, 1 January 2024.

[2] Ibid.

[3] Holy See, Working Paper to the 6th Review Conference of the CCW, Translating ethical concerns into a normative and operational framework for Lethal Autonomous Weapons Systems, 2021.

[4] Antiqua et Nova, Note of the Holy See on the Relationship Between Artificial Intelligence and Human Intelligence, 28 January 2025, n. 100.

[5] Ibid.

[6] Pope Francis, Message on the occasion of the “Sommet pour l’action sur l’intelligence artificielle”, 10 February 2025.