‘The battlefield will become a space of impunity’: How AI challenges the laws of war
‘We are going backwards in terms of protecting those who are more vulnerable on the battlefield: civilians,’ says Fr. Afonso Seixas Nunes SJ.
Father Afonso Seixas Nunes, a Portuguese Jesuit, is an accomplished chef and baker. He has been known to make personalized wedding cakes for the couples whose weddings he officiates, picking ingredients that match their personality.
When he is not in the kitchen, Fr. Seixas Nunes can be found in the classroom. He is currently a lecturer and researcher at Saint Louis University in St. Louis, Missouri, where he is a leading expert in the laws of war, including the legal and ethical implications of the use of artificial intelligence in warfare.
He studies, as he puts it, “the worst men can do to each other.”
In an extended conversation with The Pillar, Seixas Nunes spoke about his research, shedding some light on what future developments in modern warfare could look like.
This interview was conducted in Portuguese. It has been edited for length and clarity.
What exactly is an autonomous weapons system? What is the degree of human involvement required for a system to cease to be considered autonomous?
That is an important question!
Even within the military, there is a lot of confusion about that. The novelty of an autonomous system of warfare is its ability to identify, select and engage a military target without human intervention.
AI was not born yesterday; we have had it for years. What is new is AI’s ability to identify targets. The systems used until now, such as Israel’s Iron Dome, are designed to shoot down a specific type of object. When that object appears on the system's radar, the system locates it and shoots it down. But the target was pre-loaded into the system.
New AI-enhanced systems, however, can process data at a volume, variety, and velocity that no human operator could match in time. This makes them more efficient than human soldiers.
For example, it is possible nowadays for an AI decision support system to inform a military commander that a vehicle, which looks like a normal car, is carrying weapons, making it a military target that can be lawfully engaged.
By that definition, a remote-controlled drone is not an autonomous system. How about a landmine, for instance?
Anti-personnel mines are still sometimes mistakenly identified as “autonomous systems,” but they are not, because a mine, once it is activated, does not distinguish between an animal, a child, or a soldier. The mine has no ability to identify who is setting it off, nor can it adapt to new circumstances on the battlefield. Adaptability is another defining feature of autonomous systems.
Consider the following example: I am a legitimate military target. The system identifies and locates me and it is ready to eliminate me. However, what if I suddenly enter a crowd of civilians? An autonomous system has the ability to suspend the attack and adapt to the new circumstances.
And that's where generative artificial intelligence comes in, which is the latest achievement of AI.
You are saying this technology already exists?
A system that can identify a target using facial recognition can determine whether or not there are civilians around and then, once the target is isolated, take it out with no human intervention?
Yes and no. I mean, these systems exist, but they have not been used because they are extremely vulnerable to cyber-attacks.
The fact that these systems are open to new data coming from the battlefield makes them adaptable, but it also makes them vulnerable.
We know that the United States, South Korea, and Russia already have these systems. One never really knows what China has, but it is estimated that it already has them as well. Countries are cautious about using them, though, because of the risks mentioned above. That explains why states have invested in another area: AI-DSS, that is, “AI Decision Support Systems.” These are the systems being used by Israel in the war in Gaza.
AI-DSS use what is called “deep sensing,” that is, several systems operating together, receiving information from multiple satellites or from multiple drones at the same time. This information is then channeled to the military commander.
The military commander is thus provided with intelligence from the ground, giving him information about circumstances he was not otherwise aware of. For example, data that allows the commander to know exactly where Hamas fighters are located and hiding.
Which, I imagine, poses a number of ethical and legal problems…
Yes, of course.
Let me use an example most people will understand: a student can ask ChatGPT to write an essay about a novel. But if he has not actually read the book, he cannot know whether it is a good essay or not.
This can be directly transposed to the battlefield.
When verifying the identification of the target, the military commander does not have any way of knowing what the original data was or how it was processed. Commanders are simply asked to trust these systems, about which they know very little.
So when things go wrong, the commander can just blame the system?
That's the first problem: the dissociation of communication.
By this concept, I want to describe the reality that autonomous systems will collect, process, and act based on neural networks that are opaque to any human operator. Adding to this problem is the fact that these algorithms operate as black boxes; that is, it remains impossible, to this day, to know how and why the algorithm established certain connections and patterns in the data.
This can lead to a double scapegoating process: on one hand, we blame the machines. There is no human fault, “the system” is the guilty one. On the other hand, some scholars, based on the rules of precaution, make the military commanders fully responsible.
However, while the former leads to the decriminalization of the laws of war, the latter makes commanders responsible for “actions” they could not have been aware of. In that case, we cannot find the commander responsible under international criminal law.
This is the problem that has occupied states, scholars and nongovernmental organizations — to find the best possible legal framework to prevent an “accountability gap.”
Otherwise, the battlefield will become a space of impunity.
If I understand correctly, you are saying that these new forms of warfare, which are generally presented as being a way of conducting strikes without endangering the lives of one’s military, can end up leading to anarchy in terms of the law of war?
I think that is exactly the way to put it, and we are seeing that in Israel at the moment.
Do you see any hope that following this paradigm shift, the international community will find a new legal framework?
Let me be very honest.
In one sense I have lots of hope. If we lose hope we fall into a state of depression. Being immersed in this field, day after day, seeing the number of civilian victims potentially increasing, if I don't find hope in prayer and in what I believe in, it is impossible to cope and proceed.
On the other hand, looking at the measures taken by the current U.S. administration — the questioning of international organizations such as NATO; the inherent problems with the application of the UN Charter — it is very difficult for any international lawyer not to feel demotivated and uncertain about the future of international law.
There were so many achievements after World War II regarding international law, but now we look at the conflicts in Ukraine, in Gaza, in Sudan, and it seems that we are going backwards in terms of protecting those who are more vulnerable on the battlefield: civilians and civilian objects.
The war in Ukraine is perhaps the first in which drones have been used in swarms. We had already seen drones used in other conflicts to hit specific targets, but this is new.
Were you surprised by this evolution, or were you expecting things to evolve in this way?
I wasn't surprised. Intel was the first company to try to use drones in swarms; it tried with 200 and succeeded. Then it tried with 500, but that was a total failure. Then China put several thousand drones in the sky, simulating fireworks. This was a shock to the West, because we had not even managed to coordinate 500 drones.
So, nowadays, it is possible to use orchestrated swarms of drones. This doesn’t mean they all carry munitions. Some can be weaponized, others can do only surveillance, but they all operate together in the conduct of hostilities.
We also have other novelties in the war in Ukraine through the development of military equipment, such as a sort of suit that makes soldiers completely undetectable by radar. In terms of military technology, that is the big news to come out of Ukraine.
But regarding AI, the first AI war, so to speak, is in Gaza.
The use of drones and AI introduces new players: the people who control and develop the technology, behind enemy lines.
Would it be legitimate to assassinate a drone operator in his home in Moscow, for instance?
From the perspective of international law, yes.
When you look at the definition of combatant in the Geneva Conventions, and especially in the 1977 Additional Protocols, it applies to people like drone operators, since they are within the state’s armed forces. It doesn’t matter if the operator is behind the front line; he could even be asleep and he would still have combatant status. Therefore, he can be targeted at any time.
But I will give you an even more extreme example: Elon Musk could be considered a military target by Russia, since he was providing intelligence to the Ukrainian armed forces through his company Starlink. Here we have a civilian who provides relevant information to a state through his private company, which constitutes direct participation in hostilities.
There is a question of whether or not Musk has personally become a party to the conflict, but Starlink could definitely be attacked. That was actually discussed between the ambassador of the Russian Federation to the United States and Elon Musk, who was allegedly warned that if he kept providing information, his satellites would be attacked.
This is a glimpse of the near future, in which we will see a radical change from what we consider to be the traditional parties on a battlefield. We already have non-state actors, such as terrorist groups, and now we are going to have private corporations, namely through the exploitation and use of outer space and the seabed.
Since the invention of bows and arrows, man has been trying to create technology to inflict harm on his enemy from a safe distance.
Can we argue that all of this is just a natural evolution of that, or are we actually facing a paradigm shift?
The history of weapons is the creation of the greatest physical distance from the enemy. The big change now is the dissociation of communication, as I mentioned before, which is linked to a dissociation of risk.
Interestingly, Pope Innocent II, in 1139, banned the crossbow, over a concern that has resurfaced in relation to drones. The Holy Father's concern at the time, which was and remains very legitimate, was that the combatant would lose track of the impact of the use of force; that is, he would not see the consequences of his actions.
Of course, this had no practical consequences on the ground at the time. But we see the same concern with the use of drones: the desensitization of the soldier. This risk is potentially heightened with the use of artificial intelligence, where the consequences are left for only the victim to see.
So, we see that Pope Innocent II had this concern but the practical effect was absolutely none, because obviously no one stopped using crossbows.
Skeptics nowadays could say that here we are talking about ethics and morals, but countries will continue to develop and use these weapons.
Is this reflection on the ethical and moral dimension still needed?
It certainly is! People are understandably distrustful of international law, because it lacks enforcement mechanisms and they do not see consequences in terms of accountability.
I think that the Trump administration has aggravated that, and I believe that if the U.S. had an administration along the same lines as the previous one, we would not have the recent attacks that we are seeing from Russia against Ukraine, and Israel would also have a more restrictive policy.
But there are things we tend to forget such as the Chilcot Inquiry in the UK, regarding its intervention in the second invasion of Iraq.
The Chilcot Inquiry concluded that the UK's involvement was based on a personal desire for revenge and that there was no evidence of weapons of mass destruction. Prime Minister Tony Blair was basically identified as what we would today call a war criminal.
While it's curious that the UK, as a party to the Rome Statute, hasn’t yet opened a case against him, it is noteworthy that following the moral rebuke, Mr. Blair effectively disappeared from political life. Thus, we can see that there are often moral consequences which are more effective than a prison sentence.
It is this moral dimension that gives some space to the rights of the innocent civilians.
People ask me all the time how, as a priest, I can be interested in these issues. And indeed, my day-to-day work is to study the worst that men can do to each other. But then there are the civilian victims that we do not even know the names of. When the soldiers return, they get their military honors – that’s fine, I'm not saying they shouldn't have them. But the innocent civilians whose names no one even knows are reduced to numbers: 50 dead, 100 dead…That is why I committed myself to this work.
People are not numbers; they are victims.
Pope Francis suggested at one point that one can no longer speak of just war. There was debate about whether that constituted an explicit repudiation of just war theory.
In your opinion, does just war theory still make sense in the Catholic tradition? How can it be adapted to these new forms of war?
Pope Francis was subjected to immense criticism because of that statement, particularly in the most conservative American circles, which is unfair to say the least. The pope said what any Christian should say, which is that there is no justification for war, insofar as war is the destruction of what God creates.
Neither St. Thomas Aquinas nor St. Augustine, who are the fathers of just war theory, ever said that war was good.
Pope Francis continued that tradition, arguing that there is no justification for war. Pope Leo XIV has already said that there are no inevitable wars. This is in accordance with the Gospel, and we cannot ask a pope to contradict the Gospel.
What has developed over the centuries in the Catholic Church are situations in which war would not be considered legitimate from a moral point of view, but would be accepted as a last resort to put an end to a catastrophic situation. But it seems to me that there are situations that defy any moral justification.
Take for example Russia's intervention in Ukraine. I cannot see any other legitimate or appropriate and proportionate solution other than the exercise of self-defense, which is the use of force to put an end to an attack.
Again, regarding Pope Francis, there is a very interesting article by a Ukrainian theologian, currently at Yale University, analyzing how the late pope had always been critical of Russia and, in particular, upheld Ukraine’s right to self-defense.
The question is: what is the morally legitimate response in situations that violate the most basic standards of morality and threaten the survival of a people? I do not know what other moral solution there could be besides legitimate defense.
The problem, then, is how states exercise that right to self-defense. If you look at the current situation in Israel, for example, no one questions Israel's right to self-defense in the face of a terrorist attack, such as the October 7 attacks. The problem is that Israel is not exercising self-defense but rather a true war of destruction against the people of Palestine.
The title of your doctoral thesis, published in 2022, is “The Legitimacy and Accountability for the Deployment of Autonomous Weapon Systems under International Humanitarian Law.”
How does a priest get involved in this field? Was it a personal interest, or did the Jesuits steer you into this research?
A bit of both.
I was starting my degree in theology when my provincial contacted me with an interesting and challenging suggestion: Pope Benedict XVI had highlighted the need for the Church to have experts in international humanitarian law, i.e., the laws of war. I had always had a passion for international law, though not necessarily for questions of war, but that was how the idea began to take hold.
When I finished theology, I applied for a master's in international law at the London School of Economics and that was my first contact with the problems of the technology of war.
At the time, I was concerned with the legality of drone strikes in Pakistan but when I applied for a PhD at the University of Essex, my supervisor challenged me to move to the questions of autonomous systems of warfare.
Have military or defense institutions shown any interest in your work?
Do you have any current collaboration with governments, or with the U.S. government in particular?
Before coming to the United States, I worked with the Dutch Armed Forces and with the British and Israeli Ministries of Defense. They find the provocations of a priest very interesting.
The United States has a very particular vision of the laws of war.
In fact, just a fortnight ago Pete Hegseth, the Secretary of Defense, made the study of international humanitarian law, that is, the laws of war, optional for the armed forces. It is sad, very sad. As Geoffrey Corn has argued, the U.S. was one of the pioneers of the modern laws of war.
Many of the officers in the British army that I have met are committed Anglicans, and there is a very strong moral element in the British army. With the United States, on the other hand, I have had the opposite experience: the efficiency of fighting and destroying the enemy overrides any moral consideration.
Morality is seen as useful insofar as it might protect American soldiers, but the enemy is reduced to a “thing” that must be eliminated.
You are quite critical of the U.S. administration in some of your answers.
Are you concerned about any repercussions as a result of that?
To date I have not experienced any kind of sanction or reprimand, but colleagues of mine have received letters, and not just in the field of war, but also in the field of health rights, or over any analysis deemed critical of the Trump administration’s policies.
There is an impressive system of monitoring what is published. Then people get a letter from the attorney general informing them that their research is under investigation.
Sometimes this results in sanctions, sometimes not; sometimes they are forbidden from publishing.