Tuesday, April 9, 2024

AI IDF Gaza IHL

The International Committee of the Red Cross (ICRC) observes that the advance of artificial intelligence (AI) for military purposes raises profoundly worrying questions for humanity. In a web article, the ICRC examines some of the key questions and concerns surrounding the use of AI, especially machine learning, in armed conflict.


Destruction is seen in the Al-Rimal popular district of Gaza City after it was targeted by airstrikes carried out by Israeli forces, October 10, 2023. (Mohammed Zaanoun/Activestills)


Machine learning is a type of AI system that creates its own instructions from the data on which it is ‘trained’, then uses those instructions to generate a solution to a particular task; in a sense, the software writes itself. Recent advances in AI have largely been in machine learning.
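To make that idea concrete, the short sketch below shows a model deriving its own decision rules from labelled examples rather than from hand-written instructions. It is purely illustrative: the library (scikit-learn), the synthetic data, and the toy classification task are assumptions chosen for this post and have nothing to do with any system discussed here.

    # Illustrative sketch: a model "writes its own instructions" by fitting
    # parameters to labelled examples instead of following hand-coded rules.
    # The data is synthetic and the task is arbitrary.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Generate a toy labelled dataset: feature vectors and class labels.
    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # "Training": the algorithm infers decision rules from the examples.
    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    # The learned rules are then applied to new, unseen inputs.
    print("accuracy on unseen data:", model.score(X_test, y_test))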


The ICRC has highlighted three areas in which AI is being developed for use by armed actors in warfare, which raise significant questions from a humanitarian perspective:


  1. Integration in weapon systems, particularly autonomous weapon systems
  2. Use in cyber and information operations
  3. Underpinning military ‘decision support systems’ (What You Need to Know About Artificial Intelligence in Armed Conflict, 2023)


A decision support system is any computerised tool, in this case one using AI-based software, that produces analyses to inform military decision-making.


For example, an AI image recognition system might analyse drone footage and other intelligence streams to help identify military objects and recommend targets.


In other words, these AI systems can be used to inform decisions about who or what to attack and when.


With rapid developments in AI being integrated into military systems, it is crucial that states address specific risks for people affected by armed conflict.


Although there is a wide range of implications to consider, specific risks include the following:


  • An increase in the dangers posed by autonomous weapons;
  • Greater harm to civilians and civilian infrastructure from cyber operations and information warfare;
  • A negative impact on the quality of human decision-making in military settings. (What You Need to Know About Artificial Intelligence in Armed Conflict, 2023)


In an article for +972 Magazine, published in partnership with Local Call, Yuval Abraham, a journalist and filmmaker based in Jerusalem, reports that the Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties.


A new investigation by +972 Magazine and Local Call reveals that the Israeli army has developed an artificial intelligence-based program known as “Lavender.” 


Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets. The sources told +972 and Local Call that, during the first weeks of the war, the army almost completely relied on Lavender, which clocked as many as 37,000 Palestinians as suspected militants — and their homes — for possible air strikes.


“We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”


The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list.


In addition, according to the sources, when it came to targeting alleged junior militants marked by Lavender, the army preferred to only use unguided missiles, commonly known as “dumb” bombs (in contrast to “smart” precision bombs), which can destroy entire buildings on top of their occupants and cause significant casualties. “You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs],” said C., one of the intelligence officers. Another source said that they had personally authorized the bombing of “hundreds” of private homes of alleged junior operatives marked by Lavender, with many of these attacks killing civilians and entire families as “collateral damage.”


B., a senior officer who used Lavender, echoed to +972 and Local Call that in the current war, officers were not required to independently review the AI system’s assessments, in order to save time and enable the mass production of human targets without hindrances.


“Everything was statistical, everything was neat — it was very dry,” B. said. He noted that this lack of supervision was permitted despite internal checks showing that Lavender’s calculations were considered accurate only 90 percent of the time; in other words, it was known in advance that 10 percent of the human targets slated for assassination were not members of the Hamas military wing at all. (Abraham, n.d.)
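Taking the figures in the quoted passage at face value, a simple calculation shows the scale implied by a 90 percent accuracy rate applied to roughly 37,000 marked individuals. The sketch below is nothing more than arithmetic on the numbers reported by +972 and Local Call; it is not a model of the system itself.

    # Back-of-the-envelope arithmetic using only the reported figures:
    # roughly 37,000 people marked, with calculations considered accurate
    # about 90 percent of the time.
    marked = 37_000
    reported_accuracy = 0.90

    # Implied number of marked people who, per the report, were not
    # members of the Hamas military wing at all.
    misidentified = marked * (1 - reported_accuracy)
    print(f"Implied misidentifications: about {misidentified:,.0f} people")

On the reported figures, that is on the order of 3,700 people slated as targets in error.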



The abstract of the paper “Machine Learning Weapons and International Humanitarian Law: Rethinking Meaningful Human Control,” authored by Shin-Shin Hua and published on the Georgetown Journal of International Law website, addresses the question of how much human control these weapons require.


But in a machine learning paradigm, human control may become unnecessary or even detrimental to IHL compliance. In order to leverage the potential of this technology to minimize casualties in conflict, an unthinking adherence to the principle of “the more control, the better” should be abandoned. Instead, this Article seeks to define prophylactic measures that ensure machine learning weapons can comply with IHL rules. Further, it explains how the unique capabilities of machine learning weapons can facilitate a more robust application of the fundamental IHL principle of military necessity. (Hua, n.d.)


The AI tools in use by advanced military organizations such as the IDF are capable of providing the intelligence not only to select targets but also to deploy weapons that could be designed to comply with the International Humanitarian Law principles of discrimination and proportionality, which are meant to reduce civilian casualties and to limit force to what is required to achieve military objectives. Recent applications to the International Court of Justice alleging genocide in Gaza, and the growing rift between the government of Israel and nations that have traditionally supported it, are symptoms of a grievous misuse of AI technology in this war.



References

Abraham, Y. (n.d.). ‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza. +972 Magazine. Retrieved April 9, 2024, from https://www.972mag.com/lavender-ai-israeli-army-gaza/ 


Hua, S.-S. (n.d.). Machine learning weapons and international humanitarian law: Rethinking meaningful human control. Georgetown Journal of International Law. Retrieved April 9, 2024, from https://www.law.georgetown.edu/international-law-journal/wp-content/uploads/sites/21/2020/03/GT-GJIL200015.pdf


What you need to know about artificial intelligence in armed conflict. (2023, October 6). International Committee of the Red Cross. Retrieved April 9, 2024, from https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict 

