Tuesday, May 28, 2024

AI, Distinction, and Proportionality: IHL in Gaza

As civilians become more familiar with online applications of Artificial Intelligence (AI) for gathering information and preparing reports, it is important to consider the ethical and strategic use of AI in the war in Gaza.



AI and its impact on deaths in Gaza


Military applications of AI include positive identification (PID) of targets and prediction of weapons effects. International Humanitarian Law (IHL) requires military forces to distinguish between civilians, civilian infrastructure, and military targets, and to take precautions to avoid attacking, injuring, or killing civilians. Battlefield-awareness applications of AI allow prediction of the effects of weapons systems and permit the choice of a weapon that is proportional to the required military objective.


Lt Col John G. Thorne, USAF, authored a paper, Warriors and War Algorithms: Leveraging Artificial Intelligence to Enable Ethical Targeting, as a component of the Ethics and Emerging Military Technology graduate certificate program at the Naval War College, Newport, RI.



The Department of Defense’s (DoD) primary form of Distinction is Positive Identification (PID). PID is defined as “the reasonable certainty that a functionally and geospatially defined object of attack is a legitimate military target in accordance with the Law of War and applicable Rules of Engagement.” More simply, it answers the Who?, What?, and Where? questions regarding an entity. PID is also acknowledged as the foundational consideration in the DoD’s collateral damage methodology, on which its Proportionality assessments are based. That methodology states that an assessment begins with the question, “Can I PID the object I want to affect?” While PID can technically be achieved through a combination of different intelligence sources, invariably a visual component is required in the final analysis. Therefore, the most commonly known algorithms associated with PID are related to imagery interpretation.
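
To make the “reasonable certainty” idea concrete, here is a minimal, purely illustrative Python sketch of the kind of confidence gate an imagery-based PID workflow implies. The data structure, labels, and 0.95 threshold are hypothetical placeholders, not DoD values or systems; actual PID fuses multiple intelligence sources and keeps human analysts and legal/ROE review in the loop.

from dataclasses import dataclass

# Purely illustrative: a toy "reasonable certainty" gate of the kind an
# imagery-based PID workflow implies. The labels, threshold, and data
# structure are hypothetical placeholders, not DoD values or systems.

@dataclass
class Detection:
    label: str           # what an imagery model thinks the object is
    confidence: float    # model confidence, 0.0 to 1.0
    location: tuple      # geospatial fix (latitude, longitude)

REASONABLE_CERTAINTY = 0.95  # hypothetical policy threshold

def pid_decision(det: Detection, military_labels: set) -> str:
    """Answer the Who?, What?, and Where? questions before any further analysis."""
    if det.label not in military_labels:
        return "No PID: object is not a recognized military target category"
    if det.confidence < REASONABLE_CERTAINTY:
        return "No PID: certainty too low; refer to a human analyst"
    return f"PID candidate at {det.location}: forward for Law of War / ROE review"

print(pid_decision(Detection("armored_vehicle", 0.97, (31.50, 34.45)),
                   military_labels={"armored_vehicle", "artillery_piece"}))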


The DoD’s Joint Technical Coordinating Group for Munitions Effectiveness (JTCG/ME) provides software that models the accuracy and explosive yield of warhead, guidance, and fusing combinations found in the Joint Munitions Effectiveness Manuals (JMEM). It also provides vulnerability data for potential targets, based on their size and construction. The JMEM software allows the human analyst to create a specific engagement scenario by manually selecting a warhead, guidance system, fuse combination, and target characteristics. The JMEM algorithm then analyzes hundreds to thousands of iterations of that scenario, usually leveraging a Monte Carlo method, to determine a probable level of damage that scenario would create. (Thorne, n.d.)
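
The Monte Carlo step Thorne describes can be illustrated with a simple sketch in Python: repeatedly sample a weapon’s miss distance from its circular error probable (CEP) and count how often the impact falls within a lethal radius. The CEP-to-sigma conversion is standard, but the numbers and the single-radius damage model are placeholder assumptions, far simpler than JMEM’s actual warhead and vulnerability data.

import random

# Hypothetical Monte Carlo sketch of a weapons-effects estimate: all numbers
# and the single lethal-radius damage model are placeholder assumptions.

def estimate_damage_probability(cep_m, lethal_radius_m, trials=10_000):
    """Estimate the probability that a weapon lands within the lethal radius
    of its aim point, given a circular error probable (CEP) in meters."""
    sigma = cep_m / 1.1774              # CEP = 1.1774 * sigma for a circular normal
    hits = 0
    for _ in range(trials):
        dx = random.gauss(0.0, sigma)   # cross-range miss distance (m)
        dy = random.gauss(0.0, sigma)   # down-range miss distance (m)
        if (dx * dx + dy * dy) ** 0.5 <= lethal_radius_m:
            hits += 1
    return hits / trials

# Example: a hypothetical 10 m CEP munition against a point whose lethal
# radius is 15 m; repeated trials converge on a probability of damage.
print(estimate_damage_probability(cep_m=10.0, lethal_radius_m=15.0))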


Yuval Abraham, a journalist and filmmaker based in Jerusalem, reports in an article for +972 Magazine, published in partnership with Local Call, that the Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties.


A new investigation by +972 Magazine and Local Call reveals that the Israeli army has developed an artificial intelligence-based program known as “Lavender.” 

Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets. The sources told +972 and Local Call that, during the first weeks of the war, the army almost completely relied on Lavender, which clocked as many as 37,000 Palestinians as suspected militants — and their homes — for possible air strikes.


“We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”


The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list.


In addition, according to the sources, when it came to targeting alleged junior militants marked by Lavender, the army preferred to only use unguided missiles, commonly known as “dumb” bombs (in contrast to “smart” precision bombs), which can destroy entire buildings on top of their occupants and cause significant casualties. “You don’t want to waste expensive bombs on unimportant people — it’s very expensive for the country and there’s a shortage [of those bombs],” said C., one of the intelligence officers. Another source said that they had personally authorized the bombing of “hundreds” of private homes of alleged junior operatives marked by Lavender, with many of these attacks killing civilians and entire families as “collateral damage.”


B., a senior officer who used Lavender, echoed to +972 and Local Call that in the current war, officers were not required to independently review the AI system’s assessments, in order to save time and enable the mass production of human targets without hindrances.


“Everything was statistical, everything was neat — it was very dry,” B. said. He noted that this lack of supervision was permitted despite internal checks showing that Lavender’s calculations were considered accurate only 90 percent of the time; in other words, it was known in advance that 10 percent of the human targets slated for assassination were not members of the Hamas military wing at all. (Abraham, 2024)


Burak Elmalı, in an article posted to the Anadolu Agency (AA) website, notes that Israel’s use of advanced software tools underscores a harrowing reality: AI, when misapplied, can facilitate atrocities of catastrophic proportions. The TRT World Research Centre reports that the conduct of warfare is among the many domains influenced by AI, and the latter’s impact on this field is no longer theoretical. Gaza is a case in point. This evolution prompts profound questions about human agency and responsibility within AI debates. What if the person wielding the technological prowess harbours a militaristic ideology so extreme that it sanctions genocidal actions?


Over a period exceeding six months, Israel has conducted airstrikes with indiscriminate genocidal intent, as revealed through the candid admissions of numerous military intelligence insiders. Their confession-like statements about Israel’s use of advanced software tools, such as Lavender and Where’s Daddy, underscore a harrowing reality: AI, when misapplied, can facilitate atrocities of catastrophic proportions and turn out to be as inhumane as possible.


The Israeli Defence Ministry’s communication often follows the path of censorship, obfuscation, and deflection tactics. This time was no different. The spokesperson dismissed the accusations with a mere denial. Yet, the stark reality reflected in the civilian death toll leaves little room to ignore the assertions made about the AI-driven genocidal undertaking attributed to Lavender. The algorithm used indicated the acceptance of 15-20 civilian casualties for one low-ranking Hamas member and up to 100 for one senior Hamas member. 


Such robotisation of inhumanity is deeply disturbing. Alas, the figures align with the reported death tolls in Gaza. Furthermore, the use of unguided bombs against unconfirmed junior Hamas members, which causes enormous devastation in heavily populated areas, suggests that Israel is committing further war crimes, this time using AI as a pretext. (Elmalı & Kilavuz, 2024)


Israel’s military forces have access to very sophisticated AI-supported tools, similar to those of the United States, that not only permit but require the strictest application of International Humanitarian Law in the areas of distinction and proportionality. Failure to apply that law exposes Israeli authorities to accusations of war crimes and to further isolation from their traditional allies among Western democracies.



References

Abraham, Y. (2024, April 3). 'Lavender': The AI machine directing Israel's bombing spree in Gaza. +972 Magazine. Retrieved May 28, 2024, from https://www.972mag.com/lavender-ai-israeli-army-gaza/ 


Elmalı, B., & Kilavuz, İ. F. (2024, May 1). Israel Has Tainted AI with Genocide – TRT World Research Centre. TRT World Research Centre. Retrieved May 17, 2024, from https://researchcentre.trtworld.com/perspectives/israel-has-tainted-ai-with-genocide/ 


Thorne, J. G. (n.d.). Warriors and War Algorithms: Leveraging Artificial Intelligence to Enable Ethical Targeting. Ethics and Emerging Military Technology Graduate Certificate. https://apps.dtic.mil/sti/trecms/pdf/AD1181382.pdf 



