Sunday, March 17, 2024

Responsible AI in Gaza against Hamas

The IDF's prosecution of the war against Hamas is enhanced by artificial intelligence systems' ability to select targets and predict civilian casualties. The US State Department has developed a political declaration on the ethical military use of AI.



The military is weighing heavily how to use AI. (Graphic by Breaking Defense, original images via Getty and DVIDS)

Sydney J. Freedberg Jr., a writer for Breaking Defense since 2011 and the site's deputy editor during its first decade, covers technology, strategy, and policy with a particular focus on the US Army. He reports on the first of a planned series of annual meetings of the countries that signed on to the US “Political Declaration” on military AI. The signatories have shared model policies and best practices on everything from combat robots to back-office algorithms.



Thirteen months after the State Department rolled out its Political Declaration on ethical military AI at an international conference in The Hague, representatives from the countries that signed on will gather outside Washington to discuss next steps.


“We really want to have a system to keep states focused on the issue of responsible AI and really focused on building practical capacity,” a senior State Department official told Breaking Defense.


State wants this to be the first of an indefinite series of annual conferences hosted by fellow signatory states around the world. In between these general sessions, the State official explained, smaller groups of like-minded nations should get together for exchanges, workshops, wargames, and more — “anything to build awareness of the issue and to take some concrete steps” towards implementing the declaration’s 10 broad principles. (Freedberg Jr., n.d.)



The endorsing States believe that the following measures should be implemented in the development, deployment, or use of military AI capabilities, including those enabling autonomous functions and systems:


B. States should take appropriate steps, such as legal reviews, to ensure that their military AI will be used consistent with their respective obligations under international law, in particular international humanitarian law. States should also consider how to use military AI capabilities to enhance their implementation of international humanitarian law and to improve the protection of civilians and civilian objects in armed conflict.


Indeed, it’s a hallmark of State’s Political Declaration — and the Pentagon’s approach to AI ethics, from which it draws — that it addresses not just futuristic “killer robots” and SkyNet-style supercomputers, but also other military uses of AI that, while less dramatic, are already happening today. That includes mundane administrative and industrial applications of AI, such as predictive maintenance. But it also encompasses military intelligence AIs that help designate targets for lethal strikes, such as the American Project Maven and the Israeli Gospel (Habsora). (U.S. Department of State, n.d.)


Harry Davies, Bethan McKernan, and Dan Sabbagh in Jerusalem, writing for the Guardian, report on how Israel uses AI to select bombing targets in Gaza. Concern is expressed over a data-driven ‘factory’ that significantly increases the number of targets for strikes in the Palestinian territory.



The latest Israel-Hamas war has provided an unprecedented opportunity for the IDF to use such tools in a much wider theatre of operations and, in particular, to deploy an AI target-creation platform called “the Gospel”, which has significantly accelerated a lethal production line of targets that officials have compared to a “factory”.


However, a short statement on the IDF website claimed it was using an AI-based system called Habsora (the Gospel, in English) in the war against Hamas to “produce targets at a fast pace”.


The IDF said that “through the rapid and automatic extraction of intelligence”, the Gospel produced targeting recommendations for its researchers “with the goal of a complete match between the recommendation of the machine and the identification carried out by a person”.
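The IDF statement describes a design goal rather than an implementation, but the idea that a machine recommendation only stands when a human analyst independently reaches the same identification is a recognisable human-in-the-loop verification pattern. Here is a minimal sketch in Python of such a review gate; all names and fields are illustrative assumptions, not drawn from any real system:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ReviewOutcome(Enum):
    CONFIRMED = auto()   # human identification matches the machine's
    REJECTED = auto()    # human disagrees; the recommendation is discarded
    ESCALATED = auto()   # no independent human identification yet; hold


@dataclass(frozen=True)
class Recommendation:
    """Hypothetical machine-generated recommendation awaiting human review."""
    record_id: str
    machine_identification: str


def review(rec: Recommendation, human_identification: Optional[str]) -> ReviewOutcome:
    """Let a recommendation stand only on an exact machine/human match.

    Any mismatch, or the absence of an independent human identification,
    blocks the machine output rather than letting it act alone.
    """
    if human_identification is None:
        return ReviewOutcome.ESCALATED
    if human_identification == rec.machine_identification:
        return ReviewOutcome.CONFIRMED
    return ReviewOutcome.REJECTED
```

The point of the pattern is that the machine output is never sufficient on its own: the default path in every ambiguous case is to withhold, not to proceed.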


One official, who worked on targeting decisions in previous Gaza operations, said the IDF had not previously targeted the homes of junior Hamas members for bombings. They said they believed that had changed for the present conflict, with the houses of suspected Hamas operatives now targeted regardless of rank.


“That is a lot of houses,” the official told +972/Local Call. “Hamas members who don’t really mean anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.” 


However, experts in AI and armed conflict who spoke to the Guardian said they were sceptical of assertions that AI-based systems reduced civilian harm by encouraging more accurate targeting.


“Look at the physical landscape of Gaza,” said Richard Moyes, a researcher who heads Article 36, a group that campaigns to reduce harm from weapons.


“We’re seeing the widespread flattening of an urban area with heavy explosive weapons, so to claim there’s precision and narrowness of force being exerted is not borne out by the facts.” (Davies et al., 2023)


Multiple sources told the Guardian and +972/Local Call that when a strike was authorised on the private homes of individuals identified as Hamas or Islamic Jihad operatives, target researchers knew in advance the number of civilians expected to be killed.


Each target, they said, had a file containing a collateral damage score that stipulated how many civilians were likely to be killed in a strike.
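The sources describe only a pre-computed number attached to each target file, not how it is produced or applied. Purely as a hypothetical illustration of how such a score could gate authorisation behind proportionality review, consider the following sketch; the field names, threshold, and workflow are assumptions, not reported details:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TargetFile:
    """Hypothetical record mirroring the reported per-target file."""
    file_id: str
    collateral_damage_score: int  # expected civilian casualties, per the reporting


def requires_further_review(target: TargetFile, threshold: int) -> bool:
    """Flag any file whose pre-computed score meets or exceeds a policy
    threshold, so that authorisation is withheld pending legal and
    command review of proportionality."""
    return target.collateral_damage_score >= threshold
```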


One source who worked until 2021 on planning strikes for the IDF said “the decision to strike is taken by the on-duty unit commander”, some of whom were “more trigger happy than others”.


The source said there had been occasions when “there was doubt about a target” and “we killed what I thought was a disproportionate amount of civilians”.


An Israeli military spokesperson said: “In response to Hamas’ barbaric attacks, the IDF operates to dismantle Hamas military and administrative capabilities. In stark contrast to Hamas’ intentional attacks on Israeli men, women and children, the IDF follows international law and takes feasible precautions to mitigate civilian harm.” (Davies et al., 2023)


Two key components of international humanitarian law are discrimination and proportionality. The IDF's advanced intelligence-gathering systems are capable of distinguishing military combatants from civilians, enabling the discrimination required. Technology that guides smart, proportionally sized munitions to military targets allows the IDF to limit collateral damage to life-sustaining civilian infrastructure in Gaza. The nations that have signed the United States' “Political Declaration on Responsible Military Use of Artificial Intelligence” endorse the principle that military forces should enhance their implementation of international humanitarian law and improve the protection of civilians and civilian objects in armed conflict.

UN experts have welcomed the suspension of arms transfers to Israel by Belgium, Italy, Spain, the Netherlands, and the Japanese company Itochu Corporation. The European Union has also recently discouraged arms exports to Israel. The post “Arms exports to Israel” notes that Canada's Minister of Foreign Affairs, Mélanie Joly, is reported to be increasingly concerned about Canada's role in supplying arms to Israel.



References

Davies, H., McKernan, B., & Sabbagh, D. (2023, December 1). 'The Gospel': how Israel uses AI to select bombing targets in Gaza. The Guardian. Retrieved March 17, 2024, from https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets 

Freedberg Jr., S. J. (n.d.). 40-plus countries convening next week to thrash out ‘responsible AI’ for military use. Breaking Defense. Retrieved March 17, 2024, from https://breakingdefense.com/2024/03/40-plus-countries-convening-next-week-to-thrash-out-responsible-ai-for-military-use/ 

U.S. Department of State. (n.d.). Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Retrieved March 17, 2024, from https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf

