Recent reports have highlighted the Israeli military's use of artificial intelligence in its operations against Hamas, raising significant ethical and humanitarian concerns. According to these reports, officers were given sweeping approval to adopt the 'kill lists' generated by an AI system referred to as Lavender, often without thoroughly checking why particular targets were chosen, so that human personnel effectively served as a 'rubber stamp' for the machine's decisions. This has reportedly contributed to the killing of thousands of civilians in Gaza, with military policy permitting the death of up to 15 or 20 civilians for every junior Hamas operative targeted. Critics argue that this approach confirms long-standing fears about the lack of human oversight in AI-driven military operations and the potential for mass casualties. Israel has disputed these claims, rejecting the assertion that it tolerates civilian deaths as acceptable collateral damage.
Israel waits until an AI-identified target is at home with his family before striking, killing everyone in the building (with no effort to check whether the AI was even correct). Any country sending arms to Israel at this point is a similarly rogue nation. https://t.co/mWv52Ff5vo
Israel is aggressively disputing assertions that it is using an artificial intelligence system for a targeted killing program that tolerates civilian deaths as acceptable collateral damage in its war against Hamas. https://t.co/WnbyoACrNa
So you know how we've been having this big debate for several years now about the ethics of giving military AI systems release authority for lethal weapons? Discussion's over. Israel's been doing AI targeting for months now and their soldiers appear to have been blindly… https://t.co/HyDeq83P6z
Israel disputes it has powerful AI program for targeted killing that tolerates civilian casualties https://t.co/dZ7PT0zIAa
The real AI threat is not years in the making, it is here and it is wiping out civilians, children and women with gusto. Except when it hallucinates rather than just blowing up Palestinians, it blows up Westerners helping them. https://t.co/1r5dlSa1IZ
This story is horrific, and basically confirms every one of the fears we had back in October when we knew the IDF was using AI—it's error-prone, there's scant human oversight, and it facilitates the mass rubber stamping of targets, of mass killing. https://t.co/LnwPGVUFAn
"[T]he army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians." https://t.co/tOwyqgAj35
As I suspected, AI is behind these civilian killings, according to this Israeli media report: https://t.co/dgeIXRVJv6
The Israeli army "gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices...human personnel often served only as a 'rubber stamp' for the machine’s decisions." https://t.co/fA6tNzpi5j
Everything campaigners have been warning about since artificial intelligence began creeping into weapons systems and military planning is now being ruthlessly applied by #Israel in #Gaza, killing thousands of innocent civilians. Horrible, dystopian read https://t.co/R3ZQdmeEGP