Permissive airstrikes on non-military targets and the use of an AI system have enabled the Israeli army to carry out its deadliest war on Gaza.
Fascinating article, thanks!
This is the first I’ve heard of this being implemented. Are any other militaries using AI to generate targets?
I certainly hope that, unlike many AI systems, they are able to see what criteria led to targets being selected, because this often happens in a black box. Without that, oversight and debugging become difficult if not impossible. Is the point to ensure that no human can be blamed if it goes wrong? This article certainly seems to be making the case that whatever human verification exists is insufficient and that the standards for acceptable civilian casualties are lax.
If these accusations about target selection are true, it would be nice if some of the sources would go on the record; I'd like to see the IDF respond to them and clarify its standards for selecting targets and what it considers acceptable collateral damage. But there are probably serious consequences to whistleblowing during wartime, so I'm not holding my breath.
I think many other militaries have been developing such systems, but they haven't been actively deploying them, primarily because they're not at war. The only one that might have is Russia, but there hasn't been any coverage of them using a system like this.
Well, we'll find out about other militaries soon enough. Stock prices for weapons manufacturers have been booming. The US and EU want a convenient weapons testing ground and a canal to gas fields out of this.
The biggest loser is always innocent civilians at home and abroad.