AI is being used everywhere these days -- including by the military in the current attack on Iran, and in the ongoing conflicts in Ukraine and Gaza.
The US Department of War is now insisting that any contract to procure AI for the military allow "any lawful use" of the technology, without constraints. That has kicked up a fuss with its old supplier, Anthropic, and with possible new suppliers including OpenAI, xAI and Google.
The concerns revolve around mass domestic surveillance and the possibility of future fully-autonomous lethal weapons -- such as drones programmed to identify and kill enemy combatants without human intervention. Many say the latter is currently illegal under international law (though that's disputable).
Plenty of efforts are under way to reach international agreement, but it's hard -- not least because it's tricky to define what "fully autonomous lethal weapons" even are. Experts are meeting this week in Geneva to discuss possible inclusion under the Convention on Certain Conventional Weapons -- the treaty that, for example, currently bans the use of blinding lasers in war. It will be a long road to any such agreement, but it's good that the United Nations at least has the issue in its sights.
My story for Nature
https://www.nature.com/articles/d41586-026-00710-w