In this thought-provoking article, Patrick Truffer discusses the current status of lethal autonomous weapon systems (LAWS) equipped with Artificial Intelligence (AI) and the idea of banning this technology.
Patrick notes that whilst automatic weapon systems have existed for decades, it has been the operator who makes the final decision on lethal force (although some defence systems, including air defence systems, are already autonomous). Defence industries in a number of countries, such as the US, UK, Japan and Russia, have been researching the field of LAWS, where AI will play a decisive role in decision making. Recent examples include: Northrop Grumman’s X-47B, which can autonomously take off, refuel and land; autonomous micro-drones for carrying out small missions; and loitering munitions, which can orbit prior to attacking their target.
The author discusses the risks of the proliferation of LAWS, specifically their inability to understand their own actions in an overall context. LAWS operate within a ‘set of rules’ which may be at odds with the rapidly changing nature of conflict; however, he highlights that this reasoning does not play a decisive role in the call for an international ban on LAWS. Rather, opposition has been driven by NGOs in the fields of human rights and international humanitarian law.
The article concludes that a ban on LAWS would amount to ‘pushing the genie back into the bottle’, forgoing potential opportunities the technology offers. Given the development efforts and interests of the states involved in the discussion of LAWS, a ban is unlikely.