News from Lucy Suchman of an important essay she’s just completed with Jutta Weber on Human-Machine Autonomies, available from Academia.edu here.
This is how they begin:
This paper takes up the question of how we might think about the increasing automation of military systems not as an inevitable ‘advancement’ of which we are the interested observers, but as an effect of particular world-making practices in which we need urgently to intervene. We begin from the premise that the foundation of the legality of killing in situations of war is the possibility of discrimination between combatants and non-combatants. At a time when this defining form of situational awareness seems increasingly problematic, military investments in the automation of weapon systems are growing. The trajectory of these investments, moreover, is towards the development and deployment of lethal autonomous weapons; that is, weapon systems in which the identification of targets and initiation of fire is automated in ways that preclude deliberative human intervention. Challenges to these developments underscore the immorality and illegality of delegating responsibility for the use of force against human targets to machines, and the requirements of International Humanitarian Law that there be (human) accountability for acts of killing. In these debates, the articulation of differences between humans and machines is key.
Our aim in this paper is to strengthen arguments against the increasing automation of weapon systems, by expanding the frame or unit of analysis that informs these debates. We begin by tracing the genealogy of concepts of autonomy within the philosophical traditions that inform Artificial Intelligence (AI), with a focus on the history of early cybernetics and contemporary approaches to machine learning in behaviour-based robotics. We argue that while cybernetics and behaviour-based robotics challenge the premises of individual agency, cognition, communication and action that comprise the Enlightenment tradition, they also reiterate aspects of that tradition in the design of putatively intelligent, autonomous machines. This argument is made more concrete through a close reading of the United States Department of Defense Unmanned Systems Integrated Roadmap: FY2013-2038, particularly with respect to plans for future autonomous weapon systems. With that reading in mind, we turn to resources for refiguring agency and autonomy provided by recent scholarship in science and technology studies (STS) informed by feminist theory. This work suggests a shift in conceptions of agency and autonomy, from attributes inherent in entities, to effects of discourses and material practices that variously conjoin and/or delineate differences between humans and machines. This shift leads in turn to a reconceptualization of autonomy and responsibility as always enacted within, rather than as separable from, particular human-machine configurations. We close by considering the implications of these reconceptualizations for questions of responsibility in relation to automated/autonomous weapon systems.
Taking as a model feminist projects of deconstructing categorical distinctions while also recognising those distinctions’ cultural-historical effects, we argue for simultaneous attention to the inseparability of human-machine agencies in contemporary war fighting, and to the necessity of delineating human agency and responsibility within political, legal and ethical/moral regimes of accountability.
On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.
Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter…
The Pentagon argues that the new antiship missile is only semiautonomous and that humans are sufficiently represented in its targeting and killing decisions. But officials at the Defense Advanced Research Projects Agency, which initially developed the missile, and Lockheed declined to comment on how the weapon decides on targets, saying the information is classified.
“It will be operating autonomously when it searches for the enemy fleet,” said Mark A. Gubrud, a physicist and a member of the International Committee for Robot Arms Control, and an early critic of so-called smart weapons. “This is pretty sophisticated stuff that I would call artificial intelligence outside human control.”
Paul Scharre, a weapons specialist now at the Center for a New American Security who led the working group that wrote the Pentagon directive, said, “It’s valid to ask if this crosses the line.”
And the Israeli military and armaments industry, for whom crossing any line is second nature, are developing what they call a ‘suicide drone’ (really). At Israel Unmanned Systems 2014, a trade fair held in Tel Aviv just three weeks after Israel’s latest assault on Gaza, Dan Cohen reported:
Lieutenant Colonel Itzhar Jona, who heads Israel Aerospace Industries, spoke about “loitering munitions” — what he called a “politically correct” name for Suicide Drones. They are a hybrid of drone and missile technology that have “autonomous and partially autonomous” elements, and are “launched like a missile, fly like a UAV [unmanned aerial vehicle],” and once they identify a target, revert to “attack like a missile.” Jona called the Suicide Drone a “UAV that thinks and decides for itself,” then added, “If you [the operator] aren’t totally clear on the logic, it can even surprise you.”
Jona praised the advantage of the Suicide Drone because the operator “doesn’t have to bring it home or deal with all sorts of dilemmas.” The Suicide Drone will quickly find a target using its internal logic, which Jona explained in this way: “It carries a warhead that eventually needs to explode. There needs to be a target at the end that will want to explode. Or it won’t want to and we will help it explode.”
So thoughtful to protect ‘the operator’ from any stress (even if s/he might be a little ‘surprised’). Here is Mondoweiss’s subtitled clip from the meeting, which opens with a short discussion of the major role played by UAVs in the air and ground attacks on Gaza, and then Jona describes how ‘we always live on the border’: