Game of Drones


Joe Pugliese has sent me a copy of his absorbing new essay, ‘Drone casino mimesis: telewarfare and civil militarization’, which appears in Australia’s Journal of Sociology (2016; online early).  Here’s the abstract:

This article stages an examination of the complex imbrication of contemporary civil society with war and militarized violence. I ground my investigation in the context of the increasing cooption of civil sites, practices and technologies by the United States military in order to facilitate their conduct of war and the manner in which drone warfare has now been seamlessly accommodated within major metropolitan cities such as Las Vegas, Nevada. In the context of the article, I coin and deploy the term civil militarization. Civil militarization articulates the colonizing of civilian sites, practices and technologies by the military; it names the conversion of such civilian technologies as video games and mobile phones into technologies of war; and it addresses the now quasi-seamless flow that telewarfare enables between military sites and the larger suburban grid and practices of everyday life. In examining drone kills in the context of Nellis Air Force Base, Las Vegas, I bring into focus a new military configuration that I term ‘drone casino mimesis’.

I’m particularly interested in what Joe has to say about what he calls the ‘casino logic and gaming mimesis’ of ‘the drone habitus’.  Most readers will know that ‘Nellis’ (more specifically, Creech Air Force Base, formerly Indian Springs), long the epicentre of the US Air Force’s remote operations, is a short drive from Las Vegas – and those who have seen Omer Fast‘s 5,000 Feet is the Best will remember the artful way in which it loops between the two.


Two passages from Joe’s essay have set me thinking.  First Joe moves far beyond the usual (often facile) comparison between the video displays in the Ground Control Station and video games to get at the algorithms and probabilities that animate them:

‘…there are mimetic relations of exchange between Las Vegas’s and Nellis’s gaming consoles, screens and cubicles.


‘Iconographically and infrastructurally, casino gaming and drone technologies stand as mirror images of each other. My argument, however, is not that both these practices and technologies merely ‘reflect’ each other; rather, I argue that gaming practices and technologies effectively work to constitute and inflect drone practices and technologies on a number of levels. Casino drone mimesis identifies, in new materialist terms, the agentic role of casino and gaming technologies precisely as ‘actors’ (Latour, 2004: 226) in the shaping and mutating of both the technologies and conduct of war. Situated within a new materialist schema, I contend that the mounting toll of civilian deaths due to drone strikes is not only a result of human failure or error – for example, the misreading of drone video feed, the miscalculation of targets and so on. Rather, civilian drone kills must be seen as an in-built effect of military technologies that are underpinned by both the morphology (gaming consoles, video screens and joysticks) and the algorithmic infrastructure of gaming – with its foundational dependence on ‘good approximation’ ratios and probability computation.’

And then this second passage where Joe develops what he calls ‘the “bets” and “gambles” on civilian life’:

‘[“Bugsplat” constitutes a] militarized colour-coding system that critically determines the kill value of the target. In the words of one former US intelligence official:

You say something like ‘Show me the Bugsplat.’ That’s what we call the probability of a kill estimate when we are doing this final math before the ‘Go go go’ decision. You would actually get a picture of a compound, and there will be something on it that looks like a bugsplat actually with red, yellow, and green: with red being anybody in that spot is dead, yellow stands a chance of being wounded; green we expect no harm to come to individuals where there is green. (Quoted in Woods, 2015: 150)

Described here is a mélange of paintball and video gaming techniques that is underpinned, in turn, by the probability stakes of casino gaming: as the same drone official concludes, ‘when all those conditions have been met, you may give the order to go ahead and spend the money’ (quoted in Woods, 2015: 150). In the world of drone casino mimesis, when all those gaming conditions have been met, you spend the money, fire your missiles and hope to make a killing. In the parlance of drone operators, if you hit and kill the person you intended to kill ‘that person is called a “jackpot”’ (Begley, 2015: 7). Evidenced here is the manner in which the lexicon of casino gaming is now clearly constitutive of the practices of drone kills. In the world of drone casino mimesis, the gambling stakes are high. ‘The position I took,’ says a drone screener, ‘is that every call I make is a gamble, and I’m betting on their life’ (quoted in Fielding-Smith and Black, 2015).
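To see what this colour-coded ‘probability of a kill estimate’ might amount to computationally, here is a minimal sketch; I should emphasise that the grid values and thresholds below are entirely my own assumptions for illustration – the actual estimation software is classified:

```python
# Illustrative sketch only: a toy version of the red/yellow/green
# 'bugsplat' overlay described above. The probability grid and the
# thresholds are invented for illustration; nothing here reflects
# the classified estimation software itself.

RED_THRESHOLD = 0.9     # assumed: 'anybody in that spot is dead'
YELLOW_THRESHOLD = 0.3  # assumed: 'stands a chance of being wounded'

def colour_code(p_harm: float) -> str:
    """Map an estimated probability of harm at a grid cell to a colour."""
    if p_harm >= RED_THRESHOLD:
        return "red"
    if p_harm >= YELLOW_THRESHOLD:
        return "yellow"
    return "green"  # 'we expect no harm'

# A toy 3 x 3 grid of estimated harm probabilities over a compound.
grid = [
    [0.95, 0.60, 0.10],
    [0.70, 0.40, 0.05],
    [0.20, 0.10, 0.02],
]

for row in grid:
    print([colour_code(p) for p in row])
```

The point of the toy is Joe’s: once harm to civilians is expressed as a thresholded probability surface, the ‘go’ decision becomes a wager on where the numbers fall.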

There is much more to Joe’s essay than this, but these passages add considerably to my own discussion of the US targeted killing program in the Federally Administered Tribal Areas of Pakistan in ‘Dirty dancing’.  You can find the whole essay under the DOWNLOADS tab, but this is the paragraph I have in mind (part of an extended discussion of the ‘technicity’ of the US targeted killing program and its reliance on kill lists, signals intercepts and visual feeds):

The kill list embedded in the [disposition] matrix has turned out to be infinitely extendable, more like a revolving door than a rolodex, so much so that at one point an exasperated General Kayani demanded that Admiral Mullen explain how, after hundreds of drone strikes, ‘the United States [could] possibly still be working its way through a “top 20” list?’  The answer lies not only in the remarkable capacity of al Qaeda and the Taliban to regenerate: the endless expansion of the list is written into the constitution of the database and the algorithms from which it emerges. The database accumulates information from multiple agencies, but for targets in the FATA the primary sources are ground intelligence from agents and informants, signals intelligence from the National Security Agency (NSA), and surveillance imagery from the US Air Force. Algorithms are then used to search the database to produce correlations, coincidences and connections that serve to identify suspects, confirm their guilt and anticipate their future actions. Jutta Weber explains that the process follows ‘a logic of eliminating every possible danger’:

‘[T]he database is the perfect tool for pre-emptive security measures because it has no need of the logic of cause and effect. It widens the search space and provides endless patterns of possibilistic networks.’

Although she suggests that the growth of ‘big data’ and the transition from hierarchical to relational and now post-relational databases has marginalised earlier narrative forms, these reappear as soon as suspects have been conjured from the database. The case for including – killing – each individual on the list is exported from its digital target folder to a summary PowerPoint slide called a ‘baseball card’ that converts into a ‘storyboard’ after each mission. Every file is vetted by the CIA’s lawyers and General Counsel, and by deputies at the National Security Council, and all ‘complex cases’ have to be approved by the President. Herein lies the real magic of the system. ‘To make the increasingly powerful non-human agency of algorithms and database systems invisible,’ Weber writes, ‘the symbolic power of the sovereign is emphasised: on “Terror Tuesdays” it (appears that it) is only the sovereign who decides about life and death.’ But this is an optical illusion. As Louise Amoore argues more generally, ‘the sovereign strike is always something more, something in excess of a single flash of decision’ and emerges instead from a constellation of prior practices and projected calculations.
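It may help to make Weber’s point schematically. The toy below is entirely my own invention – it is emphatically not the disposition matrix, whose workings are classified – but it shows how ‘correlations, coincidences and connections’ can be conjured from a database by counting co-occurrences and applying a threshold, with no logic of cause and effect anywhere in sight:

```python
# Purely illustrative: a toy of correlation-mining across intelligence
# sources. All records, identifiers and scoring rules are invented;
# this is not a model of any actual system.

from collections import defaultdict

# Toy records: (source, identifier_a, identifier_b), meaning the two
# identifiers co-occurred in one intercept, report or image analysis.
records = [
    ("sigint", "phone_17", "phone_42"),
    ("sigint", "phone_42", "phone_88"),
    ("humint", "phone_17", "phone_88"),
    ("isr",    "phone_42", "compound_3"),
]

# Count co-occurrences: the 'possibilistic network' of connections.
links = defaultdict(int)
for _, a, b in records:
    links[frozenset((a, b))] += 1

# Sum each identifier's link weights and flag those above a threshold.
weight = defaultdict(int)
for pair, n in links.items():
    for node in pair:
        weight[node] += n

SUSPICION_THRESHOLD = 2  # invented, and endlessly adjustable
flagged = sorted(n for n, w in weight.items() if w >= SUSPICION_THRESHOLD)
print(flagged)  # ['phone_17', 'phone_42', 'phone_88']
```

Everything consequential happens at the threshold, and nothing in the code distinguishes a courier from a cousin: widening the search space simply produces more patterns, which is precisely why the list never shrinks.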

Crossing the line

News from Lucy Suchman of an important essay she’s just completed with Jutta Weber on Human-Machine Autonomies, available from Academia.edu here.

This is how they begin:

This paper takes up the question of how we might think about the increasing automation of military systems not as an inevitable ‘advancement’ of which we are the interested observers, but as an effect of particular world-making practices in which we need urgently to intervene. We begin from the premise that the foundation of the legality of killing in situations of war is the possibility of discrimination between combatants and non-combatants. At a time when this defining form of situational awareness seems increasingly problematic, military investments in the automation of weapon systems are growing. The trajectory of these investments, moreover, is towards the development and deployment of lethal autonomous weapons; that is, weapon systems in which the identification of targets and initiation of fire is automated in ways that preclude deliberative human intervention. Challenges to these developments underscore the immorality and illegality of delegating responsibility for the use of force against human targets to machines, and the requirements of International Humanitarian Law that there be (human) accountability for acts of killing. In these debates, the articulation of differences between humans and machines is key.

Our aim in this paper is to strengthen arguments against the increasing automation of weapon systems, by expanding the frame or unit of analysis that informs these debates. We begin by tracing the genealogy of concepts of autonomy within the philosophical traditions that inform Artificial Intelligence (AI), with a focus on the history of early cybernetics and contemporary approaches to machine learning in behaviour-based robotics. We argue that while cybernetics and behaviour-based robotics challenge the premises of individual agency, cognition, communication and action that comprise the Enlightenment tradition, they also reiterate aspects of that tradition in the design of putatively intelligent, autonomous machines. This argument is made more concrete through a close reading of the United States Department of Defense Unmanned Systems Integrated Roadmap: FY2013-2038, particularly with respect to plans for future autonomous weapon systems. With that reading in mind, we turn to resources for refiguring agency and autonomy provided by recent scholarship in science and technology studies (STS) informed by feminist theory. This work suggests a shift in conceptions of agency and autonomy, from attributes inherent in entities, to effects of discourses and material practices that variously conjoin and/or delineate differences between humans and machines. This shift leads in turn to a reconceptualization of autonomy and responsibility as always enacted within, rather than as separable from, particular human-machine configurations. We close by considering the implications of these reconceptualizations for questions of responsibility in relation to automated/autonomous weapon systems. Taking as a model feminist projects of deconstructing categorical distinctions while also recognising those distinctions’ cultural-historical effects, we argue for simultaneous attention to the inseparability of human-machine agencies in contemporary war fighting, and to the necessity of delineating human agency and responsibility within political, legal and ethical/moral regimes of accountability.

LRASM (Lockheed Martin photo)

It’s a must-read, I think, especially in the light of a report in the New York Times on the Long Range Anti-Ship Missile (above), developed for the US military by Lockheed Martin:

On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter…

The Pentagon argues that the new antiship missile is only semiautonomous and that humans are sufficiently represented in its targeting and killing decisions. But officials at the Defense Advanced Research Projects Agency, which initially developed the missile, and Lockheed declined to comment on how the weapon decides on targets, saying the information is classified.

“It will be operating autonomously when it searches for the enemy fleet,” said Mark A. Gubrud, a physicist and a member of the International Committee for Robot Arms Control, and an early critic of so-called smart weapons. “This is pretty sophisticated stuff that I would call artificial intelligence outside human control.”

Paul Scharre, a weapons specialist now at the Center for a New American Security who led the working group that wrote the Pentagon directive, said, “It’s valid to ask if this crosses the line.”

And the Israeli military and armaments industry, for whom crossing any line is second nature, are developing what they call a ‘suicide drone’ (really).  At Israel Unmanned Systems 2014, a trade fair held in Tel Aviv just three weeks after Israel’s latest assault on Gaza, Dan Cohen reported:

Lieutenant Colonel Itzhar Jona, who heads Israel Aerospace Industries, spoke about “loitering munitions” — what he called a “politically correct” name for Suicide Drones. They are a hybrid of drone and missile technology that have “autonomous and partially autonomous” elements, and are “launched like a missile, fly like an UAV [unmanned aerial vehicle],” and once they identify a target, revert to “attack like a missile.” Jona called the Suicide Drone a “UAV that thinks and decides for itself,” then added, “If you [the operator] aren’t totally clear on the logic, it can even surprise you.”

Jona praised the advantage of the Suicide Drone because the operator “doesn’t have to bring it home or deal with all sorts of dilemmas.” The Suicide Drone will quickly find a target using its internal logic, which Jona explained in this way: “It carries a warhead that eventually needs to explode. There needs to be a target at the end that will want to explode. Or it won’t want to and we will help it explode.”

So thoughtful to protect ‘the operator’ from any stress (even if s/he might be a little ‘surprised’).  Here is Mondoweiss‘s subtitled clip from the meeting, which opens with a short discussion of the major role played by UAVs in the air and ground attacks on Gaza, and then Jona describes how ‘we always live on the border’: