Following up on my post on Google and Project Maven here, there’s an open letter (via the International Committee for Robot Arms Control) in support of Google employees opposed to the tech giant’s participation in Project Maven here: it’s open for (many) more signatures…

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long-range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power. This has shone only one spotlight on the increasingly high stakes of information technology infrastructures, and the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.
We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.

  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.

  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

Google eyes

The Oxford English Dictionary recognised ‘google’ as a verb in 2006, and its active form is about to gain another dimension.  One of the most persistent anxieties amongst those executing remote warfare, with its extraordinary dependence on (and capacity for) real-time full motion video surveillance as an integral moment of the targeting cycle, has been the ever-present risk of ‘swimming in sensors and drowning in data‘.

But now Kate Conger and Dell Cameron report for Gizmodo on a new collaboration between Google and the Pentagon as part of Project Maven:

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.

Google has reportedly attempted to allay fears about its involvement:

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
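To make concrete what “flags images for human review” means in practice, here is a minimal, hypothetical sketch of the confidence-thresholding step such a pipeline implies. Everything here — the function name, the threshold, the detection format — is an assumption for illustration, not Google’s or the DoD’s actual code; the detector itself (e.g. a TensorFlow object-detection model) is abstracted away:

```python
# Hypothetical sketch: route machine detections to a human-review queue.
# We assume the (unspecified) detector emits (label, confidence) pairs
# for each frame of full-motion video.

REVIEW_THRESHOLD = 0.5  # assumed cut-off; a real system would tune this


def flag_for_review(frames):
    """Return frames with detections above the threshold.

    `frames` is a list of dicts: {"frame_id": int,
                                  "detections": [(label, confidence), ...]}
    Frames are only flagged, never acted on automatically: a human
    analyst makes every downstream decision.
    """
    flagged = []
    for frame in frames:
        hits = [(label, conf) for label, conf in frame["detections"]
                if conf >= REVIEW_THRESHOLD]
        if hits:
            flagged.append({"frame_id": frame["frame_id"], "hits": hits})
    return flagged


frames = [
    {"frame_id": 1, "detections": [("vehicle", 0.92), ("building", 0.30)]},
    {"frame_id": 2, "detections": [("vehicle", 0.12)]},
]
print(flag_for_review(frames))
# flags only frame 1, and only its high-confidence "vehicle" detection
```

The political question raised by the open letter is, of course, precisely whether this human-review step survives once commanders come to trust the algorithm.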


As Mehreen Kasana notes, Google has indeed ‘long worked with government agencies’:

A 2017 report in Quartz shed light on the origins of Google and on how a significant amount of funding for the company came from the CIA and NSA for mass surveillance purposes. Time and again, Google’s funding raises questions. In 2013, a Guardian report highlighted Google’s acquisition of the robotics company Boston Dynamics, and noted that most of the projects were funded by the Defense Advanced Research Projects Agency (DARPA).

‘We’re not in Kansas anymore’

Today’s Guardian has a report from Roy Wenzl called ‘The kill-chain: inside the unit that tracks targets in America’s drone wars’.  There’s not much there that won’t be familiar to regular readers, but the focus is not on the pilots and sensor operators but on the screeners – the analysts who scrutinise the full-motion video feeds from the drones to provide ISR (intelligence, surveillance and reconnaissance).

The report describes the work of the 184th Intelligence Wing of the National Air Guard at McConnell AFB in Kansas:

‘They video-stalk enemy combatants, and tell warfighters what they see… The group does this work in the middle of America, at an air base surrounded by flat cow pastures and soybean fields….

The work is top secret. They say that they see things in those drone images that no one wants to see. Sometimes, it’s terrorists beheading civilians. Sometimes it’s civilians dying accidentally in missions that the Kansans help coordinate.

They agonize over those deaths. The most frequently heard phrase in drone combat, one airman says, is: “Don’t push the button.”

“You see [enemy combatants] kiss their kids goodbye, and kiss their wives goodbye, and then they walk down the street,” said a squadron chief master sergeant. “As soon as they get over that hill, the missile is released.”

The Americans wait to fire, he says, “because we don’t want the family to see it”.

One of those involved marvels at the technology involved: ‘The technology we use is just insane, it’s so good.’  As the report notes, critics of the programme have a more literal meaning of insanity in their minds….

The report also confirms the intensity (and, as part of that intensity, the tedium) of the shift-work involved:

Back in Kansas, in the SCIF (Sensitive Compartmented Information Facility), members of Col Brad Hilbert’s group watch dozens of screens. One eight-hour shift will watch multiple targets, then hand off surveillance to the next shift. Multiple missions run simultaneously.

While enemy combatants walk around carrying weapons, the group studies their movements. They can watch one person, or one building, or one small neighborhood. The drones loiter high and unseen, giving clear, hi-tech visuals….

Most of what they watch is tedious. “They will sometimes watch one pile of sand every day for a month,” their chaplain says.

But sometimes, they see that an enemy is about to attack US troops. The commanders decide to “neutralize” him. When commanders order attacks, the Kansans become one link in a kill chain, which can include armed Reaper and Predator drone operators, fighter pilots, ground artillery commanders – and commanders with authority to approve or deny strikes.

Those who don’t count and those who can’t count

An excellent article from the unfailing New York Times in a recent edition of the Magazine: Azmat Khan and Anand Gopal on ‘The Uncounted‘, a brilliant, forensic and – crucially – field-based investigation into civilian casualties in the US air war against ISIS:

American military planners go to great lengths to distinguish today’s precision strikes from the air raids of earlier wars, which were carried out with little or no regard for civilian casualties. They describe a target-selection process grounded in meticulously gathered intelligence, technological wizardry, carefully designed bureaucratic hurdles and extraordinary restraint. Intelligence analysts pass along proposed targets to “targeteers,” who study 3-D computer models as they calibrate the angle of attack. A team of lawyers evaluates the plan, and — if all goes well — the process concludes with a strike so precise that it can, in some cases, destroy a room full of enemy fighters and leave the rest of the house intact.

The coalition usually announces an airstrike within a few days of its completion. It also publishes a monthly report assessing allegations of civilian casualties. Those it deems credible are generally explained as unavoidable accidents — a civilian vehicle drives into the target area moments after a bomb is dropped, for example. The coalition reports that since August 2014, it has killed tens of thousands of ISIS fighters and, according to our tally of its monthly summaries, 466 civilians in Iraq.

What Azmat and Anand found on the ground, however, was radically different:

Our own reporting, conducted over 18 months, shows that the air war has been significantly less precise than the coalition claims. Between April 2016 and June 2017, we visited the sites of nearly 150 airstrikes across northern Iraq, not long after ISIS was evicted from them. We toured the wreckage; we interviewed hundreds of witnesses, survivors, family members, intelligence informants and local officials; we photographed bomb fragments, scoured local news sources, identified ISIS targets in the vicinity and mapped the destruction through satellite imagery. We also visited the American air base in Qatar where the coalition directs the air campaign. There, we were given access to the main operations floor and interviewed senior commanders, intelligence officials, legal advisers and civilian-casualty assessment experts. We provided their analysts with the coordinates and date ranges of every airstrike — 103 in all — in three ISIS-controlled areas and examined their responses. The result is the first systematic, ground-based sample of airstrikes in Iraq since this latest military action began in 2014.

We found that one in five of the coalition strikes we identified resulted in civilian death, a rate more than 31 times that acknowledged by the coalition. It is at such a distance from official claims that, in terms of civilian deaths, this may be the least transparent war in recent American history [my emphasis].  Our reporting, moreover, revealed a consistent failure by the coalition to investigate claims properly or to keep records that make it possible to investigate the claims at all. While some of the civilian deaths we documented were a result of proximity to a legitimate ISIS target, many others appear to be the result simply of flawed or outdated intelligence that conflated civilians with combatants. In this system, Iraqis are considered guilty until proved innocent. Those who survive the strikes …  remain marked as possible ISIS sympathizers, with no discernible path to clear their names.

They provide immensely powerful, moving case studies of innocents ‘lost in the wreckage’.  They also describe the US Air Force’s targeting process at US Central Command’s Combined Air Operations Center (CAOC) at Al Udeid Air Base in Qatar (the image above shows the Intelligence, Surveillance and Reconnaissance Division at the CAOC, which ‘provides a common threat and targeting picture’):

The process seemed staggeringly complex — the wall-to-wall monitors, the soup of acronyms, the army of lawyers — but the impressively choreographed operation was designed to answer two basic questions about each proposed strike: Is the proposed target actually ISIS? And will attacking this ISIS target harm civilians in the vicinity?

As we sat around a long conference table, the officers explained how this works in the best-case scenario, when the coalition has weeks or months to consider a target. Intelligence streams in from partner forces, informants on the ground, electronic surveillance and drone footage. Once the coalition decides a target is ISIS, analysts study the probability that striking it will kill civilians in the vicinity, often by poring over drone footage of patterns of civilian activity. The greater the likelihood of civilian harm, the more mitigating measures the coalition takes. If the target is near an office building, the attack might be rescheduled for nighttime. If the area is crowded, the coalition might adjust its weaponry to limit the blast radius. Sometimes aircraft will even fire a warning shot, allowing people to escape targeted facilities before the strike. An official showed us grainy night-vision footage of this technique in action: Warning shots hit the ground near a shed in Deir al-Zour, Syria, prompting a pair of white silhouettes to flee, one tripping and picking himself back up, as the cross hairs follow.

Once the targeting team establishes the risks, a commander must approve the strike, taking care to ensure that the potential civilian harm is not “excessive relative to the expected military advantage gained,” as Lt. Col. Matthew King, the center’s deputy legal adviser, explained.

After the bombs drop, the pilots and other officials evaluate the strike. Sometimes a civilian vehicle can suddenly appear in the video feed moments before impact. Or, through studying footage of the aftermath, they might detect signs of a civilian presence. Either way, such a report triggers an internal assessment in which the coalition determines, through a review of imagery and testimony from mission personnel, whether the civilian casualty report is credible. If so, the coalition makes refinements to avoid future civilian casualties, they told us, a process that might include reconsidering some bit of intelligence or identifying a flaw in the decision-making process.

There are two issues here.  First, this is indeed the ‘best-case scenario’, and one that very often does not obtain.  One of the central vectors of counterinsurgency and counterterrorism is volatility: targets are highly mobile and often the ‘window of opportunity’ is exceedingly narrow.  I’ve reproduced this image from the USAF’s own targeting guide before, in relation to my analysis of the targeting cycle for a different US air strike against IS in Iraq in March 2015, but it is equally applicable here:

Second, that ‘window of opportunity’ is usually far from transparent, often frosted and frequently opaque.  For what is missing from the official analysis described by Azmat and Anand turns out to be the leitmotif of all remote operations (and there is a vital sense in which all forms of aerial violence are ‘remote’, whether the pilot is 7,000 miles away or 30,000 feet above the target [see for example here]):

Lt. Gen. Jeffrey Harrigian, commander of the United States Air Forces Central Command at Udeid, told us what was missing. “Ground truth, that’s what you’re asking for,” he said. “We see what we see from altitude and pull in from other reports. Your perspective is talking to people on the ground.” He paused, and then offered what he thought it would take to arrive at the truth: “It’s got to be a combination of both.”

The military view, perhaps not surprisingly, is that civilian casualties are unavoidable but rarely intentional:

Supreme precision can reduce civilian casualties to a very small number, but that number will never reach zero. They speak of every one of the acknowledged deaths as tragic but utterly unavoidable.

Azmat and Anand reached a numbingly different conclusion: ‘Not all civilian casualties are unavoidable tragedies; some deaths could be prevented if the coalition recognizes its past failures and changes its operating assumptions accordingly. But in the course of our investigation, we found that it seldom did either.’

Part of the problem, I suspect, is that whenever there is an investigation into reports of civilian casualties that may have been caused by US military operations, it must be independent of all other investigations and can make no reference to them in its findings; in other words, as I’ve noted elsewhere, there is no ‘case law’: bizarre but apparently true.

But that is only part of the problem.  The two investigators cite multiple intelligence errors (‘In about half of the strikes that killed civilians, we could find no discernible ISIS target nearby. Many of these strikes appear to have been based on poor or outdated intelligence’) and even errors and discrepancies in recording and locating strikes after the event.

It’s worth reading bellingcat‘s analysis here, which also investigates the coalition’s geo-locational reporting and notes that the official videos ‘appear only to showcase the precision and efficiency of coalition bombs and missiles, and rarely show people, let alone victims’.  The image above, from CNN, is unusual in showing the collection of the bodies of victims of a US air strike in Mosul, this time in March 2017; the target was a building from which two snipers were firing; more than 100 civilians sheltering there were killed.  The executive summary of the subsequent investigation is here – ‘The Target Engagement Authority (TEA) was unaware of and could not have predicted the presence of civilians in the structure prior to the engagement’ – and a report from W.J. Hennigan and Molly Hennessy-Fiske is here.

Included in bellingcat’s account is a discussion of a video which the coalition uploaded to YouTube and then deleted; Azmat retrieved and archived it – the video shows a strike on two buildings in Mosul on 20 September 2015 that turned out to be central to her investigation with Anand:

The video caption identifies the target as a ‘VBIED [car bomb] facility’.  But Bellingcat asks:

Was this really a “VBIED network”? Under the original upload, a commenter started posting that the houses shown were his family’s residence in Mosul.

“I will NEVER forget my innocent and dear cousins who died in this pointless airstrike. Do you really know who these people were? They were innocent and happy family members of mine.”

Days after the strike, Dr Zareena Grewal, a relative living in the US, wrote in the New York Times that four family members had died in the strike. On April 2, 2017 – 588 days later – the Coalition finally admitted that it had indeed bombed a family home which it confused with an IS headquarters and VBIED facility.

“The case was brought to our attention by the media and we discovered the oversight, relooked [at] the case based on the information provided by the journalist and family, which confirmed the 2015 assessment,” Colonel Joe Scrocca, Director of Public Affairs for the Coalition, told Airwars.

Even though the published strike video actually depicted the killing of a family, it remained – wrongly captioned – on the official Coalition YouTube channel for more than a year.

This is but one, awful example of a much wider problem.  The general conclusion reached by Azmat and Anand is so chilling it is worth re-stating:

According to the coalition’s available data, 89 of its more than 14,000 airstrikes in Iraq have resulted in civilian deaths, or about one of every 157 strikes. The rate we found on the ground — one out of every five — is 31 times as high.
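The arithmetic behind those rates is easy to check from the figures quoted above (14,000 stands in for the coalition’s ‘more than 14,000’, so the results are approximate):

```python
# Coalition's own figures: 89 civilian-casualty strikes out of 14,000+.
official_rate = 89 / 14_000      # civilian-casualty strikes per strike
ground_rate = 1 / 5              # Khan and Gopal's ground-based sample

print(round(1 / official_rate))              # about 1 in 157 strikes
print(round(ground_rate / official_rate))    # about 31 times as high
```

The 31-fold gap is not sensitive to the rounding: even taking the coalition’s strike count as a lower bound, the ground-based rate remains more than an order of magnitude above the official one.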

One of the houses [shown above] mistakenly identified as a ‘VBIED facility’ in that video belonged to Basim Razzo, and he became a key informant in Azmat and Anand’s investigation; he was subsequently interviewed by Amy Goodman: the transcript is here. She also interviewed Azmat and Anand: that transcript is here.  In the course of the conversation Anand makes a point that amply and awfully confirms Christiane Wilke‘s suggestion – in relation to air strikes in Afghanistan – that the burden of recognition, of what in international humanitarian law is defined as ‘distinction’, is tacitly being passed from combatant to civilian: that those in the cross-hairs of the US military are required to perform their civilian status to those watching from afar.

It goes back to this issue of Iraqis having to prove that they are not ISIS, which is the opposite of what we would think. We would think that the coalition would do the work to find out whether somebody is a member of ISIS or not. Essentially, they assume people are ISIS until proven otherwise.

To make matters worse, they have to perform their ‘civilianness’ according to a script recognised and approved by the US military, however misconceived it may be.  In the case of one (now iconic) air strike in Afghanistan, being an adolescent or adult male, travelling in a group, praying at one of the times prescribed by Islam, and carrying a firearm in a society where that is commonplace was enough for civilians to be judged hostile by drone crews and attacked from the air, with dreadful results (see here and here).

This is stunning investigative journalism, but it’s more than that: the two authors are both at Arizona State University, and they have provided one of the finest examples of critical, probing and accessible scholarship I have ever read.

Tracking and targeting

News from Lucy Suchman of a special issue of Science, Technology and Human Values [42 (6) (2017)]  on Tracking and targeting: sociotechnologies of (in)security, which she’s co-edited with Karolina Follis and Jutta Weber.

Here’s the line-up:

Lucy Suchman, Karolina Follis and Jutta Weber: Tracking and targeting

This introduction to the special issue of the same title sets out the context for a critical examination of contemporary developments in sociotechnical systems deployed in the name of security. Our focus is on technologies of tracking, with their claims to enable the identification of those who comprise legitimate targets for the use of violent force. Taking these claims as deeply problematic, we join a growing body of scholarship on the technopolitical logics that underpin an increasingly violent landscape of institutions, infrastructures, and actions, promising protection to some but arguably contributing to our collective insecurity. We examine the asymmetric distributions of sociotechnologies of (in)security; their deadly and injurious effects; and the legal, ethical, and moral questions that haunt their operations.

Karolina Follis: Visions and transterritory: the borders of Europe

This essay is about the role of visual surveillance technologies in the policing of the external borders of the European Union (EU). Based on an analysis of documents published by EU institutions and independent organizations, I argue that these technological innovations fundamentally alter the nature of national borders. I discuss how new technologies of vision are deployed to transcend the physical limits of territories. In the last twenty years, EU member states and institutions have increasingly relied on various forms of remote tracking, including the use of drones for the purposes of monitoring frontier zones. In combination with other facets of the EU border management regime (such as transnational databases and biometrics), these technologies coalesce into a system of governance that has enabled intervention into neighboring territories and territorial waters of other states to track and target migrants for interception in the “prefrontier.” For jurisdictional reasons, this practice effectively precludes the enforcement of legal human rights obligations, which European states might otherwise have with regard to these persons. This article argues that this technologically mediated expansion of vision has become a key feature of post–cold war governance of borders in Europe. The concept of transterritory is proposed to capture its effects.

Christiane Wilke: Seeing and unmaking civilians in Afghanistan: visual technologies and contested professional visions

While the distinction between civilians and combatants is fundamental to international law, it is contested and complicated in practice. How do North Atlantic Treaty Organization (NATO) officers see civilians in Afghanistan? Focusing on a 2009 air strike in Kunduz, this article argues that the professional vision of NATO officers relies not only on recent military technologies that allow for aerial surveillance, thermal imaging, and precise targeting but also on the assumptions, vocabularies, modes of attention, and hierarchies of knowledges that the officers bring to the interpretation of aerial surveillance images. Professional vision is socially situated and frequently contested within communities of practice. In the case of the Kunduz air strike, the aerial vantage point and the military visual technologies cannot fully determine what would be seen. Instead, the officers’ assumptions about Afghanistan, threats, and the gender of the civilian inform the vocabulary they use for coding people and places as civilian or noncivilian. Civilians are not simply “found,” they are produced through specific forms of professional vision.

Jon Lindsay: Target practice: Counterterrorism and the amplification of data friction

The nineteenth-century strategist Carl von Clausewitz describes “fog” and “friction” as fundamental features of war. Military leverage of sophisticated information technology in the twenty-first century has improved some tactical operations but has not lifted the fog of war, in part, because the means for reducing uncertainty create new forms of it. Drawing on active duty experience with an American special operations task force in Western Iraq from 2007 to 2008, this article traces the targeting processes used to “find, fix, and finish” alleged insurgents. In this case they did not clarify the political reality of Anbar province but rather reinforced a parochial worldview informed by the Naval Special Warfare community. The unit focused on the performance of “direct action” raids during a period in which “indirect action” engagement with the local population was arguably more appropriate for the strategic circumstances. The concept of “data friction”, therefore, can be understood not simply as a form of resistance within a sociotechnical system but also as a form of traction that enables practitioners to construct representations of the world that amplify their own biases.

M.C. Elish: Remote split: a history of US drone operations and the distributed labour of war

This article analyzes US drone operations through a historical and ethnographic analysis of the remote split paradigm used by the US Air Force. Remote split refers to the globally distributed command and control of drone operations and entails a network of human operators and analysts in the Middle East, Europe, and Southeast Asia as well as in the continental United States. Though often viewed as a teleological progression of “unmanned” warfare, this paper argues that historically specific technopolitical logics establish the conditions of possibility for the work of war to be divisible into discrete and computationally mediated tasks that are viewed as effective in US military engagements. To do so, the article traces how new forms of authorized evidence and expertise have shaped developments in military operations and command and control priorities from the Cold War and the “electronic battlefield” of Vietnam through the Gulf War and the conflict in the Balkans to contemporary deployments of drone operations. The article concludes by suggesting that it is by paying attention to divisions of labor and human–machine configurations that we can begin to understand the everyday and often invisible structures that sustain perpetual war as a military strategy of the United States.

I’ve discussed Christiane’s excellent article in detail before, but the whole issue repays careful reading.

And if you’re curious about the map that heads this post, it’s based on the National Security Agency’s Strategic Mission List (dated 2007 and published in the New York Times on 2 November 2013), and mapped at Electrospaces: full details here.

Intelligence and War

Vue d’artiste de l’évolution de l’Homme peinte sur un mur (artist’s impression of the evolution of Man painted on a wall), stencil graffiti on Vali-ye-Asr Avenue in central Tehran. By Paul Keller, 4 November 2007

A new edition of the ever-interesting Mediatropes is now online (it’s open access), this time on Intelligence and War: you can access the individual essays (or download the whole issue) here.  Previous issues are all available here.

The issue opens with an editorial introduction (‘Intelligence and War’) by Stuart J Murray, Jonathan Chau, and Twyla Gibson.  And here is Stuart’s summary of the rest of the issue:

Michael Dorland’s “The Black Hole of Memory: French Mnemotechniques in the Erasure of the Holocaust” interrogates the role of memory and memorialization in the constitution of post-World War II France. Dorland homes in on the precarity of a France that grapples with its culpability in the Vel’ d’Hiv Round-up, spotlighting the role of the witness and the perpetually problematized function of testimony as key determinants in challenging both the public memory and the historical memory of a nation.

Sara Kendall’s essay, “Unsettling Redemption: The Ethics of Intra-subjectivity in The Act of Killing” navigates the problematic representation of mass atrocity. Employing Joshua Oppenheimer’s investigation of the Indonesian killings of 1965–1966, Kendall unsettles the documentary’s attempts to foreground the practices of healing and redemption, while wilfully sidestepping any acknowledgment of the structural dimensions of violence. To Kendall, the documentary’s focus on the narratives of the perpetrators, who function as proxies for the state, makes visible the aporia of the film, substituting a framework based on affect and empathy in place of critical political analyses of power imbalances.

Kevin Howley is concerned with the spatial ramifications of drone warfare. In “Drone Warfare: Twenty-First Century Empire and Communications,” Howley examines the battlefield deployment of drones through the lens of Harold Innis’s distinction between time-biased and space-biased media. By considering the drone as a space-biased technology that can transmit information across vast distances, yet only remain vital for short periods of time, Howley sees the drone as emblematic of the American impulse to simultaneously and paradoxically collapse geographical distance while expanding cultural differences between America and other nations.

Avital Ronell’s essay, entitled “BIGLY Mistweated: On Civic Grievance,” takes direct aim at the sitting US president, offering a rhetorical analysis of what she calls “Trumpian obscenity.” Ronell exposes the foundations of the current administration, identifying a government bereft of authority, stitched together by audacity, and punctuated by an almost unfathomable degree of absurdity. In her attempt to make sense of the fundamentally nonsensical and nihilistic discourse that Trump represents, Ronell walks alongside Paul Celan, Melanie Klein, and especially Jacques Derrida, concluding with a suggestive, elusive, and allusive possibility for negotiating the contemporary, Trumpian moment.

In “The Diseased ‘Terror Tunnels’ in Gaza: Israeli Surveillance and the Autoimmunization of an Illiberal Democracy,” Marouf Hasian, Jr. explains how Israel’s state-sanctioned use of autoimmunizing rhetorics depict the lives of Israelis as precarious and under threat. Here, the author’s preoccupation is with the Israeli strategy of rhetorically reconfiguring smuggling tunnels as “terror tunnels” that present an existential threat to Israeli citizens. In doing so, he shows how the non-combatant status of Gazan civilians is dissolved through the intervening effects of these media tropes.

Derek Gregory’s essay, “The Territory of the Screen,” offers a different perspective on drone warfare. Gregory leverages Owen Sheers’s novel, I Saw a Man, to explore the ways in which modern combat is contested through a series of mediating layers, a series of screens through which the United States, as Gregory argues, dematerializes the corporeality of human targets. For Gregory, drone warfare’s facilitation of remote killings is predicated on technical practices that reduce the extinguishing of life to technological processes that produce, and then execute, “killable bodies.”

But how is the increasingly unsustainable illusion of intelligence as being centralized and definitive maintained? Julie B. Wiest’s “Entertaining Genius: U.S. Media Representations of Exceptional Intelligence” identifies the media trope of exceptionally intelligent characters across mainstream film and television programs as key to producing and reinforcing popular understandings of intelligence. Through her analysis of such fictional savants, Wiest connects these patterns of representation to the larger social structures that reflect and reinforce narrowly defined notions of intelligence, and those who are permitted to possess it.

We end this issue with a poem from Sanita Fejzić, who offers a perspective on the human costs of war that is framed not by technology, but through poetic language.

My own essay is a reworked version of the penultimate section of “Dirty Dancing” (DOWNLOADS tab) which we had to cut because it really did stretch the length limitations for Life in the Age of Drone Warfare; so, as Stuart notes, I re-worked it, adding an extended riff on Owen Sheers’ luminous I Saw a Man and looping towards the arguments I have since developed in ‘Meatspace?’

A landscape of interferences

Uruzgan strike (National Bird reconstruction)

[Still image from NATIONAL BIRD © Ten Forward Films; the image is of the film’s re-enactment of the Uruzgan air strike based on the original transcript of the Predator crew’s radio traffic.]

I’ve been reading the chapter in Pierre Bélanger and Alexander Arroyo‘s Ecologies of Power that provides a commentary on what has become the canonical US air strike in Uruzgan, Afghanistan in February 2010 (‘Unmanned Aerial Systems: Sensing the ecology of remote operational environments’, pp. 267-320).  In my own analysis of the strike I emphasised the production of

a de-centralised, distributed and dispersed geography of militarised vision whose fields of view expanded, contracted and even closed at different locations engaged in the administration of military violence. Far from being a concerted performance of Donna Haraway‘s ‘God-trick’ – the ability to see everything from nowhere – this version of networked war was one in which nobody had a clear and full view of what was happening.

Part of this can be attributed to technical issues – the different fields of view available on different platforms, the low resolution of infra-red imagery (which Andrew Cockburn claims registers a visual acuity of 20/200, ‘the legal definition of blindness in the United States’), transmission interruptions, and the compression of full-colour imagery to accommodate bandwidth pressure…

But it is also a matter of different interpretive fields. Peter Asaro cautions:

‘The fact that the members of this team all have access to high-resolution imagery of the same situation does not mean that they all ‘‘see’’ the same thing. The visual content and interpretation of the visual scene is the product of analysis and negotiation among the team, as well as the context given by the situational awareness, which is itself constructed.’

The point is a sharp one: different visualities jostle and collide, and in the transactions between the observers the possibility of any synoptic ‘God-trick’ disappears. But it needs to be sharpened, because different people have differential access to the distributed stream of visual feeds, mIRC and radio communications. Here the disposition of bodies combines with the techno-cultural capacity to make sense of what was happening to fracture any ‘common operating picture’.

Pierre and Alexander’s aim is to ‘disentangle’ the Electromagnetic Environment (EME), ‘the space and time in which communications occur and transmissions take place’, as a Hertzian landscape.  The term is, I think, William J. Mitchell‘s in Me++:

‘Every point on the surface of the earth is now part of the Hertzian landscape – the product of innumerable transmissions and of the reflections and obstructions of those transmissions… The electronic terrain that we have constructed is an intricate, invisible landscape.’

(Other writers – and artists – describe what Anthony Dunne called Hertzian space).

The Hertzian landscape is often advertised – I use the word deliberately – as an isotropic plane.  Here, for example, is how one commercial company describes its activations (and its own product placement within that landscape) in a scenario that, in part, parallels the Uruzgan strike:

A bobcat growls over the speaker, and Airmen from the 71st Expeditionary Air Control Squadron [at Al Udeid Air Base in Qatar] spring into action within the darkened confines of the Battlespace Command and Control (C2) Center, better known as ‘Pyramid Control.’

This single audio cue alerts the Weapons Director that an unplanned engagement with a hostile force – referred to as Troops in Contact, or TIC – has occurred somewhere in Afghanistan. On the Weapons Director’s computer monitor a chat room window flashes to distinguish itself from the dozens of rooms he monitors continuously.

More than a thousand kilometers away, a Joint Terminal Attack Controller on the ground has called for a Close Air Support (CAS) aircraft to assist the friendly forces now under assault. The Weapons Director has minutes to move remotely piloted vehicles away from the CAS aircraft’s flight path, to de-conflict the air support and ground fire from other aircraft, and to provide an update on hostile activity to all concerned.

The Weapons Director has numerous communication methods at his disposal, including VoIP and tactical radio to quickly get the critical information to operators throughout southwest Asia and across the world, including communicating across differently classified networks. This enables key participants to assess the situation and to commence their portions of the mission in parallel.

You can find the US military’s view of the 71st here – it called the Squadron, since deactivated, its ‘eyes in the sky’ – and on YouTube here.


In practice, the Hertzian landscape is no isotropic plane.  It’s heterogeneous in space and inconstant in time, and it has multiple, variable and even mobile terrestrial anchor points: some highly sophisticated and centralised (like the Combined Air Operations Center at Al Udeid), others improvisational, even jerry-rigged (see above), and yet others wholly absent (in the Uruzgan case the Joint Terminal Attack Controller with the Special Forces Detachment had no ROVER, a militarized laptop, and so he was unable to receive the video stream from the Predator).

Pierre and Alexander provide an ‘inventory of interferences’ that affected the Uruzgan strike:

‘Saturating the battlefield with multiple electro-magnetic signals from multiple sources, a Hertzian landscape begins to emerge in relief.  In this sense, it is interference – rather than clarity of signal – that best describes a synoptic and saturated environment according to the full repertoire of agencies and affects through which it is dynamically composed, transformed and reconstituted.’ (p. 276)

In fact, they don’t work with the ‘full repertoire of agencies’ because, like most commentators, their analysis is confined to the transcript of radio communications between the aircrews tracking the vehicles and the Joint Terminal Attack Controller on the ground.  Although this excludes testimony from the ground staff in superior command posts (‘operations centres’) in Kandahar and Bagram and from those analysing the video feeds in the continental United States, these actors were subject to the same interferences: but their effects were none the less different.  The catastrophic air strike, as Mitchell almost said, was ‘smeared across multiple sites’… a ‘smearing’ because the time and space in which it was produced was indistinct and inconstant, fractured and febrile.

Here, in summary form, are the interferences Pierre and Alexander identify, an inventory which they claim ‘renders the seemingly invisible and neutral space of the electromagnetic environment extremely social and deeply spatial’ (p. 319).  It does that for sure, but the exchanges they extract from the transcripts do not always align with the general interferences they enumerate – and, as you’ll see, I’m not sure that all of them constitute ‘interferences’.



(1) Thermal interference:  The Predator started tracking the three vehicles while it was still dark and relied on infrared imagery to do so (so did the AC-130 which preceded them: see the images above).  Movement turns out to be ‘the key signature that differentiates an intensive landscape of thermal patterns into distinct contours and forces’, but it was not only the movement of the vehicles that mattered.  The crew also strained to identify the occupants of the vehicles and any possible weapons – hence the Sensor Operator’s complaint that ‘the only way I’ve ever been able to see a rifle is if they move them around when they’re holding them’ –  and the interpretation of the imagery introduced ‘novel semiotic complexities, discontinuities and indeterminacies’ (p. 280).

(2) Temporal interference: Times throughout the radio exchanges were standardised to GMT (‘Zulu time’), though this was neither the time at Creech Air Force Base in Nevada (-8 hours) nor in Uruzgan (+4 1/2 hours).  Hence all of those involved were juggling between multiple time zones, and the Sensor Operator flipped between IR and ‘full Day TV’.   ‘Yet this technical daylighting of the world [the recourse to Zulu time] is not always a smooth operation, always smuggling back in local, contingent temporalities into universal time from all sides’ (p. 281).


(3) Electromagnetic interference: The participants were juggling multiple forms of communication too – the troops on the ground used multi-band radios (MBITRs), for example, while the aircrew had access to secure military chatrooms (mIRC) to communicate with bases in the continental United States and in Afghanistan and with other aircraft but not with the troops on the ground, while the screeners analysing the video stream had no access to the radio communications between the Ground Force Commander and the Predator crew – and the transcripts reveal multiple occasions when it proved impossible to maintain ‘multiple lines of communication across the spectrum against possible comms failure.’  But this was not simply a matter of interruption: it was also, crucially, a matter of information in one medium not being made available in another (though at one point, long before the strike, the Predator pilot thought he was on the same page as the screeners: ‘I’ll make a radio call and I’ll look over [at the chatroom] and they will have said the same thing.’)

(4) Informational interference:  The transcript reveals multiple points of view on what was being seen – and once the analysis is extended beyond the transcript to those other operations centres the information overload (sometimes called ‘helmet fire’) is compounded.

(5) Altitudinal, meteorologic interference:  The Predator’s altitude was not a constant but was changed to deconflict the airspace as other aircraft were moved into and out of the area; those changes were also designed to improve flight operations (remote platforms are notoriously vulnerable to changing weather conditions) and image quality.  There were thus ‘highly choreographed negotiations of and between contingently constituted spatial volumes – airspace – and [electro-magnetic] spectral spaces, both exploiting and avoiding the thickened electromagnetic atmospheres of communications systems and storms alike’ (p. 288).

(6) Sensorial interference:  When two strike aircraft (‘fast movers’) were sent to support the Special Forces, the Ground Force Commander ordered them out of the area in case they ‘burn’ (warn) the target; similarly, the OH-58 helicopters did not move in ‘low and slow’ to observe the three vehicles more closely in case that alerted their occupants.

 ‘While the acoustic space of [the Predator] personnel is characterised by speech and static, the occupation of spectral space generates another acoustic space for surface-bound targets of surveillance.  Each aircraft bears a particular acoustic signature … [and] in the absence of visual contact the whines, whirs and wails of encroaching aircraft warn targets of the content of communications… These disparate acoustic spaces reveal the asymmetry of sensory perception and heightened awareness between the graphic (visual) and acoustic channels’ (p. 289).

[Still images: ‘burning the target’]

That asymmetry was accentuated because, as Nasser Hussain so brilliantly observed, the video feeds from the Predator were silent movies: none of those watching had access to the conversations between the occupants of the vehicles, and the only soundtrack was provided by those watching from afar.

(7) Orbital interference:  The crowded space of competing communications requires ‘specific orbital coordinations between patterns of  “orbiting” (circling) aircraft and satellites’ (p. 292), but this is of necessity improvisational, involving multiple relays and frequently imperfect – as this exchange cited by Pierre and Alexander indicates (it also speaks directly to (3) above):

02:27 (Mission Intelligence Coordinator MIC): Alright we need to relay that.

02:27 (Pilot): Jag that Serpent 12 can hear Fox 24 on sat in (muffled) flying

02:27 (Pilot): Jag 25 [JTAC on the ground], Kirk97 [Predator callsign]

02:27 (Unknown):..Low thirties, I don’t care if you burn it

02:27 (Sensor): “I don’t care if you burn it”? That really must have been the other guys talking [presumably the ‘fast movers’]

02:27 (JAG 25): Kirk 97, Jag 25

02:28 (Pilot): Kirk 97, go ahead

02:28 (Pilot): Jag 25, Kirk 97

02:28 (JAG 25):(static) Are you trying to contact me, over?

02:28 (Pilot): Jag 25, Kirk97, affirm, have a relay from SOTF KAF [Special Operations Task Force at Kandahar Airfield] fires [Fires Officer], he wants you to know that he uhh cannot talk on SAT 102. Serpent 12 can hear Fox 24 on SATCOM, and is trying to reply. Also, the AWT [Aerial Weapons Team] is spooling up, and ready for the engagement. How copy?

02:28 (JAG 25): Jag copies all

02:28 (Pilot): K. Good.

02:29 (Pilot): Can’t wait till this actually happens, with all this coordination and *expletive*

(agreement noises from crew)

02:29 (Pilot): Thanks for the help, you’re doing a good job relaying everything in (muffled), MC. Appreciate it

(8) Semantic interference:  To expedite communications the military relies on a series of acronyms and shorthands (‘brevity codes’), but as these proliferate they can obstruct communication and even provoke discussion about their meaning and implication (hence the Mission Intelligence Controller: ‘God, I forget all my acronyms’); sometimes, too, non-standard terms are introduced that add to the confusion and uncertainty.

(9) Strategic, tactical interference:  Different aerial platforms have different operational envelopes and these both conform to and extend ‘a strategic stratigraphy of airspace and spectral space alike’ (p. 296).  I confess I don’t see how this constitutes ‘interference’.

(10) Occupational interference:  The knowledge those viewing the Full Motion Video feeds bring to the screen is not confined to their professional competences but extends into vernacular knowledges (about the identification of the three vehicles, for example): ‘The casual fluency with which particular visual signals are discussed, interpreted and mined for cultural information shows a broad base of vernacular technical knowledge’ (p. 297).  The example Pierre and Alexander give relates to a discussion over the makes of the vehicles they are tracking, but again I don’t see how this constitutes ‘interference’ – unless that vernacular knowledge collides with professional competences.  The most obvious examples of such a collision are not technical at all but reside in the assumptions and prejudices the crew brought to bear on the actions of those they were observing.  Some were ostensibly tactical – the investigation report noted that the crew ‘made or changed key assessments [about the intentions of those they were observing] that influenced the decision to destroy the vehicles’ and yet they had ‘neither the training nor the tactical expertise to make these assessments’ – while others were cultural (notably, a marked Orientalism).

(11) Physiological interference:  Here Pierre and Alexander cite the corporeality of those operating the Predator: the stresses of working long shifts (and the boredom), the rest breaks that interrupt the ‘unblinking stare’, and the like.

(12) Organizational interference:   At one point the Sensor Operator fantasised about having ‘a whole fleet of Preds up here… ripple firing missiles right and left’  but – seriously, ironically, grumpily: who knows? – adds ‘we’re not killers, we are ISR.’


Pierre and Alexander see a jibing of these two missions (though whether that justifies calling this ‘interference’ is another question): ‘Despite the blurry, hairline differences between [Intelligence, Surveillance and Reconnaissance] and kill-chain operations, the ontologies of informational and kinetic environments make for different occupational worlds altogether’ (p. 301).  I’m not sure about that; one of the key roles of Predators – as in this case – has been to mediate strikes carried out by other aircraft, and while those mediations are frequently complicated and fractured (as Pierre and Alexander’s inventory shows) I don’t think this amounts to occupying ‘different occupational worlds’ let alone provoking ‘interference’ between them.

(13) Geographic, altitudinal interference:  This refers to the problems of a crowded airspace and the need for deconfliction (hence the pilot’s call: ‘I got us new airspace so even if they do keep heading west we can track them’).

(14) Cognitive interference: Remote operations are characterised by long, uneventful periods of watching the screen interrupted by shorter periods of intense, focused strike activity – a cyclical process that Pierre and Alexander characterise as an ‘orbital tension of acceleration and deceleration [that] lies at the heart of the killchain’ that profoundly affects ‘cognitive processing in and of the volatile operational environment’ (p. 305).  For them, this is epitomised when the Mission Intelligence Coordinator typed ‘Killchain’ into mIRC and immediately cleared the chat window for all but essential, strike-related communications.

(15) Topographic, organizational interference: Pierre and Alexander claim that ‘the complex relief of the ground, that is terrain and topography, is magnified in remote-split operations’ – this is presumably a reference to the restricted field of view of those flying the platforms – and that this is paralleled by the different levels of command and control to which the crews are required to respond: ‘navigating competing command pyramids is taken in stride with maneuvering around mountains’  (p. 308).  These are important observations, but I don’t see what is gained by the juxtaposition; in the Uruzgan case the Predator was navigating mountainous terrain  (‘You got a mountain coming into view,’ the Safety Observer advises, ‘keep it in a turn’) but the crew was not responding to directives from multiple operations centres.  In fact, that was part of the problem: until the eleventh hour staff officers were content to watch and record but made no attempt to intervene in the operation.

(16) Demographic, physiologic interference:  Here Pierre and Alexander cite both the composition of the crews operating the remote platforms – predominantly young white men who, so they say, exhibit different inclinations to those of ‘conventional’ Air Force pilots – and the repeated identification of the occupants of the suspect vehicles as ‘Military-Aged Males’ (‘statistical stereotyping’) (p. 309).


[Still image from NATIONAL BIRD © Ten Forward Films]

(17) Motile interference: Pierre and Alexander treat the crew’s transition from a gung-ho desire to strike and an absolute confidence in target identification to confusion and disquiet once the possibility of civilian casualties dawns on them as a disjunctive moment in which they struggle to regain analytical and affective control: ‘The revelation of misinterpretation exposes the persistence of interference all along, and generates its own form of cognitive shock’ (p. 312).  This feeds directly into:

(18) Operational, ecological interference:  As the crew absorbed new information from the pilots of the attack helicopters about the presence of women and children in the vehicles they registered the possibility of a (catastrophic) mistake, and so returned to their ISR mission – taking refuge in their sensors, what they could and could not have seen, and bracketing the strike itself – in an attempt to screen out the discordant information: ‘The optic that initially occasioned the first identifiable instances of misinterpretation is re-activated as a kind of prosthetic inducer of cognitive distance’ (p. 313).  The exchange below (beautifully dissected by Lorraine de Volo) captures this almost therapeutic recalibration perfectly:


(19) Political, epistemological interference:  Here the target is the cascade of redactions that runs through the unclassified version of the transcripts (and, by extension, the investigation report as a whole).  ‘That redaction and the strategic project it serves – secrecy in the form of classification – is not necessarily deployed electromagnetically does not mean its effects are limited to analog media’ since the objective is to command and control a whole ‘ecology of communication’ (p. 316) (see my posts here and here).

This inventory is derived from a limited set of transactions, as I’ve said, but it’s also limited by the sensing and communication technology that was available to the participants at the time, so some caution is necessary in extrapolating these findings.  But the general (and immensely important) argument Pierre and Alexander make is that the catastrophic strike cannot be attributed to ‘miscommunication’ – or at any rate, not to miscommunication considered as somehow apart from and opposed to communication.  Hence their focus on interference:

‘Defined by moments of incoherence or interruption of a dominant signal that is itself a form of interference, interferences can take on different and often banal forms such as radio static, garbled signals, forgotten acronyms, misread gestures or even time lapses, which in the remote operational theaters of military missions result in disastrous actions.  Moreover, interference indexes the common media, forms, processes, and spaces connecting apparently disparate communication and signals across distinct material and operational environments.

In this sense, interference is not a subversion of communication but rather a constitutive and essential part of it.  Interference is thus both inhibitor and instigator.  Interference makes lines of communication read, alternatively, as field of interactions.  In this expanded field, interference may complexify by cancelling out communications, blocking or distorting signals, but conversely it may also amplify and augment both the content of sensed information and sensory receptions of the environment of communications.  Interference is what makes sensing ecologies make sense.’ (p. 318)

They also emphasise, more than most of us, that the ‘networks’ that enable drone strikes are three-dimensional (so reducing them to a planar map does considerable violence to the violence), that the connections and communications on which they rely are imperfect and inconstant in time and space, and that these extend far beyond any conventional (or even unconventional) ‘landscape’.  In general, I think, the critical analysis of drone warfare needs to be thickened in at least two directions: to address what happens on the ground, including the preparation of the ground, so to speak; and to reconstruct the fraught geopolitics of satellite communications and bandwidth that so materially shapes what is seen and not seen and what is heard and not heard.  More to come on both.