The eyes have it…

The Disorder of Things has hosted a symposium on Antoine Bousquet’s The Eye of War: military perception from the telescope to the drone (Minnesota UP, 2018).  Antoine’s introduction is here.

There were four other participants, and below I’ve linked to their commentaries and snipped extracts to give you a sense of their arguments: but it really is worth reading them in full.

Kate Hall, ‘Linear Perspective, the Modern Subject, and the Martial Gaze’

For Bousquet this future of globalised targeting that the birth of linear perspective has brought us to throws the role of the human into question. With the move of perception into the realm of the technical, Bousquet sees that perception has become a process without a subject, and as human agency is increasingly reduced, so too is the possibility for politics – leading, perhaps much like the concerns of the Frankfurt School, to passivity and a closing of the space of critique. For Bousquet the figure that captures this positioning or transformation of the human, and the image that ends the book, is the bomber instructor recording aircraft movement within a dark camera obscura tent. As Bousquet concludes, “…the camera obscura’s occupant is both a passive object of the targeting process and an active if compliant agent tasked with the iterative process and optimization of its performance. Perhaps this duality encapsulates the martial condition we inhabit today, caught between our mobilization within the circulatory networks of the logistics of perception and the roving crosshairs of a global imperium of targeting – and all watched over by machines of glacial indifference.”

If this is the figure that encapsulates the condition of the present, Bousquet has shown in Eye of War how its foundations are found in the early modern period. And in tracing this history, it is clear the future does not look promising for humans (both as passive subjects and as objects of lethal surveillance). But Bousquet does not give us a sense of how we might change course. Eye of War does not ask, where is the space for politics in this analysis of the present?

Dan Öberg, ‘Requiem for the Battlefield’

While the culminating battle of the Napoleonic wars, Waterloo, was fought at a battlefield where 140,000 men and 400 guns were crammed into an area of roughly 3.5 miles, the latter half of the 19th century becomes characterised by the dispersal and implosion of the battlefield. As Bousquet has directed our attention to in his work, after the birth of modern warfare the battlefield dissolves due to the increased range of weapons systems. Its disappearance is also facilitated by how the military logistics of perception conditions the appearances of targets, particularly through how the “eye of war” manages to move from the commander occupying a high-point next to the field of battle, to being facilitated by balloons, binoculars, aerial reconnaissance, satellites, algorithms, and cloud computing. It is as part of this process that we eventually reach the contemporary era where targeting is characterised by polar inertia, as targets arrive as digital images from anywhere on the globe in front of a stationary targeteer. However, I would like to argue that, parallel to this, there is a corresponding process taking place, which erases and remodels the battlefield as a result of the military disposition that is born with the operational dimension of warfare.

To grasp this disposition and its consequences we need to ponder the fact that it is no coincidence that the operational dimension emerges at precisely the time when the traditional battlefield is starting to disappear. As The Eye of War outlines, global targeting is enabled by a logistics of perception. However, the demand for maps and images as well as the attempts to make sense of the battlefield arguably receives its impetus and frame of reference from elsewhere. It finds its nexus in standard operating procedures, regulations, instructions and manuals, military working groups, administrative ideals, organisational routines, and bureaucratic rituals. And, as the battlefield is managed, coded, and homogenised, it simultaneously starts to become an external point of reference, enacted through operational analysis and planning far from the battlefield itself.

Matthew Ford, ‘Totalising the State through Vision and War’

The technologies of vision that Antoine describes emerge from and enable the political and military imaginaries that inspired them. The technological fix that this mentality produces is, however, one that locks military strategy into a paradox that privileges tactical engagement over identifying political solutions. For the modern battlefield is a battlefield of fleeting targets, where speed and concealment reduce the chance of being attacked and create momentary opportunities to produce strategic effects (Bolt, 2012). The assemblages of perspective, sensing, imaging and mapping described in The Eye of War may make it possible to anticipate and engage adversaries before they can achieve these effects, but by definition they achieve these outcomes at the tactical level.

The trap of the martial gaze is, then, twofold. On the one hand, by locking technologies of vision into orientalist ways of seeing, strategies that draw on these systems tend towards misrepresenting adversaries in a manner that finds itself being reproduced in military action. At the same time, in an effort to deliver decisive battle, the state has constructed increasingly exquisite military techniques. These hold out the prospect of military success but only serve to further atomise war down tactical lines as armed forces find more exquisite ways to identify adversaries and adversaries find more sophisticated ways to avoid detection. The result is that the military constructs enemies according to a preconceived calculus and fights them in ways that at best manage outcomes but at worst struggle to deliver political reconciliation.

Jairus Grove, ‘A Martial Gaze Conscious of Itself’

If we take the assemblage and the more-than-human approach of Bousquet’s book seriously, which I do, then we ought not believe that the dream of sensing, imaging, mapping, and targeting ends with the intact human individual. As an early peek at what this could become, consider Bousquet’s review of the late 1970s research on ‘cognitive cartography’ and the concern that human technology would need to be altered to truly take advantage of the mapping revolution. More than the development of GIS and other targeting technologies, the dream of cognitive mapping and conditioning was to manage the complex informatics of space and the human uses of it from the ground up – that is, in the making of user-friendly human subjects. One can imagine targeting following similar pathways. The “martial gaze that roams our planet” will not be satisfied with the individual any more than it was satisfied with the factory, the silo, the unit, or the home.

The vast data revolutions in mapping individual and collective behavior utilized in the weaponization of fake and real news, marketing research, fMRI advances and brain mapping, as well as nanodrones, directed energy weapons, and on and on, suggest to me that just as there has never been an end of history for politics, or for that matter war, there will be no end of history or limit to what the martial gaze dreams of targeting. I can imagine returns to punishment where pieces of the enemy’s body are taken. Jasbir Puar’s work on debility (see our recent symposium) already suggests such a martial vision of the enemy at play in the new wars of the 21st century. Following the long tails of Bousquet’s machinic history, I can further imagine the targeting of ideas and behaviors for which ‘pattern-of-life’ targeting and gait analysis are only crude and abstract prototypes.

If we, like the machines we design, are merely technical assemblages, then the molecularization of war described by Bousquet is not likely to remain at the level of the intact human, as if individuals were the martial equivalent of Planck’s quanta of energy. The martial gaze will want more unless fundamentally interrupted by other forces of abstraction and concretization.

Antoine’s response is here.

Lots to think about here for me – especially since one of my current projects on ‘woundscapes’ (from the First World War through to the present) is located at the intersection of the military gaze (‘the target’) and the medical gaze (‘the wound’) but rapidly spirals beyond these acutely visual registers, as it surely must….  More soon!

Googled

Following up my post on Google and Project Maven here, there’s an open letter (via the International Committee for Robot Arms Control) in support of Google employees opposed to the tech giant’s participation in Project Maven here: it’s open for (many) more signatures…

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power. This has shone only one spotlight on the increasingly high stakes of information technology infrastructures, and the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.
We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.

  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.

  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

Google eyes

The Oxford English Dictionary recognised ‘google’ as a verb in 2006, and its active form is about to gain another dimension.  One of the most persistent anxieties amongst those executing remote warfare, with its extraordinary dependence on (and capacity for) real-time full motion video surveillance as an integral moment of the targeting cycle, has been the ever-present risk of ‘swimming in sensors and drowning in data‘.

But now Kate Conger and Dell Cameron report for Gizmodo on a new collaboration between Google and the Pentagon as part of Project Maven:

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.
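None of Maven’s code is public, but the generic pipeline described here – a pretrained object detector run over drone video, with confident detections flagged for a human analyst – can be sketched with the open-source TensorFlow tools Google says it supplied (see its statement below). This is a minimal illustration only, assuming the publicly documented SSD MobileNet v2 detector on TensorFlow Hub; the video path, confidence threshold and ‘review’ hand-off are hypothetical stand-ins:

```python
# Minimal sketch, NOT Maven's code: run a pretrained TensorFlow object
# detector over video frames and flag confident detections for human review.
# Assumptions: the publicly documented SSD MobileNet v2 model on TensorFlow
# Hub; the video path, threshold and print-out are illustrative stand-ins.
import cv2                    # pip install opencv-python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub  # pip install tensorflow tensorflow-hub

DETECTOR_URL = "https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2"
SCORE_THRESHOLD = 0.5         # flag anything the model scores above 50%

detector = hub.load(DETECTOR_URL)

def flag_frames(video_path: str):
    """Yield (frame_index, boxes, scores, classes) for frames needing review."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # The model expects a uint8 batch of shape [1, height, width, 3], RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
        result = detector(batch)
        scores = result["detection_scores"][0].numpy()
        keep = scores >= SCORE_THRESHOLD
        if keep.any():
            yield (index,
                   result["detection_boxes"][0].numpy()[keep],  # [ymin, xmin, ymax, xmax]
                   scores[keep],
                   result["detection_classes"][0].numpy()[keep].astype(int))
        index += 1
    capture.release()

if __name__ == "__main__":
    # Hypothetical footage file; each flagged frame is handed to a human.
    for frame_index, boxes, scores, classes in flag_frames("footage.mp4"):
        print(f"frame {frame_index}: {len(scores)} detections -> human review")
```

Everything consequential in Maven – the classified footage, its 38 object categories, the tracking of individuals between locations – lies well outside a sketch like this.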

Google has reportedly attempted to allay fears about its involvement:

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”


As Mehreen Kasana notes, Google has indeed ‘long worked with government agencies’:

A 2017 report in Quartz shed light on the origins of Google and how a significant amount of funding for the company came from the CIA and NSA for mass surveillance purposes. Time and again, Google’s funding raises questions. In 2013, a Guardian report highlighted Google’s acquisition of the robotics company Boston Dynamics, and noted that most of the projects were funded by the Defense Advanced Research Projects Agency (DARPA).

‘We’re not in Kansas anymore’

Today’s Guardian has a report from Roy Wenzl called ‘The kill-chain: inside the unit that tracks targets in America’s drone wars’.  There’s not much there that won’t be familiar to regular readers, but the focus is not on the pilots and sensor operators but on the screeners – the analysts who scrutinise the full-motion video feeds from the drones to provide ISR (intelligence, surveillance and reconnaissance).

The report describes the work of the 184th Intelligence Wing of the Air National Guard at McConnell AFB in Kansas:

‘They video-stalk enemy combatants, and tell warfighters what they see… The group does this work in the middle of America, at an air base surrounded by flat cow pastures and soybean fields….

The work is top secret. They say that they see things in those drone images that no one wants to see. Sometimes, it’s terrorists beheading civilians. Sometimes it’s civilians dying accidentally in missions that the Kansans help coordinate.

They agonize over those deaths. The most frequently heard phrase in drone combat, one airman says, is: “Don’t push the button.”

“You see [enemy combatants] kiss their kids goodbye, and kiss their wives goodbye, and then they walk down the street,” said a squadron chief master sergeant. “As soon as they get over that hill, the missile is released.”

The Americans wait to fire, he says, “because we don’t want the family to see it”.

One of those involved marvels at the technology: ‘The technology we use is just insane, it’s so good.’  As the report notes, critics of the programme have a more literal meaning of insanity in their minds….

The report also confirms the intensity (and, as part of that intensity, the tedium) of the shift-work involved:

Back in Kansas, in the SCIF (Sensitive Compartmented Intelligence Facility), members of Col Brad Hilbert’s group watch dozens of screens. One eight-hour shift will watch multiple targets, then hand off surveillance to the next shift. Multiple missions run simultaneously.

While enemy combatants walk around carrying weapons, the group studies their movements. They can watch one person, or one building, or one small neighborhood. The drones loiter high and unseen, giving clear, hi-tech visuals….

Most of what they watch is tedious. “They will sometimes watch one pile of sand every day for a month,” their chaplain says.

But sometimes, they see that an enemy is about to attack US troops. The commanders decide to “neutralize” him. When commanders order attacks, the Kansans become one link in a kill chain, which can include armed Reaper and Predator drone operators, fighter pilots, ground artillery commanders – and commanders with authority to approve or deny strikes.

Those who don’t count and those who can’t count

An excellent article from the unfailing New York Times in a recent edition of the Magazine: Azmat Khan and Anand Gopal on ‘The Uncounted‘, a brilliant, forensic and – crucially – field-based investigation into civilian casualties in the US air war against ISIS:

American military planners go to great lengths to distinguish today’s precision strikes from the air raids of earlier wars, which were carried out with little or no regard for civilian casualties. They describe a target-selection process grounded in meticulously gathered intelligence, technological wizardry, carefully designed bureaucratic hurdles and extraordinary restraint. Intelligence analysts pass along proposed targets to “targeteers,” who study 3-D computer models as they calibrate the angle of attack. A team of lawyers evaluates the plan, and — if all goes well — the process concludes with a strike so precise that it can, in some cases, destroy a room full of enemy fighters and leave the rest of the house intact.

The coalition usually announces an airstrike within a few days of its completion. It also publishes a monthly report assessing allegations of civilian casualties. Those it deems credible are generally explained as unavoidable accidents — a civilian vehicle drives into the target area moments after a bomb is dropped, for example. The coalition reports that since August 2014, it has killed tens of thousands of ISIS fighters and, according to our tally of its monthly summaries, 466 civilians in Iraq.

What Azmat and Anand found on the ground, however, was radically different:

Our own reporting, conducted over 18 months, shows that the air war has been significantly less precise than the coalition claims. Between April 2016 and June 2017, we visited the sites of nearly 150 airstrikes across northern Iraq, not long after ISIS was evicted from them. We toured the wreckage; we interviewed hundreds of witnesses, survivors, family members, intelligence informants and local officials; we photographed bomb fragments, scoured local news sources, identified ISIS targets in the vicinity and mapped the destruction through satellite imagery. We also visited the American air base in Qatar where the coalition directs the air campaign. There, we were given access to the main operations floor and interviewed senior commanders, intelligence officials, legal advisers and civilian-casualty assessment experts. We provided their analysts with the coordinates and date ranges of every airstrike — 103 in all — in three ISIS-controlled areas and examined their responses. The result is the first systematic, ground-based sample of airstrikes in Iraq since this latest military action began in 2014.

We found that one in five of the coalition strikes we identified resulted in civilian death, a rate more than 31 times that acknowledged by the coalition. It is at such a distance from official claims that, in terms of civilian deaths, this may be the least transparent war in recent American history [my emphasis].  Our reporting, moreover, revealed a consistent failure by the coalition to investigate claims properly or to keep records that make it possible to investigate the claims at all. While some of the civilian deaths we documented were a result of proximity to a legitimate ISIS target, many others appear to be the result simply of flawed or outdated intelligence that conflated civilians with combatants. In this system, Iraqis are considered guilty until proved innocent. Those who survive the strikes …  remain marked as possible ISIS sympathizers, with no discernible path to clear their names.

They provide immensely powerful, moving case studies of innocents ‘lost in the wreckage’.  They also describe the US Air Force’s targeting process at US Central Command’s Combined Air Operations Center (CAOC) at Al Udeid Air Base in Qatar (the image above shows the Intelligence, Surveillance and Reconnaissance Division at the CAOC, which ‘provides a common threat and targeting picture’):

The process seemed staggeringly complex — the wall-to-wall monitors, the soup of acronyms, the army of lawyers — but the impressively choreographed operation was designed to answer two basic questions about each proposed strike: Is the proposed target actually ISIS? And will attacking this ISIS target harm civilians in the vicinity?

As we sat around a long conference table, the officers explained how this works in the best-case scenario, when the coalition has weeks or months to consider a target. Intelligence streams in from partner forces, informants on the ground, electronic surveillance and drone footage. Once the coalition decides a target is ISIS, analysts study the probability that striking it will kill civilians in the vicinity, often by poring over drone footage of patterns of civilian activity. The greater the likelihood of civilian harm, the more mitigating measures the coalition takes. If the target is near an office building, the attack might be rescheduled for nighttime. If the area is crowded, the coalition might adjust its weaponry to limit the blast radius. Sometimes aircraft will even fire a warning shot, allowing people to escape targeted facilities before the strike. An official showed us grainy night-vision footage of this technique in action: Warning shots hit the ground near a shed in Deir al-Zour, Syria, prompting a pair of white silhouettes to flee, one tripping and picking himself back up, as the cross hairs follow.

Once the targeting team establishes the risks, a commander must approve the strike, taking care to ensure that the potential civilian harm is not “excessive relative to the expected military advantage gained,” as Lt. Col. Matthew King, the center’s deputy legal adviser, explained.

After the bombs drop, the pilots and other officials evaluate the strike. Sometimes a civilian vehicle can suddenly appear in the video feed moments before impact. Or, through studying footage of the aftermath, they might detect signs of a civilian presence. Either way, such a report triggers an internal assessment in which the coalition determines, through a review of imagery and testimony from mission personnel, whether the civilian casualty report is credible. If so, the coalition makes refinements to avoid future civilian casualties, they told us, a process that might include reconsidering some bit of intelligence or identifying a flaw in the decision-making process.

There are two issues here.  First, this is indeed the ‘best-case scenario’, and one that very often does not obtain.  One of the central vectors of counterinsurgency and counterterrorism is volatility: targets are highly mobile and often the ‘window of opportunity’ is exceedingly narrow.  I’ve reproduced this image from the USAF’s own targeting guide before, in relation to my analysis of the targeting cycle for a different US air strike against IS in Iraq in March 2015, but it is equally applicable here:

Second, that ‘window of opportunity’ is usually far from transparent, often frosted and frequently opaque.  For what is missing from the official analysis described by Azmat and Anand turns out to be the leitmotif of all remote operations (and there is a vital sense in which all forms of aerial violence are ‘remote’, whether the pilot is 7,000 miles away or 30,000 feet above the target [see for example here]):

Lt. Gen. Jeffrey Harrigian, commander of the United States Air Forces Central Command at Udeid, told us what was missing. “Ground truth, that’s what you’re asking for,” he said. “We see what we see from altitude and pull in from other reports. Your perspective is talking to people on the ground.” He paused, and then offered what he thought it would take to arrive at the truth: “It’s got to be a combination of both.”

The military view, perhaps not surprisingly, is that civilian casualties are unavoidable but rarely intentional:

Supreme precision can reduce civilian casualties to a very small number, but that number will never reach zero. They speak of every one of the acknowledged deaths as tragic but utterly unavoidable.

Azmat and Anand reached a numbingly different conclusion: ‘Not all civilian casualties are unavoidable tragedies; some deaths could be prevented if the coalition recognizes its past failures and changes its operating assumptions accordingly. But in the course of our investigation, we found that it seldom did either.’

Part of the problem, I suspect, is that whenever there is an investigation into reports of civilian casualties that may have been caused by US military operations it must be independent of all other investigations and can make no reference to them in its findings; in other words, as I’ve noted elsewhere, there is no ‘case law’: bizarre but apparently true.

But that is only part of the problem.  The two investigators cite multiple intelligence errors (‘In about half of the strikes that killed civilians, we could find no discernible ISIS target nearby. Many of these strikes appear to have been based on poor or outdated intelligence’) and even errors and discrepancies in recording and locating strikes after the event.

It’s worth reading bellingcat’s analysis here, which also investigates the coalition’s geo-locational reporting and notes that the official videos ‘appear only to showcase the precision and efficiency of coalition bombs and missiles, and rarely show people, let alone victims’.  The image above, from CNN, is unusual in showing the collection of the bodies of victims of a US air strike in Mosul, this time in March 2017; the target was a building from which two snipers were firing; more than 100 civilians sheltering there were killed.  The executive summary of the subsequent investigation is here – ‘The Target Engagement Authority (TEA) was unaware of and could not have predicted the presence of civilians in the structure prior to the engagement’ – and a report from W.J. Hennigan and Molly Hennessy-Fiske is here.

Included in bellingcat’s account is a discussion of a video which the coalition uploaded to YouTube and then deleted; Azmat retrieved and archived it – the video shows a strike on two buildings in Mosul on 20 September 2015 that turned out to be central to her investigation with Anand:

The video caption identifies the target as a ‘VBIED [car bomb] facility’.  But Bellingcat asks:

Was this really a “VBIED network”? Under the original upload, a commenter started posting that the houses shown were his family’s residence in Mosul.

“I will NEVER forget my innocent and dear cousins who died in this pointless airstrike. Do you really know who these people were? They were innocent and happy family members of mine.”

Days after the strike, Dr Zareena Grewal, a relative living in the US, wrote in the New York Times that four family members had died in the strike. On April 2, 2017 – 588 days later – the Coalition finally admitted that it had indeed bombed a family home which it had confused for an IS headquarters and VBIED facility.

“The case was brought to our attention by the media and we discovered the oversight, relooked [at] the case based on the information provided by the journalist and family, which confirmed the 2015 assessment,” Colonel Joe Scrocca, Director of Public Affairs for the Coalition, told Airwars.

Even though the published strike video actually depicted the killing of a family, it remained – wrongly captioned – on the official Coalition YouTube channel for more than a year.

This is but one, awful example of a much wider problem.  The general conclusion reached by Azmat and Anand is so chilling it is worth re-stating:

According to the coalition’s available data, 89 of its more than 14,000 airstrikes in Iraq have resulted in civilian deaths, or about one of every 157 strikes. The rate we found on the ground — one out of every five — is 31 times as high.
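The arithmetic behind that comparison is worth setting out explicitly (taking the coalition’s ‘more than 14,000’ airstrikes as a round 14,000):

$$\frac{89}{14{,}000} \approx \frac{1}{157} \;\;\text{(official rate)}, \qquad \frac{1/5}{1/157} = \frac{157}{5} \approx 31 \;\;\text{(ground rate vs. official rate)}.$$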

One of the houses [shown above] mistakenly identified as a ‘VBIED facility’ in that video belonged to Basim Razzo, and he became a key informant in Azmat and Anand’s investigation; he was subsequently interviewed by Amy Goodman: the transcript is here. She also interviewed Azmat and Anand: that transcript is here.  In the course of the conversation Anand makes a point that amply and awfully confirms Christiane Wilke‘s suggestion – in relation to air strikes in Afghanistan – that the burden of recognition, of what in international humanitarian law is defined as ‘distinction’, is tacitly being passed from combatant to civilian: that those in the cross-hairs of the US military are required to perform their civilian status to those watching from afar.

It goes back to this issue of Iraqis having to prove that they are not ISIS, which is the opposite of what we would think. We would think that the coalition would do the work to find out whether somebody is a member of ISIS or not. Essentially, they assume people are ISIS until proven otherwise.

To make matters worse, they have to perform their ‘civilianness’ according to a script recognised and approved by the US military, however misconceived it may be.  In the case of one (now iconic) air strike in Afghanistan, being an adolescent or adult male, travelling in a group, praying at one of the times prescribed by Islam, and carrying a firearm in a society where that is commonplace was enough for civilians to be judged as hostile by drone crews and attacked from the air with dreadful results (see here and here).

This is stunning investigative journalism, but it’s more than that: the two authors are both at Arizona State University, and they have provided one of the finest examples of critical, probing and accessible scholarship I have ever read.

Tracking and targeting

News from Lucy Suchman of a special issue of Science, Technology and Human Values [42 (6) (2017)]  on Tracking and targeting: sociotechnologies of (in)security, which she’s co-edited with Karolina Follis and Jutta Weber.

Here’s the line-up:

Lucy Suchman, Karolina Follis and Jutta Weber: Tracking and targeting

This introduction to the special issue of the same title sets out the context for a critical examination of contemporary developments in sociotechnical systems deployed in the name of security. Our focus is on technologies of tracking, with their claims to enable the identification of those who comprise legitimate targets for the use of violent force. Taking these claims as deeply problematic, we join a growing body of scholarship on the technopolitical logics that underpin an increasingly violent landscape of institutions, infrastructures, and actions, promising protection to some but arguably contributing to our collective insecurity. We examine the asymmetric distributions of sociotechnologies of (in)security; their deadly and injurious effects; and the legal, ethical, and moral questions that haunt their operations.

Karolina Follis: Visions and transterritory: the borders of Europe

This essay is about the role of visual surveillance technologies in the policing of the external borders of the European Union (EU). Based on an analysis of documents published by EU institutions and independent organizations, I argue that these technological innovations fundamentally alter the nature of national borders. I discuss how new technologies of vision are deployed to transcend the physical limits of territories. In the last twenty years, EU member states and institutions have increasingly relied on various forms of remote tracking, including the use of drones for the purposes of monitoring frontier zones. In combination with other facets of the EU border management regime (such as transnational databases and biometrics), these technologies coalesce into a system of governance that has enabled intervention into neighboring territories and territorial waters of other states to track and target migrants for interception in the “prefrontier.” For jurisdictional reasons, this practice effectively precludes the enforcement of legal human rights obligations, which European states might otherwise have with regard to these persons. This article argues that this technologically mediated expansion of vision has become a key feature of post–cold war governance of borders in Europe. The concept of transterritory is proposed to capture its effects.

Christiane Wilke: Seeing and unmaking civilians in Afghanistan: visual technologies and contested professional visions

While the distinction between civilians and combatants is fundamental to international law, it is contested and complicated in practice. How do North Atlantic Treaty Organization (NATO) officers see civilians in Afghanistan? Focusing on a 2009 air strike in Kunduz, this article argues that the professional vision of NATO officers relies not only on recent military technologies that allow for aerial surveillance, thermal imaging, and precise targeting but also on the assumptions, vocabularies, modes of attention, and hierarchies of knowledges that the officers bring to the interpretation of aerial surveillance images. Professional vision is socially situated and frequently contested within communities of practice. In the case of the Kunduz air strike, the aerial vantage point and the military visual technologies cannot fully determine what would be seen. Instead, the officers’ assumptions about Afghanistan, threats, and the gender of the civilian inform the vocabulary they use for coding people and places as civilian or noncivilian. Civilians are not simply “found,” they are produced through specific forms of professional vision.

Jon Lindsay: Target practice: Counterterrorism and the amplification of data friction

The nineteenth-century strategist Carl von Clausewitz describes “fog” and “friction” as fundamental features of war. Military leverage of sophisticated information technology in the twenty-first century has improved some tactical operations but has not lifted the fog of war, in part, because the means for reducing uncertainty create new forms of it. Drawing on active duty experience with an American special operations task force in Western Iraq from 2007 to 2008, this article traces the targeting processes used to “find, fix, and finish” alleged insurgents. In this case they did not clarify the political reality of Anbar province but rather reinforced a parochial worldview informed by the Naval Special Warfare community. The unit focused on the performance of “direct action” raids during a period in which “indirect action” engagement with the local population was arguably more appropriate for the strategic circumstances. The concept of “data friction”, therefore, can be understood not simply as a form of resistance within a sociotechnical system but also as a form of traction that enables practitioners to construct representations of the world that amplify their own biases.

M.C. Elish: Remote split: a history of US drone operations and the distributed labour of war

This article analyzes US drone operations through a historical and ethnographic analysis of the remote split paradigm used by the US Air Force. Remote split refers to the globally distributed command and control of drone operations and entails a network of human operators and analysts in the Middle East, Europe, and Southeast Asia as well as in the continental United States. Though often viewed as a teleological progression of “unmanned” warfare, this paper argues that historically specific technopolitical logics establish the conditions of possibility for the work of war to be divisible into discrete and computationally mediated tasks that are viewed as effective in US military engagements. To do so, the article traces how new forms of authorized evidence and expertise have shaped developments in military operations and command and control priorities from the Cold War and the “electronic battlefield” of Vietnam through the Gulf War and the conflict in the Balkans to contemporary deployments of drone operations. The article concludes by suggesting that it is by paying attention to divisions of labor and human–machine configurations that we can begin to understand the everyday and often invisible structures that sustain perpetual war as a military strategy of the United States.

I’ve discussed Christiane’s excellent article in detail before, but the whole issue repays careful reading.

And if you’re curious about the map that heads this post, it’s based on the National Security Agency’s Strategic Mission List (dated 2007 and published in the New York Times on 2 November 2013), and mapped at Electrospaces: full details here.

Intelligence and War

Artist’s impression of the evolution of Man painted on a wall: stencil graffiti on Vali-ye-Asr Avenue in central Tehran. By Paul Keller, 4 November 2007

A new edition of the ever-interesting Mediatropes is now online (it’s open access), this time on Intelligence and War: you can access the individual essays (or download the whole issue) here.  Previous issues are all available here.

The issue opens with an editorial introduction (‘Intelligence and War’) by Stuart J Murray, Jonathan Chau and Twyla Gibson.  And here is Stuart’s summary of the rest of the issue:

Michael Dorland’s “The Black Hole of Memory: French Mnemotechniques in the Erasure of the Holocaust” interrogates the role of memory and memorialization in the constitution of post-World War II France. Dorland homes in on the precarity of a France that grapples with its culpability in the Vel’ d’Hiv Round-up, spotlighting the role of the witness and the perpetually problematized function of testimony as key determinants in challenging both the public memory and the historical memory of a nation.

Sara Kendall’s essay, “Unsettling Redemption: The Ethics of Intra-subjectivity in The Act of Killing” navigates the problematic representation of mass atrocity. Employing Joshua Oppenheimer’s investigation of the Indonesian killings of 1965–1966, Kendall unsettles the documentary’s attempts to foreground the practices of healing and redemption, while wilfully sidestepping any acknowledgment of the structural dimensions of violence. To Kendall, the documentary’s focus on the narratives of the perpetrators, who function as proxies for the state, makes visible the aporia of the film, substituting a framework based on affect and empathy in place of critical political analyses of power imbalances.

Kevin Howley is concerned with the spatial ramifications of drone warfare. In “Drone Warfare: Twenty-First Century Empire and Communications,” Howley examines the battlefield deployment of drones through the lens of Harold Innis’s distinction between time-biased and space-biased media. By considering the drone as a space-biased technology that can transmit information across vast distances, yet only remain vital for short periods of time, Howley sees the drone as emblematic of the American impulse to simultaneously and paradoxically collapse geographical distance while expanding cultural differences between America and other nations.

Avital Ronell’s essay, entitled “BIGLY Mistweated: On Civic Grievance,” takes direct aim at the sitting US president, offering a rhetorical analysis of what she calls “Trumpian obscenity.” Ronell exposes the foundations of the current administration, identifying a government bereft of authority, stitched together by audacity, and punctuated by an almost unfathomable degree of absurdity. In her attempt to make sense of the fundamentally nonsensical and nihilistic discourse that Trump represents, Ronell walks alongside Paul Celan, Melanie Klein, and especially Jacques Derrida, concluding with a suggestive, elusive, and allusive possibility for negotiating the contemporary, Trumpian moment.

In “The Diseased ‘Terror Tunnels’ in Gaza: Israeli Surveillance and the Autoimmunization of an Illiberal Democracy,” Marouf Hasian, Jr. explains how Israel’s state-sanctioned use of autoimmunizing rhetorics depict the lives of Israelis as precarious and under threat. Here, the author’s preoccupation is with the Israeli strategy of rhetorically reconfiguring smuggling tunnels as “terror tunnels” that present an existential threat to Israeli citizens. In doing so, he shows how the non-combatant status of Gazan civilians is dissolved through the intervening effects of these media tropes.

Derek Gregory’s essay, “The Territory of the Screen,” offers a different perspective on drone warfare. Gregory leverages Owen Sheers’s novel, I Saw a Man, to explore the ways in which modern combat is contested through a series of mediating layers, a series of screens through which the United States, as Gregory argues, dematerializes the corporeality of human targets. For Gregory, drone warfare’s facilitation of remote killings is predicated on technical practices that reduce the extinguishing of life to technological processes that produce, and then execute, “killable bodies.”

But how is the increasingly unsustainable illusion of intelligence as being centralized and definitive maintained? Julie B. Wiest’s “Entertaining Genius: U.S. Media Representations of Exceptional Intelligence” identifies the media trope of exceptionally intelligent characters across mainstream film and television programs as key to producing and reinforcing popular understandings of intelligence. Through her analysis of such fictional savants, Wiest connects these patterns of representation to the larger social structures that reflect and reinforce narrowly defined notions of intelligence, and those who are permitted to possess it.

We end this issue with a poem from Sanita Fejzić, who offers a perspective on the human costs of war that is framed not by technology, but through poetic language.

My own essay is a reworked version of the penultimate section of “Dirty Dancing” (DOWNLOADS tab), which we had to cut because it really did stretch the length limitations for Life in the Age of Drone Warfare; so, as Stuart notes, I re-worked it, adding an extended riff on Owen Sheers’ luminous I Saw a Man and looping towards the arguments I have since developed in ‘Meatspace?’.