Googled

Following up my post on Google and Project Maven here, there’s an open letter (via the International Committee for Robot Arms Control) in support of Google employees opposed to the tech giant’s participation in Project Maven here: it’s open for (many) more signatures…

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long-range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power. It has shone a spotlight on the increasingly high stakes of information technology infrastructures, and on the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.
We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.

  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.

  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

Google eyes

The Oxford English Dictionary recognised ‘google’ as a verb in 2006, and its active form is about to gain another dimension.  One of the most persistent anxieties amongst those executing remote warfare, with its extraordinary dependence on (and capacity for) real-time full motion video surveillance as an integral moment of the targeting cycle, has been the ever-present risk of ‘swimming in sensors and drowning in data‘.

But now Kate Conger and Dell Cameron report for Gizmodo on a new collaboration between Google and the Pentagon as part of Project Maven:

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.

Google has reportedly attempted to allay fears about its involvement:

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
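The pipeline the spokesperson describes — a model labels objects in drone imagery and "flags images for human review" — can be sketched in miniature. This is a hypothetical illustration, not Google's or the Pentagon's code: the `Detection` class, the stub `detect_objects` function, and the confidence threshold are all my own assumptions; in Maven's case the detector would be a TensorFlow object-recognition model rather than a stub.

```python
# Hypothetical sketch of a human-in-the-loop detection pipeline: a model
# labels objects per frame, and low-confidence detections are routed to a
# human analyst instead of being auto-labelled. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # one of the ~38 object categories Maven reportedly uses
    confidence: float  # model score in [0, 1]

def detect_objects(frame):
    """Stub standing in for a TensorFlow object-detection model."""
    return frame  # in this sketch, a 'frame' is a pre-made detection list

def triage(frames, review_threshold=0.9):
    """Split detections into auto-labelled vs. flagged-for-human-review."""
    auto, review = [], []
    for i, frame in enumerate(frames):
        for det in detect_objects(frame):
            (auto if det.confidence >= review_threshold else review).append((i, det))
    return auto, review

frames = [
    [Detection("vehicle", 0.97), Detection("person", 0.55)],
    [Detection("building", 0.92)],
]
auto, review = triage(frames)
# the ambiguous 'person' at 0.55 is flagged for an analyst; the rest pass through
```

The point of the sketch is the concern raised earlier in the letter: the human-review step lives in a single threshold parameter, and "attenuating human oversight" is as simple as lowering it.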

 

As Mehreen Kasana notes, Google has indeed ‘long worked with government agencies’:

A 2017 report in Quartz shed light on the origins of Google and how a significant amount of the company’s early funding came from the CIA and NSA for mass surveillance purposes. Time and again, Google’s funding raises questions. In 2013, a Guardian report highlighted Google’s acquisition of the robotics company Boston Dynamics, and noted that most of its projects were funded by the Defense Advanced Research Projects Agency (DARPA).

Tracking and targeting

News from Lucy Suchman of a special issue of Science, Technology and Human Values [42 (6) (2017)]  on Tracking and targeting: sociotechnologies of (in)security, which she’s co-edited with Karolina Follis and Jutta Weber.

Here’s the line-up:

Lucy Suchman, Karolina Follis and Jutta Weber: Tracking and targeting

This introduction to the special issue of the same title sets out the context for a critical examination of contemporary developments in sociotechnical systems deployed in the name of security. Our focus is on technologies of tracking, with their claims to enable the identification of those who comprise legitimate targets for the use of violent force. Taking these claims as deeply problematic, we join a growing body of scholarship on the technopolitical logics that underpin an increasingly violent landscape of institutions, infrastructures, and actions, promising protection to some but arguably contributing to our collective insecurity. We examine the asymmetric distributions of sociotechnologies of (in)security; their deadly and injurious effects; and the legal, ethical, and moral questions that haunt their operations.

Karolina Follis: Visions and transterritory: the borders of Europe

This essay is about the role of visual surveillance technologies in the policing of the external borders of the European Union (EU). Based on an analysis of documents published by EU institutions and independent organizations, I argue that these technological innovations fundamentally alter the nature of national borders. I discuss how new technologies of vision are deployed to transcend the physical limits of territories. In the last twenty years, EU member states and institutions have increasingly relied on various forms of remote tracking, including the use of drones for the purposes of monitoring frontier zones. In combination with other facets of the EU border management regime (such as transnational databases and biometrics), these technologies coalesce into a system of governance that has enabled intervention into neighboring territories and territorial waters of other states to track and target migrants for interception in the “prefrontier.” For jurisdictional reasons, this practice effectively precludes the enforcement of legal human rights obligations, which European states might otherwise have with regard to these persons. This article argues that this technologically mediated expansion of vision has become a key feature of post–cold war governance of borders in Europe. The concept of transterritory is proposed to capture its effects.

Christiane Wilke: Seeing and unmaking civilians in Afghanistan: visual technologies and contested professional visions

While the distinction between civilians and combatants is fundamental to international law, it is contested and complicated in practice. How do North Atlantic Treaty Organization (NATO) officers see civilians in Afghanistan? Focusing on a 2009 air strike in Kunduz, this article argues that the professional vision of NATO officers relies not only on recent military technologies that allow for aerial surveillance, thermal imaging, and precise targeting but also on the assumptions, vocabularies, modes of attention, and hierarchies of knowledge that the officers bring to the interpretation of aerial surveillance images. Professional vision is socially situated and frequently contested within communities of practice. In the case of the Kunduz air strike, the aerial vantage point and the military visual technologies cannot fully determine what would be seen. Instead, the officers’ assumptions about Afghanistan, threats, and the gender of the civilian inform the vocabulary they use for coding people and places as civilian or noncivilian. Civilians are not simply “found”; they are produced through specific forms of professional vision.

Jon Lindsay: Target practice: Counterterrorism and the amplification of data friction

The nineteenth-century strategist Carl von Clausewitz describes “fog” and “friction” as fundamental features of war. Military leverage of sophisticated information technology in the twenty-first century has improved some tactical operations but has not lifted the fog of war, in part, because the means for reducing uncertainty create new forms of it. Drawing on active duty experience with an American special operations task force in Western Iraq from 2007 to 2008, this article traces the targeting processes used to “find, fix, and finish” alleged insurgents. In this case they did not clarify the political reality of Anbar province but rather reinforced a parochial worldview informed by the Naval Special Warfare community. The unit focused on the performance of “direct action” raids during a period in which “indirect action” engagement with the local population was arguably more appropriate for the strategic circumstances. The concept of “data friction”, therefore, can be understood not simply as a form of resistance within a sociotechnical system but also as a form of traction that enables practitioners to construct representations of the world that amplify their own biases.

M.C. Elish: Remote split: a history of US drone operations and the distributed labour of war

This article analyzes US drone operations through a historical and ethnographic analysis of the remote split paradigm used by the US Air Force. Remote split refers to the globally distributed command and control of drone operations and entails a network of human operators and analysts in the Middle East, Europe, and Southeast Asia as well as in the continental United States. Though often viewed as a teleological progression of “unmanned” warfare, this paper argues that historically specific technopolitical logics establish the conditions of possibility for the work of war to be divisible into discrete and computationally mediated tasks that are viewed as effective in US military engagements. To do so, the article traces how new forms of authorized evidence and expertise have shaped developments in military operations and command and control priorities from the Cold War and the “electronic battlefield” of Vietnam through the Gulf War and the conflict in the Balkans to contemporary deployments of drone operations. The article concludes by suggesting that it is by paying attention to divisions of labor and human–machine configurations that we can begin to understand the everyday and often invisible structures that sustain perpetual war as a military strategy of the United States.

I’ve discussed Christiane’s excellent article in detail before, but the whole issue repays careful reading.

And if you’re curious about the map that heads this post, it’s based on the National Security Agency’s Strategic Mission List (dated 2007 and published in the New York Times on 2 November 2013), and mapped at Electrospaces: full details here.

Surveillance beyond borders and boundaries

An unusually interesting Call for Papers for a conference organised by the Surveillance Studies Network in Aarhus (Denmark) 7 – 9 June 2018. This will be their 8th conference; the banner above is ripped from their 6th in 2014.

You can download the full CfP (from which I’ve extracted the details below) here.

SURVEILLANCE BEYOND BORDERS AND BOUNDARIES

Recent years have witnessed the increasing scope, reach and pervasiveness of surveillance. It now operates on a scale ranging from the genome to the universe. Across the spheres of private and public life and the spaces between, surveillance mediates, documents and facilitates a wide range of activities. At the same time, surveillance practices now reach beyond the corporal and temporal boundaries of life itself, no longer resting on the individual as subject, but instead falling both within and beyond it. This emphasises the porosity of such categories. Pervasive surveillance produces new articulations of power and animates flows of people, information and capital, harbouring potential for myriad opportunities as well as harms. With this growth of surveillance comes increasing complexity and paradox.

Within this milieu, these issues are particularly pronounced, controversial and prescient in relation to borders and boundaries. Surveillance practices have long been associated with shoring up territorial and categorical borders, yet in the digital age such practices become accelerated, in many cases beyond the speed of human comprehension. Highly dynamic inscriptions of difference, abnormality and undesirability are now commonplace. At the same time, surveillance practices transcend and challenge erstwhile articulations of borders and boundaries, including enabling mobility for some, uniting formerly fractured assemblies of information and capacitating the borderless passage of data.

The conference

Since 2004 the Biennial Surveillance Studies Network conference has become established as the world’s most significant gathering of surveillance studies experts. The Surveillance Studies Network is a registered charitable company dedicated to the study of surveillance in all its forms and the free distribution of scholarly information, and constitutes the largest association of surveillance scholars in the world. The Surveillance Studies Network owns the leading surveillance-focused peer-reviewed journal Surveillance & Society, which has had a long association with the conference. We encourage presenters to submit fully formed papers to the journal to be considered for publication.

We call for papers and panels from all areas of critical enquiry that seek to examine such complex articulations and impacts of surveillance in contemporary society. We invite participants to discuss, develop or demolish the borders and boundaries of surveillance. In particular, we welcome interventions that are truly interdisciplinary, multi-disciplinary, or transdisciplinary in scope and reach, from academics, activists, artists, and policy-makers, especially those who sit on the borderlands between academia and practice-based knowledge production.

Key themes include, but are not limited to:

Authority, democracy and surveillance

Surveillance and everyday life

History of surveillance

Surveillance and digital/social media

Art, action and surveillance

Surveillance infrastructures and architectures

Managing borders and uncertainty

Theories of surveillance

Ethics, philosophy, trust and intimacy in and of surveillance

Regulation, politics and governance of surveillance

Algorithmic surveillance and big data

Resistance to surveillance

Non-technological and interpersonal surveillance

Paper Proposals

Paper sessions will be composed by the Organising Committee based on the individual Paper Proposals submitted. Paper Proposals should consist of: Name(s) of Author(s); Affiliation(s) of Author(s); Proposed Title of Paper; An abstract of up to 150 words.

On acceptance paper proposers will be invited to submit an extended abstract, presentation summary, paper outline or developed paper draft of at least 2000 words for publication in the delegates area of the conference website ahead of the event. This can be submitted anytime up until the May deadline.

Visual or other artistic submissions

We welcome and encourage alternative formats, including but not limited to visual displays and other artistic installations. These may include but are not limited to films, documentaries, photographic exhibitions, architectural modeling and digital-mediated artistic forms. Artistic submissions should consist of: Name(s) of proposer/artist; Affiliation(s); An overview of the proposed submission of up to 250 words.

Panel Proposals

Panels are sessions that bring together a diverse group of panelists with varied views on a topic related to the conference theme. The session format should engage the panelists and audience in an interactive discussion. Panels should be designed to fit in a 90-minute session. Panel Proposals should consist of: Name(s) of Organiser(s); Affiliations; Proposed Title of Panel; An abstract of up to 300 words describing the panel, including why the panel is of interest to the conference, and the proposed format of the panel; Name(s), Affiliation(s) and abstracts (150 words) for the papers of all proposed panelists. Speakers included in successful panel proposals will also be required to submit the more developed extended abstract, presentation summary, paper outline or developed paper draft of at least 2000 words, as per the instructions for Paper Proposals above. NB: Organisers must secure the agreement of all proposed panelists before submitting the Panel Proposal.

Deadline

All proposals should be submitted by December 31st 2017. Decisions will be returned by January 31st 2018.

All extended outlines/presentation summaries/paper drafts to be submitted by May 1st 2018

Submissions should be made through the EasyChair submission webportal here.

Contact: ssn2018@cc.au.dk

Incoming, upcoming

mosse-incoming-still

Richard Mosse‘s Incoming opens at the Barbican Art Centre in London on 17 February and runs until 23 April.  In collaboration with composer Ben Frost and cinematographer Trevor Tweeten, Richard has created an immersive multi-channel video installation (shown across three 26-foot wide screens) that turns military technology against itself – using a camera ‘that sees as a missile sees’ – to show the journeys of refugees (hence the artful title).  He explains:

I am European. I am complicit. I wanted to foreground this perspective in a way, to try to see refugees and illegal immigrants as our governments see them. I wanted to enter into that logic in order to create an image that reveals it. So I chose to represent these stories, really a journey or series of journeys, using an ambivalent and perhaps sinister new European weapons camera technology. The camera is intrusive of individual privacy, yet the imagery that this technology produces is so dehumanized – the person literally glows – that the medium anonymizes the subject in ways that are both insidious and humane. Working against the camera’s intended purpose, my collaborators and I listened carefully to the camera, to understand what it wanted to do — and then tried to reconcile that with these harsh, disparate, unpredictable and frequently tragic narratives of migration and displacement.

If you can’t get to it, there is a book version from Mack:

The major humanitarian and political issue of our time is migration and with his latest video work, Irish artist Richard Mosse has created a searing, haunting and unique artwork. Projected across three 8 meter wide screens, the film is accompanied by a loud dissonant soundtrack to create an overwhelming, immersive experience. Moving from footage of a live battle inside Syria, in which a US aircraft strafes Daesh positions on the ground, to a scene showing pathologists extracting DNA from the bones of unidentified corpses of refugees drowned off the Aegean island of Leros, the film opens a testimonial space of historical document – bearing witness to significant chapters in recent events – mediated through an advanced weapons-grade camera technology. Narratives of the journeys made by refugees and migrants across the Middle East, North Africa, and Europe, are captured using an extremely powerful thermal camera not generally available to the public. This super-telephoto military camera can perceive the human body beyond 50km day or night, reading the biological trace of human life. The camera translates the world into a heat signature of apparent temperature difference, producing a dazzling monochrome halo-image which alludes literally and metaphorically to hypothermia, climate change, weapons targeting, border surveillance, xenophobia, and the ‘bare life’ of stateless people.

The book version recreates the immersive nature of the film, combining still images from the entire sequence over nearly 600 pages to represent the harsh and compelling narrative in a full bleed layout.

mosse-idomeni-2016

A related exhibition of Richard’s photographs from the same body of work – entitled Heat Maps – has opened at the Jack Shainman Gallery in New York.  At the New Yorker Max Campbell describes the exhibition like this:

[U]sing a new “weapon of war,” as he describes it, Mosse captured encampment structures, servicemen, border police, boats at full capacity, and migrants of all ages. Mosse would spend time in the refugee camps before photographing, and some of the migrants sheltered there helped him to arrange his shots. But in the images his subjects are always seen at a distance, photographed from an above-eye-level perspective. Each “Heat Map” was constructed from hundreds of frames shot using a telephoto lens; a robotic system was used to scan the landscapes and interiors and meticulously capture every corner…

By adopting a tool of surveillance, Mosse’s photographs consciously play into narratives that count families as statistics and stigmatize refugees as potential threats. He recognizes that operating the infrared camera entails brushing up against the violent intentions with which the device has been put to use. “We weren’t attempting to rescue this apparatus from its sinister purpose,” he said. Instead, his project acts as a challenge. The people in his images appear as inverted silhouettes, sometimes disjointed, torn by the time passing between individual frames. The thermal readouts rub features out of faces and render flesh in washy, anonymous tones. Someone lays back on a cot, looking at a cell phone. Someone else hangs laundry. We can imagine what these people might look like in person, guess at the expressions on their faces or the color of their skin. Yet seeing them in Mosse’s shadowy renderings erases the lines that have been drawn between refugees, immigrants, natives, citizens, and the rest. His camera makes little distinction between the heat that each body emits.

mosse-larissa-2016

Heat Maps was shown in Berlin last year, where the links with the work of Michel Foucault and Giorgio Agamben were made explicit:

Heat Maps attempts to foreground the biopolitical aspects of the refugee and migration situation that is facing Europe, the Middle East and North Africa. The project charts refugee camps and other staging sites using an extreme telephoto military grade thermographic camera that was designed to detect and identify subjects from as far away as fifty kilometers, day or night.

The camera itself is export controlled under the International Traffic in Arms Regulations — it is regarded as a component in advanced weapons systems and embargoed as such — and was designed for border surveillance and regulation. It can be seen as a technology of governance, a key tool in what Foucault and Agamben have described as biopower. It is an apparatus of the military-humanitarian complex.

The camera translates the world into a heat signature of relative temperature difference, literally reading the biological trace of human life – imperceptive of skin colour – as well as proximity to death through exposure or hypothermia, even from a great distance. The living subject literally glows, and heat radiation creates dazzling optical flare.

Instead of individuals, the camera sees the mass — in Foucault’s words: massifying, that is directed not at man-as-body, but at man-as-species. It elicits an alienating and invasive form of imagery, but also occasionally tender and intimate, tending to both dehumanize and then rehumanize the bare life (Agamben) of the human figure of the stateless refugee and illegal economic migrant, which the camera was specifically designed to detect, monitor, and police.

The camera is used against itself to map landscapes of global displacement and more powerfully represent ambivalent and charged narratives of migration. Reading heat as both metaphor and index, these images attempt to reveal the harsh struggle for human survival lived daily by millions of refugees and migrants, seen but overlooked by our governments, and ignored by many.

You can find out more from a helpful interview with Iona Goulder which puts these twin projects in the context of Richard’s previous work in the Congo (see here and here).  En route, Richard says this:

Reading heat as both metaphor and index, I wanted to reveal the harsh struggle for survival lived daily by millions of refugees and migrants, while investigating one of the sinister technologies that our governments are using against them.

By attaching this camera to a robotic motion-control tripod, I scanned refugee camps across Europe from a high eye-level, to create detailed panoramic thermal images. Each artwork has been painstakingly constructed from a grid of almost a thousand smaller frames, each with its own vanishing point.

Seamlessly blended into a single expansive thermal panorama, I was surprised to find that some of the resulting images seem to evoke the spatial description, minute detail, and human narratives of certain kinds of classical painting, such as Breughel or Bosch. Yet they are also documents disclosing the fence architecture, security gates, loudspeakers, food queues, tents and temporary shelters of camp architecture. Very large in scale, these Heat Maps disclose intimate details of fragile human life in squalid, nearly unliveable conditions in the margins and gutters of first world economies.

Seeing machines

 

[Image: cover of Steve Graham's Drone: Robot Imperium]

The Transnational Institute has published a glossy version of a chapter from Steve Graham‘s Vertical – called Drone: Robot Imperium – which you can download here (open access).  I'm not sure about either of the terms in the subtitle, but it's a good read and richly illustrated.

Steve includes a discussion of the use of drones to patrol the US-Mexico border, and Josh Begley has published a suggestive account of the role of drones and of other ‘seeing machines’ in visualizing the border.

One way the border is performed — particularly the southern border of the United States — can be understood through the lens of data collection. In the border region, along the Rio Grande and westward through the desert Southwest, Customs and Border Protection (CBP) deploys radar blimps, drones, fixed-wing aircraft, helicopters, seismic sensors, ground radar, face recognition software, license-plate readers, and high-definition infrared video cameras. Increasingly, they all feed data back into something called “The Big Pipe.”

Josh downloaded 20,000 satellite images of the border, stitched them together, and then worked with Laura Poitras and her team at Field of Vision to produce a short film – Best of Luck with the Wall – that traverses the entire length of the border (1,954 miles) in six minutes:

The southern border is a space that has been almost entirely reduced to metaphor. It is not even a geography. Part of my intention with this film is to insist on that geography.

By focusing on the physical landscape, I hope viewers might gain a sense of the enormity of it all, and perhaps imagine what it would mean to be a political subject of that terrain.

[Image: Josh Begley, Fatal Migrations, 2011-2016 (1)]

If you too wonder about that last sentence and its latent bio-physicality – and there is of course a rich stream of work on the bodies that seek to cross that border – then you might visit another of Josh’s projects, Fatal Migrations, 2011-2016 (see above and below).

[Image: Josh Begley, Fatal Migrations, 2011-2016 (2)]

There’s an interview with Josh that, among other things, links these projects with his previous work.

I have a couple of projects that are smartphone centered. One of them is about mapping the geography of places around the world where the CIA carries out drone strikes—mostly in Pakistan, Yemen, and Somalia. Another was about looking at the geography of incarceration in the United States—there are more than 5,000 prisons—and trying to map all of them and see them through satellites. I currently have an app that is looking at the geography of police violence in the United States. Most of these apps are about creating a relationship between data and the body, where you can receive a notification every time something unsettling happens. What does that mean for the rest of your day? How do you live with that data—data about people? In some cases the work grows out of these questions, but in other cases the work really is about landscape….

There’s just so much you can never know from looking at satellite imagery. By definition it flattens and distorts things. A lot of folks who fly drones, for instance, think they know a space just from looking at it from above. I firmly reject that idea. The bird’s eye view is never what it means to be on the ground somewhere, or what it means to have meaningful relationships with people on the ground. I feel like I can understand the landscape from 30,000 feet, but it is not the same as spending time in a space.

Anjali Nath has also provided a new commentary on one of Josh’s earlier projects, Metadata, that he cites in that interview – ‘Touched from below: on drones, screens and navigation’, Visual Anthropology 29 (3) (2016) 315-30.

It’s part of a special issue on ‘Visual Revolutions in the Middle East’, and as I explore the visual interventions I’ve included in this post I find myself once again thinking of a vital remark by Edward Said:

[Image: Edward Said, 'we are also looking at our observers']

That’s part of the message behind the #NotaBugSplat image on the cover of Steve’s essay: but what might Said’s remark mean more generally today, faced with the proliferation of these seeing machines?

 

Matters of definition

Since my post on the use of drones to provide intelligence, surveillance and reconnaissance over Iraq and Syria, I’ve been thinking about the image stream provided by Predators and Reapers.  There I used an image from what I think must be an MQ-9 Reaper operated by France, which was in full colour and – this is the important part – in high definition.  Over the weekend the New York Times published a report, culled from the Italian magazine L’Espresso, which – together with the accompanying video clip (the link is to the Italian original, not the Times version) – confirmed the power of HD full motion video, this time from a Reaper operated by Italy:

The footage … begins with grainy black-and-white images of an airstrike on what appears to have been a checkpoint on a road in northern Iraq, beneath a huge black flag.

Then there is something altogether different: high-resolution, color video of four distinct armed figures walking out of a house and along the streets of a town. At one stage, the picture suddenly zooms in on two of the suspected militants to reveal that one of them is almost certainly a child, propping a rifle on his shoulder that indicates how small he is relative to the man next to him. The images are so clear that even the shadows of the figures can be examined.

[Stills from the Italian drone video]

But the significance of all this is less straightforward than it might appear.

First, not all drones have this HD capability.  We know from investigations into civilian casualty incidents in Afghanistan that the feeds from Predators and from early-model (‘Block’) Reapers are frequently grainy and imprecise.  Sean Davies reports that the video compression necessary for data transmission squeezed 560 x 480 pixel resolution images into 3.2 MBps at 30 frames per second, whereas the newer (Block 5) Reapers provide 1280 x 720 pixel resolution images at 6.4 MBps.  The enhanced video feeds can be transmitted not only to the Ground Control Stations from which the aircraft are flown – and those too have been upgraded (see image below) – but also to operations centres monitoring the missions and, crucially, to ruggedized laptops (‘ROVERs’) used by special forces and other troops on the ground.
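A rough back-of-the-envelope sketch makes the upgrade concrete (the resolution and data-rate figures are Davies’s; the comparison arithmetic is mine and purely illustrative):

```python
# Compare the two Reaper video feeds: pixel throughput versus bandwidth.
# Figures from Sean Davies's report; the derived ratios are illustrative.

def feed_stats(width, height, fps, mbps):
    """Return pixels per second and the data budget in MB per megapixel."""
    pixels_per_second = width * height * fps
    return pixels_per_second, mbps / (pixels_per_second / 1e6)

old_px, old_budget = feed_stats(560, 480, 30, 3.2)    # early-model Reaper
new_px, new_budget = feed_stats(1280, 720, 30, 6.4)   # Block 5 Reaper

print(f"Pixel throughput: {new_px / old_px:.2f}x")    # ~3.43x more pixels per second
print(f"Data rate: {6.4 / 3.2:.1f}x")                 # only 2x the bandwidth
```

In other words, the Block 5 feed actually has to compress each pixel harder than the old one did; the striking clarity comes from the jump in resolution, not from a richer per-pixel data budget.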

[Image: upgraded Ground Control Stations]

The significance of HD full-motion video is revealed in the slide below, taken from a briefing on ‘small footprint operations’ in Somalia and Yemen prepared in February 2013 and published as part of The Intercept‘s Drone Papers, which summarises its impact on the crucial middle stage of the ‘find, fix, finish‘ cycle of targeted killing:

[Slide: HD FMV impact on Fix]

As you can see, HD FMV was involved in as many as 72 per cent of the successful ‘fixes’ and was absent from 88 per cent of the unsuccessful ones.

Second, Eyal Weizman cautions that the image stream shown on the Italian video was captured ‘either very early or very late in the day.  Without shadows we could not identify these as weapons at all.’  Infra-red images captured at night could obviously not provide definition of this quality, but even so-called ‘Day TV’ would not show clear shadows at most times of the day. In Eyal’s view, ‘showing these rare instances could skew our understanding of how much can be seen by drones and how clear what we see is.’

Third, no matter how high the resolution of the video feeds, we need to remember that their interpretation is a techno-cultural process.  One of the figures shown in the Italian video ‘is almost certainly a child’, reports the New York Times.  So bear in mind this exchange between the crew of a Predator circling over three vehicles travelling through the mountains of Uruzgan in February 2010 (see also here and here):

1:07 (MC): screener said at least one child near SUV

1:07 (Sensor): bull (expletive deleted)…where!?

1:07 (Sensor): send me a (expletive deleted) still, I don’t think they have kids out at this hour, I know they’re shady but come on

1:07 (Pilot): at least one child…Really? Listing the MAM [Military-Aged Male], uh, that means he’s guilty

1:07 (Sensor): well maybe a teenager but I haven’t seen anything that looked that short, granted they’re all grouped up here, but…

1:07 (MC): They’re reviewing

1:07 (Pilot): Yeah review that (expletive deleted)…why didn’t he say possible child, why are they so quick to call (expletive deleted) kids but not to call (expletive deleted) a rifle….

03:10 (Pilot): And Kirk97, good copy on that. We are with you. Our screener updated only one adolescent so that’s one double digit age range. How Copy?

03:10 (JAG25): We’ll pass that along to the ground force commander. But like I said, 12-13 years old with a weapon is just as dangerous.

In other words – it’s more than a matter of high definition; it’s also a matter of political and cultural definition.