Death machines

New from Elke Schwarz, Death machines: the ethics of violent technologies (Manchester UP):

As innovations in military technologies race toward ever-greater levels of automation and autonomy, debates over the ethics of violent technologies tread water. Death Machines reframes these debates, arguing that the way we conceive of the ethics of contemporary warfare is itself imbued with a set of bio-technological rationalities that work as limits. The task for critical thought must therefore be to unpack, engage, and challenge these limits. Drawing on the work of Hannah Arendt, the book offers a close reading of the technology-biopolitics-complex that informs and produces contemporary subjectivities, highlighting the perilous implications this has for how we think about the ethics of political violence, both now and in the future.

Contents:

Introduction: The conditioned human
1. Biopolitics and the technological subject
2. Biopolitical technologies in Arendt and Foucault
3. Anti-political (post)modernity
4. Procedural violence
5. Ethics as technics
6. All hail our robot overlords
7. Prescription drones
Conclusion: For an ethics beyond technics

In my crosshairs

Two new books on the military gaze:

First, from the ever-interesting Roger Stahl, Through the Crosshairs: War, Visual Culture, and the Weaponized Gaze (Rutgers).

Now that it has become so commonplace, we rarely blink an eye at camera footage framed by the crosshairs of a sniper’s gun or from the perspective of a descending smart bomb. But how did this weaponized gaze become the norm for depicting war, and how has it influenced public perceptions?

Through the Crosshairs traces the genealogy of this weapon’s-eye view across a wide range of genres, including news reports, military public relations images, action movies, video games, and social media posts. As he tracks how gun-camera footage has spilled from the battlefield onto the screens of everyday civilian life, Roger Stahl exposes how this raw video is carefully curated and edited to promote identification with military weaponry, rather than with the targeted victims. He reveals how the weaponized gaze is not only a powerful propagandistic frame, but also a prime site of struggle over the representation of state violence.

Contents:

1 A Strike of the Eye
2 Smart Bomb Vision
3 Satellite Vision
4 Drone Vision
5 Sniper Vision
6 Resistant Vision
7 Afterword: Bodies Inhabited and Disavowed

And here’s Lisa Parks on the book:

“Immersing readers in the perilous visualities of smart bombs, snipers, and drones, Through the Crosshairs delivers a riveting analysis of the weaponized gaze and powerfully explicates the political stakes of screen culture’s militarization. Packed with insights about the current conjuncture, the book positions Stahl as a leading critic of war and media.”

Incidentally, if you don’t know Roger’s collaborative project, The vision machine: media, war, peace – I first blogged about it five years ago – now is the time to visit: here.

And from Antoine Bousquet, The Eye of War: military perception from the telescope to the drone (Minnesota):

From ubiquitous surveillance to drone strikes that put “warheads onto foreheads,” we live in a world of globalized, individualized targeting. The perils are great. In The Eye of War, Antoine Bousquet provides both a sweeping historical overview of military perception technologies and a disquieting lens on a world that is, increasingly, one in which anything or anyone that can be perceived can be destroyed—in which to see is to destroy.

Arguing that modern-day global targeting is dissolving the conventionally bounded spaces of armed conflict, Bousquet shows that over several centuries, a logistical order of militarized perception has come into ascendancy, bringing perception and annihilation into ever-closer alignment. The efforts deployed to evade this deadly visibility have correspondingly intensified, yielding practices of radical concealment that presage a wholesale disappearance of the customary space of the battlefield. Beginning with the Renaissance’s fateful discovery of linear perspective, The Eye of War discloses the entanglement of the sciences and techniques of perception, representation, and localization in the modern era amid the perpetual quest for military superiority. In a survey that ranges from the telescope, aerial photograph, and gridded map to radar, digital imaging, and the geographic information system, Bousquet shows how successive technological systems have profoundly shaped the history of warfare and the experience of soldiering.

A work of grand historical sweep and remarkable analytical power, The Eye of War explores the implications of militarized perception for the character of war in the twenty-first century and the place of human subjects within its increasingly technical armature.

Contents:

Introduction: Visibility Equals Death
1. Perspective
2. Sensing
3. Imaging
4. Mapping
5. Hiding
Conclusion: A Global Imperium of Targeting

And here is Daniel Monk on the book:

The Eye of War is a masterful contemporary history of the martial gaze that reviews the relation between seeing and targeting. The expansion of ocularcentrism—the ubiquitization of vision as power—Antoine Bousquet shows us, coincides with the inverse: the relegation of the eye to an instrument of a war order that relies on the sensorium as the means to its own ends. As he traces the development of a technocracy of military vision, Bousquet discloses the vision of a military technocracy that has transformed the given world into units of perception indistinct from ‘kill boxes.’

Coming from excellent US university presses – unlike the commercial behemoths (you know who you are) favoured by too many authors in my own field (you know who you are too) – these books are both attractively designed and accessibly priced.

Googled

Following up my post on Google and Project Maven here, there’s an open letter (via the International Committee for Robot Arms Control) in support of Google employees opposed to the tech giant’s participation in Project Maven here: it’s open for (many) more signatures…

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long-range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power. This has shone only one spotlight on the increasingly high stakes of information technology infrastructures, and the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.
We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.

  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.

  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

Google eyes

The Oxford English Dictionary recognised ‘google’ as a verb in 2006, and its active form is about to gain another dimension. One of the most persistent anxieties amongst those executing remote warfare, with its extraordinary dependence on (and capacity for) real-time full-motion video surveillance as an integral moment of the targeting cycle, has been the ever-present risk of ‘swimming in sensors and drowning in data’.

But now Kate Conger and Dell Cameron report for Gizmodo on a new collaboration between Google and the Pentagon as part of Project Maven:

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.

Google has reportedly attempted to allay fears about its involvement:

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
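
For readers wondering what ‘open source TensorFlow APIs that can assist in object recognition’ means concretely, here is a minimal sketch of the generic technique – frame-by-frame object detection with a pre-trained TensorFlow Hub model. It is emphatically not Maven’s pipeline: the model choice, the 0.5 score threshold, and the detect_objects helper are my own illustrative assumptions, and the military-trained models themselves are not public.

```python
# Illustrative sketch only: generic object detection of the kind the
# statement describes. NOT Project Maven's code, models, or categories.
import tensorflow as tf
import tensorflow_hub as hub

# An off-the-shelf detector from TensorFlow Hub, trained on everyday
# imagery (COCO), not military footage; an assumption for illustration.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def detect_objects(frame, score_threshold=0.5):
    """Flag objects in one video frame (H x W x 3, uint8) for human review."""
    batch = tf.expand_dims(tf.convert_to_tensor(frame, dtype=tf.uint8), axis=0)
    output = detector(batch)
    scores = output["detection_scores"][0].numpy()
    keep = scores >= score_threshold  # illustrative threshold, not Maven's
    return {
        # Bounding boxes as normalized [ymin, xmin, ymax, xmax] coordinates.
        "boxes": output["detection_boxes"][0].numpy()[keep],
        # Integer class ids (COCO labels for this particular model).
        "classes": output["detection_classes"][0].numpy()[keep].astype(int),
        "scores": scores[keep],
    }
```

Run over a video stream frame by frame, something of this shape would produce the kind of machine-labelled footage at issue here; the hard questions – the training data, the categories, the thresholds, and what happens downstream of the flag – are precisely what remain out of public view.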

As Mehreen Kasana notes, Google has indeed ‘long worked with government agencies’:

A 2017 report in Quartz shed light on the origins of Google and how a significant amount of funding for the company came from the CIA and NSA for mass surveillance purposes. Time and again, Google’s funding raises questions. In 2013, a Guardian report highlighted Google’s acquisition of the robotics company Boston Dynamics, and noted that most of its projects were funded by the Defense Advanced Research Projects Agency (DARPA).

Drone Imaginaries and Society

News from Kathrin Maurer of a conference on Drone Imaginaries and Society, 5-6 June 2018, at the University of Southern Denmark (Odense).  I’ll be giving a keynote, and look forward to the whole event very much (and not only because, while I’ve visited Denmark lots of times and loved every one of them, I’ve never made it to Odense):

Drones are in the air. The production of civilian drones for rescue, transport, and leisure activity is booming. The Danish government, for example, proclaimed civilian drones a national strategy in 2016. Accordingly, many research institutions as well as the industry focus on the development, usage, and promotion of drone technology. These efforts often prioritize commercialization and engineering as well as setting up UAV (Unmanned Aerial Vehicle) test centers. As a result, urgent questions about how drone technology affects our identity as humans, as well as how we envision human society, are frequently underexposed in these initiatives.

Our conference aims to change this perspective. By investigating cultural representations of civilian and military drones in visual arts, film, and literature, we intend to shed light on drone technology from a humanities point of view. This aesthetic “drone imaginary” forms not only the empirical material of our discussions but also a prism of knowledge which provides new insights into the meaning of drone technology for society today.

Several artists, authors, filmmakers, and thinkers have already engaged with this drone imaginary. While some of these inquiries provide critical reflection on contemporary and future drone technologies – on issues such as privacy, surveillance, automation, and security – others allow for alternative ways of seeing and communicating as well as creative re-imagination of new ways of organizing human communities. The goal of the conference is to bring together these different aesthetic imaginaries to better understand the role of drone technologies in contemporary and future societies.

The focus points of the conference are:

–     Aesthetic drone imaginaries: Which images, metaphors, ethics, emotions and affects are associated with drones through their representation in art, fiction and popular culture?

–     Drone technology and its implications for society: How do drones change our daily routines and shift the balance between publicity and privacy?

–     Historical perspective on drones: In what way do drone imaginaries allow for a counter-memory that can challenge, for instance, the military implementation of drones?

–     Drones as vulnerability: Do drones make societies more resilient or more fragile, and are societies getting overly dependent on advanced technologies?

–     Utopian or dystopian drone imaginaries: What dream or nightmare scenarios are provided by drone fiction and how do they allow for a (re)imagining of future societies?

–     Drones and remote sensing: In what way do drones mark a radically new way of seeing and sensing through their remote vertical gaze and operative images?

–     Drone warfare: Do drones mark a continuation or a rupture in the way we understand war and conflict, and how do they change the military imaginary?

The conference is sponsored by the Drone Network (Danish Research Council) and Institute for the Study of Culture at the University of Southern Denmark.

You can contact Kathrin at kamau@sdu.dk.

The conference website is here.

Army of None

Coming next spring from Norton – and with a very clever title – Paul Scharre’s Army of None: Autonomous weapons and the future of war.

A Pentagon defense expert and former U.S. Army Ranger traces the emergence of autonomous weapons.

What happens when a Predator drone has as much autonomy as a Google car? Although it sounds like science fiction, the technology to create weapons that could hunt and destroy targets on their own already exists. Paul Scharre, a leading expert in emerging weapons technologies, draws on incisive research and firsthand experience to explore how increasingly autonomous weapons are changing warfare.

This far-ranging investigation examines the emergence of fully autonomous weapons, the movement to ban them, and the legal and ethical issues surrounding their use. Scharre spotlights the role of artificial intelligence in military technology, spanning decades of innovation from German noise-seeking Wren torpedoes in World War II—antecedents of today’s armed drones—to autonomous cyber weapons. At the forefront of a game-changing debate, Army of None engages military history, global policy, and bleeding-edge science to explore what it would mean to give machines authority over the ultimate decision: life or death.

You can get a taste – in fact a whole tasting menu – of Paul’s arguments at Just Security here.

Paul is at the Center for a New American Security, and you can download two reports (which have been endlessly ripped by others) on Robotics on the Battlefield: Part I is Range, persistence and daring and Part II is The Coming Swarm (both 2014).

Tracking and targeting

News from Lucy Suchman of a special issue of Science, Technology and Human Values [42 (6) (2017)] on Tracking and targeting: sociotechnologies of (in)security, which she’s co-edited with Karolina Follis and Jutta Weber.

Here’s the line-up:

Lucy Suchman, Karolina Follis and Jutta Weber: Tracking and targeting

This introduction to the special issue of the same title sets out the context for a critical examination of contemporary developments in sociotechnical systems deployed in the name of security. Our focus is on technologies of tracking, with their claims to enable the identification of those who comprise legitimate targets for the use of violent force. Taking these claims as deeply problematic, we join a growing body of scholarship on the technopolitical logics that underpin an increasingly violent landscape of institutions, infrastructures, and actions, promising protection to some but arguably contributing to our collective insecurity. We examine the asymmetric distributions of sociotechnologies of (in)security; their deadly and injurious effects; and the legal, ethical, and moral questions that haunt their operations.

Karolina Follis: Visions and transterritory: the borders of Europe

This essay is about the role of visual surveillance technologies in the policing of the external borders of the European Union (EU). Based on an analysis of documents published by EU institutions and independent organizations, I argue that these technological innovations fundamentally alter the nature of national borders. I discuss how new technologies of vision are deployed to transcend the physical limits of territories. In the last twenty years, EU member states and institutions have increasingly relied on various forms of remote tracking, including the use of drones for the purposes of monitoring frontier zones. In combination with other facets of the EU border management regime (such as transnational databases and biometrics), these technologies coalesce into a system of governance that has enabled intervention into neighboring territories and territorial waters of other states to track and target migrants for interception in the “prefrontier.” For jurisdictional reasons, this practice effectively precludes the enforcement of legal human rights obligations, which European states might otherwise have with regard to these persons. This article argues that this technologically mediated expansion of vision has become a key feature of post–cold war governance of borders in Europe. The concept of transterritory is proposed to capture its effects.

Christiane Wilke: Seeing and unmaking civilians in Afghanistan: visual technologies and contested professional visions

While the distinction between civilians and combatants is fundamental to international law, it is contested and complicated in practice. How do North Atlantic Treaty Organization (NATO) officers see civilians in Afghanistan? Focusing on a 2009 air strike in Kunduz, this article argues that the professional vision of NATO officers relies not only on recent military technologies that allow for aerial surveillance, thermal imaging, and precise targeting but also on the assumptions, vocabularies, modes of attention, and hierarchies of knowledges that the officers bring to the interpretation of aerial surveillance images. Professional vision is socially situated and frequently contested within communities of practice. In the case of the Kunduz air strike, the aerial vantage point and the military visual technologies cannot fully determine what would be seen. Instead, the officers’ assumptions about Afghanistan, threats, and the gender of the civilian inform the vocabulary they use for coding people and places as civilian or noncivilian. Civilians are not simply “found”; they are produced through specific forms of professional vision.

Jon Lindsay: Target practice: Counterterrorism and the amplification of data friction

The nineteenth-century strategist Carl von Clausewitz describes “fog” and “friction” as fundamental features of war. Military leverage of sophisticated information technology in the twenty-first century has improved some tactical operations but has not lifted the fog of war, in part, because the means for reducing uncertainty create new forms of it. Drawing on active duty experience with an American special operations task force in Western Iraq from 2007 to 2008, this article traces the targeting processes used to “find, fix, and finish” alleged insurgents. In this case they did not clarify the political reality of Anbar province but rather reinforced a parochial worldview informed by the Naval Special Warfare community. The unit focused on the performance of “direct action” raids during a period in which “indirect action” engagement with the local population was arguably more appropriate for the strategic circumstances. The concept of “data friction”, therefore, can be understood not simply as a form of resistance within a sociotechnical system but also as a form of traction that enables practitioners to construct representations of the world that amplify their own biases.

M.C. Elish: Remote split: a history of US drone operations and the distributed labour of war

This article analyzes US drone operations through a historical and ethnographic analysis of the remote split paradigm used by the US Air Force. Remote split refers to the globally distributed command and control of drone operations and entails a network of human operators and analysts in the Middle East, Europe, and Southeast Asia as well as in the continental United States. Though often viewed as a teleological progression of “unmanned” warfare, this paper argues that historically specific technopolitical logics establish the conditions of possibility for the work of war to be divisible into discrete and computationally mediated tasks that are viewed as effective in US military engagements. To do so, the article traces how new forms of authorized evidence and expertise have shaped developments in military operations and command and control priorities from the Cold War and the “electronic battlefield” of Vietnam through the Gulf War and the conflict in the Balkans to contemporary deployments of drone operations. The article concludes by suggesting that it is by paying attention to divisions of labor and human–machine configurations that we can begin to understand the everyday and often invisible structures that sustain perpetual war as a military strategy of the United States.

I’ve discussed Christiane’s excellent article in detail before, but the whole issue repays careful reading.

And if you’re curious about the map that heads this post, it’s based on the National Security Agency’s Strategic Mission List (dated 2007 and published in the New York Times on 2 November 2013), and mapped at Electrospaces: full details here.