Googled

Following up my post on Google and Project Maven here, there’s an open letter (via the International Committee for Robot Arms Control) in support of Google employees opposed to the tech giant’s participation in Project Maven here: it’s open for (many) more signatures…

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long-range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations have also raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power. This has shone only one spotlight on the increasingly high stakes of information technology infrastructures, and the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.
We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.

  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.

  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

Google eyes

The Oxford English Dictionary recognised ‘google’ as a verb in 2006, and its active form is about to gain another dimension. One of the most persistent anxieties amongst those executing remote warfare, with its extraordinary dependence on (and capacity for) real-time full motion video surveillance as an integral moment of the targeting cycle, has been the ever-present risk of ‘swimming in sensors and drowning in data’.

But now Kate Conger and Dell Cameron report for Gizmodo on a new collaboration between Google and the Pentagon as part of Project Maven:

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.

Google has reportedly attempted to allay fears about its involvement:

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
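It may help non-specialist readers to see how little code such a pipeline can involve. Below is a minimal, illustrative sketch – not Google’s actual Maven system – using a publicly available TensorFlow Hub detector to ‘flag images for human review’; the model handle and the confidence threshold are assumptions chosen purely for illustration:

```python
# A minimal sketch of 'object recognition that flags images for human review'.
# Illustrative only: the model handle and threshold are assumptions, and this
# is in no way Google's actual Project Maven pipeline.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# A publicly available TF2 detection model with the standard detector
# signature (uint8 image batch in, dict of detections out).
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

def flag_for_review(frame: np.ndarray, threshold: float = 0.5) -> bool:
    """Return True if any detection in an HxWx3 uint8 frame clears `threshold`."""
    batch = tf.convert_to_tensor(frame[np.newaxis, ...], dtype=tf.uint8)
    outputs = detector(batch)
    scores = outputs["detection_scores"].numpy()[0]
    return bool((scores >= threshold).any())
```

Which is the letter-writers’ point: the step from ‘flags images for human review’ to automated targeting is a matter of who (or what) consumes that boolean, not of any further technical breakthrough.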


As Mehreen Kasana notes, Google has indeed ‘long worked with government agencies’:

A 2017 report in Quartz shed light on the origins of Google and how a significant amount of funding for the company came from the CIA and NSA for mass surveillance purposes. Time and again, Google’s funding raises questions. In 2013, a Guardian report highlighted Google’s acquisition of the robotics company Boston Dynamics, and noted that most of the projects were funded by the Defense Advanced Research Projects Agency (DARPA).

Drone Imaginaries and Society

News from Kathrin Maurer of a conference on Drone Imaginaries and Society, 5-6 June 2018, at the University of Southern Denmark (Odense).  I’ll be giving a keynote, and look forward to the whole event very much (and not only because, while I’ve visited Denmark lots of times and loved every one of them, I’ve never made it to Odense):

Drones are in the air. The production of civilian drones for rescue, transport, and leisure activity is booming. The Danish government, for example, proclaimed civilian drones a national strategy in 2016. Accordingly, many research institutions as well as the industry focus on the development, usage, and promotion of drone technology. These efforts often prioritize commercialization and engineering, as well as setting up UAV (Unmanned Aerial Vehicle) test centers. As a result, urgent questions about how drone technology impacts our identity as humans, and about its effects on how we envision human society, are frequently underexposed in these initiatives.

Our conference aims to change this perspective. By investigating cultural representations of civilian and military drones in visual arts, film, and literature, we intend to shed light on drone technology from a humanities point of view. This aesthetic “drone imaginary” forms not only the empirical material of our discussions but also a prism of knowledge which provides new insights into the meaning of drone technology for society today.

Several artists, authors, filmmakers, and thinkers have already engaged with this drone imaginary. While some of these inquiries provide critical reflection on contemporary and future drone technologies – for instance, issues such as privacy, surveillance, automation, and security – others allow for alternative ways of seeing and communicating, as well as creative re-imagination of new ways of organizing human communities. The goal of the conference is to bring together these different aesthetic imaginaries to better understand the role of drone technologies in contemporary and future societies.

 The focus points of the conference are:

–     Aesthetic drone imaginaries: Which images, metaphors, ethics, emotions and affects are associated with drones through their representation in art, fiction and popular culture?

–     Drone technology and its implications for society: How do drones change our daily routines, and how do they shift the balance between the public and the private?

–     Historical perspective on drones: In what way do drone imaginaries allow for a counter-memory that can challenge, for instance, the military implementation of drones?

–     Drones as vulnerability: Do drones make societies more resilient or more fragile, and are societies getting overly dependent on advanced technologies?

–     Utopian or dystopian drone imaginaries: What dream or nightmare scenarios are provided by drone fiction, and how do they allow for a (re)imagining of future societies?

–     Drones and remote sensing: In what way do drones mark a radical new way of seeing and sensing by their remotely vertical gaze and operative images?

–     Drone warfare: Do drones mark a continuation or rupture of the way we understand war and conflict, and how do they change the military imaginary?

The conference is sponsored by the Drone Network (Danish Research Council) and Institute for the Study of Culture at the University of Southern Denmark.

You can contact Kathrin at kamau@sdu.dk.

The conference website is here.

Army of None

Coming next spring from Norton – and with a very clever title – Paul Scharre’s Army of None: Autonomous weapons and the future of war.

A Pentagon defense expert and former U.S. Army Ranger traces the emergence of autonomous weapons.

What happens when a Predator drone has as much autonomy as a Google car? Although it sounds like science fiction, the technology to create weapons that could hunt and destroy targets on their own already exists. Paul Scharre, a leading expert in emerging weapons technologies, draws on incisive research and firsthand experience to explore how increasingly autonomous weapons are changing warfare.

This far-ranging investigation examines the emergence of fully autonomous weapons, the movement to ban them, and the legal and ethical issues surrounding their use. Scharre spotlights the role of artificial intelligence in military technology, spanning decades of innovation from German noise-seeking Wren torpedoes in World War II—antecedents of today’s armed drones—to autonomous cyber weapons. At the forefront of a game-changing debate, Army of None engages military history, global policy, and bleeding-edge science to explore what it would mean to give machines authority over the ultimate decision: life or death.

You can get a taste – in fact a whole tasting menu – of Paul’s arguments at Just Security here.

Paul is at the Center for a New American Security, and you can download two reports (which have been endlessly ripped by others) on Robotics on the Battlefield: Part I is Range, persistence and daring and Part II is The Coming Swarm (both 2014).

Tracking and targeting

News from Lucy Suchman of a special issue of Science, Technology, & Human Values [42 (6) (2017)] on Tracking and targeting: sociotechnologies of (in)security, which she’s co-edited with Karolina Follis and Jutta Weber.

Here’s the line-up:

Lucy Suchman, Karolina Follis and Jutta Weber: Tracking and targeting

This introduction to the special issue of the same title sets out the context for a critical examination of contemporary developments in sociotechnical systems deployed in the name of security. Our focus is on technologies of tracking, with their claims to enable the identification of those who comprise legitimate targets for the use of violent force. Taking these claims as deeply problematic, we join a growing body of scholarship on the technopolitical logics that underpin an increasingly violent landscape of institutions, infrastructures, and actions, promising protection to some but arguably contributing to our collective insecurity. We examine the asymmetric distributions of sociotechnologies of (in)security; their deadly and injurious effects; and the legal, ethical, and moral questions that haunt their operations.

Karolina Follis: Visions and transterritory: the borders of Europe

This essay is about the role of visual surveillance technologies in the policing of the external borders of the European Union (EU). Based on an analysis of documents published by EU institutions and independent organizations, I argue that these technological innovations fundamentally alter the nature of national borders. I discuss how new technologies of vision are deployed to transcend the physical limits of territories. In the last twenty years, EU member states and institutions have increasingly relied on various forms of remote tracking, including the use of drones for the purposes of monitoring frontier zones. In combination with other facets of the EU border management regime (such as transnational databases and biometrics), these technologies coalesce into a system of governance that has enabled intervention into neighboring territories and territorial waters of other states to track and target migrants for interception in the “prefrontier.” For jurisdictional reasons, this practice effectively precludes the enforcement of legal human rights obligations, which European states might otherwise have with regard to these persons. This article argues that this technologically mediated expansion of vision has become a key feature of post–cold war governance of borders in Europe. The concept of transterritory is proposed to capture its effects.

Christiane Wilke: Seeing and unmaking civilians in Afghanistan: visual technologies and contested professional visions

While the distinction between civilians and combatants is fundamental to international law, it is contested and complicated in practice. How do North Atlantic Treaty Organization (NATO) officers see civilians in Afghanistan? Focusing on a 2009 air strike in Kunduz, this article argues that the professional vision of NATO officers relies not only on recent military technologies that allow for aerial surveillance, thermal imaging, and precise targeting but also on the assumptions, vocabularies, modes of attention, and hierarchies of knowledges that the officers bring to the interpretation of aerial surveillance images. Professional vision is socially situated and frequently contested within communities of practice. In the case of the Kunduz air strike, the aerial vantage point and the military visual technologies could not fully determine what would be seen. Instead, the officers’ assumptions about Afghanistan, threats, and the gender of the civilian inform the vocabulary they use for coding people and places as civilian or noncivilian. Civilians are not simply “found”; they are produced through specific forms of professional vision.

Jon Lindsay: Target practice: Counterterrorism and the amplification of data friction

The nineteenth-century strategist Carl von Clausewitz describes “fog” and “friction” as fundamental features of war. Military leverage of sophisticated information technology in the twenty-first century has improved some tactical operations but has not lifted the fog of war, in part, because the means for reducing uncertainty create new forms of it. Drawing on active duty experience with an American special operations task force in Western Iraq from 2007 to 2008, this article traces the targeting processes used to “find, fix, and finish” alleged insurgents. In this case they did not clarify the political reality of Anbar province but rather reinforced a parochial worldview informed by the Naval Special Warfare community. The unit focused on the performance of “direct action” raids during a period in which “indirect action” engagement with the local population was arguably more appropriate for the strategic circumstances. The concept of “data friction”, therefore, can be understood not simply as a form of resistance within a sociotechnical system but also as a form of traction that enables practitioners to construct representations of the world that amplify their own biases.

M.C. Elish: Remote split: a history of US drone operations and the distributed labour of war

This article analyzes US drone operations through a historical and ethnographic analysis of the remote split paradigm used by the US Air Force. Remote split refers to the globally distributed command and control of drone operations and entails a network of human operators and analysts in the Middle East, Europe, and Southeast Asia as well as in the continental United States. Though often viewed as a teleological progression of “unmanned” warfare, this paper argues that historically specific technopolitical logics establish the conditions of possibility for the work of war to be divisible into discrete and computationally mediated tasks that are viewed as effective in US military engagements. To do so, the article traces how new forms of authorized evidence and expertise have shaped developments in military operations and command and control priorities from the Cold War and the “electronic battlefield” of Vietnam through the Gulf War and the conflict in the Balkans to contemporary deployments of drone operations. The article concludes by suggesting that it is by paying attention to divisions of labor and human–machine configurations that we can begin to understand the everyday and often invisible structures that sustain perpetual war as a military strategy of the United States.

I’ve discussed Christiane’s excellent article in detail before, but the whole issue repays careful reading.

And if you’re curious about the map that heads this post, it’s based on the National Security Agency’s Strategic Mission List (dated 2007 and published in the New York Times on 2 November 2013), and mapped at Electrospaces: full details here.

The evolution of warfare


The latest issue of the International Review of the Red Cross (open access here) focuses on the evolution of warfare:

To mark the 100th anniversary of the First World War, the Review asked historians, legal scholars and humanitarian practitioners to look back at the wars of the past century from a humanitarian point of view. In using what we know of the past to illuminate the present and the future, this issue of the Review adopts a long-term perspective, with the aim to illustrate the changing face of conflict by placing human suffering ‒ so often relegated to the backdrop of history ‒ front and center. It focuses on WWI and the period immediately leading up to it as a turning point in the history of armed conflict, drawing important parallels between the past and the changes we are witnessing today.

Among the highlights: an interview with Richard Overy on the history of bombing; Eric Germain, ‘Out of sight, out of reach: Moral issues in the globalization of the battlefield’; Lindsey Cameron, ‘The ICRC in the First World War: Unwavering belief in the power of law?’; Rain Liivoja, ‘Technological change and the evolution of the law of war’; Claudia McGoldrick, ‘The state of conflicts today: Can humanitarian action adapt?’; and Anna Di Lellio and Emanuele Castano, ‘The danger of “new norms” and the continuing relevance of IHL in the post-9/11 era’.

Incidentally, there may be something Darwinian about the trajectory of modern war – but I’m not sure that ‘evolution’ is exactly the right word…

Seeing machines


[Image: cover of Steve Graham’s Drone: Robot Imperium]

The Transnational Institute has published a glossy version of a chapter from Steve Graham’s Vertical – called Drone: Robot Imperium – which you can download here (open access). Not sure about either of the terms in the subtitle, but it’s a good read and richly illustrated.

Steve includes a discussion of the use of drones to patrol the US-Mexico border, and Josh Begley has published a suggestive account of the role of drones but also other ‘seeing machines’ in visualizing the border.

One way the border is performed — particularly the southern border of the United States — can be understood through the lens of data collection. In the border region, along the Rio Grande and westward through the desert Southwest, Customs and Border Protection (CBP) deploys radar blimps, drones, fixed-wing aircraft, helicopters, seismic sensors, ground radar, face recognition software, license-plate readers, and high-definition infrared video cameras. Increasingly, they all feed data back into something called “The Big Pipe.”

Josh downloaded 20,000 satellite images of the border, stitched them together, and then worked with Laura Poitras and her team at Field of Vision to produce a short film – Best of Luck with the Wall – that traverses the entire length of the border (1,954 miles) in six minutes:

The southern border is a space that has been almost entirely reduced to metaphor. It is not even a geography. Part of my intention with this film is to insist on that geography.

By focusing on the physical landscape, I hope viewers might gain a sense of the enormity of it all, and perhaps imagine what it would mean to be a political subject of that terrain.
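As a technical aside, the stitching step Josh describes is conceptually simple. Here is a toy sketch with Pillow, assuming equal-sized tiles already downloaded and named so that their sort order follows the border; the directory and file names are hypothetical, and this is not Begley’s actual workflow:

```python
# Toy sketch: paste equal-sized satellite tiles into one long horizontal strip.
# The directory name and tile ordering are hypothetical assumptions.
from PIL import Image

def stitch_tiles(paths: list[str]) -> Image.Image:
    """Join same-sized tiles left to right into a single strip."""
    tiles = [Image.open(p) for p in paths]
    width, height = tiles[0].size
    strip = Image.new("RGB", (width * len(tiles), height))
    for i, tile in enumerate(tiles):
        strip.paste(tile, (i * width, 0))
    return strip

# Example (hypothetical paths):
# import glob
# strip = stitch_tiles(sorted(glob.glob("border_tiles/*.png")))
```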

[Image: Josh Begley, Fatal Migrations, 2011-2016]

If you too wonder about the last sentence of Josh’s statement and its latent bio-physicality – and there is of course a rich stream of work on the bodies that seek to cross that border – then you might visit another of his projects, Fatal Migrations, 2011-2016 (see above and below).

[Image: Josh Begley, Fatal Migrations, 2011-2016]

There’s an interview with Josh that, among other things, links these projects with his previous work.

I have a couple of projects that are smartphone centered. One of them is about mapping the geography of places around the world where the CIA carries out drone strikes—mostly in Pakistan, Yemen, and Somalia. Another was about looking at the geography of incarceration in the United States—there are more than 5,000 prisons—and trying to map all of them and see them through satellites. I currently have an app that is looking at the geography of police violence in the United States. Most of these apps are about creating a relationship between data and the body, where you can receive a notification every time something unsettling happens. What does that mean for the rest of your day? How do you live with that data—data about people? In some cases the work grows out of these questions, but in other cases the work really is about landscape….

There’s just so much you can never know from looking at satellite imagery. By definition it flattens and distorts things. A lot of folks who fly drones, for instance, think they know a space just from looking at it from above. I firmly reject that idea. The bird’s eye view is never what it means to be on the ground somewhere, or what it means to have meaningful relationships with people on the ground. I feel like I can understand the landscape from 30,000 feet, but it is not the same as spending time in a space.

Anjali Nath has also provided a new commentary on one of Josh’s earlier projects, Metadata, that he cites in that interview – ‘Touched from below: on drones, screens and navigation’, Visual Anthropology 29 (3) (2016) 315-30.

It’s part of a special issue on ‘Visual Revolutions in the Middle East’, and as I explore the visual interventions I’ve included in this post I find myself once again thinking of a vital remark by Edward Said:

[Image: Edward Said: “we are also looking at our observers”]

That’s part of the message behind the #NotaBugSplat image on the cover of Steve’s essay: but what might Said’s remark mean more generally today, faced with the proliferation of these seeing machines?