Google eyes

The Oxford English Dictionary recognised ‘google’ as a verb in 2006, and its active form is about to gain another dimension.  One of the most persistent anxieties amongst those executing remote warfare, with its extraordinary dependence on (and capacity for) real-time full-motion video surveillance as an integral moment of the targeting cycle, has been the ever-present risk of ‘swimming in sensors and drowning in data’.

But now Kate Conger and Dell Cameron report for Gizmodo on a new collaboration between Google and the Pentagon as part of Project Maven:

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.

Google has reportedly attempted to allay fears about its involvement:

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”
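The workflow the spokesperson describes — a model nominates objects in footage, and flagged frames are queued for a human analyst to confirm — can be sketched in outline. This is a hypothetical illustration of that review-flagging logic, not Maven’s actual code: the category names, confidence threshold, and data structures here are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical detection record: the kind of output an object-recognition
# model (e.g. one built on the open-source TensorFlow APIs mentioned above)
# might emit for a single frame of full-motion video.
@dataclass
class Detection:
    frame_id: int
    category: str      # one of the ~38 object categories cited by the Pentagon
    confidence: float  # model confidence, in [0, 1]

def flag_for_review(detections, threshold=0.6):
    """Collect frames whose detections exceed the confidence threshold.

    The model never 'decides' anything on its own: it only nominates
    frames, which are then queued for a human analyst to accept or reject.
    """
    queue = {}
    for d in detections:
        if d.confidence >= threshold:
            queue.setdefault(d.frame_id, []).append(d.category)
    return queue

detections = [
    Detection(frame_id=1, category="vehicle", confidence=0.91),
    Detection(frame_id=1, category="building", confidence=0.42),
    Detection(frame_id=2, category="vehicle", confidence=0.35),
    Detection(frame_id=3, category="person", confidence=0.77),
]

# Frames 1 and 3 are flagged for human review; frame 2 falls below threshold.
review_queue = flag_for_review(detections)
```

The point of the sketch is where the ‘human review’ sits: it is a filter bolted on after automated detection, which is precisely why critics question how much of a safeguard it can be.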


As Mehreen Kasana notes, Google has indeed ‘long worked with government agencies’:

A 2017 report in Quartz shed light on the origins of Google and how a significant amount of funding for the company came from the CIA and NSA for mass surveillance purposes. Time and again, Google’s funding raises questions. In 2013, a Guardian report highlighted Google’s acquisition of the robotics company Boston Dynamics, and noted that most of the projects were funded by the Defense Advanced Research Projects Agency (DARPA).

Tracking and targeting

News from Lucy Suchman of a special issue of Science, Technology, & Human Values [42 (6) (2017)] on Tracking and targeting: sociotechnologies of (in)security, which she’s co-edited with Karolina Follis and Jutta Weber.

Here’s the line-up:

Lucy Suchman, Karolina Follis and Jutta Weber: Tracking and targeting

This introduction to the special issue of the same title sets out the context for a critical examination of contemporary developments in sociotechnical systems deployed in the name of security. Our focus is on technologies of tracking, with their claims to enable the identification of those who comprise legitimate targets for the use of violent force. Taking these claims as deeply problematic, we join a growing body of scholarship on the technopolitical logics that underpin an increasingly violent landscape of institutions, infrastructures, and actions, promising protection to some but arguably contributing to our collective insecurity. We examine the asymmetric distributions of sociotechnologies of (in)security; their deadly and injurious effects; and the legal, ethical, and moral questions that haunt their operations.

Karolina Follis: Visions and transterritory: the borders of Europe

This essay is about the role of visual surveillance technologies in the policing of the external borders of the European Union (EU). Based on an analysis of documents published by EU institutions and independent organizations, I argue that these technological innovations fundamentally alter the nature of national borders. I discuss how new technologies of vision are deployed to transcend the physical limits of territories. In the last twenty years, EU member states and institutions have increasingly relied on various forms of remote tracking, including the use of drones for the purposes of monitoring frontier zones. In combination with other facets of the EU border management regime (such as transnational databases and biometrics), these technologies coalesce into a system of governance that has enabled intervention into neighboring territories and territorial waters of other states to track and target migrants for interception in the “prefrontier.” For jurisdictional reasons, this practice effectively precludes the enforcement of legal human rights obligations, which European states might otherwise have with regard to these persons. This article argues that this technologically mediated expansion of vision has become a key feature of post–cold war governance of borders in Europe. The concept of transterritory is proposed to capture its effects.

Christiane Wilke: Seeing and unmaking civilians in Afghanistan: visual technologies and contested professional visions

While the distinction between civilians and combatants is fundamental to international law, it is contested and complicated in practice. How do North Atlantic Treaty Organization (NATO) officers see civilians in Afghanistan? Focusing on the 2009 air strike in Kunduz, this article argues that the professional vision of NATO officers relies not only on recent military technologies that allow for aerial surveillance, thermal imaging, and precise targeting but also on the assumptions, vocabularies, modes of attention, and hierarchies of knowledges that the officers bring to the interpretation of aerial surveillance images. Professional vision is socially situated and frequently contested within communities of practice. In the case of the Kunduz air strike, the aerial vantage point and the military visual technologies cannot fully determine what would be seen. Instead, the officers’ assumptions about Afghanistan, threats, and the gender of the civilian inform the vocabulary they use for coding people and places as civilian or noncivilian. Civilians are not simply “found,” they are produced through specific forms of professional vision.

Jon Lindsay: Target practice: Counterterrorism and the amplification of data friction

The nineteenth-century strategist Carl von Clausewitz describes “fog” and “friction” as fundamental features of war. Military leverage of sophisticated information technology in the twenty-first century has improved some tactical operations but has not lifted the fog of war, in part, because the means for reducing uncertainty create new forms of it. Drawing on active duty experience with an American special operations task force in Western Iraq from 2007 to 2008, this article traces the targeting processes used to “find, fix, and finish” alleged insurgents. In this case they did not clarify the political reality of Anbar province but rather reinforced a parochial worldview informed by the Naval Special Warfare community. The unit focused on the performance of “direct action” raids during a period in which “indirect action” engagement with the local population was arguably more appropriate for the strategic circumstances. The concept of “data friction”, therefore, can be understood not simply as a form of resistance within a sociotechnical system but also as a form of traction that enables practitioners to construct representations of the world that amplify their own biases.

M.C. Elish: Remote split: a history of US drone operations and the distributed labour of war

This article analyzes US drone operations through a historical and ethnographic analysis of the remote split paradigm used by the US Air Force. Remote split refers to the globally distributed command and control of drone operations and entails a network of human operators and analysts in the Middle East, Europe, and Southeast Asia as well as in the continental United States. Though often viewed as a teleological progression of “unmanned” warfare, this paper argues that historically specific technopolitical logics establish the conditions of possibility for the work of war to be divisible into discrete and computationally mediated tasks that are viewed as effective in US military engagements. To do so, the article traces how new forms of authorized evidence and expertise have shaped developments in military operations and command and control priorities from the Cold War and the “electronic battlefield” of Vietnam through the Gulf War and the conflict in the Balkans to contemporary deployments of drone operations. The article concludes by suggesting that it is by paying attention to divisions of labor and human–machine configurations that we can begin to understand the everyday and often invisible structures that sustain perpetual war as a military strategy of the United States.

I’ve discussed Christiane’s excellent article in detail before, but the whole issue repays careful reading.

And if you’re curious about the map that heads this post, it’s based on the National Security Agency’s Strategic Mission List (dated 2007 and published in the New York Times on 2 November 2013), and mapped at Electrospaces: full details here.

‘By our algorithms we shall know them’


Radical Philosophy 191 is out now, including two contributions of particular interest to me as I continue to grapple with the surveillance apparatus that (mis)informs US drone strikes in the Federally Administered Tribal Areas.  This has come into sharper view after Obama’s rare admission of not only a strike in the FATA but of a mistake in targeting – though his statement was prompted by the deaths of an American and an Italian hostage, not by the previous deaths of innocent Pakistanis.

First, Grégoire Chamayou‘s ‘Oceanic enemy: a brief philosophical history of the NSA‘ which traces a path from the sonic surveillance of submarines off Barbados in 1962 to ‘pattern of life’ analysis in Afghanistan, Iraq and Pakistan, and which – not surprisingly – intersects with his Theory of the drone in all sorts of ways:

‘The premiss is the same as before: ‘in environments where there is no visual difference between friend and enemy, it is by their actions that enemies are visible.’ Today the task of establishing a distinction between friend and enemy is once again to be entrusted to algorithms.’

Second, Claudia Aradau‘s ‘The signature of security: big data, anticipation, surveillance‘ shatters the crystal balls of the intelligence agencies:

‘We are not crystal ball gazers. We are Intelligence Agencies’, noted the former GCHQ director Iain Lobban in a public inquiry on privacy and security by the Intelligence and Security Committee of the UK Parliament (ISC) in the wake of the Snowden revelations about mass surveillance….

I argue here that the disavowal of ‘crystal ball gazing’ is as important as the image of finding the clue through the data deluge in order to locate potential dangerous events or individuals in the future. Intelligence work is no stranger to the anticipation of the future – rather, it justifies itself precisely through the capacity to peer into the future in order to prevent or pre-empt future events from materializing. Big data has intensified the promise of anticipating the future and led to ‘exacerbat[ing] the severance of surveillance from history and memory’, while ‘the assiduous quest for pattern-discovery will justify unprecedented access to data’. ‘Knowledge discovery’ through big-data mining, and prediction through the recording of datafied traces of social life, have become the doxa of intelligence and security professionals. They claim that access to the digital traces that we leave online through commercial transactions or social interactions can hold a reading of the future. They repeat the mantra of data scientists and private corporations that the ‘digital bread crumbs’ of the online world ‘give a view of life in all its complexity’ and ‘will revolutionize the study of human behaviour’.

Unlike statistical technologies of governing populations, big data scientists promise that through big data ‘we can escape the straightjacket of group identities, and replace them with more granular predictions for each individual’. To resist their unreasonable promise of predicting crises, preventing diseases, pre-empting terrorist attacks and overall reshaping society and politics, I recast it as divination rather than detection. Big-data epistemics has more in common with the ‘pseudo-rationality’ of astrology than the method of clues. As such, it renders our vocabularies of epistemic critique inoperative…

‘There is nothing irrational about astrology’, concluded Adorno, ‘except its decisive contention that these two spheres of rational knowledge are interconnected, whereas not the slightest evidence of such an interconnection can be offered.’ The irrationality of big-data security is not in the data, its volume or messiness, but in how a hieroglyph of terrorist behaviour is produced from the data, without any possibility of error.

You can obtain the pdfs of both essays by following the links above – but they are time-limited so do it now.

Security archipelagos

Three short contributions that have caught my eye raise a series of interesting questions about contemporary ‘security archipelagos’ (in multiple senses of the term, hence the plural).

The term itself comes from Paul Amar, and Austin Zeiderman has a short but interesting review of his The Security Archipelago: Human-Security States, Sexuality Politics, and the End of Neoliberalism (Duke, 2013) over at Public Books (Public Culture‘s public site):

‘Amar asserts that we need an analytical framework focused on the rise of human security—a governance regime that “aim[s] to protect, rescue, and secure certain idealized forms of humanity.” This new regime is gradually replacing neoliberalism, Amar contends, “as the hegemonic project of global governance and of state administration.” This shift is evident in how security is now justified and pursued by states. The antagonistic relationship between security and human rights that characterized the “neoliberal market states” of the late 20th century is no longer so evident. The repressive security strategies that underpinned earlier development paradigms have been succeeded by the “promise to reconcile human rights and national security interests” in the interest of economic prosperity. Progressive and conservative security doctrines now agree on the imperative to “humanize” (or “humanitarianize”) both state and parastatal security apparatuses. The result, Amar argues, is what he calls the “human-security state”: a globally emergent governance regime with “consistent character and political profile.” From Latin America to the Middle East, political legitimacy is increasingly based on securing humanity against a range of malicious forces….

If the megacities of the Global South are indeed “laboratories” in which new logics and techniques of global governance are being created, it is up to other researchers to fill out and develop further Amar’s concept of the “security archipelago.” Though his study provides both the theoretical rationale and the analytical tools with which to do so, it may be worth questioning whether the “human” is necessarily central to emerging security regimes. For along with human security apparatuses and the human actors struggling to articulate progressive alternatives, a host of non-humans—drones, border fences, hurricanes—are actively producing the security landscape of the future.’

Secondly, I’ve been thinking about the ways in which the work of these ‘laboratories’ often relies on non-state, which is to say corporate, commercial sites (this isn’t news to Paul, of course, even if he wants to challenge our ideas about neoliberalism).  We surely know that the traditional concept of the military-industrial complex now needs wholesale revision, and I’ve noted before the timely and important essay by Jeremy Crampton, Sue Roberts and Ate Poorthuis on ‘The new political economy of geospatial intelligence‘ in the Annals of the Association of American Geographers 104 (1)  (2014) (to which I plan to return in a later post).  The latest MIT Technology Review has a short but suggestive essay by Antonio Regalado, ‘Spinoffs from Spyland’, which describes some of the pathways through which the National Security Agency commercializes (and thus potentially subcontracts and, in some cases, even subverts) its surveillance technology:

In 2011, the NSA released 200,000 lines of code to the Apache Foundation. When Atlas Venture’s Lynch read about that, he jumped—here was a technology already developed, proven to work on tens of terabytes of data, and with security features sorely needed by heavily regulated health-care and banking customers. When Fuchs’s NSA team got cold feet about leaving, says Lynch, “I said ‘Either you do it, or I’ll find five kids from MIT to do it and they’ll steal your thunder.’”

Eventually, Fuchs and several others left the NSA, and now their company [Sqrrl] is part of a land grab in big data, where several companies, like Splunk, Palantir, and Cloudera, have quickly become worth a billion dollars or more.

Over the summer, when debate broke out over NSA surveillance of Americans and others, Sqrrl tried to keep a low profile. But since then, it has found that its connection to the $10-billion-a-year spy agency is a boost, says Ely Kahn, Sqrrl’s head of business development and a cofounder. “Large companies want enterprise-scale technology. They want the same technology the NSA has,” he says.


And finally, before we rush to radicalise and globalise Foucault’s critique of the Panopticon, it’s worth reading my friend Gaston Gordillo‘s cautionary note – prompted by the search for missing Malaysian Airlines Flight 370 – on ‘The Opaque Planet’:

The fascination with, and fetishization of, technologies of global location and surveillance often makes us forget that, for all their sophistication, we live on a planet riddled with opaque zones that will always erode the power of human-made systems of orientation, for the simple fact that no such system (contrary to what the NSA seems to believe) will ever manage to create an all-seeing God. This opacity is intrinsic to the textured, three-dimensional materiality of the surface of the planet, and is especially marked in the liquid vastness of the ocean.


Phil Steinberg has already commented on the geopolitics of the search, but Gaston draws our attention to the gaps in the surveillance capabilities of states, and here too the geopolitical meshes with (and sometimes jibes against) the geoeconomic, as described in this report from Reuters:

Analysts say the gaps in Southeast Asia’s air defenses are likely to be mirrored in other parts of the developing world, and may be much greater in areas with considerably lower geopolitical tensions.

“Several nations will be embarrassed by how easy it is to trespass their airspace,” said Air Vice Marshal Michael Harwood, a retired British Royal Air Force pilot and ex-defense attache to Washington DC. “Too many movies and Predator (unmanned military drone) feeds from Afghanistan have suckered people into thinking we know everything and see everything. You get what you pay for. And the world, by and large, does not pay.”

Black spots and blank spots

Over at Guernica, Trevor Paglen has a short essay on the rise of what he calls ‘the terror state’ that connects the dots between several recent posts:

For more than a decade, we’ve seen the rise of what we might call a “Terror State,” of which the NSA’s surveillance capabilities represent just one part. Its rise occurs at a historical moment when state agencies and programs designed to enable social mobility, provide economic security and enhance civic life have been targeted for significant cuts. The last three decades, in fact, have seen serious and consistent attacks on social security, food assistance programs, unemployment benefits and education and health programs. As the social safety net has shrunk, the prison system has grown. The United States now imprisons its own citizens at a higher rate than any other country in the world.

While civic parts of the state have been in retreat, institutions of the Terror State have grown dramatically. In the name of an amorphous and never-ending “war on terror,” the Department of Homeland Security was created, while institutions such as the CIA, FBI and NSA, and darker parts of the military like the Joint Special Operations Command (JSOC) have expanded considerably in size and political influence. The world has become a battlefield—a stage for extralegal renditions, indefinite detentions without trial, drone assassination programs and cyberwarfare. We have entered an era of secret laws, classified interpretations of laws and the retroactive “legalization” of classified programs that were clearly illegal when they began. Funding for the secret parts of the state comes from a “black budget” hidden from Congress—not to mention the people—that now tops $100 billion annually. Finally, to ensure that only government-approved “leaks” appear in the media, the Terror State has waged an unprecedented war on whistleblowers, leakers and journalists. All of these state programs and capacities would have been considered aberrant only a short time ago. Now, they are the norm.

This ought to be depressingly familiar stuff, though it is important to connect those dots.  I highlight Trevor’s argument here (which radiates far beyond the paragraphs I’ve extracted above) for two reasons.

First, the practices that Trevor disentangles work through distinctively different geographies, at once material and virtual. Trevor’s own work addresses different dimensions of what he’s also called the Blank Spots on the Map – here definitely be dragons! though there’s a delicious irony in the US finding Edward Snowden’s whereabouts (at least this morning) to be one of them. There’s some small comfort to be had in the raging impotence of the state apparatus, which is evidently neither all-seeing nor all-knowing.  As part of his project, Trevor has done much to bring into (sometimes long-distance) focus the prying eyes of the ‘terror state’ – see for example here – but I’m particularly interested in the differential modalities of ‘watching’ and ‘acting’.  The US Air Force has become preoccupied with the predicament of ‘swimming in sensors, drowning in data‘, for example, which makes it exceptionally difficult to convert its enhanced capacity for intelligence, surveillance and reconnaissance into focused strikes and, as I noted earlier, this is only one version of a wider divergence outlined by Peter Scheer:

The logic of warfare and intelligence have flipped, each becoming the mirror image of the other. Warfare has shifted from the scaling of military operations to the selective targeting of individual enemies. Intelligence gathering has shifted from the selective targeting of known threats to wholesale data mining for the purpose of finding hidden threats.

The resulting paradigms, in turn, go a long way to account for our collective discomfort with the government’s activities in these areas. Americans are understandably distressed over the targeted killing of suspected terrorists because the very individualized nature of the drone attacks converts acts of war into de facto executions — and that in turn gives rise to demands for high standards of proof and adjudicative due process.

Similarly, intelligence activities that gather data widely, without fact-based suspicions about specific individuals to whom the data pertain, are seen as intrusive and subject to abuse.

This is an interesting suggestion, a simple schematic to think with, and at present I’m working through its implications (and complications) for other dimensions of later modern war – specifically the geographies of cyberwarfare that I briefly outlined in my early essay on ‘The everywhere war’ (DOWNLOADS tab).  So for the book I’m splicing cyberwarfare into the now explosive debate over surveillance in cyberspace, and the transformation of James Gibson‘s Fordist version of ‘Technowar’ into its post-Fordist incarnation.  In a report for Vanity Fair Michael Joseph Gross calls cyberwarfare ‘silent war’ and ‘war you cannot see’, and yet it too (as Trevor’s work implies) is material as well as virtual, not only in its consequences but also in its very architecture: see, for example, here and here (and the wonderful graphic that accompanies the report).  So, with patience, skill and effort, it can indeed be seen.  And, contrary to Thomas Rid‘s Cyber war will not take place (2013), there is a crucial sense – one which my dear friend Allan Pred constantly emphasised – in which these capacities and activities do indeed take place… More soon.

There’s a second reason for noting Trevor’s essay (he was, not incidentally, a student of Allan’s): it originates from Creative Time Reports edited by Marisa Mazria Katz:

Creative Time Reports strives to be a global leader in publishing the unflinching and provocative perspectives of artists on the most challenging issues of our times. We distribute this content to the public and media free of charge.

Asserting that culture and the free exchange of ideas are at the core of a vibrant democracy, Creative Time Reports aims to publish dispatches that speak truth to power and upend traditional takes on current issues. We believe that artists play a crucial role as thought leaders in society, and are uniquely capable of inspiring and encouraging a more engaged and informed public, whether they are addressing elections or climate change, censorship or immigration, protest movements or politically motivated violence.

In an era of unprecedented interconnectedness, Creative Time Reports provides artists with a space to voice analysis and commentary on issues too often overlooked by mainstream media. We believe in the importance of highlighting cultural producers’ distinctive viewpoints on world events and urgent issues of social justice to ensure a livelier, more nuanced and more imaginative public debate.

Given everything I’ve said about the importance of the arts to creative critical research, the relevance of this will, I hope, be obvious: art not simply as a means to represent the results of research but rather as a medium through which to conduct research.  Good to think with, as Lévi-Strauss might have said, but also good to act with.  (More on Creative Time here; they are holding a ‘summit’ on Art, Place and Dislocation in the 21st Century City in New York, 25-26 October 2013).