‘Sweet target, sweet child’

My keynote (‘Sweet target, sweet child: Aerial violence and the imaginaries of remote warfare’) at the conference on Drone Imaginaries and Society at the University of Southern Denmark in June is now available online here.

In February 2010 a US air strike on three vehicles in Uruzgan province, Afghanistan, in support of US and allied ground forces caused multiple civilian casualties. The attack was the direct result of surveillance carried out by a Predator drone, and a US Army investigation into the incident criticised the flight crew for persistently misinterpreting the full-motion video feeds from the remotely operated aircraft. This has become the signature strike for critics of remote warfare, yet they have all relied solely on a transcript of communications between US Special Forces in the vicinity, the drone crew at Creech Air Force Base in Nevada, and the helicopter pilots who executed the strike. But an examination of the interviews carried out by the investigation team reveals a more complicated – and in some respects even more disturbing – picture. This presentation uses those transcripts to bring other actors into the frame, pursues the narrative beyond the strike itself, and raises a series of questions about civilian casualties. During the post-strike examination of the site the casualties were rendered as (still) suspicious bodies and, as they were evacuated to military hospitals, as inventories of injuries. Drawing on Sonia Kennebeck’s documentary film “National Bird” I also track the dead as they are returned to their villages and the survivors as they struggle with rehabilitation: both provide vivid illustrations of the embodied nature of nominally remote warfare and of the violent bioconvergence that lies on the other side of the screen.

In my crosshairs

Two new books on the military gaze:

First, from the ever-interesting Roger Stahl, Through the Crosshairs: War, Visual Culture, and the Weaponized Gaze (Rutgers).

Now that it has become so commonplace, we rarely blink an eye at camera footage framed by the crosshairs of a sniper’s gun or from the perspective of a descending smart bomb. But how did this weaponized gaze become the norm for depicting war, and how has it influenced public perceptions?

Through the Crosshairs traces the genealogy of this weapon’s-eye view across a wide range of genres, including news reports, military public relations images, action movies, video games, and social media posts. As he tracks how gun-camera footage has spilled from the battlefield onto the screens of everyday civilian life, Roger Stahl exposes how this raw video is carefully curated and edited to promote identification with military weaponry, rather than with the targeted victims. He reveals how the weaponized gaze is not only a powerful propagandistic frame, but also a prime site of struggle over the representation of state violence.

Contents:

1 A Strike of the Eye
2 Smart Bomb Vision
3 Satellite Vision
4 Drone Vision
5 Sniper Vision
6 Resistant Vision
7 Afterword: Bodies Inhabited and Disavowed

And here’s Lisa Parks on the book:

“Immersing readers in the perilous visualities of smart bombs, snipers, and drones, Through the Crosshairs delivers a riveting analysis of the weaponized gaze and powerfully explicates the political stakes of screen culture’s militarization.  Packed with insights about the current conjuncture, the book positions Stahl as a leading critic of war and media.”

Incidentally, if you don’t know Roger’s collaborative project, The vision machine: media, war, peace – I first blogged about it five years ago – now is the time to visit: here.

And from Antoine Bousquet, The Eye of War: military perception from the telescope to the drone (Minnesota):

From ubiquitous surveillance to drone strikes that put “warheads onto foreheads,” we live in a world of globalized, individualized targeting. The perils are great. In The Eye of War, Antoine Bousquet provides both a sweeping historical overview of military perception technologies and a disquieting lens on a world that is, increasingly, one in which anything or anyone that can be perceived can be destroyed—in which to see is to destroy.

Arguing that modern-day global targeting is dissolving the conventionally bounded spaces of armed conflict, Bousquet shows that over several centuries, a logistical order of militarized perception has come into ascendancy, bringing perception and annihilation into ever-closer alignment. The efforts deployed to evade this deadly visibility have correspondingly intensified, yielding practices of radical concealment that presage a wholesale disappearance of the customary space of the battlefield. Beginning with the Renaissance’s fateful discovery of linear perspective, The Eye of War discloses the entanglement of the sciences and techniques of perception, representation, and localization in the modern era amid the perpetual quest for military superiority. In a survey that ranges from the telescope, aerial photograph, and gridded map to radar, digital imaging, and the geographic information system, Bousquet shows how successive technological systems have profoundly shaped the history of warfare and the experience of soldiering.

A work of grand historical sweep and remarkable analytical power, The Eye of War explores the implications of militarized perception for the character of war in the twenty-first century and the place of human subjects within its increasingly technical armature.

Contents:

Introduction: Visibility Equals Death
1. Perspective
2. Sensing
3. Imaging
4. Mapping
5. Hiding
Conclusion: A Global Imperium of Targeting

And here is Daniel Monk on the book:

The Eye of War is a masterful contemporary history of the martial gaze that reviews the relation between seeing and targeting. The expansion of ocularcentrism—the ubiquitization of vision as power—Antoine Bousquet shows us, coincides with the inverse: the relegation of the eye to an instrument of a war order that relies on the sensorium as the means to its own ends. As he traces the development of a technocracy of military vision, Bousquet discloses the vision of a military technocracy that has transformed the given world into units of perception indistinct from ‘kill boxes.’

Both books come from excellent US university presses which – unlike the commercial behemoths (you know who you are) favoured by too many authors in my own field (you know who you are too) – have made them both attractively designed and accessibly priced.

Googled

Following up my post on Google and Project Maven here, there’s an open letter (via the International Committee for Robot Arms Control) in support of Google employees opposed to the tech giant’s participation in Project Maven here: it’s open for (many) more signatures…

An Open Letter To:

Larry Page, CEO of Alphabet;
Sundar Pichai, CEO of Google;
Diane Greene, CEO of Google Cloud;
and Fei-Fei Li, Chief Scientist of AI/ML and Vice President, Google Cloud,

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information. Beyond searching for relevant webpages on the internet, Google has become responsible for compiling our email, videos, calendars, and photographs, and guiding us to physical destinations. Like many other digital technology companies, Google has collected vast amounts of data on the behaviors, activities and interests of their users. The private data collected by Google comes with a responsibility not only to use that data to improve its own technologies and expand its business, but also to benefit society. The company’s motto “Don’t Be Evil” famously embraces this responsibility.

Project Maven is a United States military program aimed at using machine learning to analyze massive amounts of drone surveillance footage and to label objects of interest for human analysts. Google is supplying not only the open source ‘deep learning’ technology, but also engineering expertise and assistance to the Department of Defense.

According to Defense One, Joint Special Operations Forces “in the Middle East” have conducted initial trials using video footage from a small ScanEagle surveillance drone. The project is slated to expand “to larger, medium-altitude Predator and Reaper drones by next summer” and eventually to Gorgon Stare, “a sophisticated, high-tech series of cameras…that can view entire towns.” With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long-range surveillance footage. The legality of these operations has come into question under international[1] and U.S. law.[2] These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis.[3] These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground.

While the reports on Project Maven currently emphasize the role of human analysts, these technologies are poised to become a basis for automated target recognition and autonomous weapon systems. As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems. According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control. If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability.

We are also deeply concerned about the possible integration of Google’s data on people’s everyday lives with military surveillance data, and its combined application to targeted killing. Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.

Should Google decide to use global internet users’ personal data for military purposes, it would violate the public trust that is fundamental to its business by putting its users’ lives and human rights in jeopardy. The responsibilities of global companies like Google must be commensurate with the transnational makeup of their users. The DoD contracts under consideration by Google, and similar contracts already in place at Microsoft and Amazon, signal a dangerous alliance between the private tech industry, currently in possession of vast quantities of sensitive personal data collected from people across the globe, and one country’s military. They also signal a failure to engage with global civil society and diplomatic institutions that have already highlighted the ethical stakes of these technologies.

We are at a critical moment. The Cambridge Analytica scandal demonstrates growing public concern over allowing the tech industries to wield so much power. This has shone only one spotlight on the increasingly high stakes of information technology infrastructures, and the inadequacy of current national and international governance frameworks to safeguard public trust. Nowhere is this more true than in the case of systems engaged in adjudicating who lives and who dies.

We thus ask Google, and its parent company Alphabet, to:

  • Terminate its Project Maven contract with the DoD.

  • Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.

  • Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.

Google eyes

The Oxford English Dictionary recognised ‘google’ as a verb in 2006, and its active form is about to gain another dimension.  One of the most persistent anxieties amongst those executing remote warfare, with its extraordinary dependence on (and capacity for) real-time full motion video surveillance as an integral moment of the targeting cycle, has been the ever-present risk of ‘swimming in sensors and drowning in data‘.

But now Kate Conger and Dell Cameron report for Gizmodo on a new collaboration between Google and the Pentagon as part of Project Maven:

Project Maven, a fast-moving Pentagon project also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), was established in April 2017. Maven’s stated mission is to “accelerate DoD’s integration of big data and machine learning.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported.

The project’s first assignment was to help the Pentagon efficiently process the deluge of video footage collected daily by its aerial drones—an amount of footage so vast that human analysts can’t keep up, according to Greg Allen, an adjunct fellow at the Center for a New American Security, who co-authored a lengthy July 2017 report on the military’s use of artificial intelligence. Although the Defense Department has poured resources into the development of advanced sensor technology to gather information during drone flights, it has lagged in creating analysis tools to comb through the data.

“Before Maven, nobody in the department had a clue how to properly buy, field, and implement AI,” Allen wrote.

Maven was tasked with using machine learning to identify vehicles and other objects in drone footage, taking that burden off analysts. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven provides the department with the ability to track individuals as they come and go from different locations.

Google has reportedly attempted to allay fears about its involvement:

A Google spokesperson told Gizmodo in a statement that it is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images. Acknowledging the controversial nature of using machine learning for military purposes, the spokesperson said the company is currently working “to develop policies and safeguards” around its use.

“We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data,” the spokesperson said. “The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”

As Mehreen Kasana notes, Google has indeed ‘long worked with government agencies’:

A 2017 report in Quartz shed light on the origins of Google and on how a significant amount of funding for the company came from the CIA and NSA for mass surveillance purposes. Time and again, Google’s funding raises questions. In 2013, a Guardian report highlighted Google’s acquisition of the robotics company Boston Dynamics, and noted that most of the company’s projects were funded by the Defense Advanced Research Projects Agency (DARPA).

Silent Witnesses

I’m still working on the mass murder in slow motion that is Ghouta; there’s so much to see, say and do that my promised post has been delayed.  Most readers will know of the stark declaration issued by UNICEF last month:

It was accompanied by this explanatory footnote:

We no longer have the words to describe children’s suffering and our outrage.  Do those inflicting the suffering still have words to justify their barbaric acts?

One of Allan Pred‘s favourite quotations from Walter Benjamin was this: ‘I have nothing to say.  Only to show.’  And perhaps the broken, mangled shards of montage are the most appropriate way to convey the collision of medieval and later modern violence that is sowing Syria’s killing fields with so many injured, dying and dead bodies.

You might think it’s always been so: in 1924 Ernst Friedrich introduced his collection of war photographs by insisting that ‘in the present and in the future, all the treasure of words is not enough to paint correctly the infamous carnage.’

These are suggestive claims, but two riders are necessary.  First,  images have such an extraordinary, if often insidious, subliminal power – even in our own, image-saturated culture – that they demand careful, critical interrogation and deployment.  They don’t speak for themselves.  And second, Benjamin described his method as ‘literary montage’: as Allan knew very well, words do not beat a silent retreat in the face of the image, and it’s in concert that the two produce some of their most exacting effects.

 In the course of my work on war in Syria and elsewhere I’ve encountered (and drawn upon) the work of many outstanding photographers; in some cases their images seem out-of-time, almost transcendent testimony to the enduring realities of war, while others disclose new horrors erupting in the midst of the all-too-bloody-familiar.  I think, for example, of the work of Narciso Contreras (see above and below, and also here and his collection, Syria’s War: a journal of pain, War Photo, 2014) – and I do know about the controversy over editing/cropping – or Nish Nalbandian (see also here and here and his book, A whole world blind: war and life in northern Syria, Daylight Books, 2016).

In my research on other conflicts I’ve also learned a lot from war artists, and in the case of Syria from graphic journalism: see, for example, the discussion by Nathalie Rosa Bucher here and the example of Molly Crabapple here. (Her work was based on cell-phone videos sent to her by a source inside IS-controlled Raqqa: another digital breach of siege warfare in Syria).

The point of all of this is to emphasise my debt to multiple (in this case, visual) sources that enable me – sometimes force me –  to see things differently: to turn those broken shards around, to have them catch the light and illuminate the situation anew.  And to see things I’d often rather not see.

It’s not a new experience. When I was completing The colonial present I was given access to a major image library, and in the course of three exhilarating days I learned more about Afghanistan, Palestine and Iraq than I had learned in three months of reading. The image bank included not only published but also unpublished images, which revealed aspects, dimensions, whole stories that had been left unremarked and unrecorded in the public record produced through editorial selection.

For my present work the Syrian Archive is invaluable:

The Syrian Archive is a Syrian-led and initiated collective of human rights activists dedicated to curating visual documentation relating to human rights violations and other crimes committed by all sides during the conflict in Syria with the goal of creating an evidence-based tool for reporting, advocacy and accountability purposes.

Its emphasis on visual documentation and analysis needs to be seen alongside the investigations of Forensic Architecture and bellingcat.

The Syrian Archive aims to support human rights investigators, advocates, media reporters, and journalists in their efforts to document human rights violations in Syria and worldwide through developing new open source tools as well as providing a transparent and replicable methodology for collecting, preserving, verifying and investigating visual documentation in conflict areas.

We believe that visual documentation of human rights violations that is transparent, detailed, and reliable is critical to providing accountability and can positively contribute to post-conflict reconstruction and stability. Such documentation can humanise victims, reduce the space for dispute over numbers killed, help societies understand the true human costs of war, and support truth and reconciliation efforts.

Visual documentation is also valuable during conflict as it can feed into:

  • Humanitarian response planning by helping to identify areas of risk and need as well as contribute to the protection of civilians;
  • Mechanisms that support increased legal compliance by conflict parties and reductions in civilian harm;
  • Strengthening advocacy campaigns and legal accountability through building verified sets of materials documenting human rights violations in the Syrian conflict.

User-generated content is valuable during times of conflict. Verified visual documentation can feed into humanitarian response planning by helping to identify areas of risk and need as well as contributing to the protection of civilians.

Furthermore, visual documentation allows the Syrian Archive to tell untold stories through amplifying the voices of witnesses, victims and others who risked their lives to capture and document human rights violations in Syria. Not every incident in the Syrian conflict has been reported by journalists. The very challenging conditions have made it extremely difficult for local and especially international media to work in Syria, meaning that many incidents have been missed or under-reported.

Visual documentation aims to strengthen political campaigns of human rights advocates by providing content that supports their campaign. This could include content on the violation of children’s rights; sexual and gender based violence; violations against specifically protected persons and objects, or the use of illegal weapons.

Additionally, visual documentation aims to help human rights activists and Syrian citizens in setting up a memorialisation process and to create dialogues around issues related to peace and justice, to recognise and substantiate the suffering of citizens and provide multiple perspectives on the conflict that act to prevent revisionist or simplified narratives, while raising awareness of the situation in the country and highlighting the futility of violence to next generations. Video and images often complement official narratives and press accounts of an event or situation, adding both detail and nuance. At other times, they directly rebut certain factual claims and contradict pervasive narratives.

Many of the videos on which this visual analysis relies (me too) were uploaded to YouTube.  Armin Rosen reports:

Google, which is YouTube’s parent company, knows how significant its platform has been during the war. “The Syrian civil war is in many ways the first YouTube conflict in the same way that Vietnam was the first television conflict,” Justin Kosslyn, the product manager for Jigsaw, formerly called Google Ideas, said during an interview on the sidelines of September’s Oslo Freedom Forum in New York, where Kosslyn had just spoken. “You have more hours of footage of the Syrian civil war on YouTube than there actually are hours of the war in real life.” In 2016, Jigsaw developed Montage, a Google Docs-like application that allows for collaborative analysis of online videos. Kosslyn said the project was undertaken with human rights-related investigations in mind.

The value of YouTube’s Syria videos is indisputable, especially since the regime and other armed actors have closed off much of the country to journalists and human rights observers. [Eliot] Higgins and his colleagues [at Bellingcat] proved beyond all doubt that Assad’s forces gassed a suburb of Damascus in August 2013, and a U.N. organization is now in the early stages of assessing YouTube’s Syria footage for its future use in war crimes trials. In December 2016, the U.N. General Assembly voted to establish the International Impartial and Independent Mechanism (IIIM) to assist in war crimes prosecutions related to Syria. In connection with the IIIM, Hiatt and his team at Benetech are developing software that can search and organize the estimated 4 million videos related to the conflict. The IIIM will facilitate the use of the videos in court if alleged human rights abusers ever face trial.

Last summer YouTube started deleting videos that violated its Terms of Service; the platform used algorithms to flag the offending materials and within days some 900 Syria-related channels were removed.

Alarm bells rang; here’s Chris Woods of Airwars talking to the New York Times:

“When the conflict in Syria started, independent media broke down and Syrians themselves have taken to YouTube to post news of the conflict…  What’s disappearing in front of our eyes is the history of this terrible war.”

And Eliot Higgins (on YouTube!):

After the concerted protests many of the videos were restored, but the cull continued and by the end of the year more than 200 channels providing over 400,000 videos had been removed.  Again, some of those were subsequently restored, but the Syrian Archive estimates that more than 200,000 videos are still offline.

The intervention was the product of an understandable desire to remove ‘propaganda’ videos – part of the fight back against ‘fake news’ – but here’s the rub:

Videos from the conflict could prove critical in cases where they might violate the site’s ToS—even ISIS propaganda videos help identify members of the organization and explain its internal hierarchies. “The difficulty in this type of work is that the information put out there on social media by the perpetrators of the violence can also be used to hold those perpetrators accountable,” Shabnam Mojtahedi [a legal analyst with the Syria Justice and Accountability Center: see also here for its statement on this issue] notes.

And it’s not just YouTube.  In an extended report for The Intercept Avi Asher-Schapiro detected

a pattern that’s causing a quiet panic among human rights groups and war crimes investigators. Social media companies can, and do, remove content with little regard for its evidentiary value. First-hand accounts of extrajudicial killings, ethnic cleansing, and the targeting of civilians by armies can disappear with little warning, sometimes before investigators notice. When groups do realize potential evidence has been erased, recovering it can be a kafkaesque ordeal. Facing a variety of pressures — to safeguard user privacy, neuter extremist propaganda, curb harassment and, most recently, combat the spread of so-called fake news — social media companies have over and over again chosen to ignore, and, at times, disrupt the work of human rights groups scrambling to build cases against war criminals.

“It’s something that keeps me awake at night,” says Julian Nicholls, a senior trial lawyer at the International Criminal Court,  where he’s responsible for prosecuting cases against war criminals, “the idea that there’s a video or photo out there that I could use, but before we identify it or preserve it, it disappears.”

As Christoph Koettl, a senior analyst with Amnesty International and a founder of Citizen Evidence Lab put it, these social media platforms are ‘essentially privately-owned evidence lockers’.  And that should worry all of us.

Ground Truth

I’m just back from an invigorating conference on ‘The Intimacies of Remote Warfare’ at Utrecht – more on this shortly – and it was a wonderful opportunity to meet old friends and make new ones.  Chris Woods gave an outstanding review of air strikes in Iraq and Syria, and told me of an interview Airwars had conducted with Azmat Khan and Anand Gopal whose forensic investigation of civilian casualties in Iraq I discussed in a previous post.  The US-led Coalition has still not responded to their findings, even though they initially afforded a remarkable degree of co-operation.

The full interview really is worth reading, but here is Azmat explaining how their joint investigation started:

We began planning this in February 2016. By April I was on the ground [In Iraq] and I was embedding with local forces, both Shia militias and then with Peshmerga forces, in certain frontline towns. I remember early on seeing how pivotal these airstrikes were in terms of re-taking cities.

There was one town that was really important to Shias, and so dozens of Shia militias had tried to retake it — Bashir — from where ISIS had launched mortars with chemical agents into a neighboring town, Taza. I watched several Shia militias based in Taza try and fail to retake Bashir, putting in all of their troops. Then the peshmerga agreed to try and retake it, and they put in maybe a fraction of the number of troops, but were supported by Coalition airstrikes in a way the militias weren’t, and Bashir fell within hours.

It really showed me the extent to which these airstrikes played a pivotal role in re-taking territory, but also the level of devastation. Many parts of Bashir were just up in smoke, when I visited the day after it was re-taken.

Unless you were on the ground, you couldn’t get a real sense of that scale. There’d been good accounts looking at civilian casualties — but nobody had looked at both those that successfully hit ISIS targets and those that didn’t, so a systematic sample. That’s what we teamed up to do. As more cities were being retaken, we thought there’s an opportunity to do this….

In terms of verifying allegations, our work went far beyond interviews and analyzing satellite imagery. In addition to interviewing hundreds of witnesses, we dug through rubble for bomb fragments, or materials that might suggest ISIS use, like artillery vests, ISIS literature, sometimes their bones, because nobody would bury them.

We also got our hands on more than 100 sets of coordinates for suspected ISIS sites passed on by local informants. Sometimes we were able to get photos and videos as well. And ultimately, we verified each civilian casualty allegation with health officials, security forces, or local administrators.

The interview also revisits the attack on the Rezzo family home, a pivot of their NYT essay, which includes even more disturbing details.  As Azmat and Anand explain, this was a strike which ought to have shown Coalition targeting at its most adept; far from it.

Khan: This is a deliberate airstrike, not a dynamic one. It was an “ISIS headquarters.” When I was at the CAOC (Combined Air Operations Center), a very senior intelligence officer told me that a target with one of the highest thresholds to meet is usually an ISIS headquarters… In so many ways Basim’s case was the ultimate, highest most deliberative process.

Airwars: When you say the best case scenario, you mean the best case on the Coalition side in terms of what intelligence they could have, and they still screwed up in such a fundamental way?

Gopal: If there was ever a strike they could get right, this would be the one. They have weeks to plan it, they have it as an ISIS headquarters. And so, you know, if it’s an ISIS headquarters, the threshold for actionable intelligence has to be much higher. It can’t just be drone footage that doesn’t see women and children.

Airwars: They identified it as a headquarters – what was the genesis of that? In the story – it’s infuriating to read – you describe how they didn’t see women and children.

Khan: One of the things I asked at the CAOC in Qatar was how they identify local patterns of behavior. For example, I said, under ISIS a lot of women are not leaving their homes. So when you are looking at these pattern-of-life videos, are you taking these variable local dynamics into account? How do you distinguish between patterns of behavior that are specific to Iraq, when you are bombing there, and patterns in Afghanistan? What are the differences?

I was told that they could not get into a great deal of detail about ISIS’ “TTPs” — tactics, techniques, and procedures — their understanding of how ISIS generally operates.  They told me that these are developed through the intelligence community, in coordination with a cultural expert, but that they could not offer more detail about it.

Gopal: At the end of the day, it appears there are no consequences for getting it wrong, so there are no incentives to try to get it right.

Drone Imaginaries and Society

News from Kathrin Maurer of a conference on Drone Imaginaries and Society, 5-6 June 2018, at the University of Southern Denmark (Odense).  I’ll be giving a keynote, and look forward to the whole event very much (and not only because, while I’ve visited Denmark lots of times and loved every one of them, I’ve never made it to Odense):

Drones are in the air. The production of civilian drones for rescue, transport, and leisure activity is booming. The Danish government, for example, proclaimed civilian drones a national strategy in 2016. Accordingly, many research institutions as well as the industry focus on the development, usage, and promotion of drone technology. These efforts often prioritize commercialization and engineering, as well as setting up UAV (Unmanned Aerial Vehicle) test centers. As a result, urgent questions regarding how drone technology impacts our identity as humans, and how it affects the ways we envision human society, are frequently underexposed in these initiatives.

Our conference aims to change this perspective. By investigating cultural representations of civilian and military drones in visual arts, film, and literature, we intend to shed light on drone technology from a humanities point of view. This aesthetic “drone imaginary” forms not only the empirical material of our discussions but also a prism of knowledge which provides new insights into the meaning of drone technology for society today.

Several artists, authors, filmmakers, and thinkers have already engaged with this drone imaginary. While some of these inquiries provide critical reflection on contemporary and future drone technologies – for instance on issues such as privacy, surveillance, automation, and security – others allow for alternative ways of seeing and communicating, as well as creative re-imagination of new ways of organizing human communities. The goal of the conference is to bring together these different aesthetic imaginaries to better understand the role of drone technologies in contemporary and future societies.

The focus points of the conference are:

–     Aesthetic drone imaginaries: Which images, metaphors, ethics, emotions and affects are associated with drones through their representation in art, fiction and popular culture?

–     Drone technology and its implications for society: How do drones change our daily routines and shift the balance between publicity and privacy?

–     Historical perspective on drones: In what way do drone imaginaries allow for a counter-memory that can challenge, for instance, the military implementation of drones?

–     Drones as vulnerability: Do drones make societies more resilient or more fragile, and are societies getting overly dependent on advanced technologies?

–     Utopian or dystopian drone imaginaries: What dream or nightmare scenarios are provided by drone fiction and how do they allow for a (re)imagining of future societies?

–     Drones and remote sensing: In what way do drones mark a radical new way of seeing and sensing by their remotely vertical gaze and operative images?

–     Drone warfare: Do drones mark a continuation or rupture of the way we understand war and conflict, and how do they change the military imaginary?

The conference is sponsored by the Drone Network (Danish Research Council) and Institute for the Study of Culture at the University of Southern Denmark.

You can contact Kathrin at kamau@sdu.dk

The conference website is here.