Climate Change, Tech Workers, Antiwar Activists Working Together

Heading For Extinction meeting in New York City, January 30, 2020

By Marc Eliot Stein, February 10, 2020

I was recently invited to speak at an Extinction Rebellion gathering in New York City on behalf of World BEYOND War. The event was designed to bring together three action groups: climate change activists, tech workers collectives, and antiwar activists. We began with a stirring personal account from climate change activist Ha Vu, who told the crowd of New Yorkers about an alarming experience few of us ever have: returning to her family’s home in Hanoi, Vietnam, where increased heat has already made it nearly impossible to walk outside during peak sunlight hours. Few Americans know, either, about the 2016 water pollution disaster in Ha Tinh in central Vietnam. We often speak of climate change as a potential problem in the USA, Ha emphasized, but in Vietnam she can see it already disrupting lives and livelihoods, and rapidly getting worse.

Nick Mottern of KnowDrones.org spoke with similar urgency about the US military’s recent massive investment in futuristic artificial intelligence and cloud computing – and emphasized the military’s own conclusion that deploying AI systems in nuclear weapon management and drone warfare will inevitably lead to errors of unpredictable magnitude. William Beckler of Extinction Rebellion NYC followed by explaining the organizing principles this important and fast-growing organization puts into action, including disruptive actions designed to raise awareness of the urgency of climate change. We heard from a New York City representative of the Tech Workers Coalition, and I tried to pivot the gathering towards a sense of practical empowerment by speaking about a tech workers rebellion action that was unexpectedly successful.

This was in April 2018, when the so-called “defense industry” was buzzing about Project Maven, a highly publicized new US military initiative to develop artificial intelligence capabilities for drones and other weapons systems. Google, Amazon and Microsoft all offer off-the-shelf artificial intelligence platforms for paying customers, and Google was seen as the likely winner of the Project Maven military contract.
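To make concrete what “off-the-shelf” means here: with these commercial platforms, any paying customer can run sophisticated image recognition in a few lines of code. Below is a minimal illustrative sketch, assuming a recent version (2.x or later) of Google’s publicly documented google-cloud-vision Python client library; the file name photo.jpg is hypothetical.

    from google.cloud import vision  # pip install google-cloud-vision

    # The client reads credentials from the GOOGLE_APPLICATION_CREDENTIALS
    # environment variable, as described in Google's quickstart documentation.
    client = vision.ImageAnnotatorClient()

    # "photo.jpg" is a hypothetical local file, used here only for illustration.
    with open("photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # Ask the hosted model which objects it recognizes in the image.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, label.score)

The barrier to entry really is that low, which is the point: the same kind of hosted vision models could, under a contract like Project Maven, be applied to drone surveillance footage.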

In early 2018, Google workers began to speak up. They didn’t understand why a company that had recruited them as employees with the pledge “Don’t Be Evil” was now bidding on military projects likely to resemble the horrific “Metalhead” episode of “Black Mirror”, in which AI-powered mechanical dogs hound human beings to death. They spoke up on social media and to traditional news outlets. They organized actions, circulated petitions, and made themselves heard.

This workers rebellion was the genesis of the Google Workers Rebellion movement, and it helped to bootstrap other tech workers collectives. But the most astonishing thing about the internal Google protest against Project Maven wasn’t that tech workers were speaking up. The most astonishing thing was that Google management yielded to the workers’ demands.

Two years later, this fact still stuns me. I’ve seen many ethical problems in my decades as a tech worker, but I’ve rarely seen a large company categorically agree to address ethical problems in a significant way. The result of the Google rebellion against Project Maven was the publication of a set of AI principles that are worth reprinting here in full:

Artificial Intelligence at Google: Our Principles

Google aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good.

Objectives for AI applications

We will assess AI applications in view of the following objectives. We believe that AI should:

1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.

Conclusion

We believe these principles are the right foundation for our company and our future development of AI. We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.

This positive result doesn’t absolve the tech giant Google from complicity in various other areas of major concern, such as supporting ICE, police and military activities, aggregating and selling access to private data about individuals, hiding controversial political statements from search engine results and, most importantly, refusing to let its employees continue to speak out on these and other issues without being fired for doing so. The Google workers rebellion movement remains active and highly engaged.

At the same time, it’s important to recognize how impactful the Google workers movement was. This became immediately clear after the Google protests began: the Pentagon’s marketing departments stopped issuing new press releases about the once-exciting Project Maven, eventually “disappearing” the project entirely from the public visibility it had earlier sought. Instead, a new and much larger artificial intelligence initiative began to emerge from the Pentagon’s insidious Defense Innovation Board.

This was called Project JEDI (Joint Enterprise Defense Infrastructure), a new name for Pentagon spending on cutting-edge weapons technology. Project JEDI would spend much more money than Project Maven, but the publicity blitz for the new project (yes, the US military spends a lot of time and attention on publicity and marketing) was very different from the earlier one. All the sleek and sexy “Black Mirror” imagery was gone. Instead of emphasizing the exciting and cinematic dystopian horrors AI-powered drones could inflict on human beings, Project JEDI explained itself as a sober step forward for efficiency, combining various cloud databases in order to help “warfighters” (the Pentagon’s favorite term for front-line personnel) and back-office support teams maximize information effectiveness. Where Project Maven was designed to sound exciting and futuristic, Project JEDI was designed to sound sensible and practical.

There is nothing sensible or practical about the price tag for Project JEDI. It’s the largest military software contract in world history: $10.5 billion. Many of our eyes glaze over when we hear about military spending at this scale, and we can skip over the difference between millions and billions (a million seconds is about 12 days; a billion seconds is nearly 32 years). It’s essential to understand how much bigger Project JEDI is than any previous Pentagon software initiative. It’s a game changer, a wealth-generating engine, a blank check for profiteering at taxpayer expense.

It helps to scratch beneath the surface of government press releases when trying to comprehend a military spending blank check as large as $10.5 billion. Some information can be gleaned from the military’s own publications, like a disturbing August 2019 interview with Lieutenant General Jack Shanahan of the Joint Artificial Intelligence Center, a key figure in both the disappeared Project Maven and the new Project JEDI. I was able to get more insight into how defense industry insiders think about Project JEDI by listening to a defense industry podcast called “Project 38: The Future of Government Contracting”, whose guests often speak candidly and unabashedly about whatever topic they’re discussing. “A lot of people will be buying new swimming pools this year” was typical of this podcast’s insider chat about Project JEDI. We’re sure they will be.

Here’s the remarkable thing that ties back to Google’s AI principles. The obvious three frontrunners for the massive $10.5 billion JEDI contract would have been Google, Amazon and Microsoft – in that order, based on their reputations as AI innovators. Because of the workers protest against Project Maven in 2018, AI leader Google was out of consideration for the much larger Project JEDI in 2019. In October 2019, it was announced that the contract went to Microsoft. A flurry of news coverage followed, but this coverage focused mainly on the rivalry between Amazon and Microsoft, and on the fact that third-place Microsoft was probably allowed to beat second-place Amazon for the win because of the Trump administration’s ongoing battles with the Washington Post, which is owned by Amazon’s Jeff Bezos. Amazon is now going to court to fight the Pentagon’s $10.5 billion gift to Microsoft, and Oracle is suing as well. The specific remark from the Project 38 podcast mentioned above – “A lot of people will be buying new swimming pools this year” – referred not only to Microsoft’s financial boon but also to all the lawyers who will participate in these lawsuits. We can probably make an educated guess that more than 3% of Project JEDI’s $10.5 billion – over $300 million – will go to lawyers. Too bad we can’t use it to help end world hunger instead.

The dispute over whether this transfer of taxpayer money to military contractors should benefit Microsoft, Amazon or Oracle has dominated news coverage of Project JEDI. The one positive lesson to be gleaned from this obscene graft – the fact that Google stepped away from the biggest military software contract in world history because of a workers protest – has been virtually absent from that coverage.

This is why it was important to tell this story to the tech-focused activists who were gathered in a crowded room in midtown Manhattan last week to talk about how we can save our planet, how we can fight against disinformation and politicization of climate science, how we can stand up to the massive power of fossil fuel profiteers and weapons profiteers. In this small room, we all seemed to grasp the dimensions of the problem we were facing, and the critical role we ourselves must begin to play. The tech community has significant power. Just as divestment campaigns can make a real difference, tech workers rebellions can make a real difference. There are many ways climate change activists, tech workers rebellion activists and antiwar activists can begin to work together, and we will be doing so in every way we can.

We had a hopeful start with this gathering, helpfully initiated by Extinction Rebellion NYC and World Can’t Wait. This movement will grow – it must grow. Fossil fuel abuse is the focus of climate change protesters. It is also both a primary profit motive of US imperialism and a terrible result of the bloated US military’s wasteful activities. Indeed, the US military appears to be the single worst institutional polluter in the world. Can tech workers use our organizing power for victories even more impactful than Google’s withdrawal from Project JEDI? We can and we must. Last week’s New York City meeting was just a tiny step forward. We must do more, and we must give our combined protest movement everything we’ve got.

Extinction Rebellion event announcement, January 2020

Marc Eliot Stein is director of technology and social media for World BEYOND War.

Photo by Gregory Schwedock.
