AI Commons Workshop



How can artificial intelligence be oriented toward the common good? Belief in AI for good is widely accepted in industry and among governments. Declarations from around the globe—Canada, China, South Korea, France, and more—call for the development of AI to have a social purpose. But what is that purpose?

This workshop seeks to develop a commons-based vision for the future of AI. It intervenes to understand transformations in citizen engagement, as part of a larger research project exploring practices of citizenship in a skeptical world.

Without clear direction, AI risks becoming privatized and at odds with a common world. In a recent study, researchers calculated the costs of training a deep neural network model for use in natural language processing. Their findings are alarming. The energy required can result in CO2 emissions equal to the lifetime emissions of five cars. Meanwhile, the financial cost of the computing needed to carry out this research has become so high that academic researchers cannot participate, enclosing AI innovation within the profit-oriented technology industry.

A commons approach to AI seeks to mitigate these harms, just as commons approaches in other areas have intervened in environmental devastation and the privatization and commodification of knowledge. The term “commons” was initially rooted in theories about the conditions and consequences of sharing resources. But theorists and activists have worked to broaden it, naming new commons in order to advocate for their protection while developing praxis to govern them. This shift in understanding has been greatly informed by Indigenous scholarship and Indigenous peoples’ histories, epistemologies, and practices, which offer a wealth of approaches to the management and preservation of common resources, material and otherwise.

The technology industry can abuse extant commons (as when IBM used photos published on Flickr under a Creative Commons license to train a facial recognition algorithm) even while resisting commoning as a threat to its bottom line. Without losing sight of these dangers, we propose that commons theories and related ideas offer fertile ground for taking up the challenges and possibilities presented by AI. Amid well-intentioned but ad hoc discussion about making AI “ethical,” commons theory can provide a substantive perspective on AI development and governance.

This workshop is supported by the Social Sciences and Humanities Research Council of Canada, the Center for the Study of Democratic Citizenship, and the Milieux Institute for Arts + Technology at Concordia University.

Format and Goals

The workshop seeks to produce:

  1. A cowritten report about the significance of commons theory to the development of AI; and,
  2. A short video or podcast documenting the ideas and discussion.

In the afternoon, all participants will help develop responses to the event’s key questions. We will break into groups of five and work in timed sessions of focused writing and collaboration.

We ask participants to approach the writing aphoristically, that is, by crafting short, self-contained statements about a key point or position. After the event, our team will edit these together into a common report. Groups will begin by focusing on one question, then expand their scope as the day goes on.

Participants may also step out during the event for a brief interview, which may be included in the video or podcast documenting the event. Interviews are optional.


In this workshop, we invite you to reflect broadly on artificial intelligence and its relation to the commons as you consider the following questions:

  1. What should an AI Commons be?
    • How could a commons-based approach guide the development of AI?
    • How does a commons approach differ from proposed ethical or rights-based frameworks?
  2. How could the development of AI today—including the infrastructure and knowledge at its foundation—become a commons?
    • What forms of collective action and governance would be necessary? What movements and efforts already exist?
    • What latent commons or undercommons might we find in thinking about AI?
  3. Could AI reshape how we think about the commons, leading to new theories or practices?
    • How might related (or unrelated) approaches to the commons (e.g., making kin, new materialism, infrastructures of care, or platform cooperativism) be understood through AI?
    • What histories and instances of the commons does an AI commons require for context and inspiration?
  4. How might we imagine a future common world for the machines, environments, humans, and other life drawn together by the industrial efforts around AI?
    • How can humans, AI, and other agents collaborate equitably in these commons?
    • How might AI reproduce sustainably within the natural commons, unseating extractive and settler approaches to common worlds?

The workshop seeks to develop a vision for a commons-based approach to the future of AI. It is an intervention to develop democratic approaches to digital disruption and understand transformations in citizen engagement.


Sophie Bishop is a Lecturer in the Department of Digital Humanities at King’s College London. Dr. Bishop researches and teaches on cultures of content creation, digital marketing industries, and intersectional inequalities and experiences therein. Dr. Bishop has recently published in journals including New Media & Society, drawing on feminist political economy to study cultural practices on major platforms.

Sandra Braman is the Abbott Professor of Liberal Arts at Texas A&M University and a leading figure in the field of information policy. Dr. Braman has been studying the macro-level effects of the use of new information technologies and their policy implications since the mid-1980s. She is the author of Change of State: Information, Policy, and Power (2006, MIT Press) and editor of Communication Researchers and Policy-makers (2003, MIT Press) and The Emergent Global Information Policy Regime (2004, Palgrave Macmillan).

Brett M. Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics at Villanova University. Dr. Frischmann studies infrastructure, knowledge commons, and the technosocial engineering of humans (the relationships between the technosocial world and humanity). He is coeditor with Katherine Strandburg and Michael Madison of Governing Knowledge Commons (2014, Oxford University Press). His most recent book is Re-engineering Humanity, coauthored with Evan Selinger (2018, Cambridge University Press).


Eda Kranakis is professor in the Department of History at the University of Ottawa. Trained in the history of science and technology, she has served as President of the Canadian Science and Technology Historical Association and on the editorial boards of Technology and Culture, History and Technology, and Engineering Studies. She is coeditor of Cosmopolitan Commons: Sharing Resources and Risks across Borders (MIT Press, 2013).

Malka Older is a writer, aid worker, and sociologist. Dr. Older is the author of Infomocracy, Null States, and State Tectonics. Her science-fiction political thriller Infomocracy was named one of the best books of 2016 by Kirkus, Book Riot, and the Washington Post. She is an advisor to Arizona State University and New America’s Open Technology Institute (OTI) AI Policy Futures project.


8:30-9:00 Breakfast
9:00-9:20 Acknowledgements and Welcome – Fenwick McKelvey
9:20-9:30 Introduction – Bart Simon
9:30-10:30 Presentations by Eda Kranakis & Brett M. Frischmann
10:30-10:45 Coffee Break
10:45-11:45 Presentations by Sandra Braman, Sophie Bishop & Malka Older
11:45-12:30 Writing (stream of consciousness/warm-up)
12:30-1:00 Lunch
1:00-1:30 Reflections by Luke Stark & Fenwick McKelvey
1:30-2:20 Writing Phase 1
  20 minutes writing independently, then sharing with groups
  10 minutes reviewing by commenting on the shared document
  20 minutes writing
2:20-2:40 Break
2:40-4:00 Writing Phase 2
  20 minutes writing independently, then sharing with groups
  10 minutes reviewing by commenting on the shared document
  20 minutes writing independently, then sharing with groups
  30 minutes of final revisions
4:00-5:00 Debrief and final remarks

How to Get Here

Map of the Milieux Institute for Arts + Technology, Concordia University.