Speaker Series


AI Governance & Governmentality Series

Date and Time: Tue, Apr 6, 2021, 12:00 PM EDT

Dr. Powers will present from her new book, “On Trend,” published by the University of Illinois Press in 2019.

Of the book, Fred Turner writes, “this fascinating book pulls the curtain back on an entire industry devoted to shaping our perceptions of what matters–and with it, the future itself.” On Trend delves into one of the most powerful forces in global consumer culture. From forecasting to cool hunting to design thinking, the work done by trend professionals influences how we live, work, play, shop, and learn. Devon Powers’s provocative insights open up how the business of the future kindles exciting opportunity even as its practices raise questions about an economy increasingly built on nonstop disruption and innovation. Merging industry history with vivid portraits of today’s trend visionaries, Powers reveals how trends took over, what it means for cultural change, and the price all of us pay to see—and live—the future.

Devon Powers is Associate Professor in the Department of Advertising in the Klein College of Media and Communication at Temple University. Her research focuses on consumer culture, cultural intermediation/circulation, popular music, and promotional culture. She is the author of Writing the Record: The Village Voice and the Birth of Rock Criticism (2013), and the editor of Blowing Up the Brand: Critical Perspectives on Promotional Culture (2010, with Melissa Aronczyk). Her work has appeared in New Media & Society, Critical Studies in Media Communication, Popular Communication, and Journal of Consumer Culture, among other venues. Devon received her Ph.D. from New York University’s Media, Culture, and Communication program, and her BA is from Oberlin College.

You can register for this event here.

Past Events


Wed, February 17, 2021 (online)

Dr. Johannes Bruder presents from his ongoing research into the science of machine learning and artificial intelligence.

Zooming in on the design of machine learning algorithms, he discusses characteristics of the post-anthropocentric figure that is “the cognitive agent” and elaborates on leakages and cross-contaminations between North American social science, psychology and computing.

Dr. Johannes Bruder works at the intersection of anthropology, STS, and media studies. He studies the influence of machine learning and artificial intelligence on psychological categories, sociological models, artistic practices and speculative designs. His first book, “Cognitive Code: Post-Anthropocentric Intelligence and the Infrastructural Brain,” is based on fieldwork in neuroscience laboratories and provides deep insights into current exchanges between computational neuroscience and machine learning research. Johannes has a strong interest in experimenting with research methods, pedagogies and publication formats that unsettle disciplinary paradigms and render research in the humanities operational in real-world contexts. He is a senior researcher at the Critical Media Lab Basel (criticalmedialab.ch), affiliated with the Department of Sociology and Anthropology and the Milieux Institute at Concordia University Montréal, and a co-founder of awalkingcontradiction.org.


Tue, December 8, 2020 (online)

Dr. Nanna Bonde Thylstrup presents a theoretical framework for understanding the material, ethical and political implications of data reuse in AI.

Dr. Nanna Bonde Thylstrup presents from the ‘AI Reuse’ project, a collaboration with co-PIs Mikkel Flyverbom (MSC) and Louise Amoore (Durham University). The project’s purpose is two-fold: firstly, it will develop a much-needed theoretical framework for understanding the material, ethical and political implications of data reuse in machine learning technologies. Secondly, and on the basis of this, the project will develop strategies and recommendations that can help the Danish public sector transition into the age of datafication without violating

Dr. Nanna Bonde Thylstrup is an Associate Professor of Communication and Digital Media at Copenhagen Business School. Their writing and teaching focus on knowledge infrastructures; infrastructures of ignorance; environmental media; and digital epistemologies. More specifically, Dr. Nanna Bonde Thylstrup is interested in how media theory, cultural theory and critical theory can unpack and unfold issues related to datafication and digitization. Their most recent book is The Politics of Mass Digitization, published by MIT Press (2019).


Tue, November 10, 2020 (online)

Dr. Louise Amoore presents her new book, Cloud Ethics: Algorithms and the Attributes of Ourselves and Others

In Cloud Ethics Louise Amoore examines how machine learning algorithms are transforming the ethics and politics of contemporary society. Conceptualizing algorithms as ethicopolitical entities that are entangled with the data attributes of people, Amoore outlines how algorithms give incomplete accounts of themselves, learn through relationships with human practices, and exist in the world in ways that exceed their source code. In these ways, algorithms and their relations to people cannot be understood by simply examining their code, nor can ethics be encoded into algorithms. Instead, Amoore locates the ethical responsibility of algorithms in the conditions of partiality and opacity that haunt both human and algorithmic decisions. To this end, she proposes what she calls cloud ethics—an approach to holding algorithms accountable by engaging with the social and technical conditions under which they emerge and operate.

Dr. Louise Amoore is Professor of Political Geography and Deputy Head of Department at Durham University. Her research and teaching focus on aspects of geopolitics, technology and security. She is particularly interested in how contemporary forms of data and algorithmic analysis are changing the pursuit of state security and the idea of society.

Co-hosted with the Infoscape Research Lab at Ryerson University.


Tue, October 20, 2020 (online)

Dr. Sun-ha Hong presents their new book, Technologies of Speculation: The Limits of Knowledge in a Data-Driven Society

What counts as knowledge in the age of big data and smart machines? In the pursuit of ‘better’ knowledge, technology is reshaping what counts as knowledge in its own image. The push for algorithmic certainty sets loose an expansive array of incomplete archives, speculative judgments and simulated futures. All too often, data generates speculation as much as it does information.

Technologies of Speculation traces this twisted symbiosis of knowledge and uncertainty in emerging state and self-surveillance technologies. It tells the story of vast dragnet systems constructed to predict the next terrorist, and how familiar forms of prejudice seep into the data by the back door. At the same time, smart machines for ubiquitous and automated self-tracking are manufacturing knowledge that paradoxically lies beyond the human senses. For some, such a surfeit of data can seem an empowering thing, an opportunity to stride boldly towards a posthuman future. For others, to appear correctly in databases can be the unhappy obligation on which their lives depend.

Dr. Sun-ha Hong is an Assistant Professor of Communication at Simon Fraser University. Their work asks how new media and its data become invested with ideals of precision, objectivity and truth – especially through aesthetic, speculative, and otherwise apparently non-rational means. Dr. Sun-ha Hong analyses the contemporary faith in “raw” data, sensing machines, and algorithmic decision-making, and their public promotion as the next great leap towards objective knowledge.


Tue, September 22, 2020

A leading global scholar on platform and AI governance, Dr. Katzenbach introduces a framework for the contested, informal governance of AI

Facial recognition, digital contact tracing and content moderation have all been major controversies involving AI in 2020. How do these controversies shape the global governance of AI? A leading global scholar on platform and AI governance, Dr. Katzenbach will introduce a framework for the contested, informal governance processes of AI unfolding across research, policy and media in Canada, France, the UK, and Germany. These controversies shape our understanding of what kind of AI comes into being, which problems and challenges are to be addressed, and the expertise needed to shape its future development for the public good. As scandal and outrage give way to public debate and regulation, Dr. Katzenbach will outline a way to understand a dominant concern for the future of media, social and technology policy.

Dr. Katzenbach is a Senior Researcher at the Alexander von Humboldt Institute for Internet and Society (Berlin, Germany). He directs the interdisciplinary research program “The Evolving Digital Society”. He is Chair of the Digital Communication Section of the German Association for Media and Communication Research. In the past, he has acted as interim professor for communication policy and media economics at the Institute for Media and Communication Research at Freie Universität Berlin. His research addresses the intersection of technology, communication, and governance. He is a co-initiator of the open access journal Internet Policy Review and co-editor of the open access book series Digital Communication Research.


Algorithmic Warfare as an Apparatus of Recognition

Lucy Suchman
Lancaster University, Department of Sociology

Friday, November 1, 2019
3 – 5 p.m.
Milieux Institute, EV 11.705 (Resource Room)

In June of 2018, following a campaign initiated by activist employees within the company, Google announced its intention not to renew a US Defense Department contract for Project Maven, an initiative to automate the identification of military targets based on drone video footage. Defenders of the program argued that it would increase the efficiency and effectiveness of US drone operations, not least by enabling more accurate recognition of those who are the program’s legitimate targets and, by implication, sparing the lives of noncombatants. But this promise begs a more fundamental question: What relations of reciprocal familiarity does recognition presuppose? And in the absence of those relations, what schemas of categorization inform our readings of the Other? The focus of a growing body of scholarship, this question haunts not only US military operations but an expanding array of technologies of social sorting. Understood as apparatuses of recognition (Barad 2007: 171), Project Maven and the US program of targeted killing are implicated in perpetuating the very architectures of enmity that they take as their necessitating conditions. Building upon generative intersections of critical security studies and science and technology studies (STS), I argue that the promotion of automated data analysis under the sign of artificial intelligence can only serve to exacerbate military operations that are at once discriminatory and indiscriminate in their targeting, while remaining politically and legally unaccountable. I close with some thoughts on how we might interrupt the workings of these apparatuses, in the service of wider movements for social justice.

Lucy Suchman is Professor of Anthropology of Science and Technology in the Department of Sociology at Lancaster University. Before taking up her present post she was a Principal Scientist at Xerox’s Palo Alto Research Center, where she spent twenty years as a researcher. She is the author of Human-Machine Reconfigurations (2007) published by Cambridge University Press. Her current research extends her longstanding engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary war fighting, including the figurations that animate military training and simulation, and problems of ‘situational awareness’ in remotely-controlled weapon systems. Her research is concerned with the question of whose bodies are incorporated into these systems, how, and with what consequences for social justice and the possibility for a less violent world.

Hosted by Machine Agencies Research Group, Milieux Institute

machineagencies.org



Yelling at Computers

A Talk by Nicole He

Wednesday, October 2, 2019
3 – 4 p.m.
Milieux Institute, EV 11.705 (Resource Room)

Computers are able to understand human speech better than ever before, but voice technology is still mostly used for practical (and boring!) purposes, like playing music, smart home control, or customer service phone trees. What else can we experience in the very weird, yet intuitive act of talking out loud to machines? In this talk, Nicole will discuss her work making art and games using voice technology.

Nicole He is a programmer and artist based in Brooklyn, New York, currently making videogames, including an upcoming sci-fi voice-controlled game with the National Film Board of Canada. She has worked as a creative technologist at Google Creative Lab, an outreach lead at Kickstarter, and an adjunct faculty member at ITP at NYU, where she received her Master’s degree. Nicole’s work has been featured in places such as Wired, BBC, The Outline, and The New York Times.


AI Talks – IV

“The Real Life ‘Ex Machina’ Is Here”: Restoring the Gap between Science and Fiction

Teresa Heffernan – Professor of English, Saint Mary’s University

Tuesday, April 23, 2019

3 – 5 p.m.

Milieux Institute, EV 10.625 (Speculative Life Research Cluster)

As a literary scholar, I have often been struck by the many references to fiction in discussions about the science of artificial intelligence and robotics, but how is fiction mobilized in this field? Why, for instance, are fictional robots so frequently collapsed with the robotics industry? And how do science and fiction differently imagine robots and artificial intelligence? The ubiquitous claims that fiction is coming true demonstrate a lack of understanding of how fiction works and thoroughly obfuscate the AI field, clouding the science and neutering the critical force of fiction. Referring to the new Schwartz Reisman Institute for Technology and Society in Toronto, Geoffrey Hinton said recently, “My hope is that the Schwartz Reisman Institute will be the place where deep learning disrupts the humanities.” In contrast, this talk asks how the humanities might usefully challenge deep learning.

Teresa Heffernan is Professor of English at Saint Mary’s University, Halifax, NS. Her current research is on the science and fiction of robotics and AI. Her edited collection Cyborg Futures: Cross-disciplinary Perspectives on Artificial Intelligence and Robotics is forthcoming with Palgrave, where she is co-editor (with Cathrine Hasse and Kathleen Richardson) of the Social and Cultural Studies of Robots and AI book series. Her previous books include Veiled Figures: Women, Modernity, and the Spectres of Orientalism (2016) and Post-Apocalyptic Culture: Modernism, Postmodernism, and the Twentieth-Century Novel (2008).



AI Talks – III

Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning

Luke Stark – Microsoft Research Montreal

Friday, February 22, 2019

2 – 4 p.m.

Milieux Institute, EV 11.705 (Resource Room)

We use frame analysis to examine recent high-profile values statements endorsing ethical design for artificial intelligence and machine learning (AI/ML). Guided by insights from values in design and the sociology of business ethics, we uncover the grounding assumptions and terms of debate that make some conversations about ethical design possible while forestalling alternative visions. Vision statements for ethical AI/ML co-opt the language of some critics, folding them into a limited, technologically deterministic, expert-driven view of what ethical AI/ML means and how it might work.

Luke Stark is a Postdoctoral Researcher in the Fairness, Accountability, Transparency and Ethics (FATE) Group at Microsoft Research Montreal, and an Affiliate of the Berkman Klein Center for Internet & Society at Harvard University. His work explores the history, ethics and social impacts of computational media and AI. Luke holds a PhD from the Department of Media, Culture, and Communication at New York University, and an Honours BA and MA from the University of Toronto.



AI Talks – II

Experiencing Machine Agency

Chris Salter – University Research Chair in New Media, Technology and the Senses/Hexagram/Milieux

Tuesday January 29, 2019

3 – 5 p.m.

Milieux Institute, EV 11.705 (Resource Room)

What does it mean to experience a machine “acting” or “performing?” This talk focuses on artistic projects involving machine learning that explore how temporal agency and experience are re-imagined and re-configured. I aim to raise a set of questions: Do these technologies actually make possible new aesthetic experiences? Do they transform human perception or displace human makers and perceivers with reduced understandings of human emotion and behavior? What have artists done historically with these technologies and how do they differ from what engineers and computer scientists do?

Chris Salter is an artist, University Research Chair in New Media, Technology and the Senses at Concordia University, and Co-Director of the Hexagram network for Research-Creation in Media Arts and Technology in Montreal. He studied philosophy and economics at Emory University and completed a PhD in directing/dramatic criticism at Stanford University, where he also researched and studied at CCRMA. In the 1990s, he collaborated with Peter Sellars and William Forsythe/Frankfurt Ballet in Salzburg, Paris, and London. His artistic work has been seen in festivals and exhibitions all over the world, including the Venice Biennale. He is the author of Entangled: Technology and the Transformation of Performance (MIT Press, 2010) and Alien Agency: Experimental Encounters with Art in the Making (MIT Press, 2015). He is a creative consultant for the Barbican Centre’s (London) 2019 thematic season, Life Rewired. He is currently working on a book focused on how we make sense in an age of sensors, algorithms, machine learning and quantification.



AI Talks – I

You can listen to the first of our AI Talks series here: Jonathan Roberge’s talk and the Q&A, recorded by Tom Hackbarth.

The New Silicon Valley of the North, Really? The Myth and Reality of Building an AI Hub in the Montreal Area

Jonathan Roberge – Associate Professor, Institut National de la Recherche Scientifique, Canada Research Chair in Digital Culture

Tuesday December 4, 2018

3 – 5 p.m.

Milieux Institute, EV 11.705 (Resource Room)
