From Theory to Practice: Where do Algorithmic Accountability and Explainability Frameworks Take Us in the Real World?

Fanny Hidvegi (Access Now), Anna Bacciarelli (Amnesty International), Katarzyna Szymielewicz (Panoptykon Foundation), Matthias Spielkamp (AlgorithmWatch)


This hands-on session takes academic concepts and their formulation in policy initiatives around algorithmic accountability and explainability and tests them against real cases. In small groups we will (1) test selected frameworks on algorithmic accountability and explainability against a concrete case study (one that likely constitutes a human rights violation) and (2) test different formats for explaining important aspects of an automated decision-making process (such as input data, the type of algorithm used, design decisions and technical parameters, and expected outcomes) to various audiences (end users, affected communities, watchdog organisations, public sector agencies and regulators). We invite participants with various backgrounds: researchers, technologists, human rights advocates, public servants and designers.


Bridging the Gap from AI Ethics Research to Practice [description will soon follow]

Kathy Baxter (Salesforce)



Incorporating Social Science Methods into the Algorithmic Fairness Toolkit [description will soon follow]

Ezra Goss (Georgia Institute of Technology), Lily Hu (Harvard University), Stephanie Teeple (University of Pennsylvania), Manuel Sabin (UC Berkeley)



Ethics on the Ground: From Principles to Practice

Marguerite Barry (University College Dublin), Aphra Kerr (Maynooth University), Oliver Smith (Health Moonshot, Telefónica Innovation Alpha, Barcelona)


Surveys of public attitudes show that people believe it is possible to design ethical AI. However, the everyday professional development context can offer minimal space for ethical reflection or oversight, creating a significant gap between public expectations and the performance of ethics in practice. This 2-part workshop includes an offsite visit to Telefónica Innovation Alpha and uses storytelling and theatre methods to examine how and where ethical reflection happens on the ground. It will explore the gaps in expectations and identify alternative approaches to more effective ethical performance. Bringing social scientists, data scientists, designers, civic rights activists and ethics consultants together to focus on AI/ML in the health context, it will foster critical and creative activities that will bring to the surface the structural, disciplinary, social and epistemological challenges to effective ethical performance in practice. Participants will explore and enact where, when and how meaningful interventions can happen.


Lost in Translation: An Interactive Workshop Mapping Interdisciplinary Translations for Epistemic Justice

Evelyn Wan (Tilburg Institute of Law, Technology, and Society, Tilburg University; Institute for Cultural Inquiry, Utrecht University), Aviva de Groot (Tilburg Institute of Law, Technology, and Society, Tilburg University), Phillip Lücking (Gender/Diversity in Informatics Systems, University of Kassel)


There are gaps in understanding between those who design AI/ML systems and those who critique them. This gap can be defined in multiple ways: methodological, epistemological, linguistic, or cultural. To bridge this gap would require a set of translations: the generation of a collaborative space and a new set of shared sensibilities that traverse disciplinary boundaries. We propose a workshop which aims to explore translations across multiple fields, and translations between theory and practice, as well as how interdisciplinary work could generate new operationalizable approaches. Through three hours of joint discussion and interactive exercises, the workshop will generate insights regarding the challenges of interdisciplinary work, identify shared concerns and pressing issues facing our field(s), and pave the way for actionable steps to introduce change in our respective practices that would be conducive to future interdisciplinary collaboration. This, to us, would help achieve a vision of epistemic justice, where new grounds for interdisciplinary knowledge could be built, validated, and adopted towards the goal of FAT and beyond in computational systems.


Centering Disability Perspectives in Algorithmic Fairness, Accountability & Transparency

Alexandra Givens (Georgetown Institute for Tech Law & Policy), Meredith Ringel Morris (Microsoft Research)


This interactive panel seeks to highlight the impact of algorithmic systems on people with disabilities. The panelists include technical experts and law/policy experts working on these issues. A key objective is to surface new research projects and collaborations, including by integrating a critical disability perspective into existing research and advocacy efforts focused on identifying sources of bias and advancing equity.


Algorithmically Encoded Identities: Reframing Human Classification

Dylan Baker (Google), Alex Hanna (Google), Emily Denton (Google AI)


Our aim with this workshop is to provide a venue within which the FAT* community can thoughtfully engage with identity and the categories which are imposed on people as part of making sense of their identities. Most people have nuanced and deeply personal understandings of what identity categories mean to them; however, sociotechnical systems must, through a set of classification decisions, reduce the nuance and complexity of those identities into discrete categories. The impact of misclassifications can range from the uncomfortable (e.g. displaying ads for items that aren't desirable) to the devastating (e.g. being denied medical care; being evaluated as having a high risk of criminal recidivism). Even the act of being classified can force an individual into categories which feel foreign and othering. Through this workshop, we hope to connect participants’ personal understandings of identity to how identity is ‘seen’ and categorized by sociotechnical systems.


Deconstructing FAT: Using Memories to Collectively Explore Implicit Assumptions, Values and Context in Practices of Debiasing and Discrimination-Awareness

Doris Allhutter (Austrian Academy of Sciences), Bettina Berendt (KU Leuven)


This workshop explores implicit assumptions, values and beliefs that FAT researchers mobilize as part of their epistemic practices. As researchers in FAT “we” (often unknowingly) resort to ways-of-doing that reflect our own embeddedness in power relations, our disciplinary ways of thinking, and our historically, locally, and culturally-informed ways of solving computational problems or approaching our research. During the workshop, a curated, interdisciplinary panel of FAT researchers engages in a deconstruction exercise that aims at making visible how practices of computing are entrenched in power relations in complex and multi-layered ways. The workshop will be interactive: panelists and participants will collectively try to disentangle how structural discrimination, mundane ways-of-doing, and normative computational concepts and methods are intertwined. In this way, we will analyze the normativity of technical approaches, methods and concepts that are part of the repertoire of FAT research.


Creating Community-Based Tech Policy: Case Studies, Lessons Learned, and What Technologists and Communities Can Do Together

Jennifer Lee (ACLU of Washington), Shankar Narayan (ACLU of Washington), Hannah Sassaman (Media Mobilizing Project), Jenessa Irvine (Media Mobilizing Project)


What are the core ways the field of data science can center community voice and power throughout its processes? What are the most feasible and most urgent ways communities can shape the field of algorithmic decision-making to center community power in the next few years? This interactive workshop will highlight lessons learned through our combined experience engaging with communities challenging technology in Seattle and Philadelphia, two cities in the United States. We will discuss the historical context of disproportionate impacts of technology on marginalized and vulnerable communities; case studies including criminal justice risk assessments, face surveillance technologies, and surveillance regulations; and work in small-group and break-out sessions to engage with questions about when and where technologists hold power, serve as gatekeepers, and can work in accountable partnership with impacted communities. By the end of the session, we hope that participants will learn how to actively center diverse communities in creating technology by examining successes, challenges, and ongoing work in Seattle and Philadelphia.


Fairness, Accountability, Transparency in AI at Scale: Lessons from National Programs

Muhammad Aurangzeb Ahmad (University of Washington Tacoma / KenSci), Ankur Teredesai (University of Washington Tacoma / KenSci), Carly Eckert (University of Washington / KenSci)


The panel aims to elucidate how different national governmental programs are implementing accountability of machine learning systems in healthcare, and how accountability is operationalized in different cultural settings through legislation, policy and deployment. We have representatives from three governments, the UAE, Singapore and the Maldives, who will discuss what accountability of AI and machine learning means in their contexts and use cases. We hope to have a fruitful conversation around FAT ML as it is operationalized across cultures, national boundaries and legislative constraints. The panel will delve into how each government conceptualizes what constitutes fairness, accountability and transparency within their cultural context and how these inform their policies.


Zine Fair: Critical Perspectives

Emily Denton (Google), Alex Hanna (Google)


CtrlZ.AI is a zine fair focused on technology and artificial intelligence (AI). Ctrl + Z is the near-universal keyboard shortcut for "undo." The invocation of this keyboard command is meant to evoke a sense of pause or reconsideration, rather than going backwards. CtrlZ.AI is a venue for expressing this sense of reflection on AI and other sociotechnical systems which impact us every day. We aim to be an inclusive platform for artists, technologists, researchers, activists, teachers, and students to showcase their zines and other DIY creations. Zines are self-published, self or independently-distributed, small-circulation magazines that offer an accessible, liberating, and community-oriented mode of creating and disseminating content. We chose zines and related DIY creations as the primary medium of communication at CtrlZ.AI due to their low barrier of entry — to both creation and readership — and because of the significance this publication model holds for many marginalized communities. The CtrlZ.AI Zine Fair will run all day, with structured workshop programming during the hours listed in the program.


Infrastructures: Mathematical Choices and Truth in Data

Mukul Patel (Independent)


Final abstract pending: In modelling our world, mathematical choices are made much deeper down than commonly assumed. The ‘natural’ topologies and metrics underlying data capture and representation are neither necessary nor given, but contingent choices that affect classification, measurement and even (the possibility of) ordering. I propose to explore alternative spaces and metrics for data representation, not so much as a practical program, but to raise awareness among a non-technical audience that mathematically rigorous alternatives do in fact exist, and to persuade the mathematically-minded algorithm or software designer that their faith might be misplaced.


Hardwiring Discriminatory Police Practices: the Implications of Data-Driven Technological Policing on Minority (Ethnic and Religious) People and Communities

Patrick Williams (Manchester Metropolitan University), Eric Kind (Queen Mary University)


The criminal regulation and over-policing of minority ethnic and religious groups across Europe is an everyday reality. Such encounters result in significant personal and emotional harms, driving up levels of mistrust in the police and raising serious questions concerning the legitimacy of European justice systems. Within this workshop, we will consider the implications of the adoption of data-driven technology by police and wider law enforcement agencies. Alongside anti-racism activists and campaigners, this workshop starts from the position that ‘data is not neutral’ and is better understood as a reflection and representation of (memoried) racialized policing practices. Critically, then, the workshop aims (a) to explore how data-driven technologies pose ‘data-harms’ to minority ethnic and religious groups and (b) to collaboratively consider the feasibility of building (technological) community-based knowledge and skill capacity to resist ethnic profiling and racialized policing. We anticipate that the outputs from the workshop will contribute further to the development of guides for community-based anti-racism organisations and serve to inform those working in the technology field of the inherent threats of data-driven technological policing.


Site (Un)seen: an Examination of Surveillance, Invisibility and Possibilities Through Interactive Projection Map Dance, Poetry and Film [description will soon follow]

J. Khadijah Abdurahman (WordTor)


Burn, Dream and Reboot! Speculating Backwards for the Missing Archive on Non-Coercive Computing

Helen Pritchard (Goldsmiths University of London), Eric Snodgrass (Linnaeus University)


Whether one is speaking of barbed wire, the assembly line or computer operating systems, the history of coercive technologies for the automation of tasks has been one with a focus on optimization, determinate outcomes and an ongoing disciplining of components and bodies. The paradigmatic automated technologies of the present emerge from and are readily marked by this lineage of coercive modes of implementation, whose scarred history of techniques of discrimination, exploitation and extraction points to an archive of automated injustices in computing, a history that continues to charge present paradigms and practices of computing. This workshop aims to address this history of coercive technologies through a renewed attention to how we perform speculation within practices of computing. We propose to go backwards into the archive, rather than racing forward and proposing ever new speculative futures of automation. This is because, with a focus on futures, speculative creative approaches are often conceived and positioned as methodological toolkits for addressing computing practices by imagining for/with others for a "future otherwise". We argue that "speculation" as the easy-go-to of designers and artists trying to address automated injustices needs some undoing, as without this work it will always be confined within ongoing legacies of coercive modes of computing practice. Instead of creating more just worlds, the generation of ever-new futures by creative speculation often merely reinforces the project of coercive computing. For this workshop, drawing on queer approaches to resisting futures and informed by activist feminist engagements with archives, we invite participants to temporarily resist imagining futures and instead to speculate backwards. We begin the session with a method of speculating backwards to various moments, artefacts and practices within computing history. In this initial part of the workshop, participants are encouraged to select a coercive technique and work in smaller groups to delve into the specific computational workings of the technique in question, while also working to trace out some of its legacies as they have travelled through history from one implementation to another. Examples will be provided, but participants are encouraged to suggest and work with their own examples. What does it mean to understand techniques of computing and automation as coercive infrastructures? How did so many of the dreams and seeming promises of computing turn into the coercive practices that we see today? Following this opening discussion, we then move to building up a speculative understanding and possible archive of non-coercive computing. What potential artefacts, techniques and practices might we populate such an archive with? Has computing as a practice become so imbued with coercive techniques that we find it hard to imagine otherwise? By the end of the workshop we hope, in the words of Alexis Pauline Gumbs, to be able to look at the emerging archives from these sessions and wonder "how did their dreams make rooms to dream in"... or not, in the case of coercive practices of computing. And "what if she changes her dream?" What if we reboot this dream?


When Not to Design, Build, or Deploy [session in plenary room]

Solon Barocas (Microsoft Research New York, Cornell University), Asia J. Biega (Microsoft Research Montréal), Benjamin Fish (Microsoft Research Montréal), Luke Stark (Microsoft Research Montréal)

