I. CRAFT Sessions in the Plenary Room

Fairness, Accountability, Transparency in AI at Scale: Lessons from National Programs

Muhammad Aurangzeb Ahmad (University of Washington Tacoma / KenSci), Ankur Teredesai (University of Washington Tacoma / KenSci), Carly Eckert (University of Washington / KenSci)

The panel aims to elucidate how different national governmental programs are implementing accountability of machine learning systems in healthcare and how accountability is operationalized in different cultural settings in legislation, policy, and deployment. We have representatives from three governments, the UAE, Singapore, and the Maldives, who will discuss what accountability of AI and machine learning means in their contexts and use cases. We hope to have a fruitful conversation around FAT ML as it is operationalized across cultures, national boundaries, and legislative constraints. The panel will delve into how each government conceptualizes what constitutes fairness, accountability, and transparency within its cultural context and how these conceptions inform its policies.

(Session in plenary room.)

Microsite Paper


When Not to Design, Build, or Deploy

Solon Barocas (Microsoft Research New York, Cornell University), Asia J. Biega (Microsoft Research Montréal), Benjamin Fish (Microsoft Research Montréal), Luke Stark (Microsoft Research Montréal), Jedrzej Niklas

While much ACM FAT* work to date has focused on various model and system interventions, the goal of this session will be to foster discussion of when we should not design, build, and deploy models and systems in the first place. Given the recent push for moratoria on facial recognition, protests around the sale of digital technologies to authoritarian government agencies, and the ongoing harms to marginalized groups from automated systems such as risk prediction, a broader discussion around how, when, and why to say 'no' as academics, practitioners, activists, and society, seems both relevant and urgent.

This interactive gathering will feature diverse invited and contributed perspectives presented in a fishbowl conversation format, accompanied by questions and comments from the audience. While the central focus of this discussion will be on individual instances of past refusal efforts, the high-level goal of the session is to create a foundation for answering long-term questions of:

  1. relevant historical and disciplinary contexts,
  2. frameworks and guidelines for practitioners to use when reasoning about ‘not designing, building, or deploying’, and
  3. the broader politics and aftermaths of refusal.

(Session in plenary room.)

Microsite Paper


Rump Session: CRAFT on Speed

Seda Guerses (TU Delft), Seeta Peña Gangadharan (London School of Economics), Suresh Venkatasubramanian (U. Utah)

CRAFT on Speed is a session during which people can give 5-minute presentations of informal, unpublished ideas in the spirit of CRAFT. This format provides a great opportunity for speakers to rally people around new projects, ask for help in formalizing or concretizing difficult concepts, point out open problems, present critiques of or reflections on current approaches to fairness, accountability, and transparency, or make suggestions to the community.

Interested parties need to:

  • contact the CRAFT Speedy Chair, Suresh Venkatasubramanian, on the 29th of January
  • provide any slides or visuals to the CRAFT Speedy Chair by the lunch break (13.15 hrs) on the 30th of January

We have a total of 18 slots; people will be selected on a first-come, first-served basis.

(Session in plenary room.)

Contact


II. CRAFT Sessions in Breakout Rooms

Creating Community-Based Tech Policy: Case Studies, Lessons Learned, and What Technologists and Communities Can Do Together

Jennifer Lee (ACLU of Washington), Shankar Narayan (ACLU of Washington), Hannah Sassaman (Media Mobilizing Project), Jenessa Irvine (Media Mobilizing Project)

What are the core ways the field of data science can center community voice and power throughout all of these processes? What are the most feasible and most urgent ways communities can shape the field of algorithmic decision-making to center community power in the next few years? This interactive workshop will highlight lessons learned through our combined experience engaging with communities challenging technology in Seattle and Philadelphia, two cities in the United States. We will discuss the historical context of disproportionate impacts of technology on marginalized and vulnerable communities; case studies including criminal justice risk assessments, face surveillance technologies, and surveillance regulations; and work in small-group and break-out sessions to engage questions about when and where technologists hold power, serve as gatekeepers, and can work in accountable partnership with impacted communities. By the end of the session, we hope that participants will learn how to actively center diverse communities in creating technology by examining successes, challenges, and ongoing work in Seattle and Philadelphia.

Maximum 50 Participants

Paper


Lost in Translation: An Interactive Workshop Mapping Interdisciplinary Translations for Epistemic Justice

Evelyn Wan (Tilburg Institute of Law, Technology, and Society, Tilburg University; Institute for Cultural Inquiry, Utrecht University), Aviva de Groot (Tilburg Institute of Law, Technology, and Society, Tilburg University), Phillip Lücking (Gender/Diversity in Informatics Systems, University of Kassel), Goda Klumbyte (Gender/Diversity in Informatics Systems, University of Kassel)

There are gaps in understanding between those who design AI/ML systems and those who critique them. This gap can be defined in multiple ways - methodological, epistemological, linguistic, or cultural. To bridge this gap would require a set of translations: the generation of a collaborative space and a new set of shared sensibilities that traverse disciplinary boundaries. We propose a workshop which aims to explore translations across multiple fields, and translations between theory and practice, as well as how interdisciplinary work could generate new operationalizable approaches. Through three hours of joint discussion and interactive exercises, the workshop will generate insights regarding the challenges of interdisciplinary work, identify shared concerns and pressing issues facing our field(s), and pave the way for actionable steps to introduce change in our respective practices that would be conducive to future interdisciplinary collaboration. This, we believe, would help achieve a vision of epistemic justice, where new grounds for interdisciplinary knowledge could be built, validated, and adopted towards the goal of FAT and beyond in computational systems.

Maximum 30 Participants

Blogpost Paper


From Theory to Practice: Where Do Algorithmic Accountability and Explainability Frameworks Take Us in the Real World?

Fanny Hidvegi (Access Now), Anna Bacciarelli (Amnesty International), Katarzyna Szymielewicz (Panoptykon Foundation), Matthias Spielkamp (AlgorithmWatch)

This hands-on session takes academic concepts and their formulation in policy initiatives around algorithmic accountability and explainability and tests them against real cases. In small groups we will (1) test selected frameworks on algorithmic accountability and explainability against a concrete case study (one that likely constitutes a human rights violation) and (2) test different formats for explaining important aspects of an automated decision-making process (such as input data, the type of algorithm used, design decisions and technical parameters, and expected outcomes) to various audiences (end users, affected communities, watchdog organisations, public sector agencies, and regulators). We invite participants with various backgrounds: researchers, technologists, human rights advocates, public servants, and designers.

Maximum 60 Participants

Microsite Paper


Burn, Dream and Reboot! Speculating Backwards for the Missing Archive on Non-Coercive Computing

Helen Pritchard (Goldsmiths, U. of London), Eric Snodgrass (Linnaeus U.), Romi Ron Morrison (U. of Southern California), Loren Britton (U. of Kassel), Joana Moll (Independent Artist and Researcher)

Whether one is speaking of barbed wire, the assembly line or computer operating systems, the history of coercive technologies for the automation of tasks has been one with a focus on optimization, determinate outcomes and an ongoing disciplining of components and bodies. The paradigmatic automated technologies of the present emerge from and are readily marked by this lineage of coercive modes of implementation, whose scarred history of techniques of discrimination, exploitation and extraction points to an archive of automated injustices in computing, a history that continues to charge present paradigms and practices of computing. This workshop aims to address the history of coercive technologies through a renewed attention to how we perform speculation within practices of computing. We propose to go backwards into the archive, rather than racing forward and proposing ever new speculative futures of automation. This is because, with a focus on futures, speculative creative approaches are often conceived and positioned as methodological toolkits for addressing computing practices by imagining for/with others a “future otherwise”. We argue that “speculation” as the easy-go-to of designers and artists trying to address automated injustices needs some undoing, as without work it will always be confined within ongoing legacies of coercive modes of computing practice. Instead of creating more just worlds, the generation of ever-new futures by creative speculation often merely reinforces the project of coercive computing. For this workshop, drawing on queer approaches to resisting futures and informed by activist feminist engagements with archives, we invite participants to temporarily resist imagining futures and instead to speculate backwards. We begin the session with a method of speculating backwards to various moments, artefacts and practices within computing history. In this initial part of the workshop, participants are encouraged to select a coercive technique and work in smaller groups to delve into the specific computational workings of the technique in question, while also working to trace out some of its legacies as they have travelled through history from one implementation to another. Examples will be provided, but participants are encouraged to suggest and work with their own examples. What does it mean to understand techniques of computing and automation as coercive infrastructures? How did so many of the dreams and seeming promises of computing turn into the coercive practices that we see today? Following this opening discussion, we then move to building up a speculative understanding and possible archive of non-coercive computing. What potential artefacts, techniques and practices might we populate such an archive with? Has computing as a practice become so imbued with coercive techniques that we find it hard to imagine otherwise? By the end of the workshop we hope to, in the words of Alexis Pauline Gumbs, be able to look at the emerging archives in these sessions and wonder "how did their dreams make rooms to dream in"... or not, in the case of coercive practices of computing. And "what if she changes her dream?" What if we reboot this dream?

Maximum 25 Participants

Paper


Algorithmically Encoded Identities: Reframing Human Classification

Dylan Baker (Google), Alex Hanna (Google), Emily Denton (Google AI)

Our aim with this workshop is to provide a venue within which the ACM FAT* community can thoughtfully engage with identity and the categories which are imposed on people as part of making sense of their identities. Most people have nuanced and deeply personal understandings of what identity categories mean to them; however, sociotechnical systems must, through a set of classification decisions, reduce the nuance and complexity of those identities into discrete categories. The impact of misclassifications can range from the uncomfortable (e.g. displaying ads for items that aren't desirable) to devastating (e.g. being denied medical care; being evaluated as having a high risk of criminal recidivism). Even the act of being classified can force an individual into categories which feel foreign and othering. Through this workshop, we hope to connect participants’ personal understandings of identity to how identity is ‘seen’ and categorized by sociotechnical systems.

Maximum 25 Participants

Microsite Paper


Ethics on the Ground: From Principles to Practice

Marguerite Barry (University College Dublin), Aphra Kerr (Maynooth University), Oliver Smith (Health Moonshot, Telefónica Innovation Alpha, Barcelona)

Surveys of public attitudes show that people believe it is possible to design ethical AI. However, the everyday professional development context can offer minimal space for ethical reflection or oversight, creating a significant gap between public expectations and the performance of ethics in practice. This two-part workshop includes an offsite visit to Telefónica Innovation Alpha and uses storytelling and theatre methods to examine how and where ethical reflection happens on the ground. It will explore the gaps in expectations and identify alternative approaches to more effective ethical performance. Bringing social scientists, data scientists, designers, civic rights activists and ethics consultants together to focus on AI/ML in the health context, it will foster critical and creative activities that will bring to the surface the structural, disciplinary, social and epistemological challenges to effective ethical performance in practice. Participants will explore and enact where, when and how meaningful interventions can happen.

Maximum 40 Participants

Microsite Paper


Deconstructing FAT: Using Memories to Collectively Explore Implicit Assumptions, Values and Context in Practices of Debiasing and Discrimination-Awareness

Doris Allhutter (Austrian Academy of Sciences), Bettina Berendt (KU Leuven)

This workshop explores implicit assumptions, values and beliefs that FAT researchers mobilize as part of their epistemic practices. As researchers in FAT “we” (often unknowingly) resort to ways-of-doing that reflect our own embeddedness in power relations, our disciplinary ways of thinking, and our historically, locally, and culturally-informed ways of solving computational problems or approaching our research. During the workshop, a curated, interdisciplinary panel of FAT researchers engages in a deconstruction exercise that aims at making visible how practices of computing are entrenched in power relations in complex and multi-layered ways. The workshop will be interactive and panelists and participants will collectively try and disentangle how structural discrimination, mundane ways-of-doing, and normative computational concepts and methods are intertwined. In this way, we will analyze the normativity of technical approaches, methods and concepts that are part of the repertoire of FAT research.

Maximum 25 Participants

Microsite Paper


Bridging the Gap from AI Ethics Research to Practice

Kathy Baxter (Salesforce)

This 90-minute workshop will focus on efforts by AI ethics practitioners in technology companies to evaluate and ensure fairness in machine learning applications. Six industry practitioners (LinkedIn, Yoti, Microsoft, Pymetrics, Facebook, Salesforce) will briefly share insights from the work they have undertaken in the area of fairness in machine learning applications, what has worked and what has not, lessons learned and best practices instituted as a result. After that set of lightning talks and for the remainder of the workshop, attendees will discuss insights gleaned from the talks. There will be an opportunity to brainstorm ways to build upon the practitioners’ work through further research or collaboration. The goal is to develop a shared understanding of experiences and needs of AI ethics practitioners in order to identify areas for deeper research of fairness in AI.

Maximum 40 Participants

Microsite Paper


Manifesting the Sociotechnical: Experimenting with Methods for Social Context and Social Justice

Ezra Goss (Georgia Institute of Technology), Lily Hu (Harvard University), Stephanie Teeple (University of Pennsylvania), Manuel Sabin (UC Berkeley)

Critiques of ‘algorithmic fairness’ have counseled against a purely technical approach. Recent work from the ACM FAT* Conference has warned specifically about abstracting away the social context that these automated systems are operating within and has suggested that "[fairness work] require[s] technical researchers to learn new skills or partner with social scientists" (Selbst et al., ACM FAT* '19). That “social context” includes groups outside the academy and social movements organizing for ‘data justice’ that have risen to prominence in the past several years (e.g., data4blacklives). In this CRAFT session we will experiment with methods used by community organizers to analyze power relations present in that social context. We will facilitate a conversation about whether and how these and other methods, collaborations, and efforts can help ground a pursuit of fairer algorithmic systems and, ultimately, data justice.

It is important to note that many others have spoken out on how to approach social context when discussing algorithmic fairness interventions. Community organizing and attendant methods for power analysis present one such approach: methods for documenting all stakeholders and entities relevant to an issue and the nature of the power differentials between them. We are not experts in community organizing theory or practice, and as a result, this session aims to be a collective learning experience open to all who see their interests as relevant to the conversation.

We will open with a discussion of community organizing as a practice: What is community organizing, what are its goals, methods, past and ongoing examples? What disciplines and intellectual lineages does it draw from? We will incorporate key sources and ongoing projects that we have found helpful for synthesizing this knowledge so that participants can continue exposing themselves to the field after the conference.

We will also together consider the concept of social power. Understanding that there are many means to theorize and understand power, we will share the framings that have been most useful to us. We plan to present different tools, models and procedures for doing power analysis in use in various organizing settings.

Finally, we will propose to our group that we conduct a power analysis of our own. We have prepared a hypothetical but realistic scenario involving risk assessment in a hospital setting as an example. However, we encourage participants to bring their own experiences to the table, especially if they pertain in any way to data injustice to be assessed via power analysis. We also invite participants to bring examples of ongoing organizing efforts that algorithmic fairness researchers could act in solidarity with.

Participants will walk away from this session with 1) an understanding of the key terms and sources necessary to gain further exposure to these topics and 2) preliminary experience investigating power in realistic, grounded scenarios.

Maximum 40 Participants

Microsite Paper


Centering Disability Perspectives in Algorithmic Fairness, Accountability & Transparency

Alexandra Givens (Georgetown Institute for Tech Law & Policy), Meredith Ringel Morris (Microsoft Research)

This interactive panel seeks to highlight the impact of algorithmic systems on people with disabilities. The panelists include technical experts and law/policy experts working on these issues. A key objective is to surface new research projects and collaborations, including by integrating a critical disability perspective into existing research and advocacy efforts focused on identifying sources of bias and advancing equity.

Maximum 75 Participants

Description Paper


Infrastructures: Mathematical Choices and Truth in Data

Mukul Patel (Independent)

Final abstract pending: In modelling our world, mathematical choices are made much deeper down than commonly assumed. The ‘natural’ topologies and metrics underlying data capture and representation are not necessary and given, but contingent choices that affect classification, measurement and even (the possibility of) ordering. I propose to explore alternative spaces and metrics for data representation not so much as a practical program, but to raise awareness among a non-technical audience that mathematically rigorous alternatives do in fact exist, and to persuade the mathematically minded algorithm or software designer that their faith might be misplaced.
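As a minimal illustration of the abstract's claim that metrics are contingent choices rather than givens, the sketch below (hypothetical data, labels, and a simple nearest-neighbour rule, none of which are part of the session) shows the same query point being assigned to different classes under the Euclidean (L2) and Manhattan (L1) metrics.

```python
# Illustrative sketch only: hypothetical points and labels, not session material.
# It shows how the choice of metric -- often treated as "natural" or given --
# changes which class a nearest-neighbour rule assigns to the same query point.
import numpy as np

def nearest_label(query, points, labels, metric):
    """Return the label of the reference point closest to `query` under `metric`."""
    distances = [metric(query, p) for p in points]
    return labels[int(np.argmin(distances))]

euclidean = lambda a, b: float(np.linalg.norm(a - b))   # L2 metric
manhattan = lambda a, b: float(np.sum(np.abs(a - b)))   # L1 metric

points = np.array([[2.0, 0.0], [1.2, 1.2]])  # two labelled reference points
labels = ["A", "B"]
query = np.array([0.0, 0.0])

for name, metric in [("Euclidean", euclidean), ("Manhattan", manhattan)]:
    # Euclidean picks "B" (distance ~1.70 < 2.0); Manhattan picks "A" (2.0 < 2.4)
    print(f"{name}: query classified as {nearest_label(query, points, labels, metric)}")
```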

Maximum 25 Participants


Hardwiring Discriminatory Police Practices: the Implications of Data-Driven Technological Policing on Minority (Ethnic and Religious) People and Communities

Patrick Williams (Manchester Metropolitan University), Eric Kind (Queen Mary University)

The criminal regulation and over-policing of minority ethnic and religious groups across Europe is an everyday reality. Such encounters result in significant personal and emotional harms, driving up levels of mistrust in the police and raising serious questions concerning the legitimacy of European justice systems. Within this workshop, we will consider the implications of the adoption of data-driven technology by police and wider law enforcement agencies. Alongside anti-racism activists and campaigners, this workshop starts from the position that ‘data is not neutral’ and is better understood as a reflection and representation of (memoried) racialized policing practices. Critically then, the workshop aims (a) to explore how data-driven technologies pose ‘data-harms’ to minority ethnic and religious groups and (b) to collaboratively consider the feasibility of building (technological) community-based knowledge and skill capacity to resist ethnic profiling and racialized policing. We anticipate that the outputs from the workshop will contribute further to the development of guides for community-based anti-racism organisations and serve to inform those working in the technology field of the inherent threats of data-driven technological policing.

Maximum 25 Participants

Paper


III. Off-site CRAFT Sessions

CtrlZ.AI Zine Fair: Critical Perspectives

Emily Denton (Google), Alex Hanna (Google)

CtrlZ.AI is a zine fair focused on technology and artificial intelligence (AI). Ctrl + Z is the near-universal keyboard shortcut for "undo." The invocation of this keyboard command is meant to evoke a sense of pause or reconsideration, rather than going backwards. CtrlZ.AI is a venue for expressing this sense of reflection on AI and other sociotechnical systems which impact us every day. We aim to be an inclusive platform for artists, technologists, researchers, activists, teachers, and students to showcase their zines and other DIY creations. Zines are self-published, self or independently-distributed, small-circulation magazines that offer an accessible, liberating, and community-oriented mode of creating and disseminating content. We chose zines and related DIY creations as the primary medium of communication at CtrlZ.AI due to their low barrier of entry — to both creation and readership — and because of the significance this publication model holds for many marginalized communities. The CtrlZ.AI Zine Fair will run all day, with structured workshop programming during the hours listed in the program.

Maximum 75 Participants

Microsite Paper