Description

This track invites papers from the range of social sciences and humanities (SSH) disciplines that engage with issues of justice and fairness in computational systems. With this track we seek both to complement and to critically interrogate the research that has taken place within this field so far. We particularly encourage submissions that explore existing and potential interfaces between computer science/engineering and SSH, and that either build on debates already present in the FAT* research community or pose new questions and areas of research. The areas of interest purposely incorporate a broad set of domains that have so far debated computation and social justice separately. We welcome both ‘deep dive’ approaches that use disciplinary insights to illuminate a particular issue in new ways, and interdisciplinary contributions that identify oppositional or complementary approaches that challenge or interrogate established ideas or perspectives, and that translate key ideas across perspectives. One key aim for this new track is to build an agenda for SSH to contribute to the advancement of social justice in computing systems, and to bring work that has been ongoing in these fields into the view of the ACM community.

If your paper concerns the auditing or evaluation, from an SSH perspective, of one or more systems currently or previously in use, or studies of emerging social movements, social justice and activism in relation to computational and algorithmic systems, please consider Track 4.

Evaluation

Each paper will be reviewed by three program committee members from relevant SSH fields (peer review) and, possibly, by one from CS (cross-disciplinary review). The evaluation criteria for the review will include:

  • Relevance to the themes of the conference;
  • Quality of submission as measured by clarity, comprehensiveness, and depth of exposition, including contextualizing the work in the relevant field(s);
  • Originality of the contributions and problem domain; and
  • Potential for broader impact, including across other disciplines and real-world systems.

In particular, satisfying these criteria includes demonstrating a proper grounding in the relevant literature of the paper’s disciplinary background (e.g. STS, criminology, political philosophy, ethics), depth of theoretical grounding, and methodological rigour and integrity. The paper should aim to be accessible to both CS and SSH audiences and/or to translate between them.

Areas of interest

3.1. Sociology, Science and Technology Studies (STS)

Including (but not limited to) comparative, ethnographic, cultural, political, structural or political economy approaches, grounded in the disciplines of sociology, anthropology, geography, political science, STS, public administration, or other related approaches to established and emerging FAT* topics; history and philosophy of science and technology; critical big data and algorithm studies, and social justice approaches to FAT-related topics. 

What role are CS approaches to fairness, accountability and transparency currently playing in relation to algorithmic practices and cultures in society? How do the ways these concerns are voiced and addressed differ across geographies and cultures? What can interdisciplinary approaches tell us about the advantages and drawbacks of formalising complex values such as fairness, accountability and transparency? For example, how have different SSH disciplines historicised processes of conceptual formalisation? What emerges from placing political philosophy approaches to FAT* in dialogue with CS and public administration perspectives? And what can comparative historical approaches tell us about how to mediate between different, and at times incompatible, disciplinary definitions of concepts such as fairness?

What are the social justice dimensions of FAT*, and how have they been surfaced and addressed by CS and law research so far? What is missing and what gaps can SSH approaches help to fill? What is the relationship of FAT* research to the concerns of marginalised groups about algorithmic discrimination, fairness and justice, and what is the potential for SSH FAT* research to centre problems that are currently insufficiently visible? If that potential exists, under what conditions (e.g. structural, funding, organisational, institutional) could it be realised? 

How should the field address the problems and opportunities of intersectional perspectives on fairness? What kinds of marginalisation are relevant to the concerns of FAT*? Can perspectives and tools from gender/LGBTQ studies, postcolonial studies, indigenous studies, racial and ethnic studies, migration studies, work on disability and other relevant domains inform a broad perspective on how to frame and advance social justice in relation to design of computational systems?

What are the implications of algorithmic optimisation processes for FAT* concerns, and what research strategies can SSH provide to surface and interrogate these processes?

3.2. Philosophy, Political Philosophy, and Digital Ethics 

Including (but not limited to) philosophy, political philosophy, digital ethics, and related studies of FAT* topics. We invite consideration of what other philosophical perspectives not commonly seen in FAT* research might add. What, for example, do classic and historical debates on fairness have to offer contemporary efforts to define it? What philosophical and analytical perspectives are already embedded in FAT* and how might we distinguish them? What might be the role of philosophy and digital ethics in informing work on FAT*? 

How can philosophy of information and philosophy of design help with the design of algorithms? What can existing research in this field tell us about the epistemology of FAT* research, and about its engagement with the social impact of digital technologies?

What are the assumptions of FAT* research with respect to the political philosophy grounding of notions of accountability and transparency? What idea of society is FAT* research grounded in, and what are the implications of this choice? Can theories of justice shed any light on our understanding of fair algorithms/AI?

Within digital ethics, we also invite consideration of which perspectives have been foregrounded so far in CS work on FAT*, and which others might merit consideration. For example, what is the current role of bioethics with respect to FAT* CS, for instance with regard to setting the goals and boundaries for technical, legal and policy experimentation with direct social impacts? We also invite papers focusing on possible models for the governance of digital technologies in general, and AI in particular. What kinds of guidance does digital ethics provide for organisations or communities to address ethical issues? What kinds of theories or frameworks should we use to ascribe moral responsibility for the actions of algorithmic systems? How should ethical auditing mechanisms for algorithmic decisions be defined? Should there be redress mechanisms for unintended consequences or failures of algorithms? Are there fundamental values or factors that should underpin the design of algorithms to ensure fairness and socially good outcomes?

Domains and sub-disciplines

Authors should select one or more SSH sub-disciplines or domains of study from the following list when submitting their paper. Peer reviewers for a paper will be experts in the domain or sub-discipline(s) selected upon its submission, so please select your relevant domains judiciously. 

SSH domains: justice and democracy; implications for the common good; group privacy; autonomy; responsibility; ethics of data; ethics of algorithms; ethics of practices; transparency; auditing of algorithms; ethical design; governance of the digital; philosophy of information; philosophy of design; historical perspectives; automation, optimisation and (computational) statistics in relation to social justice; intersectional concerns; ethnographies of communities/practices/labour in socio-technical systems; qualitative/quantitative studies of engagement with, and attitudes towards, algorithmic systems.

SSH subdisciplines: gender and sexuality studies, postcolonial studies, migration studies, disability studies, racial and ethnic studies, indigenous studies, critical HCI and the design of algorithmic systems, ethics, history/philosophy of science and technology, critical data/algorithm studies, surveillance studies, criminology.

Program Committee (to be updated)

  • Nikita Aggarwal, University of Oxford
  • Doris Allhutter, Austrian Academy of Sciences
  • Aaron Alvero, Stanford University
  • Sareeta Amrute, Data & Society
  • Emily Bender, University of Washington
  • Balazs Bodo, University of Amsterdam
  • Rosamunde van Brakel, Vrije Universiteit Brussel
  • Meredith K. Broussard, New York University
  • Cansu Canca, AI Ethics Lab
  • Corinne Cath-Speth, Oxford University
  • Silvia de Conca, Tilburg University
  • Rumman Chowdhury, Accenture
  • Josh Cowls, Oxford University
  • Francien Dechesne, Leiden University
  • Fernando Delgado, Cornell University
  • Nicholas Diakopoulos, Northwestern University
  • Ben Green, Harvard University
  • Aviva de Groot, Tilburg University
  • Alex Hanna, Google
  • Natali Helberger, University of Amsterdam
  • Arne Hintz, Cardiff University
  • Joris van Hoboken, Vrije Universiteit Brussel
  • William Isaac, DeepMind
  • Brian Jefferson, University of Illinois
  • Os Keyes, University of Washington
  • Esther Keymolen, Tilburg University
  • Lauren Kilgour, Cornell University
  • Brenda Leong, Future of Privacy Forum
  • Karen Levy, Cornell University
  • Yanni Loukissas, Georgia Institute of Technology
  • Jasmine McNealy, University of Florida
  • Yeshimabeit Milner, Data for Black Lives
  • Brent Mittelstadt, Oxford University
  • Sendhil Mullainathan, University of Chicago
  • Deirdre Mulligan, University of California Berkeley
  • Mutale Nkonde, Berkman Klein Center for Internet & Society, Harvard University
  • Merel Noorman, TILT, Tilburg University
  • Sofia Olhede, University College London
  • Niels ten Oever, University of Amsterdam
  • Roya Pakzad, Taraaz
  • Leon Felipe Palafox Novack, Universidad Panamericana
  • Samir Passi, Cornell University
  • Robin Pierce, Tilburg University
  • Noopur Raval, UC Irvine
  • Rashida Richardson, AI Now Institute
  • Pablo Rivas, Marist College
  • Burkhard Schafer, University of Edinburgh
  • Nishant Shah, Dutch Art Institute
  • Tamar Sharon, Radboud University
  • Nicole Shephard, Independent
  • Kate Sim, Oxford Internet Institute
  • Luke Stark, Microsoft Research
  • Anne Washington, New York University

Track Chairs