This track seeks theoretical, methodological, and experimental contributions to understanding and accounting for the fairness, accountability, and transparency of algorithmic systems, and to mitigating the discrimination, inequality, and other harms that result from deploying such systems in real-world contexts.

Understanding includes: detecting and measuring how and which forms of bias manifest in datasets and models; determining how algorithmic systems may introduce or exacerbate discrimination and unjust outcomes; measuring the efficacy of existing techniques for explaining and interpreting automated decisions; and evaluating perceptions of fairness and algorithmic bias.

Accounting includes governing the design, development, and deployment of algorithmic systems in a way that takes into consideration all stakeholders and their interactions with socio-technical systems.

Mitigating includes: introducing techniques for data collection, analysis, and processing that acknowledge and account for the systemic bias and discrimination that may be present in datasets and models; formalizing fairness objectives based on notions from the social sciences, law, and the humanities; building socio-technical systems that incorporate these insights to minimize harm to historically disadvantaged communities and to empower them; and introducing methods for decision validation, correction, and participation in the co-design of algorithmic systems.

We welcome papers from all sub-disciplines of CS. Upon abstract registration, submissions must indicate at least one area of interest and at least one sub-discipline (see the lists below). If your paper concerns the study of a deployed system, or describes software or other developed materials, please consider submitting to Track 4.

Each paper will be reviewed by three CS program committee members (peer reviews) and, possibly, by one non-CS program committee member (cross-disciplinary review). Peer reviewers will be drawn from the paper's selected sub-discipline(s) to ensure expert reviews. The evaluation criteria will include:

  • Relevance to the themes of the conference;
  • Quality of submission as measured by accuracy, clarity, comprehensiveness, and depth of exposition, including contextualizing the work in the relevant field(s);
  • Novelty of the contributions and problem domain; and
  • Potential for broader impact, including across other disciplines and real-world systems.

Papers must present novel, rigorous, and significant scientific contributions and engage with work from the relevant disciplines. Where applicable, reviewers will also take the reproducibility of the results into consideration.

Areas of interest

1.1 Fairness, equity, and justice by design: methodologies and techniques to build computing systems that incorporate fairness, equity, and justice desiderata informed by legal, social, and philosophical models. Examples include fairness-aware machine learning algorithms, human language generation that mitigates issues of bias, and model-agnostic methods for data sanitization or post-processing.

1.2 Methods to audit, measure, and evaluate fairness: methods and techniques to check and measure the fairness (or unfairness) of existing computing systems and to assess associated risks. Examples include metrics and formal testing procedures to evaluate fairness, quantify the risk of fairness violations, or explicitly show tradeoffs. 

1.3 Methods involving human factors and humans-in-the-loop: methods and techniques that center on the human-machine relationship. Examples include visual analytics for fairness exploration, cognitive evaluation of explanations, and systems that combine human and algorithmic elements.

1.4 Accountability, transparency, and interpretability by design: methodologies for governing the accountability and transparency of new computing systems, and for working these goals into existing systems using a by-design approach. Examples include machine learning algorithms to create interpretable white-box models, software engineering process models and software metrics, and the documentation of accountable systems.

1.5 Methods to assess explainability, transparency, and interpretability: methods and techniques for assessing the accountability and transparency of existing computing systems. Examples include explanations of black-box models, and counterfactual and what-if reasoning.


Authors should select one or more CS sub-discipline(s) from the following list when submitting their paper: computer systems, computer vision, databases, data/web mining, data science, human/natural language technologies, human-computer interaction (quantitative), human-computer interaction (qualitative), information retrieval and recommender systems, machine learning, programming languages, robotics, software engineering, statistical analysis and learning, theoretical computer science, visual analytics, and others.

Peer reviewers for a paper will be experts in the sub-discipline(s) selected at submission, so please choose your sub-discipline(s) judiciously.

Track chairs