Description

This track aims to uncover and explore available legal solutions to tackle bias and unfairness in algorithmic decision-making. While recent debates have focused on transparency, the existing legal tools to mitigate risks to the rights and freedoms of individuals related to bias and unfairness, or to sanction bias and unfairness when they occur, are far more complex. The two areas of focus for this year’s Law Track are (1) data protection law and fundamental rights and freedoms, and (2) non-discrimination law, justice and fairness.

Data protection law, and in particular the GDPR, is a fertile source of legal solutions that complement transparency to ensure the fairness, lawfulness and accountability of every system that processes personal data, while at the same time empowering individuals with private rights of action and other subjective rights, such as access and the right to object. Some countries have put in place comprehensive and robust systems similar to the GDPR, while others focus on algorithmic accountability, risk assessments or sectoral approaches.

Authors are invited to explore topics such as, but not limited to: the nature and consequences of the prohibition of solely automated decision-making that may produce a legal or similarly significant effect for the person subjected to it; the safeguards that should be adopted for lawful automated decision-making (explanation, contestation, human in the loop); the role of data protection by design and by default in general, as well as the impact of codifying them as legal obligations; the role of a Data Protection Impact Assessment or other impact assessments in building fair algorithms; and whether the paradox of applying the data minimization and purpose limitation principles to personal data that feed machine learning/AI applications is real or a myth. Other topics, such as deconstructing and operationalizing the principle of fair processing, or the effectiveness of data subject rights in granting (some) control to the individual over automated decision-making, are also of interest, whether in private law or public law (including law enforcement and criminal justice).

In addition, private rights of action and liability rules play an important role in repairing material or even moral damage caused to individuals. Similarly, administrative and consumer law rules sometimes give procedural protections to individuals. But are these rules equipped to deal with automated decisions? If so, how can they be applied to ensure effective judicial redress? Finally, authors are also invited to bring new perspectives on the fundamental rights dimension of automated decision-making, both in horizontal relationships (between private parties) and in vertical relationships (between individuals and the state, whether represented by law enforcement or by public administration). Comparative legal approaches to bias and unfairness in algorithms are also welcome.

All of the following topics, among others, are encouraged for the Call for Papers and will be considered in the framework of any relevant legal domain: predictive policing, behavioral advertising, price discrimination, consumer harm, surveillance, facial recognition, data-driven public services, large-scale data analytics, automated decision-making, algorithmic decision-making systems, impact assessments, law enforcement, non-discrimination, inequality, fairness, liability, constitutional implications, rule of law, human rights implications, legal governance of algorithmic systems.

If your paper concerns analysis based on experiences with real-world systems, please consider submitting under Track 4.

Evaluation

Each paper will be reviewed by three Law program committee members (peer review) and, possibly, one CS program committee member (cross-disciplinary review). The evaluation criteria for the review will include:

  • Relevance to the themes of the conference;
  • Quality of submission as measured by accuracy, clarity, comprehensiveness, and depth of exposition, including contextualizing the work in the relevant field(s);
  • Novelty of the contributions and problem domain; and
  • Potential for broader impact, including across other disciplines and real-world systems.

The peer review will focus on proper grounding in positive law and/or relevant theoretical exploration, taking into account that the paper must be sufficiently accessible to a CS audience and may concentrate on explaining core legal issues to CS scholarship.

Carefully read the information below and, when submitting your proposal, please indicate:

  • Which of the two areas of interest you envisage (2.1 or 2.2)
  • One or more relevant legal domains for your paper
  • The region(s) of the jurisdiction(s) relevant for your paper

Areas of interest

2.1. Data protection law and fundamental rights and freedoms: What redress possibilities does the law offer people who are harmed by the unfair outcome of machine learning, AI, or other unfair use of personal data? What role, if any, could data protection principles or fair information principles (FIPs) play to defend people against unfair or discriminatory data processing or machine learning? How should the rules on automated decision-making in the GDPR, the modernised Convention 108, and emerging US state law be interpreted? What type of safeguards should be in place when automated decision-making is allowed? How should EU data protection law’s fairness principle be operationalised? Should the law aim for fair machine learning, and if so: how should fairness be operationalised? To what extent can impact assessments, such as human rights impact assessments or data protection impact assessments, help to protect people against unfair machine learning? Which human rights are threatened by machine learning? How could law be adapted to better protect against unfair or otherwise problematic types of automated decision-making systems?

2.2. Non-discrimination law, justice and fairness: This area covers both the public and the private sector. For the private sector, this may include, for example: price discrimination, or any other discrimination or consumer harm that may result from automated decision-making; analysis of safeguards to prevent bias; and claims for material and non-material damages in civil actions as a result of direct or indirect discrimination. For the public sector, this refers to law enforcement and other areas of public policy where unfairness may result from automated or semi-automated decision-making. Law enforcement topics may include, for example: predictive policing; facial recognition systems; the safeguards available in data protection, privacy laws, equality law, human rights law and administrative law/public law in the law enforcement sector; and the interoperability of large-scale IT systems within the EU, viewed through a fairness lens. Issues in relation to other public bodies include, for example: discrimination that may result from algorithms used to determine social benefits or liabilities to the state (such as tax and social security fraud detection), and the available safeguards in data protection, privacy laws, equality law, human rights law and administrative law/public law, as well as their effectiveness in protecting against unfairness.

Legal domains and jurisdictions

Authors should select one or more legal domains from the following list when submitting their paper: Constitutional Law, Administrative Law, Criminal Law, Human Rights Law, Private Law, Law of Obligations, Torts, Civil Liability, Contract Law, Criminal Procedure, Civil Procedure, Comparative Law, Private International Law, Public International Law, Labor Law, Data Protection Law. Peer reviewers for a paper will be experts in the domains and jurisdictions selected upon its submission, so please select your relevant domains judiciously.

Please also indicate the regions/jurisdictions that are relevant for your submission: Asia, Australia and New Zealand, Africa, Europe, Latin America, North America, Other.

Program Committee (to be updated)

  • Nikita Aggarwal, University of Oxford
  • Simisola Akintoye, De Montfort University, Leicester
  • Jef Ausloos, University of Amsterdam
  • Emre Bayamlioğlu, Tilburg University
  • Ana Beduschi, University of Exeter
  • Rosamunde van Brakel, Vrije Universiteit Brussel
  • Ian Brown, Research ICT Africa
  • Lee Bygrave, University of Oslo
  • Ryan Calo, University of Washington
  • Damian Clifford, Leuven University
  • Jennifer Cobbe, Cambridge University
  • Christian D'Cunha, European Data Protection Supervisor's office (EDPS)
  • Silvia De Conca, Tilburg University
  • Diana Dimitrova, FIZ Karlsruhe
  • Niva Elkin-Koren, Haifa Center for Law & Technology
  • David Erdos, University of Cambridge
  • Sarah Eskens, University of Amsterdam
  • Muge Fazlioglu, International Association of Privacy Professionals
  • Raphael Gellert, Tilburg University
  • Jamie Grace, Sheffield Hallam University
  • Aviva de Groot, Tilburg University
  • Natali Helberger, University of Amsterdam
  • Tristan Henderson, University of St Andrews
  • Joris van Hoboken, Vrije Universiteit Brussel
  • Kristina Irion, Institute for Information Law (IViR)
  • Malavika Jayaram, Digital Asia Hub
  • Frederike Kaltheuner, Privacy International
  • Irene Kamara, Tilburg University & Vrije Universiteit Brussel
  • Joshua Kroll, UC Berkeley
  • Brenda Leong, Future of Privacy Forum
  • Meg Leta Jones, Georgetown University
  • Gianclaudio Malgieri, Vrije Universiteit Brussel
  • Deirdre Mulligan, University of California Berkeley
  • Daragh Murray, Essex University
  • Guido Noto La Diega, Northumbria University
  • Roya Pakzad, Taraaz
  • David Powell, Hampshire Constabulary
  • Christine Rinik, Winchester University
  • Sophie Stalla-Bourdillon, Southampton University
  • Anton Vedder, KU Leuven
  • Tal Zarsky, University of Haifa

Track Chairs