On June 2, 2020, the French Supervisory Authority (“CNIL”) published a paper on algorithmic discrimination prepared by the French independent administrative authority known as “Défenseur des droits”.  The paper is divided into two parts: the first part discusses how algorithms can lead to discriminatory outcomes, and the second part includes recommendations on how to identify and minimize algorithmic biases.  This paper follows from a 2017 paper published by the CNIL on “Ethical Issues of Algorithms and Artificial Intelligence”.

According to this new paper, each stage of the development and deployment of an algorithmic system is potentially susceptible to bias – indeed, even the maintenance of such a system can be vulnerable to this problem.  Biases are often the result of the data that is fed into a system, which may itself be skewed or contain information already affected by biases.  The paper gives the example of a facial recognition system whose algorithms were trained mainly on data relating to white men.  Alternatively, even where the data fed into a system is “neutral” and representative, the combination of various data types may lead to discriminatory effects later on.  Here, the paper gives the example of a university that uses applicants’ place of residence as a criterion to discriminate against applicants of immigrant origin.

The paper concludes that automated systems “tend to stigmatize members of already disadvantaged and dominated social groups”; moreover, the developers of algorithms and the companies using them are currently “not vigilant enough to avoid this invisible form of automated discrimination”.  The paper advocates for companies to implement measures that help ensure that algorithmic biases are identified and that individuals who apply discriminatory decisions are sanctioned.  Finally, the paper lists the following recommendations to help effect change in this area:

  • training and raising awareness among professionals who create and use algorithmic systems;
  • supporting research to develop studies on bias and methodologies to prevent it;
  • imposing stricter transparency obligations which reinforce the need to explain the logic behind algorithms (and allow third parties, and not only those affected by an automated decision, to access the criteria used by the algorithms); and
  • conducting impact assessment studies to anticipate the discriminatory effects of algorithms (e.g., similar to the Algorithmic Impact Assessment platform recently implemented by the Canadian Federal government).

Anna Sophia Oberschelp de Meneses is an associate in the Data Privacy and Cybersecurity Practice Group.  Anna is a qualified Portuguese lawyer, but is both a native Portuguese and German speaker.  Anna advises companies on European data protection law and helps clients coordinate international data protection law projects.  She has obtained a certificate for “corporate data protection officer” by the German Association for Data Protection and Data Security (“Gesellschaft für Datenschutz und Datensicherheit e.V.”). She is also Certified Information Privacy Professional Europe (CIPPE/EU) by the International Association of Privacy Professionals (IAPP).  Anna also advises companies in the field of EU consumer law and has been closely tracking the developments in this area.  Her extensive language skills allow her to monitor developments and help clients tackle EU Data Privacy, Cybersecurity and Consumer Law issues in various EU and ROW jurisdictions.


Kristof Van Quathem advises clients on information technology matters and policy, with a focus on data protection, cybercrime and various EU data-related initiatives, such as the Data Act, the AI Act and EHDS.

Kristof has been specializing in this area for over twenty years and developed particular experience in the life science and information technology sectors. He counsels clients on government affairs strategies concerning EU lawmaking and their compliance with applicable regulatory frameworks, and has represented clients in non-contentious and contentious matters before data protection authorities, national courts and the Court of the Justice of the EU.

Kristof is admitted to practice in Belgium.