
Kristof Van Quathem

Kristof Van Quathem advises clients on data protection, data security and cybercrime matters in various sectors, and in particular in the pharmaceutical and information technology sectors. Kristof has been specializing in this area for over fifteen years and covers the entire spectrum of advice: government affairs strategies during the lawmaking process, compliance advice on adopted laws, regulations and guidelines, and the representation of clients in non-contentious and contentious matters before data protection authorities.

Several EU data protection supervisory authorities (“SAs”) have recently issued guidance on cookies.  On January 11, 2024, the Spanish SA published guidance on cookies used for audience measurement (often referred to as analytics cookies) (available in Spanish only).  On December 20, 2023, the Austrian SA published FAQs on cookies and data protection (available in German only).  On October 23, 2023, the Belgian SA published a cookie checklist (available in Dutch and French).

The new guidance builds on existing guidance but addresses some new topics, which we discuss below.
Continue Reading EU Supervisory Authorities Publish New Guidance on Cookies

On August 22, 2023, the Spanish Council of Ministers approved the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (“AESIA”), thus creating the first AI regulatory body in the EU. The AESIA will start operating in December 2023, in anticipation of the upcoming EU AI Act (for a summary of the AI Act, see our EMEA Tech Regulation Toolkit). In line with its National Artificial Intelligence Strategy, Spain has been playing an active role in the development of AI initiatives, including a pilot for the EU’s first AI Regulatory Sandbox and guidelines on AI transparency.
Continue Reading Spain Creates AI Regulator to Enforce the AI Act

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there isn’t yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it.  The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we’ve discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.
Continue Reading AI Update: EU High-Level Working Group Publishes Self Assessment for Trustworthy AI

On June 2, 2020, the French Supervisory Authority (“CNIL”) published a paper on algorithmic discrimination prepared by the French independent administrative authority known as the “Défenseur des droits”.  The paper is divided into two parts: the first part discusses how algorithms can lead to discriminatory outcomes, and the second part includes recommendations on how to identify and minimize algorithmic biases.  This paper follows a 2017 paper published by the CNIL on “Ethical Issues of Algorithms and Artificial Intelligence”.
Continue Reading French CNIL Publishes Paper on Algorithmic Discrimination

On March 24, 2020, the Dutch Supervisory Authority (“SA”) announced the launch of a broad investigation into automobile manufacturers, to determine whether any violations of data protection laws have occurred in relation to connected cars.

The Dutch SA sent a questionnaire to all Netherlands-based car and truck manufacturers, asking what types of personal data they