In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): Guidelines on the definition of an AI system and Guidelines on prohibited AI practices. These guidelines are intended to provide guidance on the set of AI Act obligations that started to apply on February 2, 2025 – which includes the definitions section of the AI Act, obligations relating to AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems (the “Guidelines”). Please see our blog on the guidelines on prohibited AI practices here, and our blog on AI literacy requirements under the AI Act here.

Defining an “AI System” Under the AI Act

The AI Act (Article 3(1)) defines an “AI system” as (1) a machine-based system; (2) that is designed to operate with varying levels of autonomy; (3) that may exhibit adaptiveness after deployment; (4) and that, for explicit or implicit objectives; (5) infers, from the input it receives, how to generate outputs; (6) such as predictions, content, recommendations, or decisions; (7) that can influence physical or virtual environments. The AI System Definition Guidelines provide explanatory guidance on each of these seven elements.

Key takeaways from the Guidelines include:

  • Machine-based. The term “machine-based” “refers to the fact that AI systems are developed with and run on machines” (para. 11) and covers a wide variety of computational systems, including emerging quantum computing systems (para. 13). Interestingly, the Guidelines note that “biological or organic systems” can also be “machine-based” if they “provide computational capacity” (para. 13).
  • Autonomy. The concept of “varying levels of autonomy” in the definition refers to the system’s ability to operate with some degree of independence from human involvement (para. 14, AI Act Recital 12). Systems that are “designed to operate solely with full manual human involvement and intervention,” whether through manual controls or automated controls that enable humans to supervise operations, are thus out of scope of the AI system definition (para. 17). In contrast, a “system that requires manually [i.e., human] provided inputs to generate an output by itself” would qualify as an AI system, because the output is generated without being “controlled, or explicitly and exactly specified by a human” (para. 18).
  • Adaptiveness. The Guidelines explain that “adaptiveness after deployment” refers to a system’s “self-learning capabilities, allowing the behaviour of the system to change while in use” (para. 22). The Guidelines state that “adaptiveness after deployment” is not a necessary condition for a system to qualify as an AI system, because the AI Act uses the term “may” in relation to this element of the definition (para. 23).
  • Objectives. Objectives are the explicit or implicit goals of the task to be performed by that AI system (para. 24). The Guidelines draw a (not wholly clear) distinction between an AI system’s “objectives”—which are internal to the system—and its “intended purpose,” which is external to the system, relates to the context of deployment and turns on the “use for which an AI system is intended by the provider” (para. 25; citing Art. 3(12)). The Guidelines give the example of a corporate AI assistant whose intended purpose is to assist a company department to carry out certain tasks; this purpose is fulfilled through the system’s internal operation to achieve its objectives, but also relies on other factors, such as the system being integrated into the customer service workflow, the data that is used by the system and the system’s instructions for use.
  • Inferencing and AI techniques. The Guidelines state that the capability to infer, from the input received, how to generate outputs is a “key, indispensable condition” of AI systems (para. 26). The Guidelines explain that the term “infer how to” is broad. It is not limited to the “ability of a system to derive outputs from given inputs, and thus infer the result”; instead, it also refers to the “building phase” of an AI system, “whereby a system derives outputs through AI techniques enabling inferencing” (para. 29). The Guidelines state that supervised learning, unsupervised learning, self-supervised learning, reinforcement learning, deep learning, and knowledge- and logic-based techniques are all examples of AI techniques that enable inferencing in the building phase.
  • Outputs. Outputs include four broad categories: (1) predictions, meaning estimations about an unknown value from a known value (para. 54), (2) content, meaning newly generated material such as text or images (para. 56), (3) recommendations, meaning suggestions for specific actions, products, or services (para. 57), and (4) decisions, meaning conclusions or choices made by the AI system (para. 58).
  • Interaction with the environment. Interacting with the environment means the AI system is “not passive, but actively impact[s] the environment in which [it is] deployed” (para. 60). Impacted environments can be physical or virtual.

The Guidelines also point to Recital 12, which excludes from the AI system definition “simpler traditional software systems or programming approaches” and systems “that are based on the rules defined solely by natural persons to automatically execute operations”. The Guidelines provide examples of systems that may fall into this category—including those for improving mathematical optimization, basic data processing, systems based on classical heuristics, and simple prediction systems. According to the Guidelines, although some of these systems have the capacity to infer, they nonetheless fall outside the scope of the definition “because of their limited capacity to analyse patterns and adjust autonomously their output” (para. 41).

The Covington team continues to monitor regulatory developments on AI, and we regularly advise the world’s top technology companies on their most challenging regulatory and compliance issues in the EU and other major markets. If you have questions about AI regulation, or other tech regulatory matters, we are happy to assist with any queries.

Madelaine Harrington

Madelaine Harrington is an associate in the technology and media group. Her practice covers a wide range of regulatory and policy matters at the cross-section of privacy, content moderation, artificial intelligence, and free expression. Madelaine has deep experience with regulatory investigations, and has counseled multi-national companies on complex cross-jurisdictional fact-gathering exercises and responses to alleged non-compliance. She routinely counsels clients on compliance within the EU regulatory framework, including the General Data Protection Regulation (GDPR), among other EU laws and legislative proposals.

Madelaine’s representative matters include:

coordinating responses to investigations into the handling of personal information under the GDPR,
counseling major technology companies on the use of artificial intelligence, specifically facial recognition technology in public spaces,
advising a major technology company on the legality of hacking defense tactics,
advising a content company on compliance obligations under the DSA, including rules regarding recommender systems.

Madelaine’s work has previously involved representing U.S.-based clients on a wide range of First Amendment issues, including defamation lawsuits, access to courts, and FOIA. She maintains an active pro-bono practice representing journalists with various news-gathering needs.

Sam Jungyun Choi

Recognized by Law.com International as a Rising Star (2023), Sam Jungyun Choi is an associate in the technology regulatory group in Brussels. She advises leading multinationals on European and UK data protection law and new regulations and policy relating to innovative technologies, such as AI, digital health, and autonomous vehicles.

Sam is an expert on the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act, having advised on these laws since they started to apply. In recent years, her work has evolved to include advising companies on new data and digital laws in the EU, including the AI Act, Data Act and the Digital Services Act.

Sam’s practice includes advising on regulatory, compliance and policy issues that affect leading companies in the technology, life sciences and gaming sectors under laws relating to privacy and data protection, digital services and AI. She advises clients on the design of new products and services, preparing privacy documentation, and developing data and AI governance programs. She also advises clients on matters relating to children’s privacy and policy initiatives relating to online safety.

Lisa Peets

Lisa Peets is co-chair of the firm’s Technology and Communications Regulation Practice Group and a member of the firm’s global Management Committee. Lisa divides her time between London and Brussels, and her practice encompasses regulatory compliance and investigations alongside legislative advocacy. For more than two decades, she has worked closely with many of the world’s best-known technology companies.

Lisa counsels clients on a range of EU and UK legal frameworks affecting technology providers, including data protection, content moderation, artificial intelligence, platform regulation, copyright, e-commerce and consumer protection, and the rapidly expanding universe of additional rules applicable to technology, data and online services.

Lisa also supports Covington’s disputes team in litigation involving technology providers.

According to Chambers UK (2024 edition), “Lisa provides an excellent service and familiarity with client needs.”