Tomorrow, the Federal Senate of the Brazilian National Congress may have its first vote on the country’s new artificial intelligence (AI) legal framework, which takes a human rights, risk management, and transparency approach.

The bill, to be marked up by the Senate Temporary Committee on Artificial Intelligence (“CTIA”), creates a broad and detailed legal framework. It contains rules on the rights of affected persons and groups, risk categorization and management, governance of AI systems, civil liability, penalties for non-compliance, and copyright protection. It also includes specific provisions for government use of AI, best practices and self-regulation, and communication of serious security incidents. Finally, it establishes an inter-agency regulatory system at the federal government level, whose main regulator will be chosen by the executive branch.

If approved by CTIA, the President of the Senate plans to put the bill to a floor vote next week. The new AI legal framework is a priority for both the congressional leadership and President Luiz Inácio Lula da Silva’s administration. If adopted by Congress, the framework will be the first major piece of legislation in Brazil to regulate the digital economy since the approval of the Civil Rights Framework for the Internet Act of 2014 (“MCI”) and the General Personal Data Protection Act of 2018 (“LGPD”).


Scope

The proposed new AI legal framework sets rights and obligations for the development, implementation, and use of any AI system of general application, as well as of generative AI, with the exception of systems for personal use, those developed exclusively for defense, and systems used only for research and development.

Risks and Rights

Before use, AI systems must be assessed and classified by developers and implementers according to their risk. There are three categories: excessive risk, high risk, and systems that pose neither excessive nor high risk.

AI systems deemed of excessive risk are prohibited. They include technology to: (i) manipulate behavior and exploit vulnerabilities, (ii) establish social scoring policies, (iii) facilitate the sexual abuse and exploitation of children and adolescents, (iv) create crime-prone personality classifications, (v) operate autonomous weapons systems, and (vi) conduct long-distance, real-time biometric surveillance of public spaces (with exceptions).

AI systems deemed of high risk are allowed but must comply with the framework’s obligations. They include technology related to: (i) critical infrastructure, (ii) education, (iii) recruiting, (iv) essential public and private services, (v) justice, (vi) autonomous vehicles, (vii) healthcare, (viii) crime fighting, (ix) investigations by authorities, (x) emotion recognition, (xi) immigration and border management, and (xii) content creation and distribution designed to foster engagement. The inter-agency group will regulate high-risk systems and may expand the list.

Furthermore, high-risk AI systems will have to undergo an algorithmic impact analysis based on a methodology and process detailed in the framework.

Persons or groups affected by AI systems have rights to information, privacy and data protection, human participation, and non-discrimination.  If the system generates relevant legal effects or is deemed high-risk, the framework provides for additional rights to explanation, contestability and revision, and human intervention or revision.


Governance

Private AI agents – including developers, providers, and implementers – will have to establish governance systems based on obligations listed in the framework. High-risk AI systems are subject to additional obligations.

The framework also establishes governance obligations for all levels and branches of government when developing, hiring, and adopting AI systems.


Regulation and Enforcement

The bill establishes an inter-agency group (“SIA”) in charge of regulating and overseeing the framework. The executive branch will decide which federal agency will lead the group and act as the main regulator. Existing sector-specific regulators will be part of the group.

The framework also includes provisions related to civil liability, penalties for non-compliance (including a maximum fine of BRL 50 million or 2 percent of gross revenue, per infraction), communication of serious security incidents, good practices, and self-regulation.  It also sets the parameters for a regulatory sandbox.

Other Issues

The bill further establishes provisions connected to AI systems, including antitrust, copyright protection, and impacts on the labor market and sustainability. It also includes provisions related to education and capacity building.

Diego Bonomo


Diego Bonomo is a senior advisor in the firm’s London office. Diego, a non-lawyer, has more than 20 years of Brazil regulatory, trade, and foreign affairs experience at leading business associations, think tanks, companies, and academic institutions. Diego also served in the Brazilian government.

Before joining the firm, Diego was Team Leader of the Brazil Trade Facilitation Program at Palladium and Executive Manager for International Affairs at Brazil’s National Confederation of Industry (Confederação Nacional da Indústria, CNI). At the U.S. Chamber of Commerce, he served as Senior Director of the International Division and Senior Director for Policy of the Brazil-U.S. Business Council. Diego also was Executive Director of the Brazil Industries Coalition (BIC), the leading Brazilian business coalition in the United States, and General Coordinator of Foreign and Trade Affairs at the Federation of Industries of the State of São Paulo (Federação das Indústrias do Estado de São Paulo, FIESP). He previously served in the Office of the President of Brazil as advisor to the Minister of Long-Term Planning.

Diego holds a bachelor’s and master’s degree in international relations from the Pontifical Catholic University of São Paulo.