Open Letter: Advocating for Brazilian AI regulation that protects human rights

In Brazil, a human rights-based approach to regulation for Artificial Intelligence Systems is urgent.

It is undeniable that artificial intelligence (AI) systems have the potential to benefit society, particularly in promoting the 17 UN Sustainable Development Goals.

However, the lack of binding rules to regulate the development, implementation, and use of AI has enormous potential to exacerbate known risks and harms to people and communities. AI is already facilitating and generating concrete harms and violations, for instance by reinforcing discriminatory practices, excluding historically marginalized groups from access to essential goods and services, supporting misinformation, undermining democratic processes, facilitating surveillance, exacerbating climate change, accelerating the epistemicide of Indigenous and local languages and cultures, and intensifying job insecurity.

To ensure AI systems promote innovation based on human rights, ethics, and responsibility, it is crucial to establish minimum rules: safeguards for the rights of affected individuals, obligations for AI agents, governance measures, and a regulatory framework for oversight and transparency. This does not impede development and innovation; on the contrary, effective regulation that protects rights is an indispensable condition for the flourishing of responsible AI products and services that enhance human potential and the democratic rule of law.

Bill 2338/2023, which focuses on risks and rights, is a good guide for AI regulation, taking into account what is being developed in other international contexts. Although there is still room for improvement, as we show below, this approach would facilitate dialogue between the legislation of different countries (regulatory interoperability/convergence), reducing the effort organizations must make to adapt to the Brazilian context. Moreover, AI regulation based on rights and attuned to risks would help position Brazil as a pioneer in providing and adopting responsible technologies.

Debunking myths and false trade-offs: regulation as a driver of responsible innovation and inclusive economic development

The actors who oppose comprehensive AI regulation in Brazil are precisely those who benefit from the current unregulated scenario, and they sustain it with arguments and narratives that do not hold up in practice.

  1. Regulation vs. Innovation 

There is no conflict between regulation and innovation; both can and should coexist, as seen with the Brazilian Consumer Protection Code and the General Data Protection Law. In addition, rights-based regulation of AI systems enables responsible innovation, fostering economic, technological, and social development that prioritizes well-being and the promotion of fundamental rights. The Brazilian Academy of Sciences published a report on AI regulation confirming that stimulating the national AI industry and protecting fundamental rights are perfectly compatible agendas.

  2. “Increased need for dialogue”

Civil society has advocated for a more inclusive and systematic dialogue. However, those opposed to prescriptive regulation use the same argument to impair and slow down the legislative process. Delaying the adoption of responsible regulation allows the continued development and implementation of risky technologies.

  3. Unknown technology

The argument that AI is inherently disruptive and unmanageable for regulatory purposes does not hold, because (a) studies on the subject, in both academia and the private sector, have accumulated eight decades of experimentation and analysis of social impacts; and (b) AI agents, especially developers, have agency in decisions about the ideation, development, and implementation of the technology, including the option not to implement it if mechanisms for transparency, quality control, and accountability are deemed inadequate.

In addition to these fallacious arguments and narratives, there is a strong mobilization of productive sectors and technology companies to prevent a vote on the Bill, whether through a flurry of last-minute amendments, requests for public hearings, or direct lobbying of parliamentarians. The industry lobby is massive, including international trips and private events organized by big tech companies for the senators most involved in the debate.

After successful lobbying to postpone the vote on the bill, a new round of public hearings was called. The first request indicating names for the hearings included only individuals from the private sector, disproportionately white men from the southeast of the country, disregarding other sectors, especially civil society, as well as social markers of race, gender, and territory. It was left to civil society to fight for the inclusion of some of its representatives.

Brazilian Regulatory Landscape: June 2024

As the final work of the Brazilian Federal Senate’s Temporary Commission on Artificial Intelligence (CTIA), a report was published on June 7, 2024, containing a new proposal for Bill 2338, which was updated again on June 18, 2024. It is important to highlight that this last proposal includes elements considered essential for the proper regulation of AI systems in Brazil, namely:

  • Guaranteeing basic rights for individuals potentially affected by AI;
  • Defining unacceptable uses of AI that pose significant risks to fundamental rights;
  • Creating general governance guidance and obligations, with specific requirements for high-risk systems and the public sector;
  • Maintaining algorithmic impact assessments to identify and mitigate risks and to evaluate opportunities;
  • Paying special attention to the Brazilian context of structural racism by incorporating measures throughout the text to prevent and combat different forms of direct and indirect discrimination, as well as to protect vulnerable groups;
  • Establishing an oversight framework in which the competent authority works together with sectoral regulators.

Also, we would like to highlight important improvements that were added to the original text of Bill 2338 by the latest proposal:

  • Explicit prohibition of autonomous weapons systems;
  • Creation of specific governance measures for general-purpose and generative AI, which is critical because such systems may not fit neatly into risk-level categorizations;
  • Provision for societal participation in governance processes;
  • Definition of the civil liability regime for operators, suppliers, and distributors of artificial intelligence systems as set out in the Consumer Protection Code (strict liability) for consumer relations and in the Civil Code for other cases. At the same time, the text guarantees the duty to reverse the burden of proof when the victim is vulnerable and lacks understanding or resources, or when the characteristics of the AI system make it excessively burdensome for the victim to prove the requirements of civil liability;
  • Direct designation of the Brazilian Data Protection Authority as the competent authority to harmonize the supervisory system, in collaboration with other actors.

Despite the advances outlined above, the latest version maintains or exacerbates critical issues that contradict the central goal of regulating AI to protect rights and provide for responsible innovation.

What can be done to improve the Brazilian regulation?

To begin with, it is important to highlight that the prohibited uses of AI systems must not be conditioned on causality, such as causing or being likely to cause damage. Thus, for the prohibitions on the use of subliminal techniques (art. 13, I) and of techniques that exploit vulnerabilities (art. 13, II), the mention of “in a way that causes or is likely to cause harm to the health, security, or other fundamental rights of the person or a third party” must be excluded.

Furthermore, the use of facial recognition technologies for public security and criminal justice must be banned, considering that these are highly sensitive areas due to their potential to restrict fundamental rights such as freedom of expression and assembly and to reverse the presumption of innocence. These uses also confirm the discriminatory potential of such technologies, which often produce errors – known as false positives – leading to unacceptable situations, such as the unjust imprisonment of a man in Sergipe, the aggressive treatment of a young man after a mistaken identification in Salvador, and the unjust arrest of a woman in Rio de Janeiro. Cases like these are serious and recurrent.

Constant, far-reaching, and indiscriminate surveillance constitutes a violation of people’s rights and freedoms and limits civic and public space. Facial recognition systems are also known to be inefficient, creating unnecessary costs for public administration due to their high error rates. A recent study indicated an average cost to public resources of 440,000 reais per arrest made through facial recognition. In addition to their inefficiency, such systems have been consistently denounced for their discriminatory impact, disproportionately affecting black populations and, to a greater extent, women.

We see the authorization to use facial recognition systems as a violation of the most cherished fundamental rights of the Brazilian people. Moreover, the manner in which their use is allowed, without specific safeguards and protections given the nature of these systems, exacerbates the already recognized problem. Equally worrying is the lack of specific data protection legislation for public security activities.

There is also a need to change the high-risk category provisions, mainly regarding (a) the assessment of indebtedness capacity and the establishment of credit scores and (b) harmful uses of AI.

  1. Assessment of indebtedness capacity and the establishment of credit scores 

A credit score is a tool used by banks and financial institutions to assess whether an individual is a reliable borrower based on their estimated risk of default, informing decisions such as whether to grant access to credit.

Access to financial credit is a prerequisite for the exercise of a range of constitutionally guaranteed rights, which underscores the importance of robust safeguards in AI-defined credit scoring. This is especially crucial due to its proven potential for discrimination, as evidenced by research, including studies conducted in the Brazilian context. It is important to note that in addition to the critical importance of credit as a prerequisite for access to essential goods and services such as healthcare and housing, credit models are fed with a large amount of personal data, which in itself requires enhanced security protection.

Including this application in the high-risk classification is consistent with other international regulations, such as the European Regulation on Artificial Intelligence (the AI Act).

  2. Harmful uses of AI

Article 14, IX, X, and XI lists uses of AI that various national and international studies have shown to have more negative than positive effects and that may be contrary to international human rights law. These are: the analytical study of crimes involving natural persons to identify behavioral patterns and profiles; the assessment of the credibility of evidence; predictive policing; and emotion recognition.

This is mainly because such uses are associated with techno-solutionism and Lombrosian theories, which reinforce structural discrimination and historical violence against certain groups in society, particularly black people.

In 2016, ProPublica published a study showing how an AI system used to generate a predictive score of an individual’s likelihood of committing future crimes produced results biased by race and gender that did not match whether those individuals actually committed crimes or reoffended. Subsection X of Article 14 would thus potentially allow AI to be used in a similar manner to “predict the occurrence or recurrence of an actual or potential crime based on the profiling of individuals,” a practice that has not been scientifically proven and, on the contrary, has been shown to potentially increase discriminatory practices.

Subsection XI, on biometric identification and authentication for emotion recognition, is also of concern, as there is no scientific consensus on the ability to identify emotions based solely on facial expressions, leading, for example, Microsoft to discontinue the use of AI tools for such purposes.

In this sense, we consider it necessary to classify the uses provided for in Article 14, IX, X, and XI as unacceptable risks.

Civil society participation in governance and oversight system

The current version of the bill improved the definition of the oversight system. The text now proposes the creation of the National AI System (SIA), composed of the competent authority, designated as the Brazilian Data Protection Authority (ANPD), sectoral authorities, the new Council for Regulatory Cooperation on AI (CRIA), and the Committee of AI Experts and Scientists (CECIA).

In an important step forward, the Council for Regulatory Cooperation, which will serve as a permanent forum for collaboration, now includes the participation of civil society. However, it is important to ensure that this participation is meaningful, guaranteeing civil society an active and effective role in AI governance and regulation.

For the approval of Bill 2338/2023 with the inclusion of the improvements

In light of the above, the subscribing organizations express support for advancing the legislative process of Bill 2338/2023 within the Temporary Commission on Artificial Intelligence (CTIA) and the Senate plenary towards its approval, provided that the following improvements are made:

  1. Amendment of Article 13, I and II, to exclude the causal link of “causing or being likely to cause damage” from the prohibitions on the use of subliminal techniques and of techniques that exploit vulnerabilities;
  2. Amendment of Article 13, VII, to exclude the exceptions for using remote, real-time biometric identification systems in spaces accessible to the public, thereby banning the use of these systems for public security and criminal prosecution; or, at a minimum, a moratorium that authorizes uses under the listed exceptions only after approval of a federal law that specifies the purposes of use and guarantees compliance with sufficient safeguard measures (at least those guaranteed for high-risk AI systems);
  3. Return of credit scores and other AI systems intended to evaluate the creditworthiness of natural persons to the high-risk list in Article 14, with the possibility of an exception for AI systems used to detect financial fraud;
  4. Reclassification of the systems mentioned in Article 14, IX, X, and XI into the category of unacceptable risks;
  5. A guarantee that civil society participation in the National Artificial Intelligence Governance and Regulation System (SIA), through the composition of the Artificial Intelligence Regulatory Cooperation Council (CRIA), is effective and meaningful.

Access Now

Rights in Network Coalition

IP.rec – Instituto de pesquisa em direito e tecnologia do Recife 

Laboratório de Políticas Públicas e Internet – LAPIN

Transparência Brasil

Institute for Research on Internet and Society (IRIS)

Instituto Telecom
