Téléchargez gratuitement notre eBook "Pour une stratégie d'entreprise éco-responsable"
télécharger
French
French
Actualité
9/11/23

Artificial Intelligence : Joe Biden’s executive order vs. the European AI Act – Common Goals?

The U.S. has finally joined the race to be the first in the world to adopt a comprehensive legal framework on artificial intelligence (AI). On October 30, 2023, Joe Biden signed an ambitious and far-reaching executive order (EO) aiming to balance the interests of cutting-edge technology companies with national security, as well as civil, consumer, and workers’ rights. Notably, the EO seeks to ensure responsible innovation.

Recognizing that AI is advancing at « warp speed » and with « tremendous potential as well as perils », the American government is determined to « lead the way » both in AI innovation and in managing its risks, as conveyed by the White House’s statement of October 30.

The global leadership on AI governance is precisely what the European Union has sought since the publication of the first draft of the European Regulation on Artificial Intelligence (the “AI Act”) on April 21, 2021, which is now in its final phase of negotiations. In this context, the U.S. EO not only draws attention due to the urgency of AI regulation, but also to how it will impact and pressure the EU to resolve the remaining controversies hindering the AI Act’s signature.

Hence, in the face of this AI regulation competition, it is important to understand what are the main similarities and the main differences between both approaches.

Safe, secure, and trustworthy: the U.S. AI Executive Order

Entitled « safe, secure, and trustworthy AI », the EO incorporates the principles set on the Blueprint for an AI Bill of Rights1, released one year earlier by the White House, and follows the commitments made by American companies leading the AI sector for its responsible development2.

Besides seeking to protect civil, consumer, and workers’ rights, the EO also has a strong focus on developing industry standards, guidelines, and practices, for AI safety and security, which would be further consolidated by national laws and global agreements. This strategy aims to address the risks posed by advanced AI systems, aligning with goals from the National Cybersecurity Strategy.

In particular, developers of models that could impact national security, national economic security, or national public health would be required to notify the American government when training it and share the « results of all the red-team safety tests »3 before releasing their systems to the public.

Furthermore, the order outlines the development of standards and best practices for detecting AI-generated content, as well as authentication of official content through techniques such as watermarking to protect American citizens from AI-enabled fraud and deception. As Nicol Turner Lee4 explains, content authentication and watermarking « are clearly important as we see these deep fake technologies penetrate our general domain ».

In light of the above, the US government considers their initiatives as « the most significant actions ever taken by any government to advance the field of AI safety ». But how does the American approach differ from the European in terms of ensuring AI safety?

U.S. AI Executive Order vs. European Union’s AI Act

Although both EO and the AI Act seek, in general, to ensure responsible AI development and are based on harmonious principles, their distinct priorities and regulatory philosophies result in different approaches, as is possible see below:

1. Foundation models5

Both the AI Act and the EO seek to control AI foundation models, by posing strict obligations for their development. However, it is possible to consider the EO more stringent, as, for example, it introduces new requirements such as « red-teaming »6, and it places special emphasis on « dual-use foundation models » that could pose serious risks to security.

2. Testing and monitoring

Both documents highlight the need to test and monitor an AI system throughout its life cycle. The AI Act demands that developers comply with pre-market testing and post-market monitoring procedures. Similarly, the EO mentions the importance of testing and post-deployment monitoring to ensure proper functionality, especially avoiding problems related to misuse or ethical development.

3. Individual privacy protection

Both the AI Act and the EO emphasize the importance of individual privacy protection, neither of them allowing privacy legislation exceptions for AI system training. However, the scenario for individual privacy protection would be more favorable in the European Union, due to the GDRP7. Conversely, in the U.S., the creation of a regulatory system on data privacy would be required.

4. Cybersecurity

Both approaches require compliance with cybersecurity standards, with a focus on incorporating security by design and maintaining consistent performance.

We highlight below the most significant differences between the American and the European approaches:

1. Reach and enforcement

Firstly, it is important to highlight that as the EO is not legislation, it is not the U.S. version of the AI Act. While the AI Act establishes a harmonized regulation directly applicable throughout the EU single market, enforceable to all sectors of society (except the military), the EO does not directly regulate the private industry. Instead, it serves as an instrument employed by the presidential authority to instruct various executive departments to create industry standards, guidelines, practices, and regulations for AI development. Thus, despite the EO’s undeniable importance in safeguarding American citizens’ rights against AI risks, it lacks enforceability.

2. Primary Focus

While the AI Act presents a risk-based regulatory framework, classifying AI systems according to their level of risk (with stricter rules for higher risks and forbidding certain AI uses), the EO focuses mostly on standards and guidelines. It highlights the importance of international cooperation to mitigate AI-related risks. In this sense, violations of the conditions of operation of high-risk systems or prohibited uses will not require further legislative action under the AI Act, unlike the scenario outlined in the EO.

3. Intellectual Property

The EO acknowledges the significant issues posed by AI to IP protection and supports clarifying the boundaries between IP rights and AI. Notably, the EO gives instructions to the Trademark and Patent Offices (the USPTO) to publish guidance to patent examiners and applicants regarding the intersection of AI and IP. In addition, the EO requires the USPTO to address recommendations on copyright and AI to the president. Regarding the AI Act, its current text requires full disclosure by AI generative developers of copyrighted material used to feed the system. However, there are still discussions about this condition.

4. Other aspects

As the AI Act is mostly centered on compliance aspects, it does not directly address certain political issues such as immigration, education, and labor, unlike the EO. Moreover, the AI Act does not mention specific risks, such as those related to biological or chemical matters, which are clear concerns in the EO.

Conclusion

Given the above, it seems that the EU framework is more stringent when compared to the U.S. approach, as it seeks a formal compliance regulation. However, in the American context, while the simple execution of certain activities might suffice for compliance with the industry standards in some cases, in others, further legislation or regulatory measures resulting from the EO’s implementation could have a practical effect similar to the AI Act (such as the prohibition of some AI systems or the application of penalties).

Despite the different regulatory philosophies and priorities of the U.S. and EU approaches, both are situated in a common scenario: the rapid development of AI with unprecedented possibilities and risks, particularly regarding generative AI. If this scenario evolves into a competition for global leadership in AI regulation, leaders from both the U.S. and the EU are challenged to find an equitable manner to regulate AI.

This requires achieving a balance between promoting innovation and protecting citizens’ rights and interests, which would certainly be the most valuable prize in the AI regulation competition. 

Vincent FAUCHOUX / Juliana PERISSINOTTO

1 Release on October 4, 2022, the Blueprint for an AI Bill of Rights is defined by the White House as “a set of five principles and associated practices to help guide the design, use, and deployment of automated systems”. Its main purpose is to protect Americans’ civil rights in the face of AI advancements.

2 Microsoft, Google, OpenAI, Amazon, Meta, and the startups Anthropic and Inflection, have agreed on a series of AI safeguards set by the White House.

3 The “Red-Team” is a practice recently adopted by big tech companies, to ensure a safer implementation of AI. The red-team is in charge of not only preventing security failures but also other system failures, such as the generation of potentially harmful content. For example, Microsoft describes its read-team as a “group of interdisciplinary experts dedicated to thinking like attackers and probing AI systems for failures”.

4 Nicol Turner Lee is a senior fellow in governance studies and director of the Center for Technology Innovation.

5 AI foundation models are large, pre-trained models that serve as a starting point for various AI tasks (such as Chat CPT or Midjourney). They learn from extensive datasets and can be customized for specific applications.

6 Such as the requirements to share information and results, as above.

7 General Data Protection Regulation - Regulation (EU) 2016/679.

Découvrez le rapport d'activité annuel du cabinet
lire le rapport

Abonnez vous à notre Newsletter

Recevez chaque mois la lettre du DDG Lab sur l’actualité juridique du moment : retrouvez nos dernières brèves, vidéos, webinars et dossiers spéciaux.
je m'abonne
DDG utilise des cookies dans le but de vous proposer des services fonctionnels, dans le respect de notre politique de confidentialité et notre gestion des cookies (en savoir plus). Si vous acceptez les cookies, cliquer ici.