January 30, 2025

Mark Zuckerberg's January 7, 2025 Decision to End Meta's Fact-Checking Program in the United States: Initial Legal Considerations

1. The Facts

Meta (Facebook, Instagram, WhatsApp) announced on Tuesday, January 7, 2025, that it would terminate its fact-checking program in the United States, a major retreat in the group's content moderation policy. "We will eliminate fact-checkers and replace them with community notes, similar to X (formerly Twitter), beginning with the United States," declared Mark Zuckerberg, the group's CEO, in a social media post. According to Zuckerberg, "the fact-checkers have been too politically oriented and have contributed more to eroding trust than improving it, particularly in the United States." Meta's announcement comes as Republicans and X's owner, Elon Musk, have repeatedly complained about fact-checking programs, which they equate with censorship. "Recent elections appear to be a cultural turning point, once again prioritizing freedom of expression," stated Meta's CEO.

This decision has taken on an exceptional political dimension, prompting interventions from numerous politicians, including France's Minister for Digital Affairs, and leading to a particularly telling official communiqué from the French Ministry of Foreign Affairs on January 8: "France expresses its concern regarding the decision by the American company META to question the utility of fact-checking in limiting the circulation of false information. France notes that this decision is currently limited to the territory of the United States. France will maintain its vigilance to ensure that META, along with other platforms, complies with their obligations under European legislation, particularly the Digital Services Act (DSA). Implemented in 2024, this unprecedented regulatory framework holds platforms accountable for the content to which users are exposed. It is an integral part of the EU's democratic functioning and protects our citizens from foreign interference and information manipulation. Freedom of expression, a fundamental right protected in France and Europe, should not be confused with a right to virality that would authorize the dissemination of inauthentic content reaching millions of users without filtering or moderation. The American company META had itself publicly promoted its partnership program with fact-checkers as an effective tool that enabled the successful conduct of the 2024 European elections. France reiterates its support for civil society actors engaged worldwide in defending and strengthening democracies against information manipulation and destabilization acts by authoritarian regimes."

2. Review of the Concept of Fact-Checking and Its Evolution Over the Past Decade

Fact-checking is a technique that consists of verifying the veracity of facts and the accuracy of figures presented in the media by public figures, particularly politicians and experts, and of evaluating the objectivity of the media themselves in their handling of information. The practice emerged in the United States in the 1990s under the name "fact-checking" (the English term is also used in French-speaking countries). Initially carried out by journalists as part of their profession, it has since been democratized by software that helps individuals verify facts, and since 2013 it has even been automated, with bots designed to perform verification without human intervention. Since 2016, social networks (Facebook, Twitter, etc.) have used fact-checking, as large volumes of false information (known in French as "infox," or fake news) circulate through their channels.

Major digital platforms began collaborating with third-party fact-checkers in 2016-2017, primarily in response to the disinformation controversies that marked the 2016 U.S. presidential election. This major evolution in online content moderation was inaugurated by Facebook (now Meta) in December 2016, with the launch of an external fact-checking program involving organizations certified by the International Fact-Checking Network (IFCN).

The fact-checking process is structured around a three-tier system (illustrated by the sketch after this list):

  • The first stage consists of initial detection of potentially misleading content, operating through a combination of user reports, automated systems using artificial intelligence, and proactive monitoring conducted by fact-checkers themselves;
  • In the second stage, partner fact-checkers, maintaining strict independence from the platform, proceed with the actual verification. They select content for verification according to their own editorial criteria, conduct their investigations, publish articles detailing their analysis, and assign a rating to the examined content;
  • The third stage is the platform's responsibility: based on the fact-checkers' conclusions, it implements various measures. These include reducing the visibility of content identified as false, adding warning labels accompanied by a link to the fact-checker's article, notifying users who shared the contested content, and potentially demonetizing the content in question.
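
For readers who want a concrete picture of how these three stages fit together, here is a minimal, purely illustrative sketch in Python. All names (Post, Verdict, detect_candidates, fact_check, apply_measures) are our own labels for the stages described above, not Meta's actual systems or any real API; demonetization is omitted for brevity.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Verdict(Enum):
    FALSE = "false"
    PARTLY_FALSE = "partly false"
    NOT_RATED = "not rated"

@dataclass
class Post:
    post_id: str
    text: str
    sharers: list = field(default_factory=list)
    demoted: bool = False
    label: Optional[str] = None

# Stage 1: detection of potentially misleading content, combining user
# reports, automated (AI) flags, and fact-checkers' proactive monitoring.
def detect_candidates(posts, user_reports, ai_flags, monitored):
    flagged = user_reports | ai_flags | monitored
    return [p for p in posts if p.post_id in flagged]

# Stage 2: an independent fact-checker investigates, publishes an article,
# and assigns a rating (here simulated with a lookup of published verdicts).
def fact_check(post, published_verdicts):
    return published_verdicts.get(post.post_id, (Verdict.NOT_RATED, None))

# Stage 3: the platform acts on the verdict: demote, label with a link to
# the fact-checker's article, and notify users who shared the content.
def apply_measures(post, verdict, article_url):
    if verdict in (Verdict.FALSE, Verdict.PARTLY_FALSE):
        post.demoted = True
        post.label = f"Disputed - see {article_url}"
        for user in post.sharers:
            print(f"Notify {user}: post {post.post_id} rated '{verdict.value}'")

if __name__ == "__main__":
    posts = [Post("p1", "Example claim", sharers=["alice", "bob"])]
    candidates = detect_candidates(posts, {"p1"}, set(), set())
    verdicts = {"p1": (Verdict.FALSE, "https://example.org/fact-check/p1")}
    for post in candidates:
        verdict, url = fact_check(post, verdicts)
        apply_measures(post, verdict, url)
```

Note the deliberate separation in this sketch: stage 2 takes no instructions from the platform (the verdicts dictionary stands in for the fact-checkers' independently published ratings), mirroring the editorial independence described above.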

Google adopted a similar approach in 2017 with the launch of its "Google News Fact Check" program. These collaborations are governed by contracts that guarantee the fact-checkers' editorial independence while establishing strict quality standards. Fact-checkers are compensated for their work, generally under a model based on the volume of content verified.

The effectiveness of these programs was already subject to debate within the legal and academic community prior to Mark Zuckerberg's highly publicized January 7 decision. Some observers highlight these systems' limitations in the face of the growing volume of content to verify, while others emphasize their essential role in the global ecosystem of combating online disinformation. This tension illustrates the persistent challenges of content regulation on digital platforms, at the intersection of communication law, personal data protection, and economic regulation of digital actors.

This content moderation approach has been considerably strengthened by the adoption of the European Digital Services Act (DSA) of October 19, 2022, which now requires very large online platforms to implement effective measures to combat disinformation, explicitly including collaboration with independent fact-checkers.

3. Compatibility of META's Decision with Article 35 of the European Digital Services Act (DSA)

META's decision does not appear directly contrary to the provisions of the Digital Services Act (DSA), particularly those of Article 35 ("Mitigation of risks"), which lists, in a purely indicative manner, the measures that providers of very large online platforms must take to combat systemic risks such as disinformation. Article 35 provides:

"Providers of very large online platforms and of very large online search engines shall put in place reasonable, proportionate and effective mitigation measures, tailored to the specific systemic risks identified pursuant to Article 34, taking into account the impact of such measures on fundamental rights. Such measures may include, **where appropriate** (...)
- cooperation with trusted flaggers in accordance with Article 22, and the implementation of decisions from out-of-court dispute settlement bodies pursuant to Article 21;
- cooperation with other providers of online platforms or of online search engines, or adjusting such cooperation (...);
- clearly visible marking to ensure that content, including images, audio or video content, that has been generated or manipulated, and that resembles existing persons, objects, places or other entities or events significantly and appears to a person to be authentic or truthful, is recognizable when presented on their online interfaces and, in addition, providing an easy-to-use functionality allowing recipients of the service to report such information."

The measures envisaged by the DSA to reduce systemic risks (including cooperation with "trusted flaggers") are therefore essentially indicative, as signaled by the expression "where appropriate" in Article 35 quoted above, and may be supplemented by other measures of the platforms' choosing, such as the community notes mentioned by Mark Zuckerberg in his recent communication.

4. Compatibility of META's Decision with the Provisions of the "Code of Practice" Adopted Under the DSA

The Digital Services Act led to the adoption of the "Strengthened Code of Practice on Disinformation" on June 16, 2022. This Code, which derives its binding force from Article 35 of the DSA, brings together 34 signatories, including major digital platforms such as Meta, Google, TikTok, and Microsoft, as well as advertising actors, fact-checking organizations, and civil society organizations.

The commitments made in this Code are structured around several major axes. In the advertising domain, signatories commit to implementing a policy of "demonetization" of content identified as disinformation, while enhancing transparency on political advertising and strictly regulating advertising targeting techniques. Regarding service integrity, the Code provides for concrete measures to combat artificial manipulation, notably through the detection of fake accounts and bot networks, as well as enhanced verification of accounts with large audiences.

Regarding the fight against fake news, the Code imposes specific transparency and action obligations on very large online platforms. Signatories commit to implementing rapid detection systems for disinformation campaigns, reducing the visibility of content identified as misleading, and collaborating with certified independent fact-checkers. The Code provides for a rigorous supervision mechanism involving regular reports to the European Commission. Signatories must provide detailed data on measures taken to identify and limit the spread of false information, particularly during electoral periods.

The Code also places central importance on user empowerment, providing for the development of more effective reporting tools and media literacy initiatives. Cooperation with the scientific community constitutes another major pillar, with the establishment of a framework facilitating researchers' access to platform data.

In January 2024, this framework was substantially strengthened by the adoption of additional commitments, particularly focused on new technological challenges. These commitments notably concern the systematic detection and marking of AI-generated content, as well as specific measures to protect the democratic process during electoral periods and protect minors from disinformation. Specifically regarding AI-generated content, the commitments impose clear identification of deepfakes and other synthetic content likely to deceive users.

The Code's initial validity period is forty months from its signing, with a revision clause allowing it to be adapted to technological developments and new online disinformation challenges. The effectiveness of these commitments is guaranteed by a supervision mechanism entrusted to the European Commission, which has the power to impose sanctions in the event of clear breaches of the obligations undertaken.

This original normative architecture, mixing soft law and hard law, illustrates the European approach to digital platform regulation: an innovative co-regulation model that favors holding actors accountable through voluntary commitments while preserving the European Commission's effective power to sanction non-compliance.

It goes without saying that META's implementation of the measures provided for by this Code of Practice will be closely scrutinized, and that any subsequent termination of its agreements with third-party fact-checkers in Europe could be regarded as a violation of both the spirit and the letter of the Code.

5. Freedom of Expression: Two Distinct Legal Approaches Between the United States and France

Could this decision ultimately be merely the manifestation of two distinct legal and philosophical approaches to freedom of expression that separate France from the United States?

The First Amendment to the U.S. Constitution enshrines an almost absolute protection of freedom of expression ("Congress shall make no law [...] abridging the freedom of speech, or of the press"), drastically limiting any prior restraint on publication, including in matters of defamation or false information. The Supreme Court's decision in New York Times v. Sullivan (1964) illustrates this approach by requiring public figures to prove "actual malice" in order to prevail in a defamation claim.

Conversely, French law, heir to the Press Freedom Law of July 29, 1881, strictly regulates the exercise of this freedom. It criminally sanctions defamation (Article 29), insult (Article 33), and the dissemination of false news (Article 27). This restrictive approach was strengthened by the law of December 22, 2018 on the manipulation of information, which allows judges to order the rapid removal of "false information" during electoral periods. More recently, the law of May 21, 2024 aimed at securing and regulating the digital space (SREN) reinforces this framework by requiring online platforms to combat the dissemination of manipulated or artificially generated content ("deepfakes"), in particular by clearly flagging it to users.

This fundamental divergence reflects two distinct philosophies: the American tradition favors a "marketplace of ideas" in which truth is expected to emerge naturally from public debate, while French law, shaped by its history, considers that freedom of expression must be regulated to protect both individual rights and the general interest, particularly in the face of new technological threats such as deepfakes, which can undermine the integrity of democratic debate.

It will be important to closely monitor announcements from META and the other major American platforms in the coming months to see whether the January 7 announcement, currently limited to U.S. territory, will be extended to Europe, with the numerous political and legal upheavals that would inevitably ensue. To be continued!

Vincent FAUCHOUX