Let’s first recall the purpose of this public warning letter (“A Right to Warn about Advanced Artificial Intelligence”), written by current and former employees of OpenAI and Google DeepMind, several of them anonymous, and endorsed by eminent AI specialists.
The signatories of this letter believe in the potential of AI technology to deliver unprecedented benefits to humanity. But they also highlight the serious risks posed by these technologies, from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems, potentially resulting in human extinction. The signatories underline that AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts.
The problem the signatories address is the following: AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. Yet they currently have only weak obligations to share some of this information with governments, and none with civil society. The signatories do not believe these companies can all be relied upon to share it voluntarily.
The signatories of this warning letter therefore call upon advanced AI companies to commit to four principles: (1) not to enter into or enforce agreements that prohibit criticism of the company for risk-related concerns; (2) to facilitate a verifiably anonymous process for employees to raise such concerns with the company’s board, with regulators, and with an appropriate independent body; (3) to support a culture of open criticism, provided trade secrets are protected; and (4) not to retaliate against employees who publicly share risk-related confidential information after other processes have failed. Three observations follow from this appeal.
Firstly, the authors of this warning letter address lawmakers, emphasizing that they, alongside the general public and the scientific community, are in a position to mitigate the three identified risks: the exacerbation of inequalities, the manipulation of information, and the loss of control over autonomous AI systems. It is therefore the law that can help contain the major risks generated by the rapid development of advanced artificial intelligence.
Secondly, this appeal also speaks to legal professionals, because it clearly recognizes that the whistleblowing it advocates, by employees of major AI system operators, is likely to clash with intellectual property rights and trade secrets. A balance must therefore be struck between the imperative of reporting significant risks to our society and the pragmatic necessity of protecting the intellectual property of innovative companies.
Thirdly, this letter is a further call to legal professionals because it highlights the complexity of the task at hand. Whistleblowing by employees of AI system operators requires a close understanding of the contractual obligations those employees have entered into, in particular confidentiality obligations, non-disparagement clauses, and many other commitments.
In conclusion, reading this global warning document suggests that new laws, and new agreements enabling a new form of whistleblowing, will be needed to achieve the ambitious and probably essential goals set by its signatories.