Artificial intelligence (AI) is widely expected to be one of the most significant scientific advancements in history. However, as AI develops and its applications become increasingly present in our lives, benefiting and harming us alike, concerns about its tangible impacts grow.
In this context, the French Prime Minister, Elisabeth Borne, recently created an Artificial Intelligence Committee dedicated to studying the potential impact of generative AI on the economy, employment, growth, and the fight against inequality, in order to advise the government on decisions related to this technology.
Within the private sector, numerous companies are creating internal policies to regulate the use of AI, focusing in particular on generative AI models, in order to protect their interests. Generally speaking, corporate leaders are concerned with the ethical and moral principles that will shape AI’s future use in our society.
Many science fiction authors have confronted these ethical and moral questions, depicting the use of AI and its consequences in countless scenarios1. Despite their creativity and entertainment value, these works frequently present exaggerated portrayals of AI. As a consequence, they should not be our exclusive sources when trying to understand how to use AI responsibly in our everyday lives.
Thus, to examine the ethical challenges of AI development from a different perspective, we propose a creative exercise: we will try to step into the mind of one of the greatest geniuses of humankind and imagine his potential solutions, starting with Leonardo da Vinci, in what could be the first of a series of articles.
While imagining Leonardo da Vinci’s thoughts on the AI phenomenon is a hypothetical exercise, we can envision his perspectives based on his profound curiosity and his knowledge of human nature and technology. As a polymath2, Da Vinci would certainly have recognized AI’s major benefits in several fields of our society. On the other hand, his multidisciplinary understanding would also have led him to conclude that developing AI without ethical guardrails could prove catastrophic on many fronts, from human rights to environmental risks.
Given the above and Leonardo’s enthusiasm for the harmony between man and machine3, it is reasonable to believe that he would have been intrigued by the idea of integrating technology in a way that aligns with human existence. Thus, he would advocate for a careful and responsible integration of AI into society, bounded by ethical and moral limits.
In light of the above, we imagine the great Leonardo writing down, in one of his famous notebooks, the 10 commandments for the development of an ethical AI. As a humanist, if Leonardo were transposed to our time, he would certainly draw inspiration from Isaac Asimov’s laws of robotics4 to craft his AI rules, as follows:
Influenced by the Renaissance context and guided by his wisdom and timeless principles, Da Vinci would have expressed his commitment to a harmonious integration of AI with the natural world by writing these commandments. In fact, they represent his instructions for solving, or at least mitigating, AI’s main ethical challenges:
Founded on Leonardo’s timeless principles, these commandments reflect his humanistic values, as well as a visionary yet pragmatic perspective on AI’s current challenges.
Although it is not possible to predict precisely what Leonardo da Vinci would have proposed had he been confronted with AI’s ethical challenges, his curious spirit and multidisciplinary knowledge can inspire us as we search for answers to this unprecedented innovation. As we navigate AI’s unexplored territory, it becomes increasingly clear that there is no turning back: AI is already transforming many aspects of our lives.
This is why developing an ethical AI, capable of harmonizing human values and technical innovation, is probably one of the most serious challenges of our immediate future, requiring action from society as a whole: governments, developers, users, and companies. In this sense, while the 10 commandments above should not be seen as definitive solutions, they can serve as a guide for responsible AI development.
Stay tuned for our next articles, and do not hesitate to contact our IP/IT team with any questions.
1 We can mention a vast range of examples, from classics such as “The Terminator”, “The Matrix”, and “Blade Runner” to more recent works such as “Westworld” and “Black Mirror”.
2 A “polymath” is an individual whose expertise or knowledge extends across a wide range of fields. Leonardo da Vinci undoubtedly stands as one of the greatest polymaths in history, if not the greatest.
3 For example, his fascination, or perhaps obsession, with human flight led him to design what can be considered one of history’s earliest prototypes of the helicopter.
4 Isaac Asimov, a science fiction author, envisioned a set of rules to be followed by robots in several of his stories. The first three laws are: (i) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (ii) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (iii) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Asimov later added another rule, known as the zeroth law, which takes precedence over the others: “a robot may not harm humanity, or, by inaction, allow humanity to come to harm”.
5 Sfumato is a painting technique mastered by Leonardo, which he described as a “blending of colors, without lines or borders, in the manner of smoke”. Through this technique, Leonardo created imperceptible transitions between light and shade, resulting in soft, imprecise contours.