The UK government's consultation on copyright and artificial intelligence (AI), launched on 17 December 2024 and closed on 25 February 2025, proposes a controversial change to copyright law. One of the key proposals is an exception to copyright for text and data mining (TDM), allowing AI developers to use copyrighted works to train AI models without a licence, unless rights holders explicitly object (opt out).
Under section 16 of the UK Copyright, Designs and Patents Act 1988 (CDPA), copyright owners in the UK enjoy exclusive rights, in particular the rights of reproduction [section 16(1)(a)], public performance or exhibition [section 16(1)(d)] and adaptation [section 16(1)(e)]. These exclusive rights are now being tested by the emergence of artificial intelligence. AI systems that use copyright-protected material as training data come into direct tension with this regime, as the CDPA makes no specific provision for the technology.
To date, AI developers could invoke only two exceptions to legitimise their practice:
It will therefore be up to the UK courts to determine whether the training of artificial intelligence systems can effectively benefit from the fair dealing exception provided for in section 30 of the CDPA.
One of the very first training-data disputes to come before the UK courts, Getty Images & Others v. Stability AI, brought in 2023, illustrates what is at stake. Getty Images accused Stability AI of copyright infringement, claiming that it had trained its artificial intelligence model, Stable Diffusion, on millions of copyright-protected images without authorisation or licence. Getty Images argues that this unauthorised copying infringes its exclusive right of reproduction under the CDPA.
In response, Stability AI challenged the jurisdiction of the UK courts, arguing that the training of its model was carried out outside the UK. It also invoked fair dealing, arguing that the disputed use was a "pastiche", constituting a form of criticism and thus serving the public interest, which would bring it within the scope of sections 30 and 30A of the CDPA.
The High Court, presided over by Mrs Justice Joanna Smith, rejected Stability AI's jurisdictional challenge, noting that acts such as the distribution of the AI model on platforms accessible from the UK were sufficient to establish the jurisdiction of the UK courts. The court also found plausible the claimants' allegation that training the AI would necessarily involve copying protected works, and held that Stability AI's fair dealing defence was not robust enough at this stage to justify dismissing the case early.
In the absence of legislative change, the UK's position on the regime applicable to the training of AI models on copyright-protected works will therefore depend directly on the outcome of Getty Images v. Stability AI.
The four options set out in the UK Labour Government's proposal
The UK Government's consultation on copyright and AI sets out four options for addressing the use of copyrighted material in AI training, each balancing the interests of rights holders and AI innovation differently:
Option 3 (the EU approach): the creation of a text and data mining exception with rights reservation and transparency. This option would allow AI developers to train on lawfully accessible material, such as online content, unless rights holders explicitly reserve their rights through machine-readable tools. It includes transparency requirements, such as disclosure of training datasets, and is considered a balanced approach, echoing the EU opt-out model under Article 4 of the Digital Single Market (DSM) Directive.
Option 3 appears to be the Labour Government's preferred route, aiming to foster the growth of AI while providing rights holders with a practical means of protecting their works. It also aligns with the EU's opt-out model, consistent with Labour's tendency to strengthen cooperation with the EU.
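The "machine-readable" reservation tools that Option 3 relies on are not yet standardised; one mechanism widely used in practice is a robots.txt rule addressed to named AI crawlers. As a minimal sketch (the crawler name "ExampleAIBot" is hypothetical, and nothing in the consultation mandates this particular format), a rights holder's opt-out and a compliant crawler's check might look like this:

```python
# Illustrative only: how an AI training crawler might honour a
# machine-readable opt-out expressed in robots.txt. The user-agent
# name "ExampleAIBot" is a hypothetical AI crawler, not a real one.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The AI training crawler is opted out site-wide...
print(parser.can_fetch("ExampleAIBot", "https://example.com/images/1.jpg"))  # False
# ...while ordinary crawlers remain permitted.
print(parser.can_fetch("OtherBot", "https://example.com/images/1.jpg"))  # True
```

Whether such a signal would satisfy a statutory "rights reservation" under Option 3, and which crawler names a reservation must target, are precisely the open questions the consultation leaves to future guidance.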
The UK's creative industries are an economic powerhouse, contributing over £126 billion a year (DCMS, 2023) and employing 2.4 million people, and they have mobilised massively against the consultation, with major players and grassroots efforts highlighting what is at stake:
These initiatives reflect a united front against what many see as an existential risk to the UK's cultural heritage and economic stability.
The international regulation of artificial intelligence (AI), particularly as regards copyright and text and data mining (TDM), sits in a geopolitical context in which governments seek to attract innovative companies while protecting their rights-holding creative industries. In the United States, the Trump administration, at the request of OpenAI in particular, seems tempted to relax copyright protection rules in order to encourage AI training, with a view to competing with China, where the legal constraints on the use of protected data are much weaker.
Internationally, regulatory approaches diverge considerably, directly influencing innovation and national economic strategies. On the one hand, some countries, such as China and the United Kingdom (under its current legal framework), impose strict regimes, generally requiring specific licences for TDM activities, thereby providing strong protection for rights holders but risking stifling innovation. In contrast, the European Union seeks to strike a balance by permitting TDM while allowing rights holders to exercise control via an opt-out mechanism. In the United States, the flexibility of the fair use regime facilitates TDM but generates considerable legal uncertainty, as illustrated by ongoing litigation against OpenAI (New York Times v. OpenAI) and Cohere, or the decided case of Thomson Reuters v. Ross Intelligence.
Finally, some countries, notably the United Arab Emirates (UAE), Singapore and Japan, are adopting a proactive strategy by minimising regulatory constraints to attract companies specialising in AI. The UAE stands out in particular through its "AI Strategy 2031", an ambitious plan for targeted investment in infrastructure, education and innovation in artificial intelligence. This is complemented by attractive tax incentives, including specific exemptions and benefits in free trade zones, as well as strong support for start-ups through incubators and accelerators, creating a particularly favourable environment for the rapid development of an entrepreneurial ecosystem dedicated to AI. In Japan, there is a broad exception for TDM, which applies even to commercial uses. In Singapore, TDM is quite widely authorised for "computational analysis". These countries could therefore attract the next generation of AI talent, much to the chagrin of countries with overly protectionist copyright regulations.
The UK government has yet to announce the next stage of the copyright and AI consultation, and the creative and AI industries are on the lookout for any changes across the Channel.
This British proposal confirms, if confirmation were needed, that taking account of intellectual property in the training of AI systems is now a major geopolitical issue - a first in the history of intellectual property.