Existing copyright rules have fallen behind the rapid growth of AI and machine learning. This gap has sparked debate on both sides of the Atlantic over how AI-generated works should be treated under copyright law.
Machine learning is the ability of algorithms to improve their accuracy without a human intervening to rewrite code. It has existed in some form for decades, but recent advances in AI technology have vastly increased the types of data that algorithms can process. While early machine learning algorithms required structured sets of neatly organized data, AI algorithms can now process unstructured data such as photographs, audio files and written works. These generative algorithms can then create new images, sounds or written works in imitation of the information they analyzed. The capabilities of generative AI are poised to increase workers’ efficiency in a broad range of fields.
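The core idea above, that a model's accuracy improves from data alone while the code stays fixed, can be sketched in a few lines. This is a minimal illustration (one-parameter linear model trained by gradient descent), not a depiction of any product mentioned in this article; the data and learning rate are made up for the example.

```python
# Minimal sketch: an algorithm that improves its accuracy from data alone.
# The code below never changes during training; only the learned parameter w does.

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # (input, target) pairs, roughly y = 2x

w = 0.0    # the model's single learned parameter
lr = 0.05  # learning rate

def loss(w):
    # mean squared error of the model's predictions against the targets
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

before = loss(w)
for _ in range(200):  # training loop: the data, not a programmer, drives each update
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

after = loss(w)
print(w)               # converges close to the true slope of 2.0
print(after < before)  # accuracy improved without rewriting any code
```

Generative models are vastly larger, but the principle is the same: exposure to examples adjusts internal parameters, which is why the choice of training data is at the heart of the copyright disputes discussed here.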
However, many authors, artists and other creatives worry that the sheer productive capacity of open-source generative AI could threaten their careers. This is a valid fear, but generative AI algorithms are derivative. While AI can now easily create works in the style of almost any given artist, truly original work is still beyond them.
Of greater concern for most creatives is that generative AI algorithms were trained on their work without developers asking for permission. The problem is thus not that these creatives fear being completely replaced (which is still a long way off) but that they receive no credit or compensation for AI outputs derived from their own work.
One London-based company, Stability AI, has already found itself in a lawsuit. The company allegedly trained its text-to-image generative AI model, Stable Diffusion, on copyrighted images from Getty Images. Getty Images collects images, photographs and video clips from creators and licenses them out for commercial use for a fee. Getty Images claims that by not paying that licensing fee, Stability AI infringed upon its copyright.
Some AI developers have tried to incorporate methods to protect the intellectual property of creators. Adobe has developed its own generative AI program, Adobe Firefly, which automatically embeds metadata called Content Credentials to track the history of an image. These Content Credentials record how Firefly’s algorithm generates images as well as any alterations a person makes to a base image. In a panel hosted by the US Copyright Office, an Adobe executive also advocated for a do-not-train tag to be attached to an image’s metadata so that independent artists can indicate they do not wish AI algorithms to process their work. There is currently no method to enforce adherence to the tag, but Adobe hopes that companies will come to respect its use as an industry standard.
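To make the proposal concrete, a do-not-train tag would only matter if crawlers assembling training sets checked for it and filtered accordingly. The sketch below is hypothetical: the field name `do_not_train` and the dictionary layout are assumptions for illustration, not Adobe's actual Content Credentials schema.

```python
# Hypothetical sketch of how a crawler might honor a "do-not-train" metadata tag
# when assembling a training corpus. The tag name "do_not_train" is an assumed
# placeholder, not a real Content Credentials field.

def may_train_on(image_metadata: dict) -> bool:
    """Return False if the creator has opted out of AI training."""
    return not image_metadata.get("do_not_train", False)

# A compliant crawler would filter its corpus before training:
corpus = [
    {"id": "img-001", "do_not_train": True},  # creator opted out
    {"id": "img-002"},                        # no tag: usable by default
]
training_set = [img for img in corpus if may_train_on(img)]
print([img["id"] for img in training_set])  # only img-002 survives the filter
```

As the article notes, nothing currently compels a crawler to run such a check; the tag's effectiveness would rest entirely on voluntary industry adoption.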
However, at least in US law, loopholes exist that enable the alteration of metadata attached to an image, which could allow companies to sidestep copyright issues. The EU AI Act, which the EU Council is currently debating, would impose stricter transparency regulations, but these measures might still not be enough to protect independent artists. The internet is massive; artists would not only need to learn how to protect their work but also know when and by whom it is being misused. The adoption of the do-not-train tag as an industry standard would thus place the responsibility for protecting intellectual property on individual artists, who might not be as capable of extensive legal action as a corporation like Getty Images.
There is another side to the debate, much of which centers on the principle of fair use. In the US, fair use is a flexible but robust doctrine that allows people to make use of existing works without the author’s explicit permission, provided the use is for purposes such as commentary, criticism, reporting or scholarship. There is also the related factor that copyright protections no longer apply to works that have been sufficiently transformed. The question is thus whether US legislators will take a position on whether the unauthorized use of intellectual property to train generative AI constitutes fair use, or whether the resulting works are transformed enough to be considered new intellectual property.
In addition to the new AI Act, the EU recently instituted its Copyright Directive, which aimed both to expand what material can be used without copyright infringement and to strengthen protections for creators. While controversial at the time of its approval, the Copyright Directive may provide creators a means to protect their work. So far, however, there have been no major rulings under the Copyright Directive to indicate how the new regulations will affect AI outputs.
AI thus has the potential to appropriate the countless hours of effort that creators have poured into their work, but it also has the potential to help those creators drastically increase their output. Going forward, AI’s effect on content creators’ careers will depend on how regulators apply existing copyright rules, as well as on any new legislation that clarifies how AI-generated works fit within copyright law.