ONLINE ACADEMY FOR AUDIO ENGINEERING & MUSIC PRODUCTION

Legal Challenges for Musicians and Digital Creators in the Use of AI

Artificial intelligence is the trending topic of the moment. With ChatGPT or the “new” Beatles song, it has become a subject of discussion among the general public.

A recent survey by GEMA and SACEM (the German and French performance rights organisations) and the research company Goldmedia examined how creative professionals use AI and what opportunities and risks they see in it. 13% of respondents stated that they see potential in the use of AI, and 35% already use AI for their work. Interestingly, usage is higher among respondents younger than 45: almost half of this younger generation is already using AI – even though 64% of respondents believe that the risks of AI outweigh the opportunities. One concern, for example, is the fear that songwriters or composers will no longer be able to make a living from their work because of AI.

The survey shows that AI is on the rise, particularly in the creative sector – not least because of the wide range of applications for AI in music production. These two examples show the variety of ways in which AI can be used in music production:

One of last year’s headlines was the announcement of the “new” Beatles song “Now and Then”. More than 40 years after the death of John Lennon, AI has made this possible. The song was recorded by John Lennon back in 1978. Paul McCartney received the recordings from Yoko Ono in 1995 but discarded them because there was too much noise on them. The technology to remove that noise was not available at the time. It wasn’t until the release of the documentary film “The Beatles: Get Back” that McCartney decided to work on the recording again. For the film, the dialogue editor had trained an AI to recognise the Beatles’ voices and separate them from background noises and their instruments in order to create an audio signal free of interference.

The song “Now and Then” was analysed by two HOFA Audio Engineers in a video.

But AI can do much more. One such example is the song “Heart on My Sleeve”, in which the voices of rapper Drake and The Weeknd were imitated. The alleged duet spread rapidly on TikTok, YouTube and various streaming services at the beginning of 2023. To this day, it is unclear who produced the song; only the pseudonym “ghostwriter977” is known. The song has since been officially removed from the platforms at the request of Universal Music Group, the label that signed Drake and The Weeknd. However, the question is why the removal could be requested at all. After all, the lyrics, the melody and the music are not copies but originals. At least in Germany, the voices of individuals are not protected by copyright law. The situation is different when you look at the artists’ personal rights: using a person’s voice for your own purposes without their prior permission is, of course, not legal.

You can find a summary of the various available AI tools and the possibilities they offer for music video production in our blog post “Your Own Music Video in No Time – With AI?”.

The HOFA Topic Course Music Business provides you with more information about music law, self-employment in the music industry and promotion. This online course is also included in the ultimate audio course HOFA AUDIO DIPLOMA which gives you all the knowledge you need for outstanding audio productions.

Is AI-generated music protected by copyright?

For the German jurisdiction, a distinction must be made between the copyright protection of the work or composition as such and the copyright in the recording.

According to Section 2 (2) UrhG, a work is protected by copyright if it represents a personal intellectual creation. This requires human artistic activity by an originator and at least a minimum of human creativity. Therefore, works created autonomously by AI, i.e. purely by machine, cannot be protected by copyright.

The legal situation in America is very similar, as can be seen in the example of the comic “Zarya of the Dawn” by the American Kris Kashtanova. This comic was illustrated with the help of the AI image generator Midjourney. Kashtanova applied to the U.S. Copyright Office for copyright protection. The Copyright Office only accepted that Kashtanova was the author of the texts, but not that she had created the images. She was therefore refused registration. The changes she had made to the images were too insignificant to be protected by copyright.

“Zarya of the Dawn” by Kris Kashtanova

A distinction can be made between creation by AI and using AI.

Creation by AI – AI as originator

For creations by AI systems, authorship would have to be assigned to the AI itself. Under German law, only human creations are eligible for protection. A similar problem exists in American law. Although a legal entity can hold the copyright in US law, copyright law still demands a human contribution to the process of creation. Thus, the AI cannot be an originator.

Creation using AI – the user as originator

Generative AI works with so-called prompts. A prompt is a text that instructs the AI to do something. However, the AI carries out this instruction on its own, drawing on its training data and the patterns it has learned from it. The user cannot trace this procedure and does not know what the result will be in the end; the result depends on various factors that the user cannot control. The decisive criterion for whether a work is eligible for protection is always the extent of the individual’s contribution, i.e. whether, based on an overall assessment, the individual is the creator of the end product or whether it is “only” the work of an AI. The more precisely a user defines the end result, the more likely it is to be a human creative achievement.

The AI behind Midjourney, for example, generates the image autonomously, using the user’s input only as a suggestion for the design. Even the selection of an image at the end of the process does not have the required level of creativity. For this reason, works created exclusively by AI are not yet protected by copyright in Germany. And in the United States as well, the input of a prompt is only considered a suggestion that is not eligible for protection.
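The argument that the user cannot control the outcome can be illustrated with a toy sketch. The code below is purely hypothetical – it does not call any real generator’s API – and simply represents the model’s hidden internal state as a seed:

```python
import random

def toy_generate(prompt: str, seed: int) -> str:
    """Toy stand-in for a generative model (NOT any real API): the output
    depends on the prompt AND on internal state the user never sees,
    represented here by a seed."""
    rng = random.Random(seed)
    motifs = ["verse", "chorus", "bridge", "outro"]
    # The "model" decides the song structure itself; the prompt is only a suggestion.
    return prompt + " -> " + "-".join(rng.choice(motifs) for _ in range(4))

prompt = "a melancholic piano ballad"
# Identical inputs are reproducible only if the internal state is identical:
print(toy_generate(prompt, seed=1) == toy_generate(prompt, seed=1))  # True
# With a different internal state, the same prompt can yield a different result:
print(toy_generate(prompt, seed=2))
```

In a real system, the user supplies only the prompt; the internal state is set by the provider, which is exactly why the user’s contribution to the end product is considered so limited.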

The situation is different only if the AI is used as a technical resource or tool in the creation of the work and the result is then used as the basis for a new work. In 2023, for example, some 100 works containing elements created by AI systems were registered with the U.S. Copyright Office.

Legal problems in the training, instruction, and use of AI

The question could now be raised as to whether the developer of the AI could be considered the originator of an AI-generated song. The specific software on which the AI is based is copyrightable. The problem for the protection of the result, however, is that the programmer does not know the outcome – the finished song in our example – in advance and cannot foresee it. For the programmer, the resulting product comes more or less as a surprise. This is why the programmer has no creative involvement in the specific work.

The specific instruction of the AI is also not protected by copyright. Just as a specific painting style cannot be protected, the instruction to an AI to create a work cannot be protected either. Copyright law only protects works, not styles.

In practice, however, the protection of performances is currently more relevant. These rights protect people who are involved in the production of works. The focus here is on those types of AI that can generate content such as images, text or music. For a so-called generative AI to operate properly, it must first be provided with various pieces of information (“training data”), such as musical works. This input may itself be subject to copyright or performance protection rights.

Two lawsuits filed by Getty Images against Stability AI in London and Delaware show the current relevance of this topic, but also how controversially it is being discussed in America and Europe. Getty Images is one of the best-known image agencies and providers of stock media worldwide. Its business model is to sell stock photos as well as editorial photographs, music and video footage for licence fees. Stability AI is a start-up that has developed the Stable Diffusion image generator. Among other things, Getty Images alleges that Stability AI trained the image generator with images from Getty Images without permission and that the images generated by Stable Diffusion are therefore unauthorized copies of those works. The two lawsuits are still pending, but the decisions of the courts could be groundbreaking. After all, the use of copyright-protected works to train an AI is a technical novelty.

The original is shown on the left and an image created using Stable Diffusion on the right.

Legal regulation of AI in the EU and America

In Europe, the EU member states unanimously agreed on the so-called AI Act in February 2024. This act is aimed at users and providers of artificial intelligence. It affects all companies that provide AI systems on the European market. It does not matter where the company is based, which means that companies outside the EU will also be affected by the AI Act.

The intention is to create legal certainty in the use of AI. This requires a balancing act between innovation and risk protection: the regulation must provide a legal framework that does not completely stifle innovation and the opportunities offered by AI technology. The core idea of the act is to classify the regulations according to the risk of the AI specifically used. In simple terms, AI that is dangerous for the user will be banned and strict regulations will apply to the use of risky AI. There will be four risk categories.

1. Prohibited AI systems (unacceptable risk)

Examples:

  • Social scoring systems. These systems assign a numerical value to a person’s social behaviour. China, for example, is currently testing various social scoring systems. Such a score can then determine, for example, a person’s access to public goods.
  • Behavior manipulation systems

2. High risk AI systems

AI technologies that are used in the following sectors, for example:

  • Critical infrastructure (e.g. transportation)
  • Essential private and public services (e.g. credit rating)

3. AI systems with limited risk

Systems with which people can interact directly, e.g. chatbots
(transparency obligations, e.g. labelling as AI)

4. AI systems with minimal or no risk

e.g. AI in computer games, spam filters (no restriction by the AI Act)

Four risk categories of the European AI Act

It was also agreed that there will be special regulations for generative AI (Midjourney, etc.). These systems have an abstract risk potential that largely depends on the specific application by the user. The regulations will be implemented in the form of a code of practice to be developed in collaboration with AI providers and stakeholders. It is particularly interesting that providers of generative AI systems will then be obliged to disclose which copyrighted works were used to train the AI. AI systems that aim to manipulate people’s cognitive behaviour, on the other hand, pose an unacceptable risk; examples include voice-controlled toys that encourage dangerous behaviour among children, or social scoring. Fines of up to 30 million euros or 6% of the annual global turnover may be imposed for infringements.
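The fine cap quoted above can be expressed as a simple calculation. That the higher of the two amounts applies is an assumption modelled on comparable EU regulations such as the GDPR, not a statement from this article:

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper fine limit per the figures quoted in the text: 30 million EUR
    or 6% of annual global turnover. That the HIGHER of the two applies is
    an assumption, modelled on comparable EU regulations such as the GDPR."""
    return max(30_000_000.0, 0.06 * annual_global_turnover_eur)

# For a company with 1 billion EUR annual turnover, the percentage cap bites:
print(max_fine_eur(1_000_000_000))  # 60000000.0
# For a smaller company, the flat 30 million EUR amount is the upper limit:
print(max_fine_eur(100_000_000))    # 30000000.0
```

For large companies, the turnover-based component quickly exceeds the flat amount, which is what gives the provision its deterrent effect.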

In the US, too, there are plans to respond to the potential threat posed by AI to national security. In October 2023, US President Joe Biden issued an executive order that obliges AI providers to perform security tests. The “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” means that AI models such as GPT-4 (ChatGPT) must first be presented to the US government before they are published. Developers will be obliged to submit security models and other information. There will also be mandatory labelling of AI-generated content. The US Department of Commerce will first develop regulations on how producers of AI-generated content can label it with a digital watermark.

Outlook

Overall, the legal aspects of AI will remain challenging. Legislators and the judiciary must now consider in detail how cooperation between humans and AI is to be handled and how the underlying technical implementation is to be defined. The legal issues associated with the use of copyright-protected training data also need to be resolved.

Europe has decided to develop a comprehensive set of rules for the use of AI – the first of its kind and probably also a pioneering step. The AI Act must provide legal certainty without overregulating the European market and stifling innovation. Unlike the European AI Act, the American executive order does not contain any general bans on certain AI systems. It therefore remains to be seen whether strong regulation will lead to globally unique legal certainty or whether the American “model” will strengthen innovative companies.

Author

Judith Kircher
Judith Kircher works at HOFA as an author and in the back office. She successfully completed a bachelor's degree in digital media and is currently studying law at the University of Heidelberg. Her cross-disciplinary knowledge of the media industry and law helps her shed light on legal topics.
