
Last week, streaming platforms removed a song that used AI to clone the voices of music artists Drake and The Weeknd after it went viral on TikTok. (Image: Alamy Stock Photo)

'Extreme' recent advancement of Artificial Intelligence raises concerns for copyright holders

A UCD assistant professor told The Journal that a conversation about how we use AI is needed.

THE PREVALENCE OF images generated by artificial intelligence (AI) has grown exponentially in the last few months, and is seemingly showing no signs of slowing down. 

However, the controversial technology is no longer being used only to generate imagery; AI-generated music, journalism and even interviews have appeared in recent weeks.

There have already been a number of copyright controversies in relation to AI-generated material, with experts warning that the capabilities of the technology are fast outpacing the parameters of existing legislation.

AI software learns by using content that already exists, identifying and replicating the patterns in the data, and then mimicking it. But the original content is created by humans and copyright protected, which raises the question of whether or not this has legal implications.

Last week, streaming services such as Spotify and Apple Music removed a song that used AI to clone the voices of music artists Drake and The Weeknd after it went viral on TikTok. 

The song, called Heart On My Sleeve, was created by an artist known as Ghostwriter.

Universal Music Group, which represents both artists, issued a statement saying that training AI software without artists’ permission “begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation?”

Speaking to The Journal, Dr Justin Jutte, an assistant professor in intellectual property law at University College Dublin, said the last month or so has been “extreme” in terms of AI development.

“I was a bit shocked in a positive as well as negative sense by the incredible speed of the developments over the last couple of months or so,” he said.

Whereas concerns were initially being raised about ChatGPT, “Now suddenly, we can emulate full songs with realistic sounding voices. That is something that literally came in the last one-and-a-half, two years or so and we haven’t really thought about this from a legislative perspective, how to regulate this properly.”

He said the main copyright issue with the Drake example is that somebody used music from the internet to train an AI machine to learn how to create a voice.

“This is a problem that we encounter in many, many areas of artificial intelligence, that in order to create something with artificial intelligence, we need to have learning material and this, in many, many cases, includes material that is protected by copyright.

“This is where, for example, Spotify, but also in this specific case, the music industry says ‘we haven’t given permission for this’, because copyright gives you, in principle, the right to control any reproduction of the creations in which you hold copyright.”

Legislation

Copyright laws are currently not equipped to deal with artificial intelligence. Last month, the US Copyright Office launched an initiative to examine the copyright law and policy issues raised by AI technology, including “the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training”.

In the UK, copyright law protects original literary, dramatic, artistic and musical works, as well as films, sound recordings, broadcasts and published editions. However, the UK government’s website states that copyright work may be created by a human who has assistance from AI.

“If the work expresses original human creativity it will benefit from copyright protection like a work created using any other tool. An example of this could be where a camera contains AI that helps someone take a photograph,” it says.

“If the photograph expresses the creativity of the photographer, it will be protected as an artistic work, regardless of whether AI assisted them.”

There are currently no rules or regulations that apply specifically to AI in Ireland, but Jutte said a number of initiatives at European level are focused on the technology.

“We have a draft AI Act, which is supposed to mitigate or prevent certain harms based on or created by certain AI systems, for example, in the health sector or in the financial sector. But as always, with technology, we are always running behind the technological developments. They tend to be much faster than a legislator can react and also understand what the problem actually is,” he said.

“The Act will, of course, only address certain issues when it comes to artificial intelligence, not everything. But it also means that copyright has to be addressed when it comes to artificial intelligence, because if we want to train good AI, we need training materials.

“For people who want to generate text, generate imagery, generate music, they should have a good training set available and this requires changes to copyright law.

“Ireland, I would recommend, should actively work at European level to influence the European policy process in a way that also reflects what Ireland wants to see in terms of AI developments. Of course, with many, many tech companies in Ireland, I think Ireland should have an important role to play in this discussion.”

In late March, technology chiefs, including Elon Musk, Apple co-founder Steve Wozniak and Skype co-founder Jaan Tallinn, signed an open letter urging scientists to pause developing AI for at least six months to ensure it does not “pose a risk to humanity”.

Jutte believes that we should not be overly concerned when it comes to AI, but that “a good and thorough conversation where we draw the line” is needed.

“We probably have to wait a bit to see how this materialises. What are the opportunities? What are the potential threats? There must be, as with any technology, a broad consensus as to what we can use it for, and then you can think about different regulatory models.

“You can say we outlaw certain things, we will regulate certain things with transparency obligations. We can also simply say we have to tell people when something uses AI technology so we know AI is in something. In certain foodstuffs, we know there are certain sugars, and some technologies should be labelled that they use artificial intelligence, and that might have hundreds of different implications.”

Deepfakes

In Germany, the family of seven-time Formula One champion Michael Schumacher are taking legal action against a magazine for using an AI programme to generate fake quotes and attributing them to the retired driver.

Die Aktuelle had claimed it had the first interview with the motorsports legend since he suffered a serious brain injury in a 2013 skiing accident in the French Alps. It revealed after publishing the “interview” that it had been generated by AI. 

The article included quotes attributed to Schumacher, discussing his family life since the accident and his medical condition.

Jutte said this would not come under copyright law, but rather personality rights law, as it is similar to deepfakes.

Deepfakes are videos or images in which a person’s face or body has been digitally altered so that they appear to be someone else. They are often used to spread false information.

“We had similar issues with, for example, pornographic material. Not only can you do an interview with somebody who doesn’t want to be in the public, but you can also use this to emulate something that is very realistic, difficult to distinguish from a real movie, and put a face of a person in an erotic movie,” Jutte said.

“This can have much, much bigger impacts on the individual person and there we need to find mechanisms of how we control this, but you can also argue, of course, we need to have sanctions of a criminal nature, so that penalties for the unauthorised use of deepfakes, for example, are set at a level that actually has a deterring effect.”

The main text-generated form of AI that we have seen has been chatbots, such as ChatGPT, which caused a sensation when it was released last year.

The open-access software takes the form of a chatbot which responds to commands and prompts such as explaining scientific concepts, writing scenes for a play, summarising lengthy articles or even writing lines of computer code.

Last month, Italy became the first Western country to temporarily block the chatbot over data privacy concerns.

The country’s Data Protection Authority said US firm OpenAI, which makes ChatGPT, had no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.

In Ireland, higher education stakeholders have said that colleges have to face up to emerging AI software such as ChatGPT, but downplayed its potential to transform university assessment. 

“As a law teacher, I’m obviously concerned that my students are cheating with text generators, which we have seen happening. I’ve seen a couple of cases already,” Jutte said.

“We, even as a university, now have to start thinking about policies, how we prevent that students use this, or even my potential colleagues use this for purposes that I don’t want to see it being used.”

He added that there is no “simple answer” when it comes to AI.

“It requires that we think very carefully about the legal framework, that we cover the making of the machine, but also the use of the machine. That’s something where at both ends, we have to intervene at some point in time. But off the cuff, I couldn’t give you any exact resolve.

“We also need to talk to computer scientists because they have to explain to us how these things actually work on this. One element of AI is that AI machines must be explainable. We have to be able to explain how they come to a certain conclusion. And that’s something where we need to work hand in hand with the technology side, with society and with lawyers.”

Author: Jane Moore