The misuse of copyrighted music by artificial intelligence companies could exploit musicians, a former executive at a leading tech startup has warned.
The technology is trained on a huge number of existing songs, which it uses to generate music based on a text prompt.
Copyrighted work is already being used to train artificial intelligence models without permission, according to Ed Newton-Rex. He resigned as head of Stability AI's audio team because he disagreed with the company's position that training generative AI models on copyrighted works is "fair use" of the material.
Mr Newton-Rex told Sky News that his issue is not so much with Stability as a company as it is with the generative AI industry as a whole.
“Everyone really adopts this same position and this position is essentially we can train these generative models on whatever we want to, and we can do that without consent from the rights holders, from the people who actually created that content and who own that content,” he said.
Mr Newton-Rex added that one of the reasons large AI companies do not agree deals with artists and labels is that doing so involves "legwork" that costs them time and money.
Emad Mostaque, co-founder and chief executive of Stability AI, said that fair use supports creative development.
Fair use is a legal doctrine that allows copyrighted work to be used without the owner's permission for certain purposes, such as research or teaching.
Stability's audio generator, Stable Audio, gave musicians the option to opt out of its pool of training data.
Millions of AI-generated songs are being created online every day, and big-name artists are even signing deals with technology giants to create AI music tools.