Researchers at the University of York have found that current AI-generated music is inferior to human-composed music.
They have also shown that the algorithms used in AI music generation can copy material from their training data in ways that could infringe copyright, and they have developed guidelines to help others evaluate the systems they are using.
In the study, 50 participants with a high level of musical knowledge were played excerpts of music: some taken from human-composed works, and others generated either by deep learning (DL) systems, a type of artificial neural network, or by non-DL algorithms.
The study recruited participants with experience in analyzing note content and stylistic success so that the results were not focused solely on musical expression.
Musical criteria
The listeners were asked to rate the excerpts on six musical criteria (stylistic success, aesthetic pleasure, repetition or self-reference, melody, harmony, and rhythm), without being told whether each excerpt was human-composed or computer-generated.
Co-author Dr. Tom Collins, from the School of Arts and Creative Technologies at the University of York, said, “On analysis, the ratings for human-composed excerpts are significantly higher and stylistically more successful than those for any of the systems responsible for computer-generated excerpts.”
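The significance claim refers to a statistical comparison between the two groups of blind listener ratings. As a minimal sketch of how such a comparison can be run in Python (the choice of test and the rating values below are illustrative assumptions, not the paper's analysis):

    # Compare listener ratings for human-composed vs. computer-generated excerpts.
    # All rating values are invented placeholders, not data from the study.
    from scipy.stats import mannwhitneyu

    human_ratings = [6.1, 5.8, 6.4, 5.9, 6.3, 6.0]       # e.g., stylistic success on a 7-point scale
    generated_ratings = [4.2, 3.9, 4.5, 4.1, 3.7, 4.4]

    # One-sided test: are human-composed excerpts rated higher?
    statistic, p_value = mannwhitneyu(human_ratings, generated_ratings, alternative="greater")
    print(f"U = {statistic}, p = {p_value:.4f}")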
The findings also raise concerns about direct copying by deep learning methods and the ethical problems it creates. A popular DL architecture called the transformer (the same type of architecture behind OpenAI's ChatGPT) was shown to reproduce large chunks of its training data in its output.
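As a rough illustration of the general idea only (not the authors' method), copying of this kind can be flagged by searching a generated sequence for long note subsequences that also appear verbatim in a training piece, as in the hypothetical Python sketch below:

    # Flag long verbatim overlaps between a generated sequence and training data.
    # The pitch lists below are hypothetical; real evaluations compare richer note
    # representations (pitch, onset, duration), not just MIDI pitch numbers.

    def shared_ngrams(generated, training, n=8):
        """Return length-n subsequences that appear in both sequences."""
        train_grams = {tuple(training[i:i + n]) for i in range(len(training) - n + 1)}
        return [tuple(generated[i:i + n]) for i in range(len(generated) - n + 1)
                if tuple(generated[i:i + n]) in train_grams]

    training_piece = [60, 62, 64, 65, 67, 69, 71, 72, 71, 69, 67, 65]
    generated_piece = [55, 57, 60, 62, 64, 65, 67, 69, 71, 72, 74, 76]

    matches = shared_ngrams(generated_piece, training_piece, n=8)
    print(f"{len(matches)} shared 8-note subsequence(s) found")  # a non-empty result suggests copying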
Legal and ethical
Dr. Collins explained, “If Artist X uses an AI-generated excerpt, the algorithm that generates the excerpt may happen to copy a chunk of a song in the training (input) data by Artist Y. Unwittingly, if Artist X releases their song, they are infringing the copyright of Artist Y.
“It is a concerning finding and perhaps suggests that organizations who develop the algorithms should be being policed in some way or should be policing themselves. They know there are issues with these algorithms, so the focus should be on rectifying this so that AI-generated content can continue to be produced, but in an ethical and legal way.”
The researchers have also provided seven guidelines for conducting a comparative evaluation of machine learning systems. The findings could help improve the development of AI-generated music, address current ethical issues, and avoid future legal disputes over copyright infringement.
The work is published in the journal Machine Learning.
More information: Zongyu Yin et al, Deep learning’s shallow gains: a comparative evaluation of algorithms for automatic music generation, Machine Learning (2023). DOI: 10.1007/s10994-023-06309-w
Provided by University of York