If 2022 was the year of AI, 2023 seems set to follow suit. In recent months, artificial intelligence tools such as ChatGPT, DALL-E, and GitHub Copilot have gained tremendous popularity and are being discussed all over the world.
What happens when all of that musical know-how is poured into an artificial intelligence that needs nothing more than a prompt describing your musical tastes? Who needs to be a musician to entertain when, soon, anyone will be able to create music with ease, drawing on the genius of every musician who has ever been recorded?
MusicLM is built on a neural network trained on a large dataset of over 280,000 hours of music, which enables it to generate novel compositions spanning a range of instruments, genres, and concepts from textual descriptions.
It is a tool capable of producing high-quality audio from simple inputs; one can even hum a melody to guide the model toward the rhythm one wants to hear. According to Google's researchers, the model generates music at 24 kHz that remains consistent over several minutes.
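To get a sense of the scale behind that claim, a quick back-of-the-envelope calculation shows how many raw audio samples a few minutes of 24 kHz audio contains, which is why generating coherent long-form music directly at the sample level is so demanding:

```python
# 24 kHz means 24,000 audio samples per second of audio.
SAMPLE_RATE_HZ = 24_000

def raw_sample_count(minutes: float) -> int:
    """Number of raw audio samples in `minutes` of 24 kHz audio."""
    return int(SAMPLE_RATE_HZ * 60 * minutes)

# Five minutes of audio is already 7.2 million samples the model
# must keep musically consistent.
print(raw_sample_count(5))  # 7200000
```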
A dataset of 5,500 musical tracks has also been released to support other researchers working on automatic music generation.
If this is a sign of what is to come, we need to start asking questions now in order to arrive at effective policy for regulating these future realities:
- What is the risk of AI algorithms creating their own compositions, and who owns the resulting work: the AI or a human?
- Who owns the music when a song is composed from content taken from the web, built on the genius of the original musicians?
- When purchasing music, are you also buying the right to use its audio as data for AI training?
It is clear that we need to improve our legislation on music ownership and understand how AI algorithms should be treated and managed in the music industry.
Even as Google, Meta, Microsoft, OpenAI, and many other leaders in the AI market continue to push the boundaries of every sector using AI, we as humans have an ethical responsibility to think more deeply about the world we want to create and leave for future generations.
Human minds and musical talent have value, and we are rapidly commodifying that precious creative DNA into bits and bytes, with serious consequences for the creative property rights of our musicians...
However, this is evolution! It will be intriguing to follow its progress.
"MusicLM" casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes. Our experiments show that MusicLM outperforms previous systems both in audio quality and adherence to the text description. Moreover, we demonstrate that MusicLM can be conditioned on both text and a melody in that it can transform whistled and hummed melodies according to the style described in a text caption," Google said in the research paper, while releasing some samples created using the AI tool, which seem promising.
Listen to Google MusicLM's first samples for yourself.