Meta has released a demo of what it's calling MusicGen, an artificial intelligence system designed to take text prompts and convert them into sounds or melodies.
The Facebook parent company released the demo on Hugging Face, where it has since drawn a steady stream of users. Interesting Engineering decided to give Meta's newest AI tool a try, asking the system to produce a "rock lullaby." According to the publication, the model needed only 341 seconds to create a 15-second audio clip combining guitar and piano, and the result sounded like the opening of an old-school classic rock track.
So, how did Meta do this? MusicGen is a model trained on more than 20,000 hours of music. That training data spanned a wide range of harmonies and instrumental parts, which together give the model a vast space of possible sounds to draw from when generating new audio.

Meta has released the AI tool on GitHub for anyone to use and to see how it was built. As for the ethical questions around creating such a tool and the fair use of the music it was trained on, the researchers who announced the AI system said that all of the music MusicGen was trained on was covered by legal agreements with the owners of those music libraries.
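For readers who want to try it themselves, the sketch below shows roughly how generation works through Meta's audiocraft library, the GitHub project that hosts MusicGen. The checkpoint name, prompt, and output filename here are illustrative, and the exact API may differ between releases.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained checkpoint; 'facebook/musicgen-small' is the
# smallest of the released models (names may vary by version).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=15)  # clip length in seconds

# One clip is generated per text description in the batch.
wav = model.generate(['rock lullaby'])

# Write the result to rock_lullaby.wav with loudness normalization.
audio_write('rock_lullaby', wav[0].cpu(), model.sample_rate, strategy="loudness")
```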
"Open research can ensure that all actors have equal access to these models. Through the development of more advanced controls, such as the melody conditioning we introduced, we hope that such models can become useful both to music amateurs and professionals," said Meta