Meta, the owner of Facebook and Instagram, have revealed a new AI music generator. The platform can create music based on nothing but text entered by the user.
The platform, MusicGen, is designed to create simple tracks from text prompts. A user could type something like “happy pop dance beat” or “slow acoustic track with meaningful lyrics”, and MusicGen will generate a 12-second music clip which the user can then save and use.
Meta have shared the lengths they went to in preparing the platform. MusicGen was trained on 20,000 hours of licensed music, 10,000 of the tracks being high quality according to the technology company, alongside 390,000 instrument-based tracks sourced from ShutterStock and Pond5.
Google were the first to announce a text-to-music model, but Meta have quickly followed their lead and are now one of the few technology giants to take up the idea. With their rivals at Google only making their platform public recently, Meta have timed their model well.
MusicLM, Google’s model, creates two versions of a track from a simple prompt such as “upbeat songs to use while cooking”, and both are shared with the user. Users can vote for their favourite of the two, and that feedback is then used to improve the AI model.
Google’s model was trained on five million audio clips amounting to 280,000 hours of music, sampled at 24 kHz, so rather more preparation appears to have gone into Google’s model than Meta’s. When one company is on to a good thing, others want to follow, so it’s no surprise Meta have created their own music AI. However, is there enough room for them all?