Meta, formerly known as Facebook, has recently unveiled an open-source AI tool called AudioCraft. This innovative tool aims to empower users to create music and audio from simple text prompts.
AudioCraft comprises three models, each serving a distinct purpose in the creative process. The first, MusicGen, was trained on Meta's extensive library of licensed music, enabling it to generate musical compositions from text input. The second, AudioGen, was trained on a large collection of public sound effects, enabling it to generate environmental sounds and audio samples from a given text input. The third, EnCodec, is a neural audio codec (an encoder-decoder) that enables the production of higher-quality audio with fewer unwanted artifacts.
The release of these models is an exciting opportunity for researchers and practitioners alike, as Meta is open-sourcing the models and code, allowing users to train them on their own datasets. According to the company, the models can produce remarkable audio with consistency sustained over long durations.
The potential applications of this tool are wide-ranging, covering areas as diverse as music composition, sound effects creation, compression algorithms, and audio production. By simplifying the design of generative audio models, AudioCraft makes it easy for users to experiment with and build on the existing models.
Interestingly, this is not the only advancement in the field of AI-powered audio generation: Google's parent company Alphabet previously introduced its experimental audio-generating AI tool, MusicLM.
With the availability of Meta’s AudioCraft tools, the field of audio creativity is about to witness a new era, revolutionizing the way music and audio are produced.