AudioLM
AudioLM is a pure audio language model that generates high-fidelity, long-term coherent speech and piano music by learning from raw audio alone, without any text transcripts or symbolic representations. It represents audio hierarchically with two kinds of discrete tokens: semantic tokens, extracted from a self-supervised model, capture phonetic or melodic structure and global context, while acoustic tokens, produced by a neural codec, preserve speaker characteristics and fine waveform detail. Three chained Transformer stages then predict semantic tokens for high-level structure first, followed by coarse and finally fine acoustic tokens for detailed synthesis. This pipeline lets AudioLM condition on a few seconds of input audio and produce seamless continuations that retain voice identity, prosody, and recording conditions in speech, or melody, harmony, and rhythm in music. In human evaluations, synthetic continuations are nearly indistinguishable from real recordings.
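To make the coarse-to-fine hierarchy concrete, here is a minimal sketch of the three-stage token pipeline in PyTorch. AudioLM has no public reference implementation, so the vocabulary sizes, sequence lengths, and the toy Transformer module below are assumptions chosen only to illustrate how the semantic stage feeds the acoustic stages.

```python
# Illustrative sketch of AudioLM's coarse-to-fine token pipeline (not the released system).
import torch

V_SEM, V_ACOUSTIC = 1024, 1024        # assumed vocabulary sizes
SEM_LEN, COARSE_LEN = 50, 150         # assumed token counts for a short continuation

class ToyStage(torch.nn.Module):
    """Stand-in for one decoder-only Transformer stage (causal masking omitted for brevity)."""
    def __init__(self, vocab, d=128):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, d)
        layer = torch.nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.body = torch.nn.TransformerEncoder(layer, num_layers=2)
        self.head = torch.nn.Linear(d, vocab)

    def forward(self, tokens):                      # (B, T) -> (B, T, vocab)
        return self.head(self.body(self.emb(tokens)))

def extend(model, tokens, target_len):
    """Autoregressively sample tokens until the sequence reaches target_len."""
    while tokens.shape[1] < target_len:
        probs = torch.softmax(model(tokens)[:, -1], dim=-1)
        tokens = torch.cat([tokens, torch.multinomial(probs, 1)], dim=1)
    return tokens

semantic_lm, coarse_lm = ToyStage(V_SEM), ToyStage(V_ACOUSTIC)

# Stage 1: continue the semantic tokens of a few-second audio prompt
# (phonetic / melodic structure and global context).
prompt_sem = torch.randint(0, V_SEM, (1, 10))
sem = extend(semantic_lm, prompt_sem, SEM_LEN)

# Stage 2: predict coarse acoustic tokens; in the real model this stage attends
# to the semantic sequence, approximated here by simply prefixing it.
coarse = extend(coarse_lm, sem % V_ACOUSTIC, SEM_LEN + COARSE_LEN)

# Stage 3 (omitted) would refine coarse tokens into fine acoustic tokens,
# which a neural codec then decodes back into a waveform.
print(sem.shape, coarse.shape)
```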
Learn more
OpenAI Jukebox
We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artistic styles. We’re releasing the model weights and code, along with a tool to explore the generated samples. Given genre, artist, and lyrics as input, Jukebox outputs a new music sample produced from scratch. It covers a wide range of music and singing styles and generalizes to lyrics not seen during training; the lyrics in the released samples were co-written by a language model and OpenAI researchers. When conditioned on lyrics it did see during training, Jukebox produces songs that differ substantially from the originals. For completions, we provide 12 seconds of audio to condition on and Jukebox generates the rest in a specified style. We chose to work on music because we want to continue to push the boundaries of generative models. Under the hood, Jukebox’s autoencoder compresses raw audio to a discrete space using a quantization-based approach, the VQ-VAE (vector-quantized variational autoencoder).
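The quantization step at the heart of that VQ-VAE can be sketched in a few lines: continuous encoder latents are snapped to their nearest codebook vectors, and the resulting integer codes are the discrete tokens the prior Transformers model. The codebook and latent sizes below are illustrative rather than the released configuration, and Jukebox actually stacks several such codebooks at different temporal resolutions.

```python
# Minimal sketch of VQ-VAE quantization: snap each latent frame to its nearest codebook entry.
import torch

codebook = torch.randn(2048, 64)             # illustrative: 2048 learned codes, 64-dim each

def quantize(latents):
    """latents: (T, 64) encoder outputs -> (T,) integer codes and their quantized vectors."""
    # Squared L2 distance from every latent frame to every codebook entry.
    d = (latents.pow(2).sum(1, keepdim=True)
         - 2 * latents @ codebook.t()
         + codebook.pow(2).sum(1))
    codes = d.argmin(dim=1)                   # one discrete token per frame
    return codes, codebook[codes]

latents = torch.randn(430, 64)                # stand-in for encoder output of a short clip
codes, quantized = quantize(latents)
print(codes[:10])                             # the token sequence a prior Transformer would model
```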
Learn more
Seed-Music
Seed-Music is a unified framework for high-quality, controllable music generation and editing. It produces vocal and instrumental works from multimodal inputs such as lyrics, style descriptions, sheet music, audio references, or voice prompts, and it supports post-production editing of existing tracks by allowing direct modification of melodies, timbres, lyrics, or instruments. The framework combines autoregressive language modeling with diffusion approaches in a three-stage pipeline: representation learning encodes raw audio into intermediate representations (audio tokens, symbolic music tokens, and vocoder latents), generation maps the multimodal inputs into those music representations, and rendering converts them into high-fidelity audio. The system supports lead-sheet-to-song conversion, singing synthesis, voice conversion, audio continuation, style transfer, and fine-grained control over music structure.
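Since Seed-Music has no public implementation, the following is only a conceptual sketch of how its three stages hand data to one another; every function and class name here is a hypothetical placeholder, not an actual API.

```python
# Conceptual sketch of a representation -> generation -> rendering pipeline
# in the style described for Seed-Music. All names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class MusicRepresentation:
    kind: str        # e.g. "audio_tokens", "symbolic_tokens", or "vocoder_latents"
    data: list

def learn_representation(reference_audio):
    # Stage 1: encode raw reference audio (if provided) into an intermediate
    # representation such as audio tokens, symbolic music tokens, or vocoder latents.
    if reference_audio is None:
        return None
    return MusicRepresentation("audio_tokens", [17, 93, 4])

def generate(lyrics, style, reference=None):
    # Stage 2: an autoregressive LM with diffusion components maps multimodal
    # inputs (lyrics, style description, optional reference) to a representation.
    return MusicRepresentation("vocoder_latents", [0.12, -0.40, 0.71])

def render(representation):
    # Stage 3: a renderer converts the intermediate representation into
    # high-fidelity audio (placeholder bytes here).
    return b"..."

# Editing an existing track follows the same route: re-encode it, modify the
# representation (e.g. swap lyrics or an instrument), then re-render.
song = render(generate("verse one ...", "dreamy synth-pop",
                       reference=learn_representation(None)))
```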
Learn more
MusicGen
Meta's MusicGen is an open-source, deep-learning language model that generates short pieces of music from text prompts. It was trained on 20,000 hours of music, including whole tracks and individual instrument samples. Given a text description, the model generates 12 seconds of audio; you can optionally provide reference audio, from which a broad melody is extracted, and the model then follows both the description and that melody. All samples are generated with the melody model. You can also run it on your own GPU or in Google Colab by following the instructions in our repo. MusicGen comprises a single-stage transformer language model with efficient token interleaving patterns, which eliminates the need to cascade several models. It generates high-quality samples while conditioned on text descriptions or melodic features, giving better control over the output.
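Because MusicGen ships in Meta's audiocraft repository, a short usage sketch is possible. The snippet below assumes the audiocraft package, the facebook/musicgen-melody checkpoint, and a placeholder reference.wav file, and follows the repo's documented API; details may change between releases.

```python
# Usage sketch for MusicGen via audiocraft (assumes audiocraft and torchaudio are installed).
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('facebook/musicgen-melody')
model.set_generation_params(duration=12)             # seconds of audio to generate

descriptions = ['warm lo-fi hip hop with mellow piano and soft drums']

# Text-only generation.
wav = model.generate(descriptions)

# Text + melody conditioning: a broad melody is extracted from the reference clip.
melody, sr = torchaudio.load('reference.wav')        # placeholder path
wav = model.generate_with_chroma(descriptions, melody[None], sr)

for idx, one_wav in enumerate(wav):
    # Writes sample_0.wav etc., loudness-normalized.
    audio_write(f'sample_{idx}', one_wav.cpu(), model.sample_rate, strategy='loudness')
```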
Learn more