EVI 3
Hume AI's EVI 3 is a third-generation speech-language model that streams in user speech and generates natural, expressive responses in both speech and language. At conversational latency, it produces speech of the same quality as Hume's text-to-speech model, Octave, while responding with the same intelligence as the most advanced LLMs of similar latency. It also communicates with reasoning models and web search systems as it speaks, “thinking fast and slow” to match the intelligence of any frontier AI system. Rather than being limited to a handful of speakers, EVI 3 can instantly generate new voices and personalities: users can speak to any of the more than 100,000 custom voices already created on Hume's text-to-speech platform, each with an inferred personality. Whatever the voice, it responds with a wide range of emotions and styles, implicitly or on command.
Learn more
Orpheus TTS
Canopy Labs has introduced Orpheus, a family of state-of-the-art speech large language models (LLMs) designed for human-level speech generation. Built on the Llama-3 architecture and trained on over 100,000 hours of English speech data, the models produce natural intonation, emotion, and rhythm that surpass current state-of-the-art closed-source models. Orpheus supports zero-shot voice cloning, allowing users to replicate voices without prior fine-tuning, and offers guided emotion and intonation control through simple tags. The models achieve low latency: approximately 200 ms streaming latency for real-time applications, reducible to around 100 ms with input streaming. Canopy Labs has released both pre-trained and fine-tuned 3B-parameter models under the permissive Apache 2.0 license, with plans to release smaller 1B-, 400M-, and 150M-parameter models for resource-constrained devices.
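The emotion and intonation tags mentioned above are embedded inline in the text prompt itself, alongside a leading speaker name. A minimal sketch of that prompt format follows; the tag list matches the tags Canopy Labs has published for Orpheus, but the `build_prompt` helper is a hypothetical convenience for illustration, not part of the Orpheus API, and the real model call (omitted here) would pass the resulting string to the released checkpoints.

```python
import re

# Inline emotion/intonation tags published for Orpheus (subject to change).
ORPHEUS_TAGS = {"laugh", "chuckle", "sigh", "cough", "sniffle", "groan", "yawn", "gasp"}

def build_prompt(voice: str, text: str) -> str:
    """Hypothetical helper: prefix the speaker name and sanity-check
    any inline <tag> markers against the known Orpheus tag set."""
    for tag in re.findall(r"<(\w+)>", text):
        if tag not in ORPHEUS_TAGS:
            raise ValueError(f"unknown emotion tag: <{tag}>")
    return f"{voice}: {text}"

# Tags sit directly in the text where the vocalization should occur:
prompt = build_prompt("tara", "I finally got it working <laugh> after three days.")
```

The model then renders the tagged vocalization (here, a laugh) at that point in the audio, rather than reading the tag aloud.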
Learn more
Gemini 2.5 Pro TTS
Gemini 2.5 Pro TTS is Google’s advanced text-to-speech model in the Gemini 2.5 family, optimized for high-quality, expressive, controllable speech synthesis. The model delivers natural-sounding output with fine control over expressivity, tone, pacing, and pronunciation: developers can dictate style, accent, rhythm, and emotional nuance through text-based prompts. This makes it suitable for applications that require premium audio output, such as podcasts, audiobooks, customer assistance, tutorials, and multimedia narration. It supports both single-speaker and multi-speaker audio, allowing distinct voices and conversational flows in the same output, and can synthesize speech across multiple languages with consistent style adherence. Compared with lower-latency variants like Flash TTS, the Pro model prioritizes sound quality, depth of expression, and nuanced control.
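The prompt-based style control described above can be sketched as a request body. The dict below mirrors the REST shape of a Gemini TTS request (audio response modality plus a prebuilt voice), built as a plain dict so it runs without the SDK or an API key; the model id and field names follow Google's published TTS documentation at the time of writing and should be treated as assumptions that may change.

```python
def tts_request(text: str, voice: str = "Kore") -> dict:
    """Sketch of a Gemini TTS request body (REST field names assumed
    from Google's docs). Style is steered by natural-language
    instructions inside the prompt text itself."""
    return {
        "model": "gemini-2.5-pro-preview-tts",  # assumed preview model id
        "contents": [{"parts": [{"text": text}]}],
        "generationConfig": {
            "responseModalities": ["AUDIO"],
            "speechConfig": {
                "voiceConfig": {
                    "prebuiltVoiceConfig": {"voiceName": voice}
                }
            },
        },
    }

# The style directive rides along in the prompt, not in a separate field:
req = tts_request("Say in a warm, unhurried tone: Welcome back!")
```

Note the design: unlike SSML-based systems, pacing and emotion are requested in plain language ("warm, unhurried tone"), which is what makes the model's controllability prompt-driven.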
Learn more
Amazon Polly
Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk and to build entirely new categories of speech-enabled products. Polly's Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural-sounding human speech. With dozens of lifelike voices across a broad set of languages, you can build speech-enabled applications that work in many different countries.
In addition to Standard TTS voices, Amazon Polly offers Neural Text-to-Speech (NTTS) voices that deliver advanced improvements in speech quality through a new machine learning approach. Polly’s Neural TTS technology also supports two speaking styles that allow you to better match the delivery style of the speaker to the application: a Newscaster reading style that is tailored to news narration use cases, and a Conversational speaking style that is ideal for two-way communication like telephony applications.
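The Newscaster and Conversational styles above are selected through Polly's documented SSML `<amazon:domain>` tag (values `"news"` and `"conversational"`, neural voices only). A minimal sketch that builds the SSML as a plain string, so no AWS credentials are needed; with boto3 you would pass the result to `synthesize_speech` with `TextType="ssml"` and `Engine="neural"`.

```python
def styled_ssml(text: str, domain: str = "news") -> str:
    """Wrap text in Polly's <amazon:domain> SSML tag.
    domain: "news" (Newscaster style) or "conversational"."""
    if domain not in ("news", "conversational"):
        raise ValueError(f"unsupported domain: {domain}")
    return f'<speak><amazon:domain name="{domain}">{text}</amazon:domain></speak>'

# Newscaster delivery for a news-narration use case:
ssml = styled_ssml("Markets closed higher today.", domain="news")
```

With boto3 this would be sent as, e.g., `polly.synthesize_speech(Text=ssml, TextType="ssml", Engine="neural", VoiceId="Matthew", OutputFormat="mp3")`, using a neural voice that supports the chosen style.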
Learn more