Inworld TTS
Inworld TTS is a state-of-the-art text-to-speech platform designed to deliver ultra-realistic, context-aware speech synthesis and precise voice cloning at a radically accessible price. The flagship model, TTS-1, is optimized for real-time applications and supports low-latency streaming (first audio chunk in ≈200 ms) as well as multiple languages, including English, Spanish, French, Korean, and Chinese. Developers can use instant zero-shot voice cloning (from 5-15 seconds of audio) or professional fine-tuned cloning, add voice tags for emotion, style, and non-verbal sounds, and switch languages while preserving voice identity. The larger TTS-1-Max model (in preview) offers even more expressive speech and stronger multilingual performance. The platform supports both API and portal access, in streaming or batch mode, and is designed for everything from interactive voice agents and gaming characters to branded audio experiences.
Learn more
Piper TTS
Piper is a fast, local neural text-to-speech (TTS) system optimized for devices like the Raspberry Pi 4, designed to deliver high-quality speech synthesis without relying on cloud services. It utilizes neural network models trained with VITS and exported to ONNX Runtime, enabling efficient and natural-sounding speech generation. Piper supports a wide range of languages, including English (US and UK), Spanish (Spain and Mexico), French, German, and many others, with voices available for download. Users can run Piper via the command line or integrate it into Python applications using the piper-tts package. The system allows for real-time audio streaming, JSON input for batch processing, and supports multi-speaker models. Piper relies on espeak-ng for phoneme generation, converting text into phonemes before synthesizing speech. It is employed in various projects such as Home Assistant, Rhasspy 3, NVDA, and others.
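The paragraph above mentions running Piper from the command line and feeding it JSON input for batch processing. Below is a minimal sketch of preparing such an invocation from Python. The voice filename (en_US-lessac-medium.onnx) and the exact flag names are assumptions based on typical Piper usage, not details taken from this page; check your installed version's help output before relying on them.

```python
import json
import shlex

# Assumed voice model filename; Piper voices are downloaded as .onnx files.
MODEL = "en_US-lessac-medium.onnx"

def piper_command(model: str, output_file: str) -> str:
    """Build a shell command that reads text on stdin and writes a WAV file.

    Flag names here mirror common Piper CLI usage and are an assumption.
    """
    return shlex.join(["piper", "--model", model, "--output_file", output_file])

def batch_payload(sentences: list[str]) -> str:
    """Prepare JSON-lines input for batch synthesis: one object per line
    with a 'text' key (the key name is an assumption)."""
    return "\n".join(json.dumps({"text": s}) for s in sentences)

cmd = piper_command(MODEL, "welcome.wav")
payload = batch_payload(["Hello from Piper.", "This line is synthesized second."])

# To actually synthesize, pipe the payload into the command, e.g.:
#   echo 'Hello from Piper.' | piper --model en_US-lessac-medium.onnx --output_file welcome.wav
print(cmd)
```

Building the command with shlex.join keeps the invocation safe to paste into a shell even if a model path contains spaces.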
Learn more
Llama 2
The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. The fine-tuned models have been trained on over 1 million human annotations.
Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests.
Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations.
We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
Learn more
Octave TTS
Hume AI has introduced Octave (Omni-capable Text and Voice Engine), a groundbreaking text-to-speech system that leverages large language model technology to understand and interpret the context of words, enabling it to generate speech with appropriate emotion, rhythm, and cadence. Unlike traditional TTS models that merely read text, Octave acts like a human actor, delivering lines with nuanced expression based on the content. Users can create diverse AI voices by providing descriptive prompts, such as "a sarcastic medieval peasant," allowing for tailored voice generation that aligns with specific character traits or scenarios. Additionally, Octave can modify emotional delivery and speaking style through natural language instructions, so commands like "sound more enthusiastic" or "whisper fearfully" fine-tune the output.
Learn more