Octave TTS
Hume AI has introduced Octave (Omni-capable Text and Voice Engine), a text-to-speech system that leverages large language model technology to understand the context of the words it speaks, generating speech with appropriate emotion, rhythm, and cadence. Unlike traditional TTS models that merely read text, Octave acts like a human performer, delivering lines with nuanced expression based on their content. Users can create diverse AI voices by providing descriptive prompts, such as "a sarcastic medieval peasant," tailoring voice generation to specific character traits or scenarios. Octave also lets users modify emotional delivery and speaking style through natural language instructions, so commands like "sound more enthusiastic" or "whisper fearfully" fine-tune the output.
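The key idea is that a request carries both a voice description and an acting instruction alongside the text. The sketch below illustrates that request shape only; the field names and structure are hypothetical, not Hume's actual API, so consult Hume's documentation for the real endpoint and schema.

```python
import json

# All field names here are illustrative assumptions, not Hume's real API schema.
def build_octave_request(text, voice_description, acting_instruction=None):
    """Assemble a prompt-driven TTS request: text, a described voice,
    and an optional natural-language delivery instruction."""
    payload = {"text": text, "voice": {"description": voice_description}}
    if acting_instruction:
        payload["instruction"] = acting_instruction
    return json.dumps(payload)

print(build_octave_request(
    "Stand and deliver!",
    "a sarcastic medieval peasant",
    "sound more enthusiastic",
))
```

The same call with a different instruction string ("whisper fearfully") would request a different delivery of the identical text.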
Learn more
MARS6
CAMB.AI's MARS6 is a text-to-speech (TTS) model and the first speech model available on the Amazon Web Services (AWS) Bedrock platform. This integration lets developers add advanced TTS capabilities to generative AI applications: enhanced voice assistants, engaging audiobooks, interactive media, and other audio-centric experiences. MARS6's advanced algorithms enable natural and expressive speech synthesis, and developers access the model directly through Amazon Bedrock for seamless integration into their applications. Its inclusion in Bedrock's diverse selection of foundation models underscores CAMB.AI's commitment to advancing machine learning, giving developers the tools to create rich audio experiences backed by AWS's reliable, scalable infrastructure.
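Models on Bedrock are reached through the standard Bedrock runtime API, so invoking MARS6 looks like any other `invoke_model` call via boto3. A minimal sketch follows; the model ID and request body schema below are assumptions for illustration, so check the MARS6 listing in the Bedrock console for the actual values.

```python
import json

MODEL_ID = "camb.mars6-v1"  # hypothetical ID; look up the real one in Bedrock

def build_tts_request(text, voice="default", language="en"):
    """Assemble the JSON request body (field names are illustrative)."""
    return json.dumps({"text": text, "voice": voice, "language": language})

def synthesize(text, region="us-east-1"):
    """Invoke the model through the generic Bedrock runtime entry point."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(modelId=MODEL_ID, body=build_tts_request(text))
    return response["body"].read()  # audio bytes; format per the model docs

# Example (requires AWS credentials and model access):
#   audio = synthesize("Welcome to our audiobook.")
```

Because Bedrock exposes every foundation model behind the same runtime API, swapping MARS6 in for another model is a matter of changing the model ID and request body.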
Learn more
Piper TTS
Piper is a fast, local neural text-to-speech (TTS) system optimized for devices like the Raspberry Pi 4, designed to deliver high-quality speech synthesis without relying on cloud services. It utilizes neural network models trained with VITS and exported to ONNX Runtime, enabling efficient and natural-sounding speech generation. Piper supports a wide range of languages, including English (US and UK), Spanish (Spain and Mexico), French, German, and many others, with voices available for download. Users can run Piper via the command line or integrate it into Python applications using the piper-tts package. The system allows for real-time audio streaming, JSON input for batch processing, and supports multi-speaker models. Piper relies on espeak-ng for phoneme generation, converting text into phonemes before synthesizing speech. It is employed in various projects such as Home Assistant, Rhasspy 3, NVDA, and others.
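In practice, the command-line workflow pipes text into the `piper` binary with a chosen voice model. A small Python wrapper around that invocation might look like the following sketch; the model filename is an assumption, so substitute whichever `.onnx` voice you have downloaded.

```python
import subprocess

def build_piper_cmd(model="en_US-lessac-medium.onnx", output="welcome.wav"):
    """Assemble the Piper CLI invocation; text is supplied on stdin."""
    return ["piper", "--model", model, "--output_file", output]

def speak(text, **kwargs):
    """Pipe text into Piper (requires the piper binary or `pip install piper-tts`)."""
    subprocess.run(build_piper_cmd(**kwargs), input=text.encode("utf-8"), check=True)

# Example (requires Piper and a downloaded voice model):
#   speak("Welcome to the world of speech synthesis!")
```

Running fully offline, this produces a WAV file on the local machine, which is what makes Piper a good fit for Raspberry Pi projects and voice assistants like Home Assistant.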
Learn more
Llama 2
The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. The fine-tuned models have been trained on over 1 million human annotations.
Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests.
Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations.
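Because Llama-2-chat was instruction-tuned, it expects prompts in a specific template: the user turn is wrapped in `[INST] ... [/INST]`, with an optional system message inside `<<SYS>> ... <</SYS>>`. A minimal single-turn prompt builder (the system message here is just an example):

```python
def llama2_chat_prompt(user_msg, system_msg="You are a helpful assistant."):
    """Wrap a single-turn message in the Llama-2-chat prompt template:
    <s>[INST] <<SYS>> system <</SYS>> user [/INST]"""
    return f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"

prompt = llama2_chat_prompt("Explain what a context window is.")
print(prompt)
```

The resulting string is what gets tokenized and fed to the chat model; multi-turn conversations repeat the `[INST] ... [/INST]` wrapping for each user turn, with model replies in between.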
We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
Learn more