Compare the Top AI Models as of August 2025 - Page 8

AI Models
  • 1
    LearnLM

    Google

    LearnLM is an experimental, task-specific model designed to align with learning science principles for teaching and learning applications. It is trained to respond to system instructions like "You are an expert tutor," and is capable of inspiring active learning by encouraging practice and providing timely feedback. The model effectively manages cognitive load by presenting relevant, well-structured information across multiple modalities, while dynamically adapting to the learner’s goals and needs, grounding responses in appropriate materials. LearnLM also stimulates curiosity, motivating learners throughout their educational journey, and supports metacognition by helping learners plan, monitor, and reflect on their progress. This innovative model is available for experimentation in AI Studio.
    Starting Price: Free
  • 2
    BitNet

    Microsoft

    The BitNet b1.58 2B4T is a cutting-edge 1-bit Large Language Model (LLM) developed by Microsoft, designed to enhance computational efficiency while maintaining high performance. This model, built with approximately 2 billion parameters and trained on 4 trillion tokens, uses innovative quantization techniques to optimize memory usage, energy consumption, and latency. It is particularly valuable for AI-powered text generation, offering substantial efficiency gains compared to full-precision models.
    Starting Price: Free
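    A quick back-of-envelope sketch of why 1.58-bit (ternary) weights matter for memory: ignoring activations, embeddings, and any layers kept at higher precision, weight storage scales linearly with bits per weight. The helper below is illustrative only, using the ~2 billion parameter figure listed above; it is not Microsoft's implementation.

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory needed just to store the weights, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# ~2B parameters at full precision (fp16) vs. BitNet's 1.58-bit ternary weights
fp16_gb = weight_memory_gb(2e9, 16)       # 4.0 GB
ternary_gb = weight_memory_gb(2e9, 1.58)  # ~0.4 GB, roughly a 10x reduction
```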
  • 3
    GPT-Image-1
    OpenAI's Image Generation API, powered by the gpt-image-1 model, enables developers and businesses to integrate high-quality, professional-grade image generation directly into their tools and platforms. This model offers versatility, allowing it to create images across diverse styles, faithfully follow custom guidelines, leverage world knowledge, and accurately render text, unlocking countless practical applications across multiple domains. Leading enterprises and startups across industries, including creative tools, ecommerce, education, enterprise software, and gaming, are already using image generation in their products and experiences. It gives creators the choice and flexibility to experiment with different aesthetic styles. Users can generate and edit images from simple prompts, adjusting styles, adding or removing objects, expanding backgrounds, and more.
    Starting Price: $0.19 per image
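    A minimal sketch of calling the Image Generation API over plain HTTPS. The `/v1/images/generations` endpoint and base64 `b64_json` response field follow OpenAI's commonly documented API shape, but verify both against the current API reference; the prompt and filename are made up.

```python
import base64, json, os, urllib.request

API_URL = "https://api.openai.com/v1/images/generations"

def build_payload(prompt: str, size: str = "1024x1024") -> dict:
    """Request body for the Image API; gpt-image-1 returns base64 image data."""
    return {"model": "gpt-image-1", "prompt": prompt, "size": size}

def decode_image(b64_json: str) -> bytes:
    """Decode the base64 string in the response into raw image bytes."""
    return base64.b64decode(b64_json)

# Network call only runs when credentials are present.
if os.environ.get("OPENAI_API_KEY"):
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload("A paper crane, flat vector style")).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    with open("crane.png", "wb") as f:
        f.write(decode_image(data["data"][0]["b64_json"]))
```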
  • 4
    ERNIE X1 Turbo
    ERNIE X1 Turbo, developed by Baidu, is an advanced deep reasoning AI model introduced at the Baidu Create 2025 conference. Designed to handle complex multi-step tasks such as problem-solving, literary creation, and code generation, this model outperforms competitors like DeepSeek R1 in terms of reasoning abilities. With a focus on multimodal capabilities, ERNIE X1 Turbo supports text, audio, and image processing, making it an incredibly versatile AI solution. Despite its cutting-edge technology, it is priced at just a fraction of the cost of other top-tier models, offering a high-value solution for businesses and developers.
    Starting Price: $0.14 per 1M tokens
  • 5
    Gemini 2.5 Pro Preview (I/O Edition)
    Gemini 2.5 Pro Preview (I/O Edition) by Google is an advanced AI model designed to streamline coding tasks and enhance web app development. This powerful tool allows developers to efficiently transform and edit code, reducing errors and improving function calling accuracy. With enhanced capabilities in video understanding and web app creation, Gemini 2.5 Pro Preview excels at building aesthetically pleasing and functional web applications. Available through Google’s Gemini API and AI platforms, this model provides a seamless solution for developers to create innovative applications with improved performance and reliability.
    Starting Price: $19.99/month
  • 6
    Mirage by Captions
    Mirage by Captions is the world's first AI model designed to generate UGC content. It generates original actors with natural expressions and body language, completely free from licensing and rights restrictions, unlocking limitless, expressive storytelling. With Mirage, you’ll experience your fastest video creation workflow yet: from a single prompt, it instantly creates your actor, background, voice, and script and generates a complete video from start to finish. Scaling video ad production has never been easier; marketing teams cut costly production cycles, reduce reliance on external creators, and focus more on strategy. No actors, studios, or shoots are needed, and you skip the legal and logistical headaches of traditional video production.
    Starting Price: $9.99 per month
  • 7
    Gemma 3n

    Google DeepMind

    Gemma 3n is our state-of-the-art open multimodal model, engineered for on-device performance and efficiency. Made for responsive, low-footprint local inference, Gemma 3n empowers a new wave of intelligent, on-the-go applications. It analyzes and responds to combined images and text, with video and audio support coming soon, so you can build intelligent, interactive features that put user privacy first and work reliably offline. Its mobile-first architecture, co-designed with Google's mobile hardware teams and industry leaders, has a significantly reduced memory footprint: a 4B active memory footprint, with the ability to create submodels for quality-latency tradeoffs. Gemma 3n is our first open model built on this groundbreaking, shared architecture, allowing developers to begin experimenting with this technology today in an early preview.
  • 8
    Orpheus TTS

    Canopy Labs

    Canopy Labs has introduced Orpheus, a family of state-of-the-art speech large language models (LLMs) designed for human-level speech generation. These models are built on the Llama-3 architecture and are trained on over 100,000 hours of English speech data, enabling them to produce natural intonation, emotion, and rhythm that surpass current state-of-the-art closed-source models. Orpheus supports zero-shot voice cloning, allowing users to replicate voices without prior fine-tuning, and offers guided emotion and intonation control through simple tags. The models achieve low latency, with approximately 200 ms streaming latency for real-time applications, reducible to around 100 ms with input streaming. Canopy Labs has released both pre-trained and fine-tuned 3B-parameter models under the permissive Apache 2.0 license, with plans to release smaller models of 1B, 400M, and 150M parameters for use on resource-constrained devices.
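    The emotion tags mentioned above are inline markers in the input text. A tiny illustrative helper follows; the tag names here, such as `<laugh>` and `<sigh>`, are assumptions to check against the Orpheus model card.

```python
# Hypothetical tag set; confirm the exact names in the Orpheus documentation.
EMOTION_TAGS = {"laugh", "sigh", "chuckle", "gasp"}

def tag_text(text: str, emotion: str) -> str:
    """Prefix text with an inline emotion tag for guided intonation."""
    if emotion not in EMOTION_TAGS:
        raise ValueError(f"unknown emotion tag: {emotion}")
    return f"<{emotion}> {text}"
```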
  • 9
    MARS6

    CAMB.AI

    CAMB.AI's MARS6 is a groundbreaking text-to-speech (TTS) model and the first speech model accessible on the Amazon Web Services (AWS) Bedrock platform. This integration allows developers to incorporate advanced TTS capabilities into generative AI applications, facilitating the creation of enhanced voice assistants, engaging audiobooks, interactive media, and other audio-centric experiences. MARS6's advanced algorithms enable natural and expressive speech synthesis, setting a new standard for TTS conversion. Developers can access MARS6 directly through the Amazon Bedrock platform, ensuring seamless integration into applications and enhancing user engagement and accessibility. The inclusion of MARS6 in AWS Bedrock's diverse selection of foundation models underscores CAMB.AI's commitment to advancing machine learning and artificial intelligence, giving developers the tools to create rich audio experiences backed by AWS's reliable and scalable infrastructure.
  • 10
    OpenAI o3-pro
    OpenAI’s o3-pro is a high-performance reasoning model designed for tasks that require deep analysis and precision. In ChatGPT it is available to Pro and Team subscribers, succeeding the earlier o1-pro model, and it can also be accessed through the API. The model excels in complex fields like mathematics, science, and coding by employing detailed step-by-step reasoning. It integrates advanced tools such as real-time web search, file analysis, Python execution, and visual input processing. While powerful, o3-pro has slower response times and lacks support for features like image generation and temporary chats. Despite these trade-offs, o3-pro demonstrates superior clarity, accuracy, and adherence to instructions compared to its predecessor.
    Starting Price: $20 per 1 million tokens
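    For budgeting, the listed rate translates directly into a per-request estimate. The sketch below assumes the flat $20 per 1M tokens shown here; OpenAI's actual pricing distinguishes input from output tokens, so treat this as a rough illustration.

```python
def cost_usd(tokens: int, price_per_million: float = 20.0) -> float:
    """Estimated cost at the listed $20 per 1M tokens."""
    return tokens / 1_000_000 * price_per_million

# e.g. a 250k-token reasoning-heavy job:
print(cost_usd(250_000))  # 5.0
```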
  • 11
    MiniMax-M1

    MiniMax

    MiniMax‑M1 is a large‑scale hybrid‑attention reasoning model released by MiniMax AI under the Apache 2.0 license. It supports an unprecedented 1 million‑token context window and up to 80,000-token outputs, enabling extended reasoning across long documents. Trained using large‑scale reinforcement learning with a novel CISPO algorithm, MiniMax‑M1 completed full training on 512 H800 GPUs in about three weeks. It achieves state‑of‑the‑art performance on benchmarks in mathematics, coding, software engineering, tool usage, and long‑context understanding, matching or outperforming leading models. Two model variants are available (40K and 80K thinking budgets), with weights and deployment scripts provided via GitHub and Hugging Face.
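    With a 1 million-token window and up to 80,000-token outputs, prompt budgeting becomes a real concern. A small sketch using only the figures above (real token counts would come from the model's own tokenizer):

```python
CONTEXT_WINDOW = 1_000_000  # tokens, as listed for MiniMax-M1
MAX_OUTPUT = 80_000         # the 80K thinking-budget variant

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt after reserving room for the response."""
    return CONTEXT_WINDOW - reserved_output

def fits(prompt_tokens: int, reserved_output: int = MAX_OUTPUT) -> bool:
    """Will prompt + reserved output fit in the context window?"""
    return prompt_tokens + reserved_output <= CONTEXT_WINDOW
```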
  • 12
    Marey

    Moonvalley

    Marey is Moonvalley’s foundational AI video model engineered for world-class cinematography, offering filmmakers precision, consistency, and fidelity across every frame. It is the first commercially safe video model, trained exclusively on licensed, high-resolution footage to eliminate legal gray areas and safeguard intellectual property. Designed in collaboration with AI researchers and professional directors, Marey mirrors real production workflows to deliver production-grade output free of visual noise and ready for final delivery. Its creative control suite includes Camera Control, transforming 2D scenes into manipulable 3D environments for cinematic moves; Motion Transfer, applying timing and energy from reference clips to new subjects; Trajectory Control, drawing exact paths for object movement without prompts or rerolls; Keyframing, generating smooth transitions between reference images on a timeline; Reference, defining appearance and interaction of individual elements.
    Starting Price: $14.99 per month
  • 13
    Solar Pro 2

    Upstage AI

    Solar Pro 2 is Upstage’s latest frontier‑scale large language model, designed to power complex tasks and agent‑like workflows across domains such as finance, healthcare, and legal. Packaged in a compact 31 billion‑parameter architecture, it delivers top‑tier multilingual performance, especially in Korean, where it outperforms much larger models on benchmarks like Ko‑MMLU, Hae‑Rae, and Ko‑IFEval, while also excelling in English and Japanese. Beyond superior language understanding and generation, Solar Pro 2 offers next‑level intelligence through an advanced Reasoning Mode that significantly boosts multi‑step task accuracy on challenges ranging from general reasoning (MMLU, MMLU‑Pro, HumanEval) to complex mathematics (Math500, AIME) and software engineering (SWE‑Bench Agentless), achieving problem‑solving efficiency comparable to or exceeding that of models twice its size. Enhanced tool‑use capabilities enable the model to interact seamlessly with external APIs and data sources.
    Starting Price: $0.1 per 1M tokens
  • 14
    Solar Mini

    Upstage AI

    Solar Mini is a pre‑trained large language model that delivers GPT‑3.5‑comparable responses with 2.5× faster inference while staying under 30 billion parameters. It achieved first place on the Hugging Face Open LLM Leaderboard in December 2023 by combining a 32‑layer Llama 2 architecture, initialized with high‑quality Mistral 7B weights, with an innovative “depth up‑scaling” (DUS) approach that deepens the model efficiently without adding complex modules. After DUS, continued pretraining restores and enhances performance, and instruction tuning in a QA format, especially for Korean, refines its ability to follow user prompts, while alignment tuning ensures its outputs meet human or advanced AI preferences. Solar Mini outperforms competitors such as Llama 2, Mistral 7B, Ko‑Alpaca, and KULLM across a variety of benchmarks, proving that compact size need not sacrifice capability.
    Starting Price: $0.1 per 1M tokens
  • 15
    Syn

    Upstage AI

    Syn is a next‑generation Japanese large language model co‑developed by Upstage and Karakuri, featuring under 14 billion parameters and optimized for enterprise use in finance, manufacturing, legal, and healthcare. It delivers top‑tier benchmark performance on the Weights & Biases Nejumi Leaderboard, achieving industry‑leading scores for accuracy and alignment, while maintaining cost efficiency through a lightweight architecture derived from Solar Mini. Syn excels in Japanese “truthfulness” and safety, understanding nuanced expressions and industry‑specific terminology, and offers flexible fine‑tuning to integrate proprietary data and domain knowledge. Built for scalable deployment, it supports on‑premises, AWS Marketplace, and cloud environments, with security and compliance safeguards tailored to enterprise requirements. Leveraging AWS Trainium, Syn reduces training costs by approximately 50 percent compared to traditional GPU setups, enabling rapid customization of use cases.
    Starting Price: $0.1 per 1M tokens
  • 16
    LUIS

    Microsoft

    Language Understanding (LUIS) is a machine learning-based service to build natural language into apps, bots, and IoT devices. Quickly create enterprise-ready, custom models that continuously improve. Designed to identify valuable information in conversations, LUIS interprets user goals (intents) and distills valuable information from sentences (entities) for a high-quality, nuanced language model. LUIS integrates seamlessly with the Azure Bot Service, making it easy to create a sophisticated bot. Powerful developer tools are combined with customizable pre-built apps and entity dictionaries, such as Calendar, Music, and Devices, so you can build and deploy a solution more quickly. Dictionaries are mined from the collective knowledge of the web and supply billions of entries, helping your model to correctly identify valuable information from user conversations. Active learning is used to continuously improve the quality of the models.
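    The intents-and-entities output described above is easy to consume programmatically. The sample below is shaped like a LUIS v3.0 prediction response; the exact field names (`prediction`, `topIntent`, per-intent `score`) should be checked against the Azure prediction API reference.

```python
# Sample shaped like a LUIS v3.0 prediction response (illustrative values).
sample = {
    "query": "book a flight to Paris tomorrow",
    "prediction": {
        "topIntent": "BookFlight",
        "intents": {"BookFlight": {"score": 0.97}, "None": {"score": 0.02}},
        "entities": {"geographyV2": [{"value": "Paris", "type": "city"}]},
    },
}

def top_intent(response: dict, threshold: float = 0.5) -> str:
    """Return the top intent, falling back to 'None' below the threshold."""
    pred = response["prediction"]
    intent = pred["topIntent"]
    score = pred["intents"][intent]["score"]
    return intent if score >= threshold else "None"
```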
  • 17
    Sparrow

    DeepMind

    Sparrow is a research model and proof of concept, designed with the goal of training dialogue agents to be more helpful, correct, and harmless. By learning these qualities in a general dialogue setting, Sparrow advances our understanding of how we can train agents to be safer and more useful – and ultimately, to help build safer and more useful artificial general intelligence (AGI). Sparrow is not yet available for public use. Training a conversational AI is an especially challenging problem because it’s difficult to pinpoint what makes a dialogue successful. To address this problem, we turn to a form of reinforcement learning (RL) based on people's feedback, using the study participants’ preference feedback to train a model of how useful an answer is. To get this data, we show our participants multiple model answers to the same question and ask them which answer they like the most.
  • 18
    NVIDIA NeMo
    NVIDIA NeMo LLM is a service that provides a fast path to customizing and using large language models trained with several frameworks. Developers can deploy enterprise AI applications using NeMo LLM on private and public clouds. Customize your choice of various NVIDIA or community-developed models that work best for your AI applications, and within minutes to hours get better responses by providing context for specific use cases using prompt learning techniques. Leverage the power of NVIDIA Megatron 530B, one of the largest language models, through the NeMo LLM Service or the cloud API. Take advantage of models for drug discovery, available in the cloud API and the NVIDIA BioNeMo framework.
  • 19
    ERNIE Bot
    ERNIE Bot is an AI-powered conversational assistant developed by Baidu, designed to facilitate seamless and natural interactions with users. Built on the ERNIE (Enhanced Representation through Knowledge Integration) model, ERNIE Bot excels at understanding complex queries and generating human-like responses across various domains. Its capabilities include processing text, generating images, and engaging in multimodal communication, making it suitable for a wide range of applications such as customer support, virtual assistants, and enterprise automation. With its advanced contextual understanding, ERNIE Bot offers an intuitive and efficient solution for businesses seeking to enhance their digital interactions and automate workflows.
    Starting Price: Free
  • 20
    PaLM

    Google

    PaLM API is an easy and safe way to build on top of our best language models. Today, we’re making an efficient model available, in terms of size and capabilities, and we’ll add other sizes soon. The API also comes with an intuitive tool called MakerSuite, which lets you quickly prototype ideas and, over time, will have features for prompt engineering, synthetic data generation and custom-model tuning — all supported by robust safety tools. Select developers can access the PaLM API and MakerSuite in Private Preview today, and stay tuned for our waitlist soon.
  • 21
    Med-PaLM 2

    Google Cloud

    Healthcare breakthroughs change the world and bring hope to humanity through scientific rigor, human insight, and compassion. We believe AI can contribute to this, with thoughtful collaboration between researchers, healthcare organizations, and the broader ecosystem. Today, we're sharing exciting progress on these initiatives, with the announcement of limited access to Google’s medical large language model, or LLM, called Med-PaLM 2. It will be available in the coming weeks to a select group of Google Cloud customers for limited testing, to explore use cases and share feedback as we investigate safe, responsible, and meaningful ways to use this technology. Med-PaLM 2 harnesses the power of Google’s LLMs, aligned to the medical domain to more accurately and safely answer medical questions. As a result, Med-PaLM 2 was the first LLM to perform at an “expert” test-taker level on the MedQA dataset of US Medical Licensing Examination (USMLE)-style questions.
  • 22
    Gopher

    DeepMind

    Language, and its role in demonstrating and facilitating comprehension - or intelligence - is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, express ideas, create memories, and build mutual understanding. These are foundational parts of social intelligence. It’s why our teams at DeepMind study aspects of language processing and communication, both in artificial agents and in humans. As part of a broader portfolio of AI research, we believe the development and study of more powerful language models – systems that predict and generate text – have tremendous potential for building advanced AI systems that can be used safely and efficiently to summarise information, provide expert advice and follow instructions via natural language. Developing beneficial language models requires research into their potential impacts, including the risks they pose.
  • 23
    PaLM 2

    Google

    PaLM 2 is our next generation large language model that builds on Google’s legacy of breakthrough research in machine learning and responsible AI. It outperforms our previous state-of-the-art LLMs, including PaLM, at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation. It can accomplish these tasks because of the way it was built – bringing together compute-optimal scaling, an improved dataset mixture, and model architecture improvements. PaLM 2 is grounded in Google’s approach to building and deploying AI responsibly. It was evaluated rigorously for its potential harms and biases, capabilities and downstream uses in research and in-product applications. It’s being used in other state-of-the-art models, like Med-PaLM 2 and Sec-PaLM, and is powering generative AI features and tools at Google, like Bard and the PaLM API.
  • 24
    Hippocratic AI

    Hippocratic AI

    Hippocratic AI is the new state-of-the-art (SOTA) model, outperforming GPT-4 on 105 of 114 healthcare exams and certifications, by a margin of five percent or more on 74 of them and ten percent or more on 43. Most language models pre-train on the common crawl of the Internet, which may include incorrect and misleading information. Unlike these LLMs, Hippocratic AI is investing heavily in legally acquiring evidence-based healthcare content. We’re conducting a unique Reinforcement Learning with Human Feedback process using healthcare professionals to train and validate the model’s readiness for deployment. We call this RLHF-HP. Hippocratic AI will not release the model until a large number of these licensed professionals deem it safe.
  • 25
    YandexGPT
    Take advantage of the capabilities of generative language models to improve and optimize your applications and web services. Get an aggregated result of accumulated textual data, whether it be information from work chats, user reviews, or other types of data; YandexGPT will help both summarize and interpret the information. Speed up text creation while improving its quality and style. Create template texts for newsletters, product descriptions for online stores, and other applications. Develop a chatbot for your support service: teach the bot to answer various user questions, both common and more complicated. Use the API to integrate the service with your applications and automate processes.
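    A hedged sketch of calling the completion API with only the standard library. The endpoint URL, `modelUri` scheme, and response shape here follow Yandex's Foundation Models documentation as best understood and should be verified; the folder ID and prompt are placeholders.

```python
import json, os, urllib.request

API_URL = "https://llm.api.cloud.yandex.net/foundationModels/v1/completion"

def build_request(folder_id: str, user_text: str) -> dict:
    """Body shape per Yandex Foundation Models docs; verify field names."""
    return {
        "modelUri": f"gpt://{folder_id}/yandexgpt-lite",
        "completionOptions": {"temperature": 0.3, "maxTokens": "500"},
        "messages": [{"role": "user", "text": user_text}],
    }

# Network call only runs when credentials are present.
if os.environ.get("YC_API_KEY") and os.environ.get("YC_FOLDER_ID"):
    body = build_request(os.environ["YC_FOLDER_ID"], "Summarize these reviews: ...")
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Api-Key {os.environ['YC_API_KEY']}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    print(result["result"]["alternatives"][0]["message"]["text"])
```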
  • 26
    Ntropy

    Ntropy

    Ship faster by integrating with our Python SDK or REST API in minutes. No prior setup or data formatting is required; you can get going straight away as soon as you have incoming data and your first customers. We have built and fine-tuned custom language models to recognize entities, automatically crawl the web in real time and pick the best match, and assign labels with superhuman accuracy in a fraction of the time. Everybody has a data enrichment model that tries to be good at one thing: US or Europe, business or consumer. These models are poor at generalizing and are not capable of human-level output. With us, you can leverage the power of the world's largest and most performant models embedded in your products, at a fraction of the cost and time.
  • 27
    Nexusflow

    Nexusflow

    Nexusflow offers enterprise-grade generative AI agents that provide complete ownership and control over your AI models, built to operate behind your firewall. The platform features Compact Models that allow businesses to continuously incorporate the latest knowledge, tools, and feedback into their AI agents. Nexusflow’s solution offers domain specialization, ensuring superior quality and cost-effective real-time responses, while eliminating vendor lock-in. This makes it ideal for businesses looking to integrate generative AI into their workflows with full data ownership and scalable solutions.
  • 28
    Giga ML

    Giga ML

    We just launched the X1 Large series of models. Giga ML's most powerful model is available for pre-training and fine-tuning with on-prem deployment. Since we are OpenAI-compatible, your existing integrations with LangChain, LlamaIndex, and all others work seamlessly. You can continue pre-training of LLMs with domain-specific data such as books or company docs. The world of large language models (LLMs) is rapidly expanding, offering unprecedented opportunities for natural language processing across various domains. However, some critical challenges have remained unaddressed. At Giga ML, we proudly introduce the X1 Large 32k model, a pioneering on-premise LLM solution that addresses these critical issues.
  • 29
    Martian

    Martian

    By using the best-performing model for each request, we can achieve higher performance than any single model. Martian outperforms GPT-4 across OpenAI's evals (openai/evals). We turn opaque black boxes into interpretable representations. Our router is the first tool built on top of our model mapping method. We are developing many other applications of model mapping, including turning transformers from indecipherable matrices into human-readable programs. If a provider experiences an outage or a high-latency period, we automatically reroute to other providers so your customers never experience any issues. Determine how much you could save by using the Martian Model Router with our interactive cost calculator: input your number of users, tokens per session, and sessions per month, and specify your cost/quality tradeoff.
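    The cost calculator's inputs map to straightforward arithmetic. This sketch (with made-up prices, not Martian's) shows the savings model behind routing a fraction of traffic to a cheaper model:

```python
def monthly_cost(users: int, tokens_per_session: int,
                 sessions_per_month: int, price_per_million: float) -> float:
    """Total monthly spend at a flat per-million-token price."""
    total_tokens = users * tokens_per_session * sessions_per_month
    return total_tokens / 1_000_000 * price_per_million

def routing_savings(users: int, tokens_per_session: int, sessions_per_month: int,
                    expensive_price: float, cheap_price: float,
                    frac_routed_cheap: float) -> float:
    """Savings when a router sends a fraction of traffic to a cheaper model."""
    base = monthly_cost(users, tokens_per_session, sessions_per_month, expensive_price)
    cheap = monthly_cost(users, tokens_per_session, sessions_per_month, cheap_price)
    routed = base * (1 - frac_routed_cheap) + cheap * frac_routed_cheap
    return base - routed
```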
  • 30
    Phi-2

    Microsoft

    We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with fewer than 13 billion parameters. On complex benchmarks, Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.