Compare the Top AI Models as of December 2025 - Page 8

AI Models
  • 1
    Ministral 3

    Mistral AI

    Mistral 3 is the latest generation of open-weight AI models from Mistral AI, offering a full family of models, from small, edge-optimized versions to a flagship, large-scale multimodal model. The lineup includes three compact “Ministral 3” models (3B, 8B, and 14B parameters) designed for efficiency and deployment on constrained hardware (even laptops, drones, or edge devices), plus the powerful “Mistral Large 3,” a sparse mixture-of-experts model with 675 billion total parameters (41 billion active). The models support multimodal and multilingual tasks, handling image understanding as well as text, and have demonstrated best-in-class performance on general prompts, multilingual conversations, and multimodal inputs. The base and instruction-fine-tuned versions are released under the Apache 2.0 license, enabling broad customization and integration in enterprise and open source projects.
    Starting Price: Free
  • 2
    Qwen3-VL

    Alibaba

    Qwen3-VL is the newest vision-language model in the Qwen family (by Alibaba Cloud), designed to fuse powerful text understanding/generation with advanced visual and video comprehension into one unified multimodal model. It accepts inputs in mixed modalities (text, images, and video) and handles long, interleaved contexts natively (up to 256K tokens, with extensibility beyond). Qwen3-VL delivers major advances in spatial reasoning, visual perception, and multimodal reasoning; the model architecture incorporates several innovations such as Interleaved-MRoPE (for robust spatio-temporal positional encoding), DeepStack (to leverage multi-level features from its Vision Transformer backbone for refined image-text alignment), and text–timestamp alignment (for precise reasoning over video content and temporal events). These upgrades enable Qwen3-VL to interpret complex scenes, follow dynamic video sequences, and read and reason about visual layouts.
    Starting Price: Free
  • 3
    Devstral 2

    Mistral AI

    Devstral 2 is a next-generation, open source agentic AI model tailored for software engineering: it doesn’t just suggest code snippets; it understands and acts across entire codebases, enabling multi-file edits, bug fixes, refactoring, dependency resolution, and context-aware code generation. The Devstral 2 family includes a large 123-billion-parameter model as well as a smaller 24-billion-parameter variant (“Devstral Small 2”), giving teams flexibility; the larger model excels in heavy-duty coding tasks requiring deep context, while the smaller one can run on more modest hardware. With a vast context window of up to 256K tokens, Devstral 2 can reason across extensive repositories, track project history, and maintain a consistent understanding of lengthy files, an advantage for complex, real-world projects. The CLI tracks project metadata, Git statuses, and directory structure to give the model context, making “vibe-coding” more powerful.
    Starting Price: Free
  • 4
    Devstral Small 2
    Devstral Small 2 is the compact, 24-billion-parameter variant of the new coding-focused model family from Mistral AI, released under the permissive Apache 2.0 license to enable both local deployment and API use. Alongside its larger sibling (Devstral 2), this model brings “agentic coding” capabilities to environments with modest compute: it supports a large 256K-token context window, enabling it to understand and make changes across entire codebases. On the standard code-generation benchmark (SWE-Bench Verified), Devstral Small 2 scores around 68.0%, placing it among open-weight models many times its size. Because of its reduced size and efficient design, Devstral Small 2 can run on a single GPU or even CPU-only setups, making it practical for developers, small teams, or hobbyists without access to data-center hardware. Despite its compact footprint, Devstral Small 2 retains key capabilities of larger models; it can reason across multiple files and track dependencies.
    Starting Price: Free
  • 5
    DeepCoder

    Agentica Project

    DeepCoder is a fully open source code-reasoning and generation model released by Agentica Project in collaboration with Together AI. It is fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning, achieving 60.6% accuracy on LiveCodeBench (an 8% improvement over the base model), a performance level that matches proprietary models such as o3-mini (2025-01-31, low) and o1 while using only 14 billion parameters. It was trained over 2.5 weeks on 32 H100 GPUs with a curated dataset of roughly 24,000 coding problems drawn from verified sources (including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions), each problem requiring a verifiable solution and at least five unit tests to ensure reliability for RL training. To handle long-range context, DeepCoder employs techniques such as iterative context lengthening and overlong filtering.
    Starting Price: Free
  • 6
    DeepSWE

    Agentica Project

    DeepSWE is a fully open source, state-of-the-art coding agent built on top of the Qwen3-32B foundation model and trained exclusively via reinforcement learning (RL), without supervised finetuning or distillation from proprietary models. It is developed using rLLM, Agentica’s open source RL framework for language agents. DeepSWE operates as an agent; it interacts with a simulated development environment (via the R2E-Gym environment) using a suite of tools (file editor, search, shell-execution, submit/finish), enabling it to navigate codebases, edit multiple files, compile/run tests, and iteratively produce patches or complete engineering tasks. DeepSWE exhibits emergent behaviors beyond simple code generation; when presented with bugs or feature requests, the agent reasons about edge cases, seeks existing tests in the repository, proposes patches, writes extra tests for regressions, and dynamically adjusts its “thinking” effort.
    Starting Price: Free
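The tool-driven agent loop described above can be sketched in miniature. Everything below is illustrative: the tool names, the environment dictionary, and the hand-written policy are stand-ins for R2E-Gym's real interface and for DeepSWE's trained model, which chooses actions itself.

```python
# Toy sketch of an agentic coding loop (illustrative only): the agent
# repeatedly picks a tool -- edit a file, run the tests, or submit --
# until the tests pass or it runs out of steps.

def run_agent(policy, env, max_steps=10):
    """Drive a tool-using agent until it submits or exhausts its step budget."""
    for _ in range(max_steps):
        tool, arg = policy(env)
        if tool == "edit":
            env["files"][arg[0]] = arg[1]            # multi-file edits
        elif tool == "run_tests":
            env["tests_pass"] = env["check"](env["files"])
        elif tool == "submit":
            return env["tests_pass"]
    return False

# A stub environment: the "tests" pass once buggy.py contains the fix.
env = {
    "files": {"buggy.py": "return a - b"},
    "tests_pass": False,
    "check": lambda files: "return a + b" in files["buggy.py"],
}

# A hand-written policy standing in for the trained model: fix, verify, submit.
def policy(env):
    if "return a + b" not in env["files"]["buggy.py"]:
        return "edit", ("buggy.py", "return a + b")
    if not env["tests_pass"]:
        return "run_tests", None
    return "submit", None

ok = run_agent(policy, env)
```

The real agent's "emergent behaviors" (seeking existing tests, writing regression tests) would show up as richer policies and a larger tool set within this same loop shape.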
  • 7
    DeepScaleR

    Agentica Project

    DeepScaleR is a 1.5-billion-parameter language model fine-tuned from DeepSeek-R1-Distilled-Qwen-1.5B using distributed reinforcement learning and a novel iterative context-lengthening strategy that gradually increases its context window from 8K to 24K tokens during training. It was trained on ~40,000 carefully curated mathematical problems drawn from competition-level datasets like AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. DeepScaleR achieves 43.1% accuracy on AIME 2024, a roughly 14.3 percentage point boost over the base model, and surpasses the performance of OpenAI's proprietary o1-preview model despite its much smaller size. It also posts strong results on a suite of math benchmarks (e.g., MATH-500, AMC 2023, Minerva Math, OlympiadBench), demonstrating that small, efficient models tuned with RL can match or exceed larger baselines on reasoning tasks.
    Starting Price: Free
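The iterative context-lengthening idea can be sketched as a staged schedule: training proceeds in phases, and each phase raises the maximum context length the model sees. The stage boundaries below are illustrative assumptions, not DeepScaleR's published recipe; only the 8K-to-24K progression comes from the description above.

```python
# Hypothetical sketch of iterative context lengthening: the training run is
# split into equal stages, each using a longer maximum context window.

def context_length(step, total_steps, stages=(8_192, 16_384, 24_576)):
    """Return the context window in effect at a given training step."""
    stage_size = total_steps / len(stages)
    index = min(int(step // stage_size), len(stages) - 1)
    return stages[index]

# Early steps train at 8K tokens; later steps at 16K, then 24K.
lengths = [context_length(s, total_steps=900) for s in (0, 450, 899)]
```

Starting short keeps early RL rollouts cheap, while later long-context stages teach the model to sustain reasoning chains that would not fit in the initial window.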
  • 8
    GLM-4.6V

    Zhipu AI

    GLM-4.6V is a state-of-the-art open source multimodal vision-language model from the Z.ai (GLM-V) family designed for reasoning, perception, and action. It ships in two variants: a full-scale version (106B parameters) for cloud or high-performance clusters, and a lightweight “Flash” variant (9B) optimized for local deployment or low-latency use. GLM-4.6V supports a native context window of up to 128K tokens during training, enabling it to process very long documents or multimodal inputs. Crucially, it integrates native Function Calling, meaning the model can take images, screenshots, documents, or other visual media as input directly (without manual text conversion), reason about them, and trigger tool calls, bridging “visual perception” with “executable action.” This enables a wide spectrum of capabilities, such as interleaved image-and-text content generation (for example, combining document understanding with text summarization or generating image-annotated responses).
    Starting Price: Free
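The "visual perception to executable action" bridge rests on the model emitting a structured tool call that the host application parses and executes. A minimal dispatch sketch, with an invented JSON call format and a hypothetical `zoom_region` tool (GLM-4.6V's actual function-calling schema may differ):

```python
import json

# Hypothetical function-calling dispatch: the model emits a JSON tool call,
# and the host looks up and runs the matching function. The tool, its
# arguments, and the call format are all invented for illustration.

TOOLS = {
    "zoom_region": lambda x, y, w, h: f"cropped ({x},{y},{w},{h})",
}

def dispatch(model_output):
    """Parse a model-emitted tool call and run the matching host function."""
    call = json.loads(model_output)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "zoom_region", "arguments": {"x": 10, "y": 20, "w": 100, "h": 50}}')
```

In a real integration, the tool result would be fed back to the model as a new turn so it can continue reasoning over what the action produced.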
  • 9
    GLM-4.1V

    Zhipu AI

    GLM-4.1V is a powerful, compact vision-language model designed for reasoning and perception across images, text, and documents. The 9-billion-parameter variant (GLM-4.1V-9B-Thinking) is built on the GLM-4-9B foundation and enhanced through a specialized training paradigm using Reinforcement Learning with Curriculum Sampling (RLCS). It supports a 64K-token context window and accepts high-resolution inputs (up to 4K images, any aspect ratio), enabling it to handle complex tasks such as optical character recognition, image captioning, chart and document parsing, video and scene understanding, GUI-agent workflows (e.g., interpreting screenshots, recognizing UI elements), and general vision-language reasoning. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved top performance on 23 of 28 tasks.
    Starting Price: Free
  • 10
    GLM-4.5V-Flash
    GLM-4.5V-Flash is an open source vision-language model, designed to bring strong multimodal capabilities into a lightweight, deployable package. It supports image, video, document, and GUI inputs, enabling tasks such as scene understanding, chart and document parsing, screen reading, and multi-image analysis. Compared to larger models in the series, GLM-4.5V-Flash offers a compact footprint while retaining core VLM capabilities like visual reasoning, video understanding, GUI task handling, and complex document parsing. It can serve in “GUI agent” workflows, meaning it can interpret screenshots or desktop captures, recognize icons or UI elements, and assist with automated desktop or web-based tasks. Although it forgoes some of the largest-model performance gains, GLM-4.5V-Flash remains versatile for real-world multimodal tasks where efficiency, lower resource usage, and broad modality support are prioritized.
    Starting Price: Free
  • 11
    GLM-4.5V

    Zhipu AI

    GLM-4.5V builds on the GLM-4.5-Air foundation, using a Mixture-of-Experts (MoE) architecture with 106 billion total parameters and 12 billion activation parameters. It achieves state-of-the-art performance among open-source VLMs of similar scale across 42 public benchmarks, excelling in image, video, document, and GUI-based tasks. It supports a broad range of multimodal capabilities, including image reasoning (scene understanding, spatial recognition, multi-image analysis), video understanding (segmentation, event recognition), complex chart and long-document parsing, GUI-agent workflows (screen reading, icon recognition, desktop automation), and precise visual grounding (e.g., locating objects and returning bounding boxes). GLM-4.5V also introduces a “Thinking Mode” switch, allowing users to choose between fast responses or deeper reasoning when needed.
    Starting Price: Free
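The total-versus-active parameter split (106B total, 12B active) comes from sparse Mixture-of-Experts routing: a router scores every expert for each token, but only the top-k experts actually run. A toy sketch of that idea, in which the expert count, router scores, and expert functions are all invented for illustration:

```python
# Toy sparse Mixture-of-Experts routing: only the top-k scoring experts run
# per token, so active compute is a small fraction of total parameters.

def route_token(scores, k=2):
    """Return the indices of the top-k experts for one token's router scores."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

def moe_forward(x, experts, router, k=2):
    """Combine the outputs of only the top-k experts, weighted by router score."""
    scores = router(x)
    active = route_token(scores, k)
    total = sum(scores[i] for i in active)
    return sum(scores[i] / total * experts[i](x) for i in active)

# Eight tiny "experts": each just scales its input by a different factor.
experts = [lambda x, m=m: m * x for m in range(1, 9)]
# A fixed router for demonstration: it prefers experts 2 and 5.
router = lambda x: [0.1, 0.1, 0.5, 0.1, 0.1, 0.4, 0.1, 0.1]

y = moe_forward(3.0, experts, router, k=2)
```

With k=2 of 8 experts active, only a quarter of the expert parameters participate in any one forward pass, which is the same economy GLM-4.5V exploits at the 106B/12B scale.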
  • 12
    Grok Voice Agent
    The Grok Voice Agent API is xAI’s new developer platform for building fast, intelligent, and multilingual voice agents. It is powered by the same in-house voice technology used by Grok Voice in mobile apps and Tesla vehicles. The API enables voice agents to speak dozens of languages, call tools, and search real-time data. Grok Voice Agents are engineered for low latency, delivering audio responses in under one second. The platform ranks first on the Big Bench Audio benchmark for voice reasoning performance. Developers benefit from a simple, flat pricing model based on connection time. The Grok Voice Agent API brings production-proven voice intelligence to custom applications.
    Starting Price: $0.05 per minute
  • 13
    SAM Audio
    SAM Audio is a next-generation AI model for detailed audio segmentation and editing. It lets users isolate specific sounds from complex audio mixtures using intuitive prompts that mimic how people think about sound. You can type descriptive text (like “remove dog barking” or “keep vocals only”), click on objects in a video to pull their associated audio, or mark specific time spans where target sounds occur, all in one unified system. SAM Audio is available for experimentation and integration through Meta’s Segment Anything Playground platform, where users can upload their own audio or video files and instantly try SAM Audio’s capabilities. It’s also downloadable for use in custom audio and research workflows. Unlike traditional audio tools that focus on single, narrow tasks, SAM Audio supports multiple kinds of prompts and real-world sound environments with high accuracy.
    Starting Price: Free
  • 14
    RoBERTa
    RoBERTa builds on BERT’s language masking strategy, wherein the system learns to predict intentionally hidden sections of text within otherwise unannotated language examples. RoBERTa, which was implemented in PyTorch, modifies key hyperparameters in BERT, including removing BERT’s next-sentence pretraining objective, and training with much larger mini-batches and learning rates. This allows RoBERTa to improve on the masked language modeling objective compared with BERT and leads to better downstream task performance. We also explore training RoBERTa on an order of magnitude more data than BERT, for a longer amount of time. We used existing unannotated NLP datasets as well as CC-News, a novel set drawn from public news articles.
    Starting Price: Free
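The masking strategy can be sketched in a few lines. This is a simplified illustration, not RoBERTa's actual preprocessing: it shows only the core idea that a random subset of tokens is hidden on each pass, with a fresh mask each time a sequence is seen (RoBERTa's "dynamic masking", in contrast to BERT's fixed masks chosen once during preprocessing).

```python
import random

# Simplified masked-language-modeling setup: hide ~15% of tokens and record
# the originals as prediction targets. A new RNG per pass yields a new mask.

MASK, RATE = "[MASK]", 0.15

def dynamic_mask(tokens, rng):
    """Return (masked_tokens, labels); labels map positions to hidden tokens."""
    masked, labels = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < RATE:
            labels[i] = tok          # the model must predict this token
            masked.append(MASK)
        else:
            masked.append(tok)
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
m1, l1 = dynamic_mask(tokens, random.Random(0))
m2, l2 = dynamic_mask(tokens, random.Random(1))  # a later epoch: different mask
```

(The full BERT/RoBERTa recipe also replaces some selected tokens with random tokens or leaves them unchanged rather than always inserting `[MASK]`; that refinement is omitted here for brevity.)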
  • 15
    ESMFold
    ESMFold shows how AI can give us new tools to understand the natural world, much like the microscope, which enabled us to see into the world at an infinitesimal scale and opened up a whole new understanding of life. AI can help us understand the immense scope of natural diversity, and see biology in a new way. Much of AI research has focused on helping computers understand the world in a way similar to how humans do. The language of proteins is one that is beyond human comprehension and has eluded even the most powerful computational tools. AI has the potential to open up this language to our understanding. Studying AI in new domains such as biology can also give insight into artificial intelligence more broadly. Our work reveals connections across domains: large language models that are behind advances in machine translation, natural language understanding, speech recognition, and image generation are also able to learn deep information about biology.
    Starting Price: Free
  • 16
    XLNet

    XLNet

    XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
    Starting Price: Free
  • 17
    Hume AI

    Hume AI

    Our platform is developed in tandem with scientific innovations that reveal how people experience and express over 30 distinct emotions. Expressive understanding and communication are critical to the future of voice assistants, health tech, social networks, and much more. Applications of AI should be supported by collaborative, rigorous, and inclusive science. AI should be prevented from treating human emotion as a means to an end. The benefits of AI should be shared by people from diverse backgrounds. People affected by AI should have enough data to make decisions about its use. AI should be deployed only with the informed consent of the people whom it affects.
    Starting Price: $3/month
  • 18
    FreedomGPT

    Age of AI

    FreedomGPT is a 100% uncensored and private AI chatbot launched by Age of AI, LLC. Our VC firm invests in startups that will define the age of Artificial Intelligence, and we hold openness as core. We believe AI will dramatically improve the lives of everyone on this planet if it is deployed responsibly with individual freedom as paramount. It was created to showcase the inevitability and necessity of unbiased and censorship-free AI. Most importantly, it is 100% private. If generative AI is going to be an extension of the human psyche, it must not be involuntarily exposed to others. A central Age of AI investing thesis is that everyone and every organization will need their own private LLM. We strive to invest in companies that make this a reality across numerous industry verticals.
    Starting Price: Free
  • 19
    CodeGen

    Salesforce

    CodeGen is an open-source model for program synthesis, trained on TPU-v4 and competitive with OpenAI Codex.
    Starting Price: Free
  • 20
    StarCoder

    BigCode

    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned the StarCoderBase model on 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
    Starting Price: Free
  • 21
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
    Starting Price: Free
  • 22
    Code Llama
    Code Llama is a large language model (LLM) that can use text prompts to generate code. Code Llama is state-of-the-art for publicly available LLMs on code tasks, and has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software. Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts. Code Llama is free for research and commercial use. Code Llama is built on top of Llama 2 and is available in three models: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, which is fine-tuned for understanding natural language instructions.
    Starting Price: Free
  • 23
    ChatGPT Enterprise
    Enterprise-grade security & privacy and the most powerful version of ChatGPT yet.
    1. Customer prompts or data are not used for training models
    2. Data encryption at rest (AES-256) and in transit (TLS 1.2+)
    3. SOC 2 compliant
    4. Dedicated admin console and easy bulk member management
    5. SSO and Domain Verification
    6. Analytics dashboard to understand usage
    7. Unlimited, high-speed access to GPT-4 and Advanced Data Analysis*
    8. 32k token context windows for 4X longer inputs and memory
    9. Shareable chat templates for your company to collaborate
    Starting Price: $60/user/month
  • 24
    GPT-5

    OpenAI

    GPT-5 is OpenAI’s most advanced AI model, delivering smarter, faster, and more useful responses across a wide range of topics including math, science, finance, and law. It features built-in thinking capabilities that allow it to provide expert-level answers and perform complex reasoning. GPT-5 can handle long context lengths and generate detailed outputs, making it ideal for coding, research, and creative writing. The model includes a ‘verbosity’ parameter for customizable response length and improved personality control. It integrates with business tools like Google Drive and SharePoint to provide context-aware answers while respecting security permissions. Available to everyone, GPT-5 empowers users to collaborate with an AI assistant that feels like a knowledgeable colleague.
    Starting Price: $1.25 per 1M tokens
  • 25
    Upstage AI

    Upstage.ai

    Upstage AI builds powerful large language models and document processing engines designed to transform workflows across industries like insurance, healthcare, and finance. Their enterprise-grade AI technology delivers high accuracy and performance, enabling businesses to automate complex tasks such as claims processing, underwriting, and clinical document analysis. Key products include Solar Pro 2, a fast and grounded enterprise language model, Document Parse for converting PDFs and scans into machine-readable text, and Information Extract for precise data extraction from contracts and invoices. Upstage’s AI solutions help companies save time and reduce manual work by providing instant, accurate answers from large document sets. The platform supports flexible deployment options including cloud, on-premises, and hybrid, meeting strict compliance requirements. Trusted by global clients, Upstage continues to advance AI innovation with top conference publications and industry awards.
    Starting Price: $0.5 per 1M tokens
  • 26
    Command R+

    Cohere AI

    Command R+ is Cohere's newest large language model, optimized for conversational interaction and long-context tasks. It aims to be extremely performant, enabling companies to move beyond proof of concept and into production. We recommend using Command R+ for workflows that lean on complex RAG functionality and multi-step tool use (agents). Command R, on the other hand, is great for simpler retrieval augmented generation (RAG) and single-step tool use tasks, as well as applications where price is a major consideration.
    Starting Price: Free
  • 27
    Medical LLM

    John Snow Labs

    John Snow Labs' Medical LLM is an advanced, domain-specific large language model (LLM) designed to revolutionize the way healthcare organizations harness the power of artificial intelligence. This innovative platform is tailored specifically for the healthcare industry, combining cutting-edge natural language processing (NLP) capabilities with a deep understanding of medical terminology, clinical workflows, and regulatory requirements. The result is a powerful tool that enables healthcare providers, researchers, and administrators to unlock new insights, improve patient outcomes, and drive operational efficiency. At the heart of the Medical LLM is its comprehensive training on vast amounts of healthcare data, including clinical notes, research papers, and regulatory documents. This specialized training allows the model to accurately interpret and generate medical text, making it an invaluable asset for tasks such as clinical documentation, automated coding, and medical research.
  • 28
    TinyLlama

    TinyLlama

    The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs. We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
    Starting Price: Free
  • 29
    Pixtral Large

    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
    Starting Price: Free
  • 30
    Liquid AI

    Liquid AI

    Our goal at Liquid is to build the most capable AI systems to solve problems at every scale, such that users can build, access, and control their AI solutions. This is to ensure that AI will be meaningfully, reliably, and efficiently integrated at all enterprises. Long term, Liquid will create and deploy frontier-AI-powered solutions that are available to everyone. We build white-box models within a white-box organization.