Alternatives to GPT-5

Compare GPT-5 alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to GPT-5 in 2026. Compare features, ratings, user reviews, pricing, and more from GPT-5 competitors and alternatives in order to make an informed decision for your business.

  • 1
    ERNIE 4.5
    ERNIE 4.5 is a cutting-edge conversational AI platform developed by Baidu, leveraging advanced natural language processing (NLP) models to enable highly sophisticated human-like interactions. The platform is part of Baidu’s ERNIE (Enhanced Representation through Knowledge Integration) series, which integrates multimodal capabilities, including text, image, and voice. ERNIE 4.5 enhances the ability of AI models to understand complex context and deliver more accurate, nuanced responses, making it suitable for various applications, from customer service and virtual assistants to content creation and enterprise-level automation.
    Starting Price: $0.55 per 1M tokens
  • 2
    ERNIE 5.0
    ERNIE 5.0 is a next-generation conversational AI platform developed by Baidu, designed to deliver natural, human-like interactions across multiple domains. Built on Baidu’s Enhanced Representation through Knowledge Integration (ERNIE) framework, it fuses advanced natural language processing (NLP) with deep contextual understanding. The model supports multimodal capabilities, allowing it to process and generate text, images, and voice seamlessly. ERNIE 5.0’s refined contextual awareness enables it to handle complex conversations with greater precision and nuance. Its applications span customer service, content generation, and enterprise automation, enhancing both user engagement and productivity. With its robust architecture, ERNIE 5.0 represents a major step forward in Baidu’s pursuit of intelligent, knowledge-driven AI systems.
  • 3
    ERNIE X1
    ERNIE X1 is an advanced conversational AI model developed by Baidu as part of their ERNIE (Enhanced Representation through Knowledge Integration) series. Unlike previous versions, ERNIE X1 is designed to be more efficient in understanding and generating human-like responses. It incorporates cutting-edge machine learning techniques to handle complex queries, making it capable of not only processing text but also generating images and engaging in multimodal communication. ERNIE X1 is often used in natural language processing applications such as chatbots, virtual assistants, and enterprise automation, offering significant improvements in accuracy, contextual understanding, and response quality.
    Starting Price: $0.28 per 1M tokens
  • 4
    ERNIE X1.1
    ERNIE X1.1 is Baidu’s upgraded reasoning model that delivers major improvements over its predecessor. It achieves 34.8% higher factual accuracy, 12.5% better instruction following, and 9.6% stronger agentic capabilities compared to ERNIE X1. In benchmark testing, it surpasses DeepSeek R1-0528 and performs on par with GPT-5 and Gemini 2.5 Pro. Built on the foundation of ERNIE 4.5, it has been enhanced with extensive mid-training and post-training, including reinforcement learning. The model is available through ERNIE Bot, the Wenxiaoyan app, and Baidu’s Qianfan MaaS platform via API. These upgrades are designed to reduce hallucinations, improve reliability, and strengthen real-world AI task performance.
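    The entry above notes that ERNIE X1.1 is reachable through Baidu’s Qianfan MaaS platform via API. A minimal sketch of such a call follows, assuming Qianfan exposes an OpenAI-compatible endpoint and publishes the model under the id "ernie-x1.1"; both are assumptions to verify against Qianfan’s documentation.

```python
# Minimal sketch of calling ERNIE X1.1 through Baidu's Qianfan platform.
# The base URL and model id below are assumptions, not confirmed by this listing.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_QIANFAN_API_KEY",               # issued by the Qianfan console
    base_url="https://qianfan.baidubce.com/v2",   # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="ernie-x1.1",                           # assumed model identifier
    messages=[{"role": "user", "content": "Summarize the key risks in this contract."}],
)
print(response.choices[0].message.content)
```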
  • 5
    Claude Sonnet 4
    Claude Sonnet 4, the latest evolution of Anthropic’s language models, offers a significant upgrade in coding, reasoning, and performance. Designed for diverse use cases, Sonnet 4 builds upon the success of its predecessor, Claude Sonnet 3.7, delivering more precise responses and better task execution. With a state-of-the-art 72.7% score on SWE-bench, it stands out in agentic scenarios, offering enhanced steerability and clear reasoning capabilities. Whether handling software development, multi-feature app creation, or complex problem-solving, Claude Sonnet 4 ensures higher code quality, reduced errors, and a smoother development process.
    Starting Price: $3 / 1 million tokens (input)
  • 6
    Claude Sonnet 4.5
    Claude Sonnet 4.5 is Anthropic’s latest frontier model, designed to excel in long-horizon coding, agentic workflows, and intensive computer use while maintaining safety and alignment. It achieves state-of-the-art performance on the SWE-bench Verified benchmark (for software engineering) and leads on OSWorld (a computer use benchmark), with the ability to sustain focus over 30 hours on complex, multi-step tasks. The model introduces improvements in tool handling, memory management, and context processing, enabling more sophisticated reasoning, better domain understanding (from finance and law to STEM), and deeper code comprehension. It supports context editing and memory tools to sustain long conversations or multi-agent tasks, and allows code execution and file creation within Claude apps. Sonnet 4.5 is deployed at AI Safety Level 3 (ASL-3), with classifiers protecting against inputs or outputs tied to risky domains, and includes mitigations against prompt injection.
  • 7
    Claude Haiku 4.5
    Anthropic has launched Claude Haiku 4.5, its latest small language model designed to deliver near-frontier performance at significantly lower cost. The model provides coding and reasoning quality similar to the company’s mid-tier Sonnet 4, yet it runs at roughly one-third of the cost and more than twice the speed. In benchmarks cited by Anthropic, Haiku 4.5 meets or exceeds Sonnet 4’s performance on key tasks such as code generation and multi-step “computer use” workflows. It is optimized for real-time, low-latency scenarios such as chat assistants, customer service agents, and pair-programming support. Haiku 4.5 is made available via the Claude API under the identifier “claude-haiku-4-5” (as sketched below) and supports large-scale deployments where cost, responsiveness, and near-frontier intelligence matter. Claude Haiku 4.5 is also available now in Claude Code and Anthropic’s apps, where its efficiency lets users accomplish more within their usage limits while maintaining premium model performance.
    Starting Price: $1 per million input tokens
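    Because the listing cites the Claude API identifier “claude-haiku-4-5”, here is a minimal, illustrative call using Anthropic’s Python SDK; the prompt and output-token cap are placeholders.

```python
# Minimal sketch of calling Claude Haiku 4.5 with the Anthropic Python SDK.
# pip install anthropic; the client reads ANTHROPIC_API_KEY from the environment.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-haiku-4-5",      # identifier cited in the entry above
    max_tokens=512,                # required cap on output tokens (placeholder value)
    messages=[
        {"role": "user", "content": "Draft a friendly reply to a customer asking about refunds."}
    ],
)
print(message.content[0].text)
```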
  • 8
    Claude Opus 4.1
    Claude Opus 4.1 is an incremental upgrade to Claude Opus 4 that boosts coding, agentic reasoning, and data-analysis performance without changing deployment complexity. It raises coding accuracy to 74.5 percent on SWE-bench Verified and sharpens in-depth research and detailed tracking for agentic search tasks. GitHub reports notable gains in multi-file code refactoring, while Rakuten Group highlights its precision in pinpointing exact corrections within large codebases without introducing bugs. Independent benchmarks show about a one-standard-deviation improvement on junior developer tests compared to Opus 4, mirroring major leaps seen in prior Claude releases. Opus 4.1 is available now to paid Claude users, in Claude Code, and via the Anthropic API (model ID claude-opus-4-1-20250805), as well as through Amazon Bedrock and Google Cloud Vertex AI, and integrates seamlessly into existing workflows with no additional setup beyond selecting the new model.
  • 9
    Claude Opus 4.5
    Claude Opus 4.5 is Anthropic’s newest flagship model, delivering major improvements in reasoning, coding, agentic workflows, and real-world problem solving. It outperforms previous models and leading competitors on benchmarks such as SWE-bench, multilingual coding tests, and advanced agent evaluations. Opus 4.5 also introduces stronger safety features, including significantly higher resistance to prompt injection and improved alignment across sensitive tasks. Developers gain new controls through the Claude API—like effort parameters, context compaction, and advanced tool use—allowing for more efficient, longer-running agentic workflows. Product updates across Claude, Claude Code, the Chrome extension, and Excel integrations expand how users interact with the model for software engineering, research, and everyday productivity. Overall, Claude Opus 4.5 marks a substantial step forward in capability, reliability, and usability for developers, enterprises, and end users.
  • 10
    Composer 1
    Composer is Cursor’s custom-built agentic AI model optimized specifically for software engineering tasks and designed to power fast, interactive coding assistance directly within the Cursor IDE, a VS Code-derived editor enhanced with intelligent automation. It is a mixture-of-experts model trained with reinforcement learning (RL) on real-world coding problems across large codebases, so it can produce high-speed, context-aware responses, from code edits and planning to answers that understand project structure, tools, and conventions, with generation speeds roughly four times faster than similar models in benchmarks. Composer is specialized for development workflows, leveraging long-context understanding, semantic search, and limited tool access (like file editing and terminal commands) so it can solve complex engineering requests with efficient and practical outputs.
    Starting Price: $20 per month
  • 11
    Composer 1.5
    Composer 1.5 is the latest agentic coding model from Cursor that balances speed and intelligence for everyday code tasks by scaling reinforcement learning approximately 20x more than its predecessor, enabling stronger performance on real-world programming challenges. It’s designed as a “thinking model” that generates internal reasoning tokens to analyze a user’s codebase and plan next steps, responding quickly to simple problems and engaging deeper reasoning on complex ones, while remaining interactive and fast for daily development workflows. To handle long-running tasks, Composer 1.5 introduces self-summarization, allowing the model to compress and carry forward context when it reaches context limits, which helps maintain accuracy across varying input lengths. Internal benchmarks show it surpasses Composer 1 in coding tasks, especially on more difficult issues, making it more capable for interactive use within Cursor’s environment.
  • 12
    Command A Translate
    Command A Translate is Cohere’s enterprise-grade machine translation model crafted to deliver secure, high-quality translation across 23 business-relevant languages. Built on a powerful 111-billion-parameter architecture with an 8K-input / 8K-output context window, it achieves industry-leading performance that surpasses models like GPT-5, DeepSeek-V3, DeepL Pro, and Google Translate across a broad suite of benchmarks. The model supports private deployments for sensitive workflows, allowing enterprises full control over their data, and introduces an innovative “Deep Translation” workflow, an agentic, multi-step refinement process that iteratively enhances translation quality for complex use cases. External validation from RWS Group confirms its excellence in challenging translation tasks. Additionally, the model’s weights are available for research via Hugging Face under a CC-BY-NC license, enabling deep customization, fine-tuning, and private deployment flexibility.
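    As a rough illustration of requesting a translation programmatically, the sketch below uses Cohere’s Python SDK (v2 chat API); the model identifier shown is an assumption to check against Cohere’s documentation, and the source sentence is a placeholder.

```python
# Illustrative sketch of a translation request via Cohere's Python SDK (pip install cohere).
# The model id below is an assumption, not confirmed by this listing.
import cohere

co = cohere.ClientV2(api_key="YOUR_COHERE_API_KEY")

res = co.chat(
    model="command-a-translate-08-2025",   # assumed model identifier
    messages=[{
        "role": "user",
        "content": "Translate into German: 'The invoice is due within 30 days of receipt.'",
    }],
)
print(res.message.content[0].text)
```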
  • 13
    DeepSeek R1
    DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API, it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture-of-experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
  • 14
    DeepSeek R2
    DeepSeek R2 is the anticipated successor to DeepSeek R1, a groundbreaking AI reasoning model launched in January 2025 by the Chinese AI startup DeepSeek. Building on R1’s success, which disrupted the AI industry with its cost-effective performance rivaling top-tier models like OpenAI’s o1, R2 promises a quantum leap in capabilities. It is expected to deliver exceptional speed and human-like reasoning, excelling in complex tasks such as advanced coding and high-level mathematical problem-solving. Leveraging DeepSeek’s innovative Mixture-of-Experts architecture and efficient training methods, R2 aims to outperform its predecessor while maintaining a low computational footprint, potentially expanding its reasoning abilities to languages beyond English.
    Starting Price: Free
  • 15
    DeepSeek V3.1
    DeepSeek V3.1 is a groundbreaking open-weight large language model featuring 685 billion parameters and an extended 128,000-token context window, enabling it to process documents equivalent to 400-page books in a single prompt. It delivers integrated capabilities for chat, reasoning, and code generation within a unified hybrid architecture, seamlessly blending these functions into one coherent model. V3.1 supports a variety of tensor formats to give developers flexibility in optimizing performance across different hardware. Early benchmark results show robust performance, including a 71.6% score on the Aider coding benchmark, putting it on par with or ahead of systems like Claude Opus 4 at a far lower cost. Made available under an open source license on Hugging Face with minimal fanfare, DeepSeek V3.1 is poised to reshape access to high-performance AI, challenging traditional proprietary models.
    Starting Price: Free
  • 16
    DeepSeek-V3.2
    DeepSeek-V3.2 is a next-generation open large language model designed for efficient reasoning, complex problem solving, and advanced agentic behavior. It introduces DeepSeek Sparse Attention (DSA), a long-context attention mechanism that dramatically reduces computation while preserving performance. The model is trained with a scalable reinforcement learning framework, allowing it to achieve results competitive with GPT-5 and even surpass it in its Speciale variant. DeepSeek-V3.2 also includes a large-scale agent task synthesis pipeline that generates structured reasoning and tool-use demonstrations for post-training. The model features an updated chat template with new tool-calling logic and the optional developer role for agent workflows. With gold-medal performance in the IMO and IOI 2025 competitions, DeepSeek-V3.2 demonstrates elite reasoning capabilities for both research and applied AI scenarios.
    Starting Price: Free
  • 17
    DeepSeek-V3.2-Exp
    DeepSeek-V3.2-Exp is DeepSeek’s latest experimental model, built on V3.1-Terminus and debuting DeepSeek Sparse Attention (DSA) for faster, more efficient inference and training on long contexts. DSA enables fine-grained sparse attention with minimal loss in output quality, boosting performance for long-context tasks while reducing compute costs. Benchmarks indicate that V3.2-Exp performs on par with V3.1-Terminus despite these efficiency gains. The model is now live across the app, web, and API (a minimal API sketch follows below). Alongside the release, DeepSeek API prices were cut by more than 50%, effective immediately, to make access more affordable. For a transitional period, users can still access V3.1-Terminus via a temporary API endpoint until October 15, 2025. DeepSeek welcomes feedback on DSA via its feedback portal. DeepSeek-V3.2-Exp has also been open-sourced: the model weights and supporting technology (including key GPU kernels in TileLang and CUDA) are available on Hugging Face.
    Starting Price: Free
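    A minimal sketch of calling the model over the DeepSeek API, assuming the OpenAI-compatible endpoint at https://api.deepseek.com and that the general-purpose “deepseek-chat” alias currently routes to V3.2-Exp; both points should be verified against DeepSeek’s API documentation.

```python
# Minimal sketch of a DeepSeek API call; base URL and model alias are assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com",   # assumed OpenAI-compatible base URL
)

response = client.chat.completions.create(
    model="deepseek-chat",                  # assumed alias for the current release
    messages=[{"role": "user", "content": "Explain sparse attention in two sentences."}],
)
print(response.choices[0].message.content)
```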
  • 18
    DeepSeek-V3.2-Speciale
    DeepSeek-V3.2-Speciale is a high-compute variant of the DeepSeek-V3.2 model, created specifically for deep reasoning and advanced problem-solving tasks. It builds on DeepSeek Sparse Attention (DSA), a custom long-context attention mechanism that reduces computational overhead while preserving high performance. Through a large-scale reinforcement learning framework and extensive post-training compute, the Speciale variant surpasses GPT-5 on reasoning benchmarks and matches the capabilities of Gemini-3.0-Pro. The model achieved gold-medal performance in the International Mathematical Olympiad (IMO) 2025 and International Olympiad in Informatics (IOI) 2025. DeepSeek-V3.2-Speciale does not support tool-calling, making it purely optimized for uninterrupted reasoning and analytical accuracy. Released under the MIT license, it provides researchers and developers an open, state-of-the-art model focused entirely on high-precision reasoning.
    Starting Price: Free
  • 19
    Grok 3
    Grok-3, developed by xAI, represents a significant advancement in the field of artificial intelligence, aiming to set new benchmarks in AI capabilities. It is designed to be a multimodal AI, capable of processing and understanding data from various sources including text, images, and audio, which allows for a more integrated and comprehensive interaction with users. Grok-3 is built on an unprecedented scale, with training involving ten times more computational resources than its predecessor, leveraging 100,000 Nvidia H100 GPUs on the Colossus supercomputer. This extensive computational power is expected to enhance Grok-3's performance in areas like reasoning, coding, and real-time analysis of current events through direct access to X posts. The model is anticipated to outperform not only its earlier versions but also compete with other leading AI models in the generative AI landscape.
  • 20
    Grok 3 Think
    Grok 3 Think, the latest iteration of xAI's AI model, is designed to enhance reasoning capabilities using advanced reinforcement learning. It can think through complex problems for extended periods, from seconds to minutes, improving its answers by backtracking, exploring alternatives, and refining its approach. This model, trained on an unprecedented scale, delivers remarkable performance in tasks such as mathematics, coding, and world knowledge, showing impressive results in competitions like the American Invitational Mathematics Examination. Grok 3 Think not only provides accurate solutions but also offers transparency by allowing users to inspect the reasoning behind its decisions, setting a new standard for AI problem-solving.
  • 21
    Grok 4
    Grok 4 is the latest AI model from Elon Musk’s xAI, marking a significant advancement in AI reasoning and natural language understanding. Developed on the Colossus supercomputer, Grok 4 supports multimodal inputs including text and images, with plans to add video capabilities soon. It features enhanced precision in language tasks and has demonstrated superior performance in scientific reasoning and visual problem-solving compared to other leading AI models. Designed for developers, researchers, and technical users, Grok 4 offers powerful tools for complex tasks. The model incorporates improved moderation to address previous concerns about biased or problematic outputs. Grok 4 represents a major leap forward in AI’s ability to understand and generate human-like responses.
  • 22
    Grok 4 Fast
    Grok 4 Fast is the latest AI model from xAI, engineered to deliver rapid and efficient query processing. It improves upon earlier versions with faster response times, lower latency, and higher accuracy across a variety of topics. With enhanced natural language understanding, the model excels in both casual conversation and complex problem-solving. A key feature is its real-time data analysis capability, ensuring users receive up-to-date insights when needed. Grok 4 Fast is accessible across multiple platforms, including Grok, X, and mobile apps for iOS and Android. By combining speed, reliability, and scalability, it offers an ideal solution for anyone seeking instant, intelligent answers.
  • 23
    Grok 4 Heavy
    Grok 4 Heavy is the most powerful AI model offered by xAI, designed as a multi-agent system to deliver cutting-edge reasoning and intelligence. Built on the Colossus supercomputer, it achieves a 50% score on the challenging HLE benchmark, outperforming many competitors. This advanced model supports multimodal inputs including text and images, with plans to add video capabilities. Grok 4 Heavy targets power users such as developers, researchers, and technical enthusiasts who require top-tier AI performance. Access is provided through the premium “SuperGrok Heavy” subscription priced at $300 per month. xAI has enhanced moderation and removed problematic system prompts to ensure responsible and ethical AI use.
  • 24
    Grok 4.1
    Grok 4.1 is an advanced AI model developed by Elon Musk’s xAI, designed to push the limits of reasoning and natural language understanding. Built on the powerful Colossus supercomputer, it processes multimodal inputs including text and images, with upcoming support for video. The model delivers exceptional accuracy in scientific, technical, and linguistic tasks. Its architecture enables complex reasoning and nuanced response generation that rivals the best AI systems in the world. Enhanced moderation ensures more responsible and unbiased outputs than earlier versions. Grok 4.1 is a breakthrough in creating AI that can think, interpret, and respond more like a human.
  • 25
    Grok 4.1 Fast
    Grok 4.1 Fast is the newest xAI model designed to deliver advanced tool-calling capabilities with a massive 2-million-token context window. It excels at complex real-world tasks such as customer support, finance, troubleshooting, and dynamic agent workflows. The model pairs seamlessly with the new Agent Tools API, which enables real-time web search, X search, file retrieval, and secure code execution. This combination gives developers the power to build fully autonomous, production-grade agents that plan, reason, and use tools effectively. Grok 4.1 Fast is trained with long-horizon reinforcement learning, ensuring stable multi-turn accuracy even across extremely long prompts. With its speed, cost-efficiency, and high benchmark scores, it sets a new standard for scalable enterprise-grade AI agents.
  • 26
    Grok 4.20
    Grok 4.20 is an advanced artificial intelligence model developed by xAI to elevate reasoning and natural language understanding. Built on the high-performance Colossus supercomputer, it is engineered for speed, scale, and accuracy. Grok 4.20 processes multimodal inputs such as text and images, with video support planned for future releases. The model excels in scientific, technical, and linguistic tasks, delivering highly precise and context-aware responses. Its architecture supports deep reasoning and sophisticated problem-solving capabilities. Enhanced moderation improves output reliability and reduces bias compared to earlier versions. Overall, Grok 4.20 represents a significant step toward more human-like AI reasoning and interpretation.
  • 27
    Grok Code Fast 1
    Grok Code Fast 1 is a high-speed, economical reasoning model designed specifically for agentic coding workflows. Unlike traditional models that can feel slow in tool-based loops, it delivers near-instant responses, excelling in everyday software development tasks. Built from scratch with a programming-rich corpus and refined on real-world pull requests, it supports languages like TypeScript, Python, Java, Rust, C++, and Go. Developers can use it for everything from zero-to-one project building to precise bug fixes and codebase Q&A. With optimized inference and caching techniques, it achieves impressive responsiveness and a 90%+ cache hit rate when integrated with partners like GitHub Copilot, Cursor, and Cline. Offered at just $0.20 per million input tokens and $1.50 per million output tokens, Grok Code Fast 1 strikes a strong balance between speed, performance, and affordability.
    Starting Price: $0.20 per million input tokens
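    For developers calling the model directly rather than through a partner IDE, a streaming sketch over xAI’s API follows; the base URL https://api.x.ai/v1 and the model id “grok-code-fast-1” are assumptions to confirm in xAI’s documentation.

```python
# Streaming sketch against xAI's API; endpoint and model id are assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_XAI_API_KEY",
    base_url="https://api.x.ai/v1",          # assumed OpenAI-compatible base URL
)

stream = client.chat.completions.create(
    model="grok-code-fast-1",                # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise pair programmer."},
        {"role": "user", "content": "Write a Python function that deduplicates a list while preserving order."},
    ],
    stream=True,                             # stream tokens to match the low-latency use case
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```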
  • 28
    MAI-1-preview
    MAI-1 Preview is Microsoft AI’s first end-to-end trained foundation model, built entirely in-house as a mixture-of-experts architecture. Pre-trained and post-trained on approximately 15,000 NVIDIA H100 GPUs, it is designed to follow instructions and generate helpful, responsive text for everyday user queries, representing a prototype of future Copilot capabilities. Now available for public testing on LMArena, MAI-1 Preview delivers an early glimpse into the platform’s trajectory, with plans to roll out select text-based applications within Copilot over the coming weeks to gather user feedback and refine performance. Microsoft emphasizes that it will continue combining its own models, partner models, and developments from the open-source community to flexibly power experiences across millions of unique interactions each day.
  • 29
    Hermes 4
    Hermes 4 is the latest evolution in Nous Research’s line of neutrally aligned, steerable foundational models, featuring novel hybrid reasoners that can dynamically shift between expressive, creative responses and efficient, standard replies based on user prompts. The model is designed to respond to system and user instructions, rather than adhering to any corporate ethics framework, producing interactions that feel more humanistic, less lecturing or sycophantic, and encouraging roleplay and creativity. By incorporating a special tag in prompts, users can trigger deeper, internally token-intensive reasoning when tackling complex problems, while retaining prompt efficiency when such depth isn't required. Trained on a dataset 50 times larger than that of Hermes 3, much of which was synthetically generated using Atropos, Hermes 4 shows significant performance improvements.
    Starting Price: Free
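    Since Hermes 4 is released as open weights, a local-inference sketch with Hugging Face transformers follows; the repository id is an assumption (Nous Research publishes several sizes), and the larger checkpoints require multi-GPU hardware.

```python
# Sketch of running a Hermes 4 checkpoint locally with Hugging Face transformers.
# pip install transformers accelerate; the repo id below is an assumption -- verify on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "NousResearch/Hermes-4-70B"     # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype="auto")

# Hermes models are steered through the system prompt rather than a fixed alignment layer.
messages = [
    {"role": "system", "content": "You are a helpful, direct assistant."},
    {"role": "user", "content": "Outline a short fantasy scene in three sentences."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```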
  • 30
    Gemini 2.5 Pro
    Gemini 2.5 Pro is an advanced AI model designed to handle complex tasks with enhanced reasoning and coding capabilities. It leads common benchmarks and excels in math, science, and coding, demonstrating strong performance in tasks like web app creation and code transformation. Built on the Gemini 2.5 foundation, it features a 1 million token context window, enabling it to process vast datasets from various sources such as text, images, and code repositories. Available now in Google AI Studio, Gemini 2.5 Pro is optimized for more sophisticated applications and supports advanced users with improved performance for complex problem-solving.
    Starting Price: $19.99/month
  • 31
    Gemini 3 Pro
    Gemini 3 Pro is Google’s most advanced multimodal AI model, built for developers who want to bring ideas to life with intelligence, precision, and creativity. It delivers breakthrough performance across reasoning, coding, and multimodal understanding—surpassing Gemini 2.5 Pro in both speed and capability. The model excels in agentic workflows, enabling autonomous coding, debugging, and refactoring across entire projects with long-context awareness. With superior performance in image, video, and spatial reasoning, Gemini 3 Pro powers next-generation applications in development, robotics, XR, and document intelligence. Developers can access it through the Gemini API, Google AI Studio, or Vertex AI, integrating seamlessly into existing tools and IDEs. Whether generating code, analyzing visuals, or building interactive apps from a single prompt, Gemini 3 Pro represents the future of intelligent, multimodal AI development.
    Starting Price: $19.99/month
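    A minimal sketch of calling the model through the Gemini API with the google-genai Python SDK follows; the model identifier “gemini-3-pro-preview” is an assumption, so check Google AI Studio for the exact string.

```python
# Minimal sketch of a Gemini API call with the google-genai SDK (pip install google-genai).
# The model id is an assumption; the prompt is a placeholder.
from google import genai

client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

response = client.models.generate_content(
    model="gemini-3-pro-preview",           # assumed model identifier
    contents="Review this function for bugs: def add(a, b): return a - b",
)
print(response.text)
```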
  • 32
    Genie 3
    Genie 3 is DeepMind’s next-generation, general-purpose world model capable of generating richly interactive 3D environments in real time at 24 frames per second and 720p resolution that remain consistent for several minutes. Prompted by text input, the system constructs dynamic virtual worlds where users (or embodied agents) can navigate and interact with natural phenomena from multiple perspectives, like first-person or isometric. A standout feature is its emergent long-horizon visual memory: Genie 3 maintains environmental consistency over extended durations, preserving off-screen elements and spatial coherence across revisits. It also supports “promptable world events,” enabling users to modify scenes, such as changing weather or introducing new objects, on the fly. Designed to support embodied agent research, Genie 3 seamlessly integrates with agents like SIMA, facilitating goal-based navigation and complex task accomplishment.
  • 33
    GLM-4.1V
    GLM-4.1V is Zhipu AI’s powerful, compact multimodal vision-language model, designed for reasoning and perception across images, text, and documents. The 9-billion-parameter variant (GLM-4.1V-9B-Thinking) is built on the GLM-4-9B foundation and enhanced through a specialized training paradigm using Reinforcement Learning with Curriculum Sampling (RLCS). It supports a 64k-token context window and accepts high-resolution inputs (up to 4K resolution, any aspect ratio), enabling it to handle complex tasks such as optical character recognition, image captioning, chart and document parsing, video and scene understanding, GUI-agent workflows (e.g., interpreting screenshots and recognizing UI elements), and general vision-language reasoning. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved top performance on 23 of 28 tasks.
    Starting Price: Free
  • 34
    GLM-4.5V
    GLM-4.5V is Zhipu AI’s vision-language model built on the GLM-4.5-Air foundation, using a Mixture-of-Experts (MoE) architecture with 106 billion total parameters and 12 billion active parameters. It achieves state-of-the-art performance among open-source VLMs of similar scale across 42 public benchmarks, excelling in image, video, document, and GUI-based tasks. It supports a broad range of multimodal capabilities, including image reasoning (scene understanding, spatial recognition, multi-image analysis), video understanding (segmentation, event recognition), complex chart and long-document parsing, GUI-agent workflows (screen reading, icon recognition, desktop automation), and precise visual grounding (e.g., locating objects and returning bounding boxes). GLM-4.5V also introduces a “Thinking Mode” switch, allowing users to choose between fast responses and deeper reasoning when needed.
    Starting Price: Free
  • 35
    GLM-4.5V-Flash
    GLM-4.5V-Flash is an open source vision-language model, designed to bring strong multimodal capabilities into a lightweight, deployable package. It supports image, video, document, and GUI inputs, enabling tasks such as scene understanding, chart and document parsing, screen reading, and multi-image analysis. Compared to larger models in the series, GLM-4.5V-Flash offers a compact footprint while retaining core VLM capabilities like visual reasoning, video understanding, GUI task handling, and complex document parsing. It can serve in “GUI agent” workflows, meaning it can interpret screenshots or desktop captures, recognize icons or UI elements, and assist with automated desktop or web-based tasks. Although it forgoes some of the largest-model performance gains, GLM-4.5V-Flash remains versatile for real-world multimodal tasks where efficiency, lower resource usage, and broad modality support are prioritized.
    Starting Price: Free
  • 36
    GLM-4.6
    GLM-4.6 is Zhipu AI’s successor to GLM-4.5, advancing on its predecessor with stronger reasoning, coding, and agentic capabilities: it demonstrates clear improvements in inferential performance, supports tool use during inference, and integrates more effectively into agent frameworks. In benchmark tests spanning reasoning, coding, and agents, GLM-4.6 outperforms GLM-4.5 and shows competitive strength against models such as DeepSeek-V3.2-Exp and Claude Sonnet 4, though it still trails Claude Sonnet 4.5 in pure coding performance. In real-world tests using an extended “CC-Bench” suite across front-end development, tool building, data analysis, and algorithmic tasks, GLM-4.6 beats GLM-4.5 and approaches parity with Claude Sonnet 4, winning ~48.6% of head-to-head comparisons while also achieving ~15% better token efficiency. GLM-4.6 is available via the Z.ai API, and developers can integrate it as an LLM backend or agent core using the platform’s API (see the sketch below).
    Starting Price: Free
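    The sketch below shows GLM-4.6 wired up as an agent core with a single tool, assuming the Z.ai API is OpenAI-compatible; the base URL, model id, and the run_sql tool are all assumptions for illustration.

```python
# Sketch of using GLM-4.6 as an agent core with one OpenAI-style tool definition.
# Base URL, model id, and the run_sql tool are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ZAI_API_KEY",
    base_url="https://api.z.ai/api/paas/v4",   # assumed OpenAI-compatible base URL
)

tools = [{
    "type": "function",
    "function": {
        "name": "run_sql",                     # hypothetical tool for illustration
        "description": "Run a read-only SQL query and return the rows.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6",                            # assumed model identifier
    messages=[{"role": "user", "content": "How many orders were placed last week?"}],
    tools=tools,
)
# Prints either the requested tool call or a plain text answer.
print(response.choices[0].message.tool_calls or response.choices[0].message.content)
```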
  • 37
    GLM-4.6V
    GLM-4.6V is a state-of-the-art open source multimodal vision-language model from Zhipu AI’s Z.ai (GLM-V) family, designed for reasoning, perception, and action. It ships in two variants: a full-scale version (106B parameters) for cloud or high-performance clusters, and a lightweight “Flash” variant (9B) optimized for local deployment or low-latency use. GLM-4.6V supports a native context window of up to 128K tokens during training, enabling it to process very long documents or multimodal inputs. Crucially, it integrates native Function Calling, meaning the model can take images, screenshots, documents, or other visual media as input directly (without manual text conversion), reason about them, and trigger tool calls, bridging “visual perception” with “executable action.” This enables a wide spectrum of capabilities, such as interleaved image-and-text content generation (for example, combining document understanding with text summarization or generating image-annotated responses).
    Starting Price: Free
  • 38
    GPT-5-Codex
    GPT-5-Codex is a version of GPT-5 further optimized for agentic coding within Codex, focusing on real-world software engineering tasks (building full projects from scratch, adding features and tests, debugging, large-scale refactors, and code reviews). Codex now moves faster, is more reliable, and works better in real time across development environments, whether in the terminal/CLI, the IDE extension, on the web, in GitHub, or even on mobile. GPT-5-Codex is the default model for cloud tasks and code review; developers can also opt to use it locally via the Codex CLI or the IDE extension. It dynamically adjusts how much “reasoning time” it spends depending on task complexity: small, well-defined tasks are fast and snappy, while more complex ones (refactors, large feature work) get more sustained effort. Code review is stronger, catching critical bugs before they ship.
  • 39
    GPT-4.1
    GPT-4.1 is an advanced AI model from OpenAI, designed to enhance performance across key tasks such as coding, instruction following, and long-context comprehension. With a large context window of up to 1 million tokens, GPT-4.1 can process and understand extensive datasets, making it ideal for tasks like software development, document analysis, and AI agent workflows. Available through the API, GPT-4.1 offers significant improvements over previous models, excelling at real-world applications where efficiency and accuracy are crucial.
    Starting Price: $2 per 1M tokens (input)
  • 40
    GPT-4.5
    GPT-4.5 is a powerful AI model from OpenAI that improves upon its predecessor by scaling unsupervised learning, enhancing reasoning abilities, and offering improved collaboration capabilities. Designed to better understand human intent and collaborate in more natural, intuitive ways, GPT-4.5 delivers higher accuracy and lower hallucination rates across a broad range of topics. Its advanced capabilities enable it to generate creative and insightful content, solve complex problems, and assist with tasks in writing, design, and even space exploration. With improved AI-human interactions, GPT-4.5 is optimized for practical applications, making it more accessible and reliable for businesses and developers.
    Starting Price: $75.00 / 1M tokens
  • 41
    GPT-5-Codex-Mini
    GPT-5-Codex-Mini is a compact and cost-efficient version of GPT-5-Codex designed to deliver roughly four times more usage with only a slight tradeoff in capability. It’s optimized for handling routine or lighter programming tasks while maintaining reliable output quality. Developers can access it through the CLI and IDE extension by signing in with ChatGPT, with API access coming soon. The system automatically suggests switching to GPT-5-Codex-Mini when users near 90% of their rate limits, helping extend uninterrupted usage. ChatGPT Plus, Business, and Edu users receive 50% higher rate limits, offering more flexibility for frequent workflows. Pro and Enterprise accounts are prioritized for faster processing, ensuring smoother, high-speed performance across larger workloads.
  • 42
    GPT-5 mini
    GPT-5 mini is a streamlined, faster, and more affordable variant of OpenAI’s GPT-5, optimized for well-defined tasks and precise prompts. It supports text and image inputs and delivers high-quality text outputs with a 400,000-token context window and up to 128,000 output tokens. This model excels at rapid response times, making it suitable for applications requiring fast, accurate language understanding without the full overhead of GPT-5. Pricing is cost-effective, with input tokens at $0.25 per million and output tokens at $2 per million, providing savings over the flagship model. GPT-5 mini supports advanced features like streaming, function calling, structured outputs, and fine-tuning, but does not support audio input or image generation. It integrates well with various API endpoints including chat completions, responses, and embeddings, making it versatile for many AI-powered tasks.
    Starting Price: $0.25 per 1M tokens
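    A minimal sketch of calling GPT-5 mini through the Responses API, one of the endpoints mentioned above; the classification prompt is just a placeholder.

```python
# Minimal sketch of a GPT-5 mini call via OpenAI's Responses API.
# pip install openai; the client reads OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5-mini",
    input="Classify the sentiment of this review as positive, negative, or neutral: "
          "'Shipping was slow but the product works great.'",
)
print(response.output_text)
```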
  • 43
    GPT-5 nano
    GPT-5 nano is OpenAI’s fastest and most affordable version of the GPT-5 family, designed for high-speed text processing tasks like summarization and classification. It supports text and image inputs, generating high-quality text outputs with a large 400,000-token context window and up to 128,000 output tokens. GPT-5 nano offers very fast response times, making it ideal for applications requiring quick turnaround without sacrificing quality. Pricing is extremely competitive, with input tokens costing $0.05 per million and output tokens $0.40 per million, making it accessible for budget-conscious projects. The model supports advanced API features such as streaming, function calling, structured outputs, and fine-tuning. While it supports image input, it does not handle audio input or web search, focusing on core text tasks efficiently.
    Starting Price: $0.05 per 1M tokens
  • 44
    GPT-5 pro
    GPT-5 Pro is OpenAI’s most advanced AI model, designed to tackle the most complex and challenging tasks with extended reasoning capabilities. It builds on GPT-5’s unified architecture, using scaled, efficient parallel compute to provide highly comprehensive and accurate responses. GPT-5 Pro achieves state-of-the-art performance on difficult benchmarks like GPQA, excelling in areas such as health, science, math, and coding. It makes significantly fewer errors than earlier models and delivers responses that experts find more relevant and useful. The model automatically balances quick answers and deep thinking, allowing users to get expert-level insights efficiently. GPT-5 Pro is available to Pro subscribers and powers some of the most demanding applications requiring advanced intelligence.
  • 45
    GPT-5 thinking
    GPT-5 Thinking is the deeper reasoning mode within the GPT-5 unified AI system, designed to tackle complex, open-ended problems that require extended cognitive effort. It works alongside the faster GPT-5 model, dynamically engaging when queries demand more detailed analysis and thoughtful responses. This mode significantly reduces hallucinations and improves factual accuracy, producing more reliable answers on challenging topics like science, math, coding, and health. GPT-5 Thinking is also better at recognizing its own limitations, communicating clearly when tasks are impossible or underspecified. It incorporates advanced safety features to minimize harmful outputs and provide nuanced, helpful answers even in ambiguous or sensitive contexts. Available to all users, it helps bring expert-level intelligence to everyday and advanced use cases alike.
  • 46
    GPT-5.1
    GPT-5.1 is the latest update in OpenAI’s GPT-5 series, designed to make ChatGPT dramatically smarter and more conversational. The release introduces two distinct model variants: GPT-5.1 Instant, which is described as the most-used model and is now warmer, better at following instructions, and more intelligent; and GPT-5.1 Thinking, the advanced reasoning engine, which has been tuned to be easier to understand, faster on straightforward tasks, and more persistent on complex ones. Users’ queries are now routed automatically to the variant best suited to the task. The update emphasizes not just improved raw intelligence but also an enhanced communication style; the models are tuned to be more natural, enjoyable to talk to, and better aligned with user intent. The system card addendum notes that GPT-5.1 Instant uses “adaptive reasoning” that lets it decide when to think more deeply before responding, while GPT-5.1 Thinking adapts its thinking time to the question at hand.
  • 47
    GPT-5.2 Instant
    GPT-5.2 Instant is the fast, capable variant of OpenAI’s GPT-5.2 model family designed for everyday work and learning with clear improvements in information-seeking questions, how-tos and walkthroughs, technical writing, and translation compared to prior versions. It builds on the warmer conversational tone introduced in GPT-5.1 Instant and produces clearer explanations that surface key information upfront, making it easier for users to get concise, accurate answers quickly. GPT-5.2 Instant delivers speed and responsiveness for typical tasks like answering queries, generating summaries, assisting with research, and helping with writing and editing, while incorporating broader enhancements from the GPT-5.2 series in reasoning, long-context handling, and factual grounding. As part of the GPT-5.2 lineup, it shares the same foundational improvements that boost overall reliability and performance across a wide range of everyday activities.
  • 48
    GPT-5.2 Pro
    GPT-5.2 Pro is the highest-capability variant of OpenAI’s latest GPT-5.2 model family, built to deliver professional-grade reasoning, complex task performance, and enhanced accuracy for demanding knowledge work, creative problem-solving, and enterprise-level applications. It builds on the foundational improvements of GPT-5.2, including stronger general intelligence, superior long-context understanding, better factual grounding, and improved tool use, while using more compute and deeper processing to produce more thoughtful, reliable, and context-rich responses for users with intricate, multi-step requirements. GPT-5.2 Pro is designed to handle challenging workflows such as advanced coding and debugging, deep data analysis, research synthesis, extensive document comprehension, and complex project planning with greater precision and fewer errors than lighter variants.
  • 49
    GPT-5.2 Thinking
    GPT-5.2 Thinking is the extended-reasoning configuration in OpenAI’s GPT-5.2 model family, engineered for deep, expert-level reasoning, complex task execution, and advanced problem solving across long contexts and professional domains. Built on the foundational GPT-5.2 architecture with improvements in grounding, stability, and reasoning quality, this variant applies more compute and reasoning effort to generate responses that are more accurate, structured, and contextually rich when handling highly intricate workflows, multi-step analysis, and domain-specific challenges. GPT-5.2 Thinking excels at tasks that require sustained logical coherence, such as detailed research synthesis, advanced coding and debugging, complex data interpretation, strategic planning, and sophisticated technical writing, and it outperforms lighter variants on benchmarks that test professional skills and deep comprehension.
  • 50
    Kimi K2
    Kimi K2 is a state-of-the-art open source large language model series from Moonshot AI, built on a mixture-of-experts (MoE) architecture featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants: Kimi-K2-Base for research-level fine-tuning, and Kimi-K2-Instruct, post-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while its 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines make it straightforward to deploy in existing stacks (see the sketch below).
    Starting Price: Free
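    The sketch below queries a locally served Kimi-K2-Instruct through an OpenAI-compatible inference server; the Hugging Face repository id and the vLLM serving setup are assumptions, and a 1T-parameter MoE of this size requires a multi-GPU node.

```python
# Sketch of querying a locally served Kimi-K2-Instruct via an OpenAI-compatible endpoint.
# Assumes a vLLM server started beforehand, e.g.:  vllm serve moonshotai/Kimi-K2-Instruct
# The repo id and served model name are assumptions; this is illustrative, not a sizing guide.
from openai import OpenAI

client = OpenAI(
    api_key="not-needed-for-local",          # local vLLM servers accept any key unless one is configured
    base_url="http://localhost:8000/v1",     # vLLM's OpenAI-compatible endpoint (default port)
)

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",     # assumed repo id / served model name
    messages=[{"role": "user", "content": "Break down the steps to migrate a cron job to Airflow."}],
)
print(response.choices[0].message.content)
```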