Alternatives to Yi-Large

Compare Yi-Large alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Yi-Large in 2026. Compare features, ratings, user reviews, pricing, and more from Yi-Large competitors and alternatives in order to make an informed decision for your business.

  • 1
    Yi-Lightning

    01.AI

    Yi-Lightning, developed by 01.AI under the leadership of Kai-Fu Lee, represents the latest advancement in large language models with a focus on high performance and cost-efficiency. It boasts a maximum context length of 16K tokens and is priced at $0.14 per million tokens for both input and output, making it remarkably competitive. Yi-Lightning leverages an enhanced Mixture-of-Experts (MoE) architecture, incorporating fine-grained expert segmentation and advanced routing strategies, which contribute to its efficiency in training and inference. This model has excelled in various domains, achieving top rankings in categories like Chinese, math, coding, and hard prompts on the chatbot arena, where it secured the 6th position overall and 9th in style control. Its development included comprehensive pre-training, supervised fine-tuning, and reinforcement learning from human feedback, ensuring both performance and safety, with optimizations in memory usage and inference speed.
  • 2
    CodeQwen

    Alibaba

    CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a transformer-based, decoder-only language model pre-trained on a large volume of code data, offering strong code generation capabilities and competitive performance across a series of benchmarks. It supports long-context understanding and generation with a context length of 64K tokens, covers 92 coding languages, and performs well on text-to-SQL, bug fixing, and similar tasks. You can chat with CodeQwen in just a few lines of transformers code: build the tokenizer and the model with the from_pretrained methods, then call the generate method using the chat template provided by the tokenizer. Chat models apply the ChatML template, following our previous practice, and the model completes code snippets according to the given prompts without any additional formatting, as sketched below.
    Starting Price: Free
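    A minimal chat sketch following the description above, assuming the Hugging Face repo id Qwen/CodeQwen1.5-7B-Chat (verify the exact model name and hardware requirements on the Qwen hub):

    ```python
    # Load the tokenizer and model with from_pretrained, then chat via the
    # tokenizer's built-in ChatML template and the generate method.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/CodeQwen1.5-7B-Chat"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```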
  • 3
    Mistral Large 2
    Mistral AI has launched the Mistral Large 2, an advanced AI model designed to excel in code generation, multilingual capabilities, and complex reasoning tasks. The model features a 128k context window, supporting dozens of languages including English, French, Spanish, and Arabic, as well as over 80 programming languages. Mistral Large 2 is tailored for high-throughput single-node inference, making it ideal for large-context applications. Its improved performance on benchmarks like MMLU and its enhanced code generation and reasoning abilities ensure accuracy and efficiency. The model also incorporates better function calling and retrieval, supporting complex business applications.
    Starting Price: Free
  • 4
    PaLM 2

    Google

    PaLM 2 is our next generation large language model that builds on Google’s legacy of breakthrough research in machine learning and responsible AI. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation, performing better than our previous state-of-the-art LLMs, including PaLM. It can accomplish these tasks because of the way it was built, bringing together compute-optimal scaling, an improved dataset mixture, and model architecture improvements. PaLM 2 is grounded in Google’s approach to building and deploying AI responsibly. It was evaluated rigorously for its potential harms and biases, capabilities and downstream uses in research and in-product applications. It’s being used in other state-of-the-art models, like Med-PaLM 2 and Sec-PaLM, and is powering generative AI features and tools at Google, like Bard and the PaLM API.
  • 5
    Qwen2

    Alibaba

    Qwen2 is a series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base language models and instruction-tuned models, ranging from 0.5 billion to 72 billion parameters, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass most previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a broad spectrum of benchmarks in language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning.
    Starting Price: Free
  • 6
    Granite Code
    We introduce the Granite series of decoder-only code models for code generative tasks (e.g., fixing bugs, explaining code, documenting code), trained with code written in 116 programming languages. A comprehensive evaluation of the Granite Code model family on diverse tasks demonstrates that our models consistently reach state-of-the-art performance among available open source code LLMs. The key advantages of Granite Code models include: All-rounder Code LLM: Granite Code models achieve competitive or state-of-the-art performance on different kinds of code-related tasks, including code generation, explanation, fixing, editing, translation, and more, demonstrating their ability to solve diverse coding tasks. Trustworthy Enterprise-Grade LLM: All our models are trained on license-permissible data collected following IBM's AI Ethics principles and guided by IBM’s Corporate Legal team for trustworthy enterprise usage.
    Starting Price: Free
  • 7
    Mistral NeMo

    Mistral AI

    Mistral NeMo is our new best small model: a state-of-the-art 12B model with a 128k-token context length, built in collaboration with NVIDIA and released under the Apache 2.0 license. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. Because it relies on a standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption by researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications and is trained on function calling. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
    Starting Price: Free
  • 8
    DeepSeek-V2

    DeepSeek

    DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
    Starting Price: Free
  • 9
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length than Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
    Starting Price: Free
  • 10
    Mistral Large

    Mistral AI

    Mistral Large is Mistral AI's flagship language model, designed for advanced text generation and complex multilingual reasoning tasks, including text comprehension, transformation, and code generation. It supports English, French, Spanish, German, and Italian, offering a nuanced understanding of grammar and cultural contexts. With a 32,000-token context window, it can accurately recall information from extensive documents. The model's precise instruction-following and native function-calling capabilities facilitate application development and tech stack modernization. Mistral Large is accessible through Mistral's platform, Azure AI Studio, and Azure Machine Learning, and can be self-deployed for sensitive use cases. Benchmark evaluations indicate that Mistral Large achieves strong results, making it the world's second-ranked model generally available through an API, next to GPT-4.
    Starting Price: Free
  • 11
    Claude Haiku 4.5
    Anthropic has launched Claude Haiku 4.5, its latest small language model designed to deliver near-frontier performance at significantly lower cost. The model provides similar coding and reasoning quality as the company’s mid-tier Sonnet 4, yet it runs at roughly one-third of the cost and more than twice the speed. In benchmarks cited by Anthropic, Haiku 4.5 meets or exceeds Sonnet 4’s performance in key tasks such as code generation and multi-step “computer use” workflows. It is optimized for real-time, low-latency scenarios such as chat assistants, customer service agents, and pair-programming support. Haiku 4.5 is made available via the Claude API under the identifier “claude-haiku-4-5” and supports large-scale deployments where cost, responsiveness, and near-frontier intelligence matter. Claude Haiku 4.5 is available now on Claude Code and our apps. Its efficiency means you can accomplish more within your usage limits while maintaining premium model performance.
    Starting Price: $1 per million input tokens
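    A minimal call sketch using the Anthropic Python SDK and the "claude-haiku-4-5" identifier named above (assumes ANTHROPIC_API_KEY is set in the environment):

    ```python
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Triage this customer ticket: ..."}],
    )
    print(response.content[0].text)
    ```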
  • 12
    Xiaomi MiMo

    Xiaomi Technology

    The Xiaomi MiMo API open platform is a developer-oriented interface for accessing and integrating Xiaomi’s MiMo family of AI models, including reasoning and language models such as MiMo-V2-Flash, into applications and services through standardized APIs and cloud endpoints. It enables developers to build AI-powered features like conversational agents, reasoning workflows, code assistance, and search-augmented tasks without managing model infrastructure themselves. The platform offers REST-style API access with authentication, request signing, and structured responses, so software can send prompts and receive generated text or processed outputs programmatically (see the sketch below), and it supports common operations like text generation, prompt handling, and inference over MiMo models. By providing documentation and onboarding tools, the open platform lets teams integrate Xiaomi’s latest open source large language models, which leverage Mixture-of-Experts (MoE) architectures.
    Starting Price: Free
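    A hypothetical request sketch of the REST-style flow described above; the endpoint URL, header names, and payload shape are placeholders rather than the documented API, so consult the platform's documentation for the real contract:

    ```python
    import os

    import requests

    resp = requests.post(
        "https://api.example-mimo-platform.com/v1/chat/completions",  # placeholder URL, not the real endpoint
        headers={"Authorization": f"Bearer {os.environ['MIMO_API_KEY']}"},  # assumed auth scheme
        json={
            "model": "MiMo-V2-Flash",
            "messages": [{"role": "user", "content": "Explain expert routing in MoE models."}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())
    ```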
  • 13
    Claude Opus 4.5
    Claude Opus 4.5 is Anthropic’s newest flagship model, delivering major improvements in reasoning, coding, agentic workflows, and real-world problem solving. It outperforms previous models and leading competitors on benchmarks such as SWE-bench, multilingual coding tests, and advanced agent evaluations. Opus 4.5 also introduces stronger safety features, including significantly higher resistance to prompt injection and improved alignment across sensitive tasks. Developers gain new controls through the Claude API—like effort parameters, context compaction, and advanced tool use—allowing for more efficient, longer-running agentic workflows. Product updates across Claude, Claude Code, the Chrome extension, and Excel integrations expand how users interact with the model for software engineering, research, and everyday productivity. Overall, Claude Opus 4.5 marks a substantial step forward in capability, reliability, and usability for developers, enterprises, and end users.
  • 14
    MiniMax-M2.1
    MiniMax-M2.1 is an open-source, agentic large language model designed for advanced coding, tool use, and long-horizon planning. It was released to the community to make high-performance AI agents more transparent, controllable, and accessible. The model is optimized for robustness in software engineering, instruction following, and complex multi-step workflows. MiniMax-M2.1 supports multilingual development and performs strongly across real-world coding scenarios. It is suitable for building autonomous applications that require reasoning, planning, and execution. The model weights are fully open, enabling local deployment and customization. MiniMax-M2.1 represents a major step toward democratizing top-tier agent capabilities.
    Starting Price: Free
  • 15
    Megatron-Turing
    The Megatron-Turing Natural Language Generation model (MT-NLG) is the largest and the most powerful monolithic transformer English language model, with 530 billion parameters. This 105-layer, transformer-based MT-NLG improves upon the prior state-of-the-art models in zero-, one-, and few-shot settings. It demonstrates unmatched accuracy in a broad set of natural language tasks such as completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. With the intent of accelerating research on the largest English language model to date and enabling customers to experiment with, employ, and apply such a large language model on downstream language tasks, NVIDIA is pleased to announce an Early Access program for its managed API service to the MT-NLG model.
  • 16
    Codestral Mamba
    As a tribute to Cleopatra, whose glorious destiny ended in tragic snake circumstances, we are proud to release Codestral Mamba, a Mamba2 language model specialized in code generation, available under an Apache 2.0 license. Codestral Mamba is another step in our effort to study and provide new architectures. It is available for free use, modification, and distribution, and we hope it will open new perspectives in architecture research. Mamba models offer the advantage of linear time inference and the theoretical ability to model sequences of infinite length. This allows users to engage with the model extensively with quick responses, irrespective of the input length. This efficiency is especially relevant for code productivity use cases, which is why we trained this model with advanced code and reasoning capabilities, enabling it to perform on par with SOTA transformer-based models.
    Starting Price: Free
  • 17
    Qwen3.5

    Alibaba

    Qwen3.5 is a next-generation open-weight multimodal large language model designed to power native vision-language agents. The flagship release, Qwen3.5-397B-A17B, combines a hybrid linear attention architecture with sparse mixture-of-experts, activating only 17 billion parameters per forward pass out of 397 billion total to maximize efficiency. It delivers strong benchmark performance across reasoning, coding, multilingual understanding, visual reasoning, and agent-based tasks. The model expands language support from 119 to 201 languages and dialects while introducing a 1M-token context window in its hosted version, Qwen3.5-Plus. Built for multimodal tasks, it processes text, images, and video with advanced spatial reasoning and tool integration. Qwen3.5 also incorporates scalable reinforcement learning environments to improve general agent capabilities. Designed for developers and enterprises, it enables efficient, tool-augmented, multimodal AI workflows.
    Starting Price: Free
  • 18
    Gemini 2.0
    Gemini 2.0 is an advanced AI-powered model developed by Google, designed to offer groundbreaking capabilities in natural language understanding, reasoning, and multimodal interactions. Building on the success of its predecessor, Gemini 2.0 integrates large language processing with enhanced problem-solving and decision-making abilities, enabling it to interpret and generate human-like responses with greater accuracy and nuance. Unlike traditional AI models, Gemini 2.0 is trained to handle multiple data types simultaneously, including text, images, and code, making it a versatile tool for research, business, education, and creative industries. Its core improvements include better contextual understanding, reduced bias, and a more efficient architecture that ensures faster, more reliable outputs. Gemini 2.0 is positioned as a major step forward in the evolution of AI, pushing the boundaries of human-computer interaction.
  • 19
    Kimi K2

    Moonshot AI

    Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants: Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct pre-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while its 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines make it straightforward to integrate and deploy.
    Starting Price: Free
  • 20
    Samsung Gauss
    Samsung Gauss is a new AI model developed by Samsung Electronics. It is a large language model (LLM) that has been trained on a massive dataset of text and code. Samsung Gauss is able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. Samsung Gauss is still under development, but it has already learned to perform many kinds of tasks, including: Following instructions and completing requests thoughtfully. Answering your questions in a comprehensive and informative way, even if they are open ended, challenging, or strange. Generating different creative text formats, like poems, code, scripts, musical pieces, email, letters, etc. Here are some examples of what Samsung Gauss can do: Translation: Samsung Gauss can translate text between many different languages, including English, French, German, Spanish, Chinese, Japanese, and Korean. Coding: Samsung Gauss can generate code.
  • 21
    Stable LM

    Stability AI

    Stable LM: Stability AI Language Models. The release of Stable LM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2. Stable LM is trained on a new experimental dataset built on The Pile, but three times larger with 1.5 trillion tokens of content. We will release details on the dataset in due course. The richness of this dataset gives Stable LM surprisingly high performance in conversational and coding tasks, despite its small size of 3 to 7 billion parameters (by comparison, GPT-3 has 175 billion parameters). Stable LM 3B is a compact language model designed to operate on portable digital devices like handhelds and laptops, and we’re excited about its capabilities and portability.
    Starting Price: Free
  • 22
    GPT-J

    EleutherAI

    GPT-J is a cutting-edge language model created by the research organization EleutherAI. In terms of performance, GPT-J exhibits a level of proficiency comparable to that of OpenAI's renowned GPT-3 model in a range of zero-shot tasks. Notably, GPT-J has demonstrated the ability to surpass GPT-3 in tasks related to generating code. The latest iteration of this language model, known as GPT-J-6B, is built upon a linguistic dataset referred to as The Pile. This dataset, which is publicly available, encompasses a substantial volume of 825 gibibytes of language data, organized into 22 distinct subsets. While GPT-J shares certain capabilities with ChatGPT, it is important to note that GPT-J is not designed to operate as a chatbot; rather, its primary function is to predict text. In a significant development in March 2023, Databricks introduced Dolly, a model that follows instructions and is licensed under the Apache license.
    Starting Price: Free
  • 23
    GLM-4.7

    Zhipu AI

    GLM-4.7 is an advanced large language model designed to significantly elevate coding, reasoning, and agentic task performance. It delivers major improvements over GLM-4.6 in multilingual coding, terminal-based tasks, and real-world software engineering benchmarks such as SWE-bench and Terminal Bench. GLM-4.7 supports “thinking before acting,” enabling more stable, accurate, and controllable behavior in complex coding and agent workflows. The model also introduces strong gains in UI and frontend generation, producing cleaner webpages, better layouts, and more polished slides. Enhanced tool-using capabilities allow GLM-4.7 to perform more effectively in web browsing, automation, and agent benchmarks. Its reasoning and mathematical performance has improved substantially, showing strong results on advanced evaluation suites. GLM-4.7 is available via Z.ai, API platforms, coding agents, and local deployment for flexible adoption.
    Starting Price: Free
  • 24
    Gemini 1.5 Pro
    The Gemini 1.5 Pro AI model is a state-of-the-art language model designed to deliver highly accurate, context-aware, and human-like responses across a variety of applications. Built with cutting-edge neural architecture, it excels in natural language understanding, generation, and reasoning tasks. The model is fine-tuned for versatility, supporting tasks like content creation, code generation, data analysis, and complex problem-solving. Its advanced algorithms ensure nuanced comprehension, enabling it to adapt to different domains and conversational styles seamlessly. With a focus on scalability and efficiency, the Gemini 1.5 Pro is optimized for both small-scale implementations and enterprise-level integrations, making it a powerful tool for enhancing productivity and innovation.
  • 25
    StarCoder

    BigCode

    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned StarCoderBase model for 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
    Starting Price: Free
  • 26
    LongLLaMA

    LongLLaMA

    This repository contains the research preview of LongLLaMA, a large language model capable of handling long contexts of 256k tokens or even more. LongLLaMA is built upon the foundation of OpenLLaMA and fine-tuned using the Focused Transformer (FoT) method. LongLLaMA code is built upon the foundation of Code Llama. We release a smaller 3B base variant (not instruction tuned) of the LongLLaMA model under a permissive license (Apache 2.0), along with inference code supporting longer contexts, on Hugging Face. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations (for short contexts up to 2048 tokens), as sketched below. Additionally, we provide evaluation results and comparisons against the original OpenLLaMA models.
    Starting Price: Free
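    A minimal loading sketch for the 3B base checkpoint as a drop-in causal LM via transformers; the repo id syzymon/long_llama_3b and the trust_remote_code flag are assumptions to verify against the model card:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "syzymon/long_llama_3b"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        trust_remote_code=True,  # the FoT memory layers live in custom modeling code
    )

    # Works like any LLaMA-style causal LM for short contexts.
    inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```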
  • 27
    Mistral Large 3
    Mistral Large 3 is a next-generation, open multimodal AI model built with a powerful sparse Mixture-of-Experts architecture featuring 41B active parameters out of 675B total. Designed from scratch on NVIDIA H200 GPUs, it delivers frontier-level reasoning, multilingual performance, and advanced image understanding while remaining fully open-weight under the Apache 2.0 license. The model achieves top-tier results on modern instruction benchmarks, positioning it among the strongest permissively licensed foundation models available today. With native support across vLLM, TensorRT-LLM, and major cloud providers, Mistral Large 3 offers exceptional accessibility and performance efficiency. Its design enables enterprise-grade customization, letting teams fine-tune or adapt the model for domain-specific workflows and proprietary applications. Mistral Large 3 represents a major advancement in open AI, offering frontier intelligence without sacrificing transparency or control.
    Starting Price: Free
  • 28
    GPT-5.4 nano
    GPT-5.4 nano is a lightweight and highly efficient AI model designed for fast, cost-effective task execution. It is optimized for simple and high-volume tasks such as classification, data extraction, and basic coding support. The model delivers quick responses with minimal latency, making it ideal for real-time and large-scale applications. GPT-5.4 nano improves significantly over previous nano models in both performance and efficiency. It supports essential capabilities like tool use and structured data processing. The model is commonly used as a supporting component within larger AI systems. By focusing on speed and affordability, GPT-5.4 nano enables scalable automation across various workflows.
  • 29
    Grok 4
    Grok 4 is the latest AI model from Elon Musk’s xAI, marking a significant advancement in AI reasoning and natural language understanding. Developed on the Colossus supercomputer, Grok 4 supports multimodal inputs including text and images, with plans to add video capabilities soon. It features enhanced precision in language tasks and has demonstrated superior performance in scientific reasoning and visual problem-solving compared to other leading AI models. Designed for developers, researchers, and technical users, Grok 4 offers powerful tools for complex tasks. The model incorporates improved moderation to address previous concerns about biased or problematic outputs. Grok 4 represents a major leap forward in AI’s ability to understand and generate human-like responses.
  • 30
    Ministral 3B

    Mistral AI

    Mistral AI introduced two state-of-the-art models for on-device computing and edge use cases, named "les Ministraux": Ministral 3B and Ministral 8B. These models set a new frontier in knowledge, commonsense reasoning, function-calling, and efficiency in the sub-10B category. They can be used or tuned for various applications, from orchestrating agentic workflows to creating specialist task workers. Both models support up to 128k context length (currently 32k on vLLM), and Ministral 8B features a special interleaved sliding-window attention pattern for faster and memory-efficient inference. These models were built to provide a compute-efficient and low-latency solution for scenarios such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics. Used in conjunction with larger language models like Mistral Large, les Ministraux also serve as efficient intermediaries for function-calling in multi-step agentic workflows.
    Starting Price: Free
  • 31
    Qwen3

    Alibaba

    Qwen3, the latest iteration of the Qwen family of large language models, introduces groundbreaking features that enhance performance across coding, math, and general capabilities. With models like the Qwen3-235B-A22B and Qwen3-30B-A3B, Qwen3 achieves impressive results compared to top-tier models, thanks to its hybrid thinking modes that allow users to control the balance between deep reasoning and quick responses. The platform supports 119 languages and dialects, making it an ideal choice for global applications. Its pre-training process, which uses 36 trillion tokens, enables robust performance, and advanced reinforcement learning (RL) techniques continue to refine its capabilities. Available on platforms like Hugging Face and ModelScope, Qwen3 offers a powerful tool for developers and researchers working in diverse fields.
    Starting Price: Free
  • 32
    Claude Opus 4.6
    Claude Opus 4.6 is Anthropic’s flagship AI model designed to push the boundaries of reasoning, coding, and real-world problem solving. It delivers significant performance gains over previous versions and competing models across key benchmarks. Opus 4.6 excels on SWE-bench, multilingual coding evaluations, and advanced agent-based tests. The model is built to support complex, long-running agentic workflows with greater efficiency. Enhanced safety measures improve resistance to prompt injection and strengthen alignment on sensitive tasks. Developers benefit from new API controls such as effort parameters, context compaction, and advanced tool usage. These improvements make Opus 4.6 more powerful, reliable, and versatile across use cases.
  • 33
    Claude Opus 3

    Anthropic

    Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence. All Claude 3 models show increased capabilities in analysis and forecasting, nuanced content creation, code generation, and conversing in non-English languages like Spanish, Japanese, and French.
  • 34
    Amazon Nova Premier
    Amazon Nova Premier is the most advanced model in their Nova family, designed to handle complex tasks and act as a teacher for model distillation. Available on Amazon Bedrock, Nova Premier can process text, images, and video inputs, making it capable of managing intricate workflows, multi-step planning, and the precise execution of tasks across various data sources. The model features a context length of one million tokens, enabling it to handle large-scale documents and code bases efficiently. Furthermore, Nova Premier allows users to create smaller, faster, and more cost-effective versions of its models, such as Nova Pro and Nova Micro, for specific use cases through model distillation.
  • 35
    Claude Sonnet 3.7
    Claude Sonnet 3.7, developed by Anthropic, is a cutting-edge AI model that combines rapid response with deep reflective reasoning. This innovative model allows users to toggle between quick, efficient responses and more thoughtful, reflective answers, making it ideal for complex problem-solving. By allowing Claude to self-reflect before answering, it excels at tasks that require high-level reasoning and nuanced understanding. With its ability to engage in deeper thought processes, Claude Sonnet 3.7 enhances tasks such as coding, natural language processing, and critical thinking applications. Available across various platforms, it offers a powerful tool for professionals and organizations seeking a high-performance, adaptable AI.
  • 36
    Grok 4.20
    Grok 4.20 is an advanced artificial intelligence model developed by xAI to elevate reasoning and natural language understanding. Built on the high-performance Colossus supercomputer, it is engineered for speed, scale, and accuracy. Grok 4.20 processes multimodal inputs such as text and images, with video support planned for future releases. The model excels in scientific, technical, and linguistic tasks, delivering highly precise and context-aware responses. Its architecture supports deep reasoning and sophisticated problem-solving capabilities. Enhanced moderation improves output reliability and reduces bias compared to earlier versions. Overall, Grok 4.20 represents a significant step toward more human-like AI reasoning and interpretation.
  • 37
    Codestral

    Mistral AI

    We introduce Codestral, our first-ever code model. Codestral is an open-weight generative AI model explicitly designed for code generation tasks. It helps developers write and interact with code through a shared instruction and completion API endpoint. As it masters code and English, it can be used to design advanced AI applications for software developers. Codestral is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash. It also performs well on more specific ones like Swift and Fortran. This broad language base ensures Codestral can assist developers in various coding environments and projects.
    Starting Price: Free
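    A minimal sketch of the shared instruction and completion endpoints via the mistralai Python SDK; the "codestral-latest" alias and the v1 client method names are assumptions to check against Mistral's current docs:

    ```python
    import os

    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # Instruction-style request against the chat endpoint.
    chat = client.chat.complete(
        model="codestral-latest",
        messages=[{"role": "user", "content": "Write a Bash one-liner that counts lines of Python code."}],
    )
    print(chat.choices[0].message.content)

    # Fill-in-the-middle completion: the model fills in between prompt and suffix.
    fim = client.fim.complete(
        model="codestral-latest",
        prompt="def is_prime(n):\n",
        suffix="\nprint(is_prime(17))",
    )
    print(fim.choices[0].message.content)
    ```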
  • 38
    Claude Opus 4.1
    Claude Opus 4.1 is an incremental upgrade to Claude Opus 4 that boosts coding, agentic reasoning, and data-analysis performance without changing deployment complexity. It raises coding accuracy to 74.5 percent on SWE-bench Verified and sharpens in-depth research and detailed tracking for agentic search tasks. GitHub reports notable gains in multi-file code refactoring, while Rakuten Group highlights its precision in pinpointing exact corrections within large codebases without introducing bugs. Independent benchmarks show about a one-standard-deviation improvement on junior developer tests compared to Opus 4, mirroring major leaps seen in prior Claude releases. Opus 4.1 is available now to paid Claude users, in Claude Code, and via the Anthropic API (model ID claude-opus-4-1-20250805), as well as through Amazon Bedrock and Google Cloud Vertex AI, and integrates seamlessly into existing workflows with no additional setup beyond selecting the new model.
  • 39
    Code Llama
    Code Llama is a large language model (LLM) that can use text prompts to generate code. Code Llama is state-of-the-art for publicly available LLMs on code tasks, and has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software. Code Llama is a state-of-the-art LLM capable of generating code, and natural language about code, from both code and natural language prompts. Code Llama is free for research and commercial use. Code Llama is built on top of Llama 2 and is available in three models: Code Llama, the foundational code model; Code Llama - Python, specialized for Python; and Code Llama - Instruct, which is fine-tuned for understanding natural language instructions.
    Starting Price: Free
  • 40
    GPT-4.1

    OpenAI

    GPT-4.1 is an advanced AI model from OpenAI, designed to enhance performance across key tasks such as coding, instruction following, and long-context comprehension. With a large context window of up to 1 million tokens, GPT-4.1 can process and understand extensive datasets, making it ideal for tasks like software development, document analysis, and AI agent workflows. Available through the API, GPT-4.1 offers significant improvements over previous models, excelling at real-world applications where efficiency and accuracy are crucial.
    Starting Price: $2 per 1M tokens (input)
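    A minimal API call sketch with the OpenAI Python SDK (assumes OPENAI_API_KEY is set in the environment; long-context use works the same way, just with a much larger prompt):

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": "Refactor this function for readability: ..."}],
    )
    print(completion.choices[0].message.content)
    ```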
  • 41
    OPT

    Meta

    Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
  • 42
    MiniMax M2

    MiniMax

    MiniMax M2 is an open source foundation model built specifically for agentic applications and coding workflows, striking a new balance of performance, speed, and cost. It excels in end-to-end development scenarios, handling programming, tool-calling, and complex, long-chain workflows with capabilities such as Python integration, while delivering inference speeds of around 100 tokens per second and offering API pricing at just ~8% of the cost of comparable proprietary models. The model supports “Lightning Mode” for high-speed, lightweight agent tasks, and “Pro Mode” for in-depth full-stack development, report generation, and web-based tool orchestration; its weights are fully open source and available for local deployment with vLLM or SGLang (see the sketch below). MiniMax M2 positions itself as a production-ready model that enables agents to complete independent tasks, such as data analysis, programming, tool orchestration, and large-scale multi-step logic at real organizational scale.
    Starting Price: $0.30 per million input tokens
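    A minimal local-deployment sketch with vLLM, assuming the open weights are published under the Hugging Face id MiniMaxAI/MiniMax-M2 (verify the repo id and GPU requirements before running; the full model is large):

    ```python
    from vllm import LLM, SamplingParams

    # Load the open weights into a vLLM engine for local serving.
    llm = LLM(model="MiniMaxAI/MiniMax-M2", trust_remote_code=True)
    params = SamplingParams(temperature=0.7, max_tokens=512)

    outputs = llm.generate(["Outline an agent workflow that scrapes a CSV and charts it."], params)
    print(outputs[0].outputs[0].text)
    ```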
  • 43
    Mercury Coder

    Inception Labs

    Mercury, the latest innovation from Inception Labs, is the first commercial-scale diffusion large language model (dLLM), offering a 10x speed increase and significantly lower costs compared to traditional autoregressive models. Built for high-performance reasoning, coding, and structured text generation, Mercury processes over 1000 tokens per second on NVIDIA H100 GPUs, making it one of the fastest LLMs available. Unlike conventional models that generate text one token at a time, Mercury refines responses using a coarse-to-fine diffusion approach, improving accuracy and reducing hallucinations. With Mercury Coder, a specialized coding model, developers can experience cutting-edge AI-driven code generation with superior speed and efficiency.
    Starting Price: Free
  • 44
    Claude Mythos

    Anthropic

    Claude Mythos is an upcoming and highly advanced AI model from Anthropic, currently in testing and considered the company’s most powerful system to date. It represents a new tier beyond existing models like Claude Opus, with major improvements in reasoning, coding, and cybersecurity capabilities. Early reports suggest that Claude Mythos is designed to push the boundaries of AI performance, positioning it as a next-generation foundation model. However, its release is being approached cautiously due to its potential risks and complexity.
  • 45
    Claude Sonnet 4
    Claude Sonnet 4, the latest evolution of Anthropic’s language models, offers a significant upgrade in coding, reasoning, and performance. Designed for diverse use cases, Sonnet 4 builds upon the success of its predecessor, Claude Sonnet 3.7, delivering more precise responses and better task execution. With a state-of-the-art 72.7% performance on the SWE-bench, it stands out in agentic scenarios, offering enhanced steerability and clear reasoning capabilities. Whether handling software development, multi-feature app creation, or complex problem-solving, Claude Sonnet 4 ensures higher code quality, reduced errors, and a smoother development process.
    Starting Price: $3 / 1 million tokens (input)
  • 46
    GLM-5

    Zhipu AI

    GLM-5 is Z.ai’s latest large language model built for complex systems engineering and long-horizon agentic tasks. It scales significantly beyond GLM-4.5, increasing total parameters and training data while integrating DeepSeek Sparse Attention to reduce deployment costs without sacrificing long-context capacity. The model combines enhanced pre-training with a new asynchronous reinforcement learning infrastructure called slime, improving training efficiency and post-training refinement. GLM-5 achieves best-in-class performance among open-source models across reasoning, coding, and agent benchmarks, narrowing the gap with leading frontier models. It ranks highly on evaluations such as Vending Bench 2, demonstrating strong long-term planning and operational capabilities. The model is open-sourced under the MIT License.
    Starting Price: Free
  • 47
    Falcon 3

    Technology Innovation Institute (TII)

    Falcon 3 is an open-source large language model (LLM) developed by the Technology Innovation Institute (TII) to make advanced AI accessible to a broader audience. Designed for efficiency, it operates seamlessly on lightweight devices, including laptops, without compromising performance. The Falcon 3 ecosystem comprises four scalable models, each tailored to diverse applications, and supports multiple languages while optimizing resource usage. This latest iteration in TII's LLM series achieves state-of-the-art results in reasoning, language understanding, instruction following, code, and mathematics tasks. By combining high performance with resource efficiency, Falcon 3 aims to democratize access to AI, empowering users across various sectors to leverage advanced technology without the need for extensive computational resources.
    Starting Price: Free
  • 48
    Gemini 3.1 Flash-Lite
    Gemini 3.1 Flash-Lite is Google’s fastest and most cost-efficient model in the Gemini 3 series, designed for high-volume developer workloads. It delivers strong performance at scale while maintaining affordability, with pricing set at $0.25 per million input tokens and $1.50 per million output tokens. The model significantly improves speed, offering a 2.5x faster time to first answer token and a 45% increase in output speed compared to Gemini 2.5 Flash. Despite its lower cost tier, it achieves high benchmark results, including an Elo score of 1432 and strong performance across reasoning and multimodal evaluations. Gemini 3.1 Flash-Lite supports adaptive “thinking levels,” allowing developers to control how much reasoning power is used for different tasks. It is suitable for large-scale applications such as translation, content moderation, user interface generation, and simulation building.
  • 49
    Aya

    Cohere AI

    Aya is a new state-of-the-art, open-source, massively multilingual, generative large language research model (LLM) covering 101 different languages, more than double the number covered by existing open-source models. Aya helps researchers unlock the powerful potential of LLMs for dozens of languages and cultures largely ignored by most advanced models on the market today. We are open-sourcing both the Aya model and the largest multilingual instruction fine-tuned dataset to date, with a size of 513 million examples covering 114 languages. This data collection includes rare annotations from native and fluent speakers all around the world, ensuring that AI technology can effectively serve a broad global audience that has had limited access to date.
  • 50
    LTM-1

    Magic AI

    Magic’s LTM-1 enables 50x larger context windows than transformers. Magic has trained a Large Language Model (LLM) that is able to take in gigantic amounts of context when generating suggestions. For our coding assistant, this means Magic can now see your entire repository of code. Larger context windows can allow AI models to reference more explicit, factual information and their own action history. We hope to utilize this research to improve reliability and coherence.