Alternatives to Baichuan-13B

Compare Baichuan-13B alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Baichuan-13B in 2026. Compare features, ratings, user reviews, pricing, and more from Baichuan-13B competitors and alternatives in order to make an informed decision for your business.

  • 1
    Qwen-7B

    Alibaba

    Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. The features of the Qwen-7B series include: Trained with high-quality pretraining data. We have pretrained Qwen-7B on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens. The dataset includes plain texts and code, and it covers a wide range of domains, including both general and professional domain data. Strong performance. Compared with models of similar size, Qwen-7B outperforms competitors on a series of benchmark datasets that evaluate natural language understanding, mathematics, coding, and more.
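    For reference, a minimal usage sketch, assuming the public Qwen/Qwen-7B-Chat checkpoint on Hugging Face; the chat() helper comes from the model's trust_remote_code package rather than core Transformers, so details may differ across releases:

    ```python
    # Minimal sketch: chatting with Qwen-7B-Chat via Hugging Face Transformers.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "Qwen/Qwen-7B-Chat"  # assumed public checkpoint name
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", trust_remote_code=True
    ).eval()

    # chat() is a convenience method defined in the model's remote code.
    response, history = model.chat(
        tokenizer, "Give me a one-line summary of the Transformer.", history=None
    )
    print(response)
    ```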
  • 2
    Mistral 7B

    Mistral AI

    Mistral 7B is a 7.3-billion-parameter language model that outperforms larger models like Llama 2 13B across various benchmarks. It employs Grouped-Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences. Released under the Apache 2.0 license, Mistral 7B is accessible for deployment across diverse platforms, including local environments and major cloud services. Additionally, a fine-tuned version, Mistral 7B Instruct, demonstrates enhanced performance in instruction-following tasks, surpassing models like Llama 2 13B Chat.
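    A minimal sketch of running the instruction-tuned variant locally with Hugging Face Transformers; the mistralai/Mistral-7B-Instruct-v0.2 checkpoint name is assumed from Mistral AI's public releases:

    ```python
    # Minimal sketch: prompting Mistral 7B Instruct locally with Transformers.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = [{"role": "user", "content": "Explain grouped-query attention in two sentences."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```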
  • 3
    ChatGLM

    Zhipu AI

    ChatGLM-6B is an open-source, Chinese-English bilingual dialogue language model based on the General Language Model (GLM) architecture with 6.2 billion parameters. Combined with model quantization technology, users can deploy it locally on consumer-grade graphics cards (only 6GB of video memory is required at the INT4 quantization level), as in the sketch below. ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese Q&A and dialogue. After bilingual Chinese-English training on roughly 1T tokens, supplemented by supervised fine-tuning, feedback bootstrapping, reinforcement learning from human feedback, and other techniques, the 6.2-billion-parameter ChatGLM-6B can generate answers that align well with human preferences.
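    A minimal INT4 deployment sketch, following the pattern documented in the THUDM/chatglm-6b repository; quantize() and chat() are helpers shipped in the model's remote code, so exact details may vary by release:

    ```python
    # Minimal sketch: local INT4 deployment of ChatGLM-6B (~6 GB VRAM).
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = (
        AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
        .quantize(4)   # INT4 quantization for consumer GPUs
        .half()
        .cuda()
        .eval()
    )

    # chat() returns the reply plus the running dialogue history.
    response, history = model.chat(tokenizer, "用一句话介绍你自己。", history=[])
    print(response)
    ```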
  • 4
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
  • 5
    DeepSeek-V2

    DeepSeek

    DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
  • 6
    Kimi K2

    Moonshot AI

    Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants, Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct post-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, and it offers a 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines.
  • 7
    GigaChat 3 Ultra
    GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original—giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.
  • 8
    Olmo 2
    Olmo 2 is a family of fully open language models developed by the Allen Institute for AI (AI2), designed to provide researchers and developers with transparent access to training data, open-source code, reproducible training recipes, and comprehensive evaluations. These models are trained on up to 5 trillion tokens and are competitive with leading open-weight models like Llama 3.1 on English academic benchmarks. Olmo 2 emphasizes training stability, implementing techniques to prevent loss spikes during long training runs, and utilizes staged training interventions during late pretraining to address capability deficiencies. The models incorporate state-of-the-art post-training methodologies from AI2's Tülu 3, resulting in the creation of Olmo 2-Instruct models. An actionable evaluation framework, the Open Language Modeling Evaluation System (OLMES), was established to guide improvements through development stages, consisting of 20 evaluation benchmarks assessing core capabilities.
  • 9
    Stable LM

    Stability AI

    Stable LM: Stability AI Language Models. The release of Stable LM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2. Stable LM is trained on a new experimental dataset built on The Pile, but three times larger with 1.5 trillion tokens of content. We will release details on the dataset in due course. The richness of this dataset gives Stable LM surprisingly high performance in conversational and coding tasks, despite its small size of 3 to 7 billion parameters (by comparison, GPT-3 has 175 billion parameters). Stable LM 3B is a compact language model designed to operate on portable digital devices like handhelds and laptops, and we’re excited about its capabilities and portability.
  • 10
    TinyLlama

    TinyLlama

    The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs. We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
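    Because TinyLlama shares the Llama 2 architecture and tokenizer, any Llama-compatible loading path works unchanged; a minimal sketch assuming the TinyLlama/TinyLlama-1.1B-Chat-v1.0 checkpoint from the project's Hugging Face releases:

    ```python
    # Minimal sketch: drop-in generation with a TinyLlama chat checkpoint.
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    result = pipe("Write a haiku about small language models.", max_new_tokens=64)
    print(result[0]["generated_text"])
    ```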
  • 11
    Olmo 3
    Olmo 3 is a fully open model family spanning 7 billion and 32 billion parameter variants that delivers not only high-performing base, reasoning, instruction, and reinforcement-learning models, but also exposure of the entire model flow, including raw training data, intermediate checkpoints, training code, long-context support (65,536-token window), and provenance tooling. Starting with the Dolma 3 dataset (≈9 trillion tokens) and its disciplined mix of web text, scientific PDFs, code, and long-form documents, the pre-training, mid-training, and long-context phases shape the base models, which are then post-trained via supervised fine-tuning, direct preference optimisation, and RL with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is described as the strongest fully open reasoning model to date, competitively close to closed-weight peers in math, code, and complex reasoning.
  • 12
    ERNIE 3.0 Titan
    Pre-trained language models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge enhanced models and trained a model with 10 billion parameters. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. In order to explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable texts.
  • 13
    PanGu-Σ

    Huawei

    Significant advancements in the field of natural language processing, understanding, and generation have been achieved through the expansion of large language models. This study introduces a system which utilizes Ascend 910 AI processors and the MindSpore framework to train a language model with over a trillion parameters, specifically 1.085T, named PanGu-Σ. This model, which builds upon the foundation laid by PanGu-α, takes the traditionally dense Transformer model and transforms it into a sparse one using a concept known as Random Routed Experts (RRE). The model was efficiently trained on a dataset of 329 billion tokens using a technique called Expert Computation and Storage Separation (ECSS), leading to a 6.3-fold increase in training throughput via heterogeneous computing. Experimentation indicates that PanGu-Σ sets a new standard in zero-shot learning for various downstream Chinese NLP tasks.
  • 14
    StarCoder

    BigCode

    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned the StarCoderBase model on 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
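    A minimal fill-in-the-middle sketch with StarCoder; the FIM special tokens shown are assumed from the BigCode release and should be verified against the checkpoint's tokenizer, and access to the weights requires accepting the model license on Hugging Face:

    ```python
    # Minimal sketch: fill-in-the-middle completion with StarCoder.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "bigcode/starcoder"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prefix = "def average(xs):\n    "
    suffix = "\n    return total / len(xs)\n"
    # FIM token names assumed from the BigCode tokenizer; check tokenizer.special_tokens_map.
    prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    # Print only the infilled middle section.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
    ```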
  • 15
    mT5

    Google

    Multilingual T5 (mT5) is a massively multilingual pretrained text-to-text transformer model, trained following a similar recipe as T5. This repo can be used to reproduce the experiments in the mT5 paper. mT5 is pretrained on the mC4 corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, and more.
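    Because the released mT5 checkpoints are pretrained only (span corruption, with no supervised tasks mixed in), they are normally fine-tuned before use; a minimal sketch of one supervised text-to-text training step, assuming the google/mt5-small checkpoint:

    ```python
    # Minimal sketch: a single fine-tuning step on a toy text-to-text pair with mT5.
    from transformers import AutoTokenizer, MT5ForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
    model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

    inputs = tokenizer("translate English to German: The house is small.", return_tensors="pt")
    labels = tokenizer(text_target="Das Haus ist klein.", return_tensors="pt").input_ids

    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
    loss.backward()  # in a real run, an optimizer step would follow
    print(float(loss))
    ```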
  • 16
    DeepSeek-V3.1-Terminus
    DeepSeek has released DeepSeek-V3.1-Terminus, which enhances the V3.1 architecture by incorporating user feedback to improve output stability, consistency, and agent performance. It notably reduces instances of mixed Chinese/English character output and unintended garbled characters, resulting in cleaner, more consistent language generation. The update upgrades both the code agent and search agent subsystems to yield stronger, more reliable performance across benchmarks. DeepSeek-V3.1-Terminus is also available as an open source model, and its weights are published on Hugging Face. The model structure remains the same as DeepSeek-V3, ensuring compatibility with existing deployment methods, with updated inference demos provided for community use. The model is trained at a 685B-parameter scale, and its weights are published in FP8, BF16, and FP32 tensor formats, offering flexibility across environments.
  • 17
    Phi-2

    Microsoft

    We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with fewer than 13 billion parameters. On complex benchmarks, Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
  • 18
    Llama 4 Behemoth
    Llama 4 Behemoth is Meta's most powerful AI model to date, featuring a massive 288 billion active parameters. It excels in multimodal tasks, outperforming models such as GPT-4.5 and Gemini 2.0 Pro across multiple STEM-focused benchmarks such as MATH-500 and GPQA Diamond. As the teacher model for the Llama 4 series, Behemoth sets the foundation for models like Llama 4 Maverick and Llama 4 Scout. While still in training, Llama 4 Behemoth demonstrates unmatched intelligence, pushing the boundaries of AI in fields like math, multilinguality, and image understanding.
  • 19
    Dolly

    Databricks

    Dolly is a cheap-to-build LLM that exhibits a surprising degree of the instruction following capabilities exhibited by ChatGPT. Whereas the work from the Alpaca team showed that state-of-the-art models could be coaxed into high quality instruction-following behavior, we find that even years-old open source models with much earlier architectures exhibit striking behaviors when fine tuned on a small corpus of instruction training data. Dolly works by taking an existing open source 6 billion parameter model from EleutherAI and modifying it ever so slightly to elicit instruction following capabilities such as brainstorming and text generation not present in the original model, using data from Alpaca.
  • 20
    Hunyuan Motion 1.0

    Tencent Hunyuan

    Hunyuan Motion (also known as HY-Motion 1.0) is a state-of-the-art text-to-3D motion generation AI model that uses a billion-parameter Diffusion Transformer with flow matching to turn natural language prompts into high-quality, skeleton-based 3D character animation in seconds. It understands descriptive text in English and Chinese and produces smooth, physically plausible motion sequences that integrate seamlessly into standard 3D animation pipelines by exporting to skeleton formats such as SMPL or SMPLH and common formats like FBX or BVH for use in Blender, Unity, Unreal Engine, Maya, and other tools. The model’s three-stage training pipeline (large-scale pre-training on thousands of hours of motion data, fine-tuning on curated sequences, and reinforcement learning from human feedback) enhances its ability to follow complex instructions and generate realistic, temporally coherent motion.
  • 21
    Orpheus TTS

    Canopy Labs

    Canopy Labs has introduced Orpheus, a family of state-of-the-art speech large language models (LLMs) designed for human-level speech generation. These models are built on the Llama-3 architecture and are trained on over 100,000 hours of English speech data, enabling them to produce natural intonation, emotion, and rhythm that surpasses current state-of-the-art closed source models. Orpheus supports zero-shot voice cloning, allowing users to replicate voices without prior fine-tuning, and offers guided emotion and intonation control through simple tags. The models achieve low latency, with approximately 200ms streaming latency for real-time applications, reducible to around 100ms with input streaming. Canopy Labs has released both pre-trained and fine-tuned 3B-parameter models under the permissive Apache 2.0 license, with plans to release smaller models of 1B, 400M, and 150M parameters for use on resource-constrained devices.
  • 22
    Qwen3-Max

    Alibaba

    Qwen3-Max is Alibaba’s latest trillion-parameter large language model, designed to push performance in agentic tasks, coding, reasoning, and long-context processing. It is built atop the Qwen3 family and benefits from the architectural, training, and inference advances introduced there, including mixed thinking and non-thinking modes, a “thinking budget” mechanism, and support for dynamic mode switching based on task complexity. The model reportedly processes extremely long inputs (hundreds of thousands of tokens), supports tool invocation, and exhibits strong performance on coding, multi-step reasoning, and agent benchmarks (e.g., Tau2-Bench). While its initial variant emphasizes instruction following (non-thinking mode), Alibaba plans to bring reasoning capabilities online to enable autonomous agent behavior. Qwen3-Max inherits multilingual support and extensive pretraining on trillions of tokens, and it is delivered via API interfaces compatible with OpenAI-style functions.
  • 23
    Llama 4 Maverick
    Llama 4 Maverick is one of the most advanced multimodal AI models from Meta, featuring 17 billion active parameters and 128 experts. It surpasses its competitors like GPT-4o and Gemini 2.0 Flash in a broad range of benchmarks, especially in tasks related to coding, reasoning, and multilingual capabilities. Llama 4 Maverick combines image and text understanding, enabling it to deliver industry-leading results in image-grounding tasks and precise, high-quality output. With its efficient performance at a reduced parameter size, Maverick offers exceptional value, especially in general assistant and chat applications.
  • 24
    DeepSeek R1

    DeepSeek

    DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API (see the sketch below), it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture of experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
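    A minimal sketch of calling the hosted model through DeepSeek's OpenAI-compatible API; the base URL and the deepseek-reasoner model name reflect DeepSeek's public documentation at the time of writing and should be checked against the current docs:

    ```python
    # Minimal sketch: querying DeepSeek-R1 via the OpenAI-compatible endpoint.
    from openai import OpenAI

    client = OpenAI(
        api_key="YOUR_DEEPSEEK_API_KEY",          # obtain from the DeepSeek platform
        base_url="https://api.deepseek.com",       # OpenAI-compatible base URL
    )

    resp = client.chat.completions.create(
        model="deepseek-reasoner",                 # reasoning (R1) model name
        messages=[{"role": "user", "content": "What is the sum of the first 100 positive integers?"}],
    )
    print(resp.choices[0].message.content)
    ```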
  • 25
    Llama 4 Scout
    Llama 4 Scout is a powerful 17 billion active parameter multimodal AI model that excels in both text and image processing. With an industry-leading context length of 10 million tokens, it outperforms its predecessors, including Llama 3, in tasks such as multi-document summarization and parsing large codebases. Llama 4 Scout is designed to handle complex reasoning tasks while maintaining high efficiency, making it perfect for use cases requiring long-context comprehension and image grounding. It offers cutting-edge performance in image-related tasks and is particularly well-suited for applications requiring both text and visual understanding.
  • 26
    ALBERT

    Google

    ALBERT is a self-supervised Transformer model that was pretrained on a large corpus of English data. This means it does not require manual labelling, and instead uses an automated process to generate inputs and labels from raw texts. It is trained with two distinct objectives in mind. The first is Masked Language Modeling (MLM), which randomly masks 15% of words in the input sentence and requires the model to predict them. This technique differs from RNNs and autoregressive models like GPT as it allows the model to learn bidirectional sentence representations. The second objective is Sentence Order Prediction (SOP), which entails predicting the ordering of two consecutive segments of text during pretraining.
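    A minimal sketch of the MLM objective in use, via the fill-mask pipeline and the public albert-base-v2 checkpoint:

    ```python
    # Minimal sketch: masked-language-modeling predictions with ALBERT.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="albert-base-v2")
    # ALBERT's mask token is [MASK]; the pipeline returns the top candidates.
    for candidate in fill("The capital of France is [MASK].", top_k=3):
        print(candidate["token_str"], round(candidate["score"], 3))
    ```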
  • 27
    QwQ-32B

    Alibaba

    QwQ-32B is an advanced reasoning model developed by Alibaba Cloud's Qwen team, designed to enhance AI's problem-solving capabilities. With 32 billion parameters, it achieves performance comparable to state-of-the-art models like DeepSeek's R1, which has 671 billion parameters. This efficiency is achieved through optimized parameter utilization, allowing QwQ-32B to perform complex tasks such as mathematical reasoning, coding, and general problem-solving with fewer resources. The model supports a context length of up to 32,000 tokens, enabling it to process extensive input data effectively. QwQ-32B is accessible via Alibaba's chatbot service, Qwen Chat, and is open sourced under the Apache 2.0 license, promoting collaboration and further development within the AI community.
  • 28
    BitNet

    Microsoft

    The BitNet b1.58 2B4T is a cutting-edge 1-bit Large Language Model (LLM) developed by Microsoft, designed to enhance computational efficiency while maintaining high performance. This model, built with approximately 2 billion parameters and trained on 4 trillion tokens, uses innovative quantization techniques to optimize memory usage, energy consumption, and latency. The platform supports multiple modalities and is particularly valuable for applications in AI-powered text generation, offering substantial efficiency gains compared to full-precision models.
  • 29
    DeepScaleR

    Agentica Project

    DeepScaleR is a 1.5-billion-parameter language model fine-tuned from DeepSeek-R1-Distilled-Qwen-1.5B using distributed reinforcement learning and a novel iterative context-lengthening strategy that gradually increases its context window from 8K to 24K tokens during training. It was trained on ~40,000 carefully curated mathematical problems drawn from competition-level datasets like AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. DeepScaleR achieves 43.1% accuracy on AIME 2024, a roughly 14.3 percentage point boost over the base model, and surpasses the performance of the proprietary o1-preview model despite its much smaller size. It also posts strong results on a suite of math benchmarks (e.g., MATH-500, AMC 2023, Minerva Math, OlympiadBench), demonstrating that small, efficient models tuned with RL can match or exceed larger baselines on reasoning tasks.
  • 30
    Yi-Lightning

    Yi-Lightning

    Yi-Lightning, developed by 01.AI under the leadership of Kai-Fu Lee, represents the latest advancement in large language models with a focus on high performance and cost-efficiency. It boasts a maximum context length of 16K tokens and is priced at $0.14 per million tokens for both input and output, making it remarkably competitive. Yi-Lightning leverages an enhanced Mixture-of-Experts (MoE) architecture, incorporating fine-grained expert segmentation and advanced routing strategies, which contribute to its efficiency in training and inference. This model has excelled in various domains, achieving top rankings in categories like Chinese, math, coding, and hard prompts on the chatbot arena, where it secured the 6th position overall and 9th in style control. Its development included comprehensive pre-training, supervised fine-tuning, and reinforcement learning from human feedback, ensuring both performance and safety, with optimizations in memory usage and inference speed.
  • 31
    Falcon-40B

    Technology Innovation Institute (TII)

    Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions. ⚠️ This is a raw, pretrained model, which should be further fine-tuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-40B-Instruct.
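    A minimal generation sketch with the raw pretrained checkpoint, assuming the tiiuae/falcon-40b repository on Hugging Face; loading in bfloat16 needs on the order of 80+ GB of accelerator memory, so device_map="auto" is used to shard across available devices:

    ```python
    # Minimal sketch: sampling from the pretrained Falcon-40B base model.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "tiiuae/falcon-40b"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    inputs = tokenizer("RefinedWeb is", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```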
  • 32
    NVIDIA NeMo Megatron
    NVIDIA NeMo Megatron is an end-to-end framework for training and deploying LLMs with billions and trillions of parameters. NVIDIA NeMo Megatron, part of the NVIDIA AI platform, offers an easy, efficient, and cost-effective containerized framework to build and deploy LLMs. Designed for enterprise application development, it builds upon the most advanced technologies from NVIDIA research and provides an end-to-end workflow for automated distributed data processing, training large-scale customized GPT-3, T5, and multilingual T5 (mT5) models, and deploying models for inference at scale. Harnessing the power of LLMs is made easy through validated and converged recipes with predefined configurations for training and inference. Customizing models is simplified by the hyperparameter tool, which automatically searches for the best hyperparameter configurations and performance for training and inference on any given distributed GPU cluster configuration.
  • 33
    GLM-5

    Zhipu AI

    GLM-5 is Z.ai’s latest large language model built for complex systems engineering and long-horizon agentic tasks. It scales significantly beyond GLM-4.5, increasing total parameters and training data while integrating DeepSeek Sparse Attention to reduce deployment costs without sacrificing long-context capacity. The model combines enhanced pre-training with a new asynchronous reinforcement learning infrastructure called slime, improving training efficiency and post-training refinement. GLM-5 achieves best-in-class performance among open-source models across reasoning, coding, and agent benchmarks, narrowing the gap with leading frontier models. It ranks highly on evaluations such as Vending Bench 2, demonstrating strong long-term planning and operational capabilities. The model is open-sourced under the MIT License.
  • 34
    CodeGemma
    CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks like fill-in-the-middle code completion, code generation, natural language understanding, mathematical reasoning, and instruction following. CodeGemma has 3 model variants: a 7B pre-trained variant that specializes in code completion and generation from code prefixes and/or suffixes, a 7B instruction-tuned variant for natural language-to-code chat and instruction following, and a state-of-the-art 2B pre-trained variant that provides up to 2x faster code completion. Complete lines, functions, and even entire blocks of code, whether you're working locally or using Google Cloud resources. Trained on 500 billion tokens of primarily English language data from web documents, mathematics, and code, CodeGemma models generate code that's not only more syntactically correct but also semantically meaningful, reducing errors and debugging time.
  • 35
    Solar Mini

    Upstage AI

    Solar Mini is a pre‑trained large language model that delivers GPT‑3.5‑comparable responses with 2.5× faster inference while staying under 30 billion parameters. It achieved first place on the Hugging Face Open LLM Leaderboard in December 2023 by combining a 32‑layer Llama 2 architecture, initialized with high‑quality Mistral 7B weights, with an innovative “depth up‑scaling” (DUS) approach that deepens the model efficiently without adding complex modules. After DUS, continued pretraining restores and enhances performance, and instruction tuning in a QA format, especially for Korean, refines its ability to follow user prompts, while alignment tuning ensures its outputs meet human or advanced AI preferences. Solar Mini outperforms competitors such as Llama 2, Mistral 7B, Ko‑Alpaca, and KULLM across a variety of benchmarks, proving that compact size need not sacrifice capability.
    Starting Price: $0.1 per 1M tokens
  • 36
    Pixtral Large

    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
  • 37
    PanGu-α

    Huawei

    PanGu-α is developed under the MindSpore framework and trained on a cluster of 2048 Ascend 910 AI processors. The training parallelism strategy is implemented based on MindSpore Auto-parallel, which composes five parallelism dimensions to scale the training task to 2048 processors efficiently, including data parallelism, op-level model parallelism, pipeline model parallelism, optimizer model parallelism and rematerialization. To enhance the generalization ability of PanGu-α, we collect 1.1TB high-quality Chinese data from a wide range of domains to pretrain the model. We empirically test the generation ability of PanGu-α in various scenarios including text summarization, question answering, dialogue generation, etc. Moreover, we investigate the effect of model scales on the few-shot performances across a broad range of Chinese NLP tasks. The experimental results demonstrate the superior capabilities of PanGu-α in performing various tasks under few-shot or zero-shot settings.
  • 38
    Teuken 7B

    OpenGPT-X

    Teuken-7B is a multilingual, open source language model developed under the OpenGPT-X initiative, specifically designed to cater to Europe's diverse linguistic landscape. It has been trained on a dataset comprising over 50% non-English texts, encompassing all 24 official languages of the European Union, ensuring robust performance across these languages. A key innovation in Teuken-7B is its custom multilingual tokenizer, optimized for European languages, which enhances training efficiency and reduces inference costs compared to standard monolingual tokenizers. The model is available in two versions, Teuken-7B-Base, the foundational pre-trained model, and Teuken-7B-Instruct, which has undergone instruction tuning for improved performance in following user prompts. Both versions are accessible on Hugging Face, promoting transparency and collaboration within the AI community. The development of Teuken-7B underscores a commitment to creating AI models that reflect Europe's diversity.
  • 39
    Nemotron 3 Super
    Nemotron-3 Super is part of NVIDIA’s Nemotron 3 family of open models designed to enable advanced agentic AI systems that can reason, plan, and execute multi-step workflows across complex environments. The model introduces a hybrid Mamba-Transformer Mixture-of-Experts architecture that combines the efficiency of state-space Mamba layers with the contextual understanding of transformer attention, allowing it to process long sequences and complex reasoning tasks with high accuracy and throughput. This architecture activates only a subset of model parameters for each token, improving computational efficiency while maintaining strong reasoning capabilities and enabling scalable inference for large workloads. Nemotron-3 Super contains roughly 120 billion parameters with around 12 billion active during inference, accelerating multi-step reasoning and collaborative agent interactions across large contexts.
  • 40
    OpenELM

    Apple

    OpenELM is an open-source language model family developed by Apple. It uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy compared to existing open language models of similar size. OpenELM is trained on publicly available datasets and achieves state-of-the-art performance for its size.
  • 41
    K2 Think

    Institute of Foundation Models

    K2 Think is an open source advanced reasoning model developed collaboratively by the Institute of Foundation Models at MBZUAI and G42. Despite only having 32 billion parameters, it delivers performance comparable to flagship models with many more parameters. It excels in mathematical reasoning, achieving top scores on competitive benchmarks such as AIME ’24/’25, HMMT ’25, and OMNI-Math-HARD. K2 Think is part of a suite of UAE-developed open models, alongside Jais (Arabic), NANDA (Hindi), and SHERKALA (Kazakh), and builds on the foundation laid by K2-65B, the fully reproducible open source foundation model released in 2024. The model is designed to be open, fast, and flexible, offering a web app interface for exploration; with its parameter efficiency, it represents a breakthrough in compact architectures for advanced AI reasoning.
  • 42
    Ministral 3

    Mistral AI

    Mistral 3 is the latest generation of open-weight AI models from Mistral AI, offering a full family of models, from small, edge-optimized versions to a flagship, large-scale multimodal model. The lineup includes three compact “Ministral 3” models (3B, 8B, and 14B parameters) designed for efficiency and deployment on constrained hardware (even laptops, drones, or edge devices), plus the powerful “Mistral Large 3,” a sparse mixture-of-experts model with 675 billion total parameters (41 billion active). The models support multimodal and multilingual tasks, not only text, but also image understanding, and have demonstrated best-in-class performance on general prompts, multilingual conversations, and multimodal inputs. The base and instruction-fine-tuned versions are released under the Apache 2.0 license, enabling broad customization and integration in enterprise and open source projects.
  • 43
    Kimi K2 Thinking

    Moonshot AI

    Kimi K2 Thinking is an advanced open source reasoning model developed by Moonshot AI, designed specifically for long-horizon, multi-step workflows where the system interleaves chain-of-thought processes with tool invocation across hundreds of sequential tasks. The model uses a mixture-of-experts architecture with a total of 1 trillion parameters, yet only about 32 billion parameters are activated per inference pass, optimizing efficiency while maintaining vast capacity. It supports a context window of up to 256,000 tokens, enabling the handling of extremely long inputs and reasoning chains without losing coherence. Native INT4 quantization is built in, which reduces inference latency and memory usage without performance degradation. Kimi K2 Thinking is explicitly built for agentic workflows; it can autonomously call external tools, manage sequential logic steps (typically 200-300 tool calls in a single chain), and maintain consistent reasoning.
  • 44
    Tülu 3
    Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
  • 45
    Cerebras-GPT
    State-of-the-art language models are extremely challenging to train; they require huge compute budgets, complex distributed compute techniques and deep ML expertise. As a result, few organizations train large language models (LLMs) from scratch. And increasingly those that have the resources and expertise are not open sourcing the results, marking a significant change from even a few months back. At Cerebras, we believe in fostering open access to the most advanced models. With this in mind, we are proud to announce the release to the open source community of Cerebras-GPT, a family of seven GPT models ranging from 111 million to 13 billion parameters. Trained using the Chinchilla formula, these models provide the highest accuracy for a given compute budget. Cerebras-GPT has faster training times, lower training costs, and consumes less energy than any publicly available model to date.
  • 46
    RedPajama

    RedPajama

    Foundation models such as GPT-4 have driven rapid improvement in AI. However, the most powerful models are closed commercial models or only partially open. RedPajama is a project to create a set of leading, fully open-source models. Today, we are excited to announce the completion of the first step of this project: the reproduction of the LLaMA training dataset of over 1.2 trillion tokens. The most capable foundation models today are closed behind commercial APIs, which limits research, customization, and their use with sensitive data. Fully open-source models hold the promise of removing these limitations, if the open community can close the quality gap between open and closed models. Recently, there has been much progress along this front. In many ways, AI is having its Linux moment. Stable Diffusion showed that open-source can not only rival the quality of commercial offerings like DALL-E but can also lead to incredible creativity from broad participation by communities.
  • 47
    Llama

    Meta

    Llama (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. We are making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a Llama model card that details how we built the model in keeping with our approach to Responsible AI practices.
  • 48
    Reka Flash 3
    Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
  • 49
    DBRX

    Databricks

    Today, we are excited to introduce DBRX, an open, general-purpose LLM created by Databricks. Across a range of standard benchmarks, DBRX sets a new state-of-the-art for established open LLMs. Moreover, it provides the open community and enterprises building their own LLMs with capabilities that were previously limited to closed model APIs; according to our measurements, it surpasses GPT-3.5, and it is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized models like CodeLLaMA-70B in programming, in addition to its strength as a general-purpose LLM. This state-of-the-art quality comes with marked improvements in training and inference performance. DBRX advances the state-of-the-art in efficiency among open models thanks to its fine-grained mixture-of-experts (MoE) architecture. Inference is up to 2x faster than LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in terms of both total and active parameter counts.
  • 50
    DeepSeek R2

    DeepSeek

    DeepSeek R2 is the anticipated successor to DeepSeek R1, a groundbreaking AI reasoning model launched in January 2025 by the Chinese AI startup DeepSeek. Building on R1’s success, which disrupted the AI industry with its cost-effective performance rivaling top-tier models like OpenAI’s o1, R2 promises a quantum leap in capabilities. It is expected to deliver exceptional speed and human-like reasoning, excelling in complex tasks such as advanced coding and high-level mathematical problem-solving. Leveraging DeepSeek’s innovative Mixture-of-Experts architecture and efficient training methods, R2 aims to outperform its predecessor while maintaining a low computational footprint, potentially expanding its reasoning abilities to languages beyond English.