Alternatives to Athene-V2

Compare Athene-V2 alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Athene-V2 in 2026. Compare features, ratings, user reviews, pricing, and more from Athene-V2 competitors and alternatives in order to make an informed decision for your business.

  • 1
    Qwen2.5-Max
    Qwen2.5-Max is a large-scale Mixture-of-Experts (MoE) model developed by the Qwen team, pretrained on over 20 trillion tokens and further refined through Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). In evaluations, it outperforms models like DeepSeek V3 in benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, while also demonstrating competitive results in other assessments, including MMLU-Pro. Qwen2.5-Max is accessible via API through Alibaba Cloud and can be explored interactively on Qwen Chat.
  • 2
    DeepScaleR

    Agentica Project

DeepScaleR is a 1.5-billion-parameter language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B using distributed reinforcement learning and a novel iterative context-lengthening strategy that gradually increases the context window from 8K to 24K tokens during training. It was trained on ~40,000 carefully curated mathematical problems drawn from competition-level datasets such as AIME (1984–2023), AMC (pre-2023), Omni-MATH, and STILL. DeepScaleR achieves 43.1% accuracy on AIME 2024, a roughly 14.3-percentage-point improvement over the base model, and surpasses the proprietary o1-preview model despite its much smaller size. It also posts strong results across a suite of math benchmarks (e.g., MATH-500, AMC 2023, Minerva Math, OlympiadBench), demonstrating that small, efficient models tuned with RL can match or exceed larger baselines on reasoning tasks.
  • 3
    Athens

    Athens

Athens is a private, open-source tool for technology pioneers. Dynamically create, connect, and compound your research and documentation using a collaborative knowledge graph. Learn new research workflows, connect with 2,500+ people who love learning, and weigh in on the future of self-hosted knowledge graphs. The problem today is that we are drowning in information. If we don't take notes, we forget everything. So we take notes, but then we have too many notes! Search doesn't work. Folders don't work. And no one does tagging. Athens lets you take notes without praying to the search gods, without double-clicking endlessly on folders, and without manual tagging. Athens Research is a remote learning community building the most powerful and transparent open-source knowledge base; the result is Athens, a free knowledge graph for research and note-taking. Athens is open-source, private, extensible, and community-driven.
  • 4
    Mistral 7B

    Mistral AI

    Mistral 7B is a 7.3-billion-parameter language model that outperforms larger models like Llama 2 13B across various benchmarks. It employs Grouped-Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences. Released under the Apache 2.0 license, Mistral 7B is accessible for deployment across diverse platforms, including local environments and major cloud services. Additionally, a fine-tuned version, Mistral 7B Instruct, demonstrates enhanced performance in instruction-following tasks, surpassing models like Llama 2 13B Chat.
  • 5
    Qwen3-Coder
Qwen3-Coder is an agentic code model available in multiple sizes, led by the 480B-parameter Mixture-of-Experts variant (35B active) that natively supports 256K-token contexts (extendable to 1M) and achieves state-of-the-art results comparable to Claude Sonnet 4. Pre-training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5-Coder optimized both coding proficiency and general abilities, while post-training employs large-scale, execution-driven reinforcement learning, scaling test-case generation for diverse coding challenges, and long-horizon RL across 20,000 parallel environments to excel on multi-turn software-engineering benchmarks like SWE-Bench Verified without test-time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini CLI) brings Qwen3-Coder into agentic workflows with customized prompts, function-calling protocols, and seamless integration with Node.js, OpenAI SDKs, and environment variables.
  • 6
    Smaug-72B
Smaug-72B is a powerful open-source large language model (LLM) known for several key features. High performance: it currently holds the top spot on the Hugging Face Open LLM leaderboard, surpassing models like GPT-3.5 on various benchmarks, which means it excels at understanding, responding to, and generating human-like text. Open source: unlike many other advanced LLMs, Smaug-72B is freely available for anyone to use and modify, fostering collaboration and innovation in the AI community. Focus on reasoning and math: it particularly shines at reasoning and mathematical tasks, a strength attributed to unique fine-tuning techniques developed by Abacus AI, the creators of Smaug-72B. Based on Qwen-72B: it is technically a fine-tuned version of Qwen-72B, another powerful LLM released by Alibaba, further improving upon its capabilities. Overall, Smaug-72B represents a significant step forward in open-source AI.
  • 7
    Tülu 3
    Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
  • 8
    Qwen2

    Alibaba

Qwen2 is a series of large language models developed by the Qwen team at Alibaba Cloud. It includes both base language models and instruction-tuned models, ranging from 0.5 billion to 72 billion parameters, and features both dense models and a Mixture-of-Experts model. The Qwen2 series is designed to surpass most previous open-weight models, including its predecessor Qwen1.5, and to compete with proprietary models across a broad spectrum of benchmarks in language understanding, generation, multilingual capabilities, coding, mathematics, and reasoning.
  • 9
    Solar Pro 2

    Upstage AI

    Solar Pro 2 is Upstage’s latest frontier‑scale large language model, designed to power complex tasks and agent‑like workflows across domains such as finance, healthcare, and legal. Packaged in a compact 31 billion‑parameter architecture, it delivers top‑tier multilingual performance, especially in Korean, where it outperforms much larger models on benchmarks like Ko‑MMLU, Hae‑Rae, and Ko‑IFEval, while also excelling in English and Japanese. Beyond superior language understanding and generation, Solar Pro 2 offers next‑level intelligence through an advanced Reasoning Mode that significantly boosts multi‑step task accuracy on challenges ranging from general reasoning (MMLU, MMLU‑Pro, HumanEval) to complex mathematics (Math500, AIME) and software engineering (SWE‑Bench Agentless), achieving problem‑solving efficiency comparable to or exceeding that of models twice its size. Enhanced tool‑use capabilities enable the model to interact seamlessly with external APIs and data sources.
    Starting Price: $0.1 per 1M tokens
  • 10
    GigaChat 3 Ultra
    GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original—giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.
  • 11
    Qwen2.5-VL-32B
    Qwen2.5-VL-32B is a state-of-the-art AI model designed for multimodal tasks, offering advanced capabilities in both text and image reasoning. It builds upon the earlier Qwen2.5-VL series, improving response quality with more human-like, formatted answers. The model excels in mathematical reasoning, fine-grained image understanding, and complex, multi-step reasoning tasks, such as those found in MathVista and MMMU benchmarks. Its superior performance has been demonstrated in comparison to other models, outperforming the larger Qwen2-VL-72B in certain areas. With improved image parsing and visual logic deduction, Qwen2.5-VL-32B provides a detailed, accurate analysis of images and can generate responses based on complex visual inputs. It has been optimized for both text and image tasks, making it ideal for applications requiring sophisticated reasoning and understanding across different media.
  • 12
    Athenic AI

    Athenic AI

Discover the nuance behind the trends by following a guided journey of data analytics questions to understand what's truly going on. Empower your stakeholders with self-service data analytics: give them the ability to access and analyze the data they need, when they need it. Increase efficiency, reduce dependency on IT, and make data-driven decisions faster with a self-service analytics solution. Athenic AI connects to your data stored in a database, data warehouse, or an application like a CRM or ERP platform to answer your questions without SQL knowledge or a business analyst. We built Athenic to interpret natural language and translate it into SQL queries, and we have even added the option to give the AI additional context in natural language.
  • 13
    AAAFx

    AAAFx

    AAAFx, or Triple A FX, is a regulated broker that offers a platform for trading forex and CFDs. The company has been in business since 2007 and is licensed and regulated in Europe and South Africa. AAAFx is headquartered in Athens, Greece and serves 176 countries.
  • 14
    Qwen3-Max

    Alibaba

Qwen3-Max is Alibaba’s latest trillion-parameter large language model, designed to push performance in agentic tasks, coding, reasoning, and long-context processing. It is built atop the Qwen3 family and benefits from the architectural, training, and inference advances introduced there: thinking and non-thinking modes, a “thinking budget” mechanism, and support for dynamic mode switching based on task complexity. The model reportedly processes extremely long inputs (hundreds of thousands of tokens), supports tool invocation, and performs strongly on coding, multi-step reasoning, and agent benchmarks (e.g., Tau2-Bench). While its initial variant emphasizes instruction following (non-thinking mode), Alibaba plans to bring reasoning capabilities online to enable autonomous agent behavior. Qwen3-Max inherits multilingual support and extensive pretraining on trillions of tokens, and it is delivered via API interfaces compatible with OpenAI-style functions.
  • 15
    Pixtral Large

    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
  • 16
    Command A Translate
    Command A Translate is Cohere’s enterprise-grade machine translation model crafted to deliver secure, high-quality translation across 23 business-relevant languages. Built on a powerful 111-billion-parameter architecture with an 8K-input / 8K-output context window, it achieves industry-leading performance that surpasses models like GPT-5, DeepSeek-V3, DeepL Pro, and Google Translate across a broad suite of benchmarks. The model supports private deployments for sensitive workflows, allowing enterprises full control over their data, and introduces an innovative “Deep Translation” workflow, an agentic, multi-step refinement process that iteratively enhances translation quality for complex use cases. External validation from RWS Group confirms its excellence in challenging translation tasks. Additionally, the model’s weights are available for research via Hugging Face under a CC-BY-NC license, enabling deep customization, fine-tuning, and private deployment flexibility.
  • 17
    Qwen-7B

    Alibaba

Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, code, and more. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. Features of the Qwen-7B series include: high-quality pretraining data, with Qwen-7B pretrained on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens covering plain text and code across a wide range of general and professional domains; and strong performance, outperforming competitors of similar model size on a series of benchmark datasets that evaluate natural language understanding, mathematics, coding, and more.
  • 18
    CodeQwen

    Alibaba

CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a Transformer-based, decoder-only language model pretrained on a large amount of code data. It offers strong code generation capabilities, competitive performance across a series of benchmarks, and long-context understanding and generation with a context length of 64K tokens. CodeQwen supports 92 coding languages and provides excellent performance in text-to-SQL, bug fixing, and more. You can chat with CodeQwen using just a few lines of transformers code: build the tokenizer and model from the pretrained checkpoint, then call the generate method, relying on the chat template provided by the tokenizer. Chat models follow the ChatML template, as in previous Qwen releases. The model completes code snippets according to the given prompts, without any additional formatting.
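The chat flow described above can be sketched in a few lines. This is a minimal illustration only: the checkpoint name `Qwen/CodeQwen1.5-7B-Chat` is an assumption (verify the exact model id on Hugging Face), and the helper below reproduces the ChatML prompt format that the tokenizer's chat template applies, so the prompt structure can be inspected without downloading the model.

```python
# Sketch of CodeQwen's ChatML prompt construction. In practice,
# tokenizer.apply_chat_template(messages, add_generation_prompt=True)
# from transformers builds this string, and model.generate() completes it.
# The checkpoint name "Qwen/CodeQwen1.5-7B-Chat" is assumed for illustration.

def to_chatml(messages):
    """Render a list of {role, content} dicts in the ChatML format,
    ending with the assistant header so generation starts there."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
])
print(prompt.splitlines()[0])  # → <|im_start|>system
```

With the real model, the same `messages` list would be passed to `tokenizer.apply_chat_template`, tokenized, and fed to `model.generate`; the helper exists only to show the wire format the template produces.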
  • 19
    GLM-4.7

    Zhipu AI

    GLM-4.7 is an advanced large language model designed to significantly elevate coding, reasoning, and agentic task performance. It delivers major improvements over GLM-4.6 in multilingual coding, terminal-based tasks, and real-world software engineering benchmarks such as SWE-bench and Terminal Bench. GLM-4.7 supports “thinking before acting,” enabling more stable, accurate, and controllable behavior in complex coding and agent workflows. The model also introduces strong gains in UI and frontend generation, producing cleaner webpages, better layouts, and more polished slides. Enhanced tool-using capabilities allow GLM-4.7 to perform more effectively in web browsing, automation, and agent benchmarks. Its reasoning and mathematical performance has improved substantially, showing strong results on advanced evaluation suites. GLM-4.7 is available via Z.ai, API platforms, coding agents, and local deployment for flexible adoption.
  • 20
    FLUX.1 Krea
    FLUX.1 Krea is an open source, guidance-distilled 12 billion-parameter diffusion transformer released by Krea in collaboration with Black Forest Labs, engineered to deliver superior aesthetic control and photorealism while eschewing the generic “AI look.” Fully compatible with the FLUX.1-dev ecosystem, it starts from a raw, untainted base model (flux-dev-raw) rich in world knowledge and employs a two-phase post-training pipeline, supervised fine-tuning on a hand-curated mix of high-quality and synthetic samples, followed by reinforcement learning from human feedback using opinionated preference data, to bias outputs toward a distinct style. By leveraging negative prompts during pre-training, custom loss functions for classifier-free guidance, and targeted preference labels, it achieves significant quality improvements with under one million examples, all without extensive prompting or additional LoRA modules.
  • 21
    GLM-5

    Zhipu AI

    GLM-5 is Z.ai’s latest large language model built for complex systems engineering and long-horizon agentic tasks. It scales significantly beyond GLM-4.5, increasing total parameters and training data while integrating DeepSeek Sparse Attention to reduce deployment costs without sacrificing long-context capacity. The model combines enhanced pre-training with a new asynchronous reinforcement learning infrastructure called slime, improving training efficiency and post-training refinement. GLM-5 achieves best-in-class performance among open-source models across reasoning, coding, and agent benchmarks, narrowing the gap with leading frontier models. It ranks highly on evaluations such as Vending Bench 2, demonstrating strong long-term planning and operational capabilities. The model is open-sourced under the MIT License.
  • 22
    Qwen2-VL

    Alibaba

Qwen2-VL is the latest version of the vision-language models based on Qwen2 in the Qwen model family. Compared with Qwen-VL, Qwen2-VL offers: state-of-the-art understanding of images of various resolutions and aspect ratios, achieving top performance on visual understanding benchmarks including MathVista, DocVQA, RealWorldQA, and MTVQA; understanding of videos over 20 minutes long for high-quality video-based question answering, dialog, and content creation; agentic operation of devices such as mobile phones and robots, where its complex reasoning and decision-making abilities enable automatic operation based on the visual environment and text instructions; and multilingual support, so that beyond English and Chinese, Qwen2-VL understands text in many other languages inside images.
  • 23
    Qwen Code
Qwen3-Coder is an agentic code model available in multiple sizes, led by the 480B-parameter Mixture-of-Experts variant (35B active) that natively supports 256K-token contexts (extendable to 1M) and achieves state-of-the-art results on Agentic Coding, Browser-Use, and Tool-Use tasks comparable to Claude Sonnet 4. Pre-training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5-Coder optimized both coding proficiency and general abilities, while post-training employs large-scale, execution-driven reinforcement learning and long-horizon RL across 20,000 parallel environments to excel on multi-turn software-engineering benchmarks like SWE-Bench Verified without test-time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini CLI) brings Qwen3-Coder into agentic workflows with customized prompts, function-calling protocols, and seamless integration with Node.js, OpenAI SDKs, and more.
  • 24
    DeepSeek-V3.2
DeepSeek-V3.2 is a next-generation open large language model designed for efficient reasoning, complex problem solving, and advanced agentic behavior. It introduces DeepSeek Sparse Attention (DSA), a long-context attention mechanism that dramatically reduces computation while preserving performance. The model is trained with a scalable reinforcement learning framework, achieving results competitive with GPT-5; its Speciale variant even surpasses it. DeepSeek-V3.2 also includes a large-scale agent task synthesis pipeline that generates structured reasoning and tool-use demonstrations for post-training. The model features an updated chat template with new tool-calling logic and an optional developer role for agent workflows. With gold-medal performance in the IMO 2025 and IOI 2025 competitions, DeepSeek-V3.2 demonstrates elite reasoning capabilities for both research and applied AI scenarios.
  • 25
    QwQ-Max-Preview
    QwQ-Max-Preview is an advanced AI model built on the Qwen2.5-Max architecture, designed to excel in deep reasoning, mathematical problem-solving, coding, and agent-related tasks. This preview version offers a sneak peek at its capabilities, which include improved performance in a wide range of general-domain tasks and the ability to handle complex workflows. QwQ-Max-Preview is slated for an official open-source release under the Apache 2.0 license, offering further advancements and refinements in its full version. It also paves the way for a more accessible AI ecosystem, with the upcoming launch of the Qwen Chat app and smaller variants of the model like QwQ-32B, aimed at developers seeking local deployment options.
  • 26
    Sky-T1

    NovaSky

    Sky-T1-32B-Preview is an open source reasoning model developed by the NovaSky team at UC Berkeley's Sky Computing Lab. It matches the performance of proprietary models like o1-preview on reasoning and coding benchmarks, yet was trained for under $450, showcasing the feasibility of cost-effective, high-level reasoning capabilities. The model was fine-tuned from Qwen2.5-32B-Instruct using a curated dataset of 17,000 examples across diverse domains, including math and coding. The training was completed in 19 hours on eight H100 GPUs with DeepSpeed Zero-3 offloading. All aspects of the project, including data, code, and model weights, are fully open-source, empowering the academic and open-source communities to replicate and enhance the model's performance.
  • 27
    ERNIE X1.1
    ERNIE X1.1 is Baidu’s upgraded reasoning model that delivers major improvements over its predecessor. It achieves 34.8% higher factual accuracy, 12.5% better instruction following, and 9.6% stronger agentic capabilities compared to ERNIE X1. In benchmark testing, it surpasses DeepSeek R1-0528 and performs on par with GPT-5 and Gemini 2.5 Pro. Built on the foundation of ERNIE 4.5, it has been enhanced with extensive mid-training and post-training, including reinforcement learning. The model is available through ERNIE Bot, the Wenxiaoyan app, and Baidu’s Qianfan MaaS platform via API. These upgrades are designed to reduce hallucinations, improve reliability, and strengthen real-world AI task performance.
  • 28
    DeepSeek-V3.2-Speciale
    DeepSeek-V3.2-Speciale is a high-compute variant of the DeepSeek-V3.2 model, created specifically for deep reasoning and advanced problem-solving tasks. It builds on DeepSeek Sparse Attention (DSA), a custom long-context attention mechanism that reduces computational overhead while preserving high performance. Through a large-scale reinforcement learning framework and extensive post-training compute, the Speciale variant surpasses GPT-5 on reasoning benchmarks and matches the capabilities of Gemini-3.0-Pro. The model achieved gold-medal performance in the International Mathematical Olympiad (IMO) 2025 and International Olympiad in Informatics (IOI) 2025. DeepSeek-V3.2-Speciale does not support tool-calling, making it purely optimized for uninterrupted reasoning and analytical accuracy. Released under the MIT license, it provides researchers and developers an open, state-of-the-art model focused entirely on high-precision reasoning.
  • 29
    K2 Think

    Institute of Foundation Models

K2 Think is an open source advanced reasoning model developed collaboratively by the Institute of Foundation Models at MBZUAI and G42. Despite having only 32 billion parameters, it delivers performance comparable to flagship models with many more. It excels in mathematical reasoning, achieving top scores on competitive benchmarks such as AIME ’24/’25, HMMT ’25, and OMNI-Math-HARD. K2 Think is part of a suite of UAE-developed open models, alongside Jais (Arabic), NANDA (Hindi), and SHERKALA (Kazakh), and builds on the foundation laid by K2-65B, the fully reproducible open source foundation model released in 2024. The model is designed to be open, fast, and flexible, offering a web app interface for exploration; with its parameter efficiency, it is a breakthrough in compact architectures for advanced AI reasoning.
  • 30
    Kimi K2

    Moonshot AI

Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants: Kimi-K2-Base for research-level fine-tuning, and Kimi-K2-Instruct, post-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while offering a 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines.
  • 31
    Phi-2

    Microsoft

We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with fewer than 13 billion parameters. On complex benchmarks, Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
  • 32
    Devstral Small 2
    Devstral Small 2 is the compact, 24 billion-parameter variant of the new coding-focused model family from Mistral AI, released under the permissive Apache 2.0 license to enable both local deployment and API use. Alongside its larger sibling (Devstral 2), this model brings “agentic coding” capabilities to environments with modest compute: it supports a large 256K-token context window, enabling it to understand and make changes across entire codebases. On the standard code-generation benchmark (SWE-Bench Verified), Devstral Small 2 scores around 68.0%, placing it among open-weight models many times its size. Because of its reduced size and efficient design, Devstral Small 2 can run on a single GPU or even CPU-only setups, making it practical for developers, small teams, or hobbyists without access to data-center hardware. Despite its compact footprint, Devstral Small 2 retains key capabilities of larger models; it can reason across multiple files and track dependencies.
  • 33
    Molmo 2
Molmo 2 is a new suite of state-of-the-art open vision-language models with fully open weights, training data, and training code. It extends the original Molmo family’s grounded image understanding to video and multi-image inputs, enabling advanced video understanding, pointing, tracking, dense captioning, and question-answering capabilities, all with strong spatial and temporal reasoning across frames. Molmo 2 includes three variants: an 8 billion-parameter model optimized for overall video grounding and QA, a 4 billion-parameter version designed for efficiency, and a 7 billion-parameter Olmo-backed model offering a fully open end-to-end architecture, including the underlying language model. These models outperform earlier Molmo versions on core benchmarks and set new open-model high-water marks for image and video understanding tasks, often competing with substantially larger proprietary systems while training on a fraction of the data used by comparable closed models.
  • 34
    Falcon Mamba 7B

    Technology Innovation Institute (TII)

Falcon Mamba 7B is the first open-source State Space Language Model (SSLM), introducing a groundbreaking architecture for Falcon models. Recognized as the top-performing open-source SSLM worldwide by Hugging Face, it sets a new benchmark in AI efficiency. Unlike traditional transformers, SSLMs operate with minimal memory requirements and can generate extended text sequences without additional overhead. Falcon Mamba 7B surpasses leading transformer-based models, including Meta’s Llama 3.1 8B and Mistral AI’s Mistral 7B, showcasing superior performance. This innovation underscores Abu Dhabi’s commitment to advancing AI research and development on a global scale.
  • 35
    Qwen

    Alibaba

    Qwen is a powerful, free AI assistant built on the advanced Qwen model series, designed to help anyone with creativity, research, problem-solving, and everyday tasks. While Qwen Chat is the main interface for most users, Qwen itself powers a broad range of intelligent capabilities including image generation, deep research, website creation, advanced reasoning, and context-aware search. Its multimodal intelligence enables Qwen to understand and process text, images, audio, and video simultaneously for richer insights. Qwen is available on web, desktop, and mobile, ensuring seamless access across all devices. For developers, the Qwen API provides OpenAI-compatible endpoints, making integration simple and allowing Qwen’s intelligence to power apps, services, and automation. Whether you're chatting through Qwen Chat or building with the Qwen API, Qwen delivers fast, flexible, and highly capable AI support.
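Since the Qwen API is described as OpenAI-compatible, a request can be sketched using the standard chat-completions payload shape. This is a sketch under assumptions: the model id "qwen-plus" and the endpoint path are illustrative placeholders, not documented values; consult the official Qwen API documentation for real ids and base URLs.

```python
import json

# Sketch of an OpenAI-style chat-completions request body for an
# OpenAI-compatible endpoint such as the Qwen API. "qwen-plus" and the
# endpoint path in the comment below are assumed placeholder values.

def build_chat_request(model, messages, temperature=0.7):
    """Assemble the JSON body expected by an OpenAI-compatible
    /chat/completions endpoint."""
    return {"model": model, "messages": messages, "temperature": temperature}

body = build_chat_request(
    "qwen-plus",  # assumed model id
    [{"role": "user", "content": "Summarize the Qwen model family."}],
)
payload = json.dumps(body)  # POST this to <base_url>/chat/completions
```

Because the endpoints follow the OpenAI schema, the official `openai` Python SDK can issue the same request by pointing its `base_url` at the Qwen endpoint and calling `client.chat.completions.create(...)` with the identical `model` and `messages` arguments.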
  • 36
    QwQ-32B

    Alibaba

QwQ-32B is an advanced reasoning model developed by Alibaba Cloud's Qwen team, designed to enhance AI's problem-solving capabilities. With 32 billion parameters, it achieves performance comparable to state-of-the-art models like DeepSeek's R1, which has 671 billion parameters. This efficiency is achieved through optimized parameter utilization, allowing QwQ-32B to perform complex tasks such as mathematical reasoning, coding, and general problem-solving with fewer resources. The model supports a context length of up to 32,000 tokens, enabling it to process extensive input data effectively. QwQ-32B is accessible via Alibaba's chatbot service, Qwen Chat, and is open sourced under the Apache 2.0 license, promoting collaboration and further development within the AI community.
  • 37
    Yi-Lightning

    Yi-Lightning

    Yi-Lightning

    Yi-Lightning, developed by 01.AI under the leadership of Kai-Fu Lee, represents the latest advancement in large language models with a focus on high performance and cost-efficiency. It offers a maximum context length of 16K tokens and is priced at $0.14 per million tokens for both input and output, making it remarkably competitive. Yi-Lightning leverages an enhanced Mixture-of-Experts (MoE) architecture, incorporating fine-grained expert segmentation and advanced routing strategies, which contribute to its efficiency in training and inference. The model has excelled across domains, achieving top rankings in categories such as Chinese, math, coding, and hard prompts on Chatbot Arena, where it placed 6th overall and 9th with style control. Its development included comprehensive pre-training, supervised fine-tuning, and reinforcement learning from human feedback, ensuring both performance and safety, with optimizations in memory usage and inference speed.
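Because the quoted rate is a flat $0.14 per million tokens for both input and output, estimating the cost of a workload is simple arithmetic; a sketch (actual billing terms may differ):

```python
# Cost estimate at Yi-Lightning's quoted flat rate of $0.14 per
# million tokens, applied to both input and output tokens.
RATE_USD_PER_MILLION = 0.14

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Total request cost in USD; input and output are priced alike."""
    return (input_tokens + output_tokens) / 1_000_000 * RATE_USD_PER_MILLION

# A 10K-token prompt with a 2K-token completion:
print(f"${estimate_cost(10_000, 2_000):.5f}")  # → $0.00168
```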
  • 38
    Phi-4-mini-reasoning
    Phi-4-mini-reasoning is a 3.8-billion parameter transformer-based language model optimized for mathematical reasoning and step-by-step problem solving in environments with constrained computing or latency. Fine-tuned with synthetic data generated by the DeepSeek-R1 model, it balances efficiency with advanced reasoning ability. Trained on over one million diverse math problems spanning multiple levels of difficulty from middle school to Ph.D. level, Phi-4-mini-reasoning outperforms its base model on long sentence generation across various evaluations and surpasses larger models like OpenThinker-7B, Llama-3.2-3B-instruct, and DeepSeek-R1. It features a 128K-token context window and supports function calling, enabling integration with external tools and APIs. Phi-4-mini-reasoning can be quantized using Microsoft Olive or Apple MLX Framework for deployment on edge devices such as IoT, laptops, and mobile devices.
  • 39
    DeepSeekMath
    DeepSeekMath is a specialized 7B parameter language model developed by DeepSeek-AI, designed to push the boundaries of mathematical reasoning in open-source language models. It starts from the DeepSeek-Coder-v1.5 7B model and undergoes further pre-training with 120B math-related tokens sourced from Common Crawl, alongside natural language and code data. DeepSeekMath has demonstrated remarkable performance, achieving a 51.7% score on the competition-level MATH benchmark without external tools or voting techniques, closely competing with the likes of Gemini-Ultra and GPT-4. The model's capabilities are enhanced by a meticulous data selection pipeline and the introduction of Group Relative Policy Optimization (GRPO), which optimizes both mathematical reasoning and memory usage. DeepSeekMath is available in base, instruct, and RL versions, supporting both research and commercial use, and is aimed at those looking to explore or apply advanced mathematical problem-solving in AI contexts.
  • 40
    gpt-realtime
    gpt-realtime is OpenAI’s most advanced, production-ready speech-to-speech model, now generally available through the Realtime API. It delivers remarkably natural, expressive audio with fine-grained control over tone, pace, and accent. The model can comprehend nuanced human audio, including laughter, switch languages mid-sentence, and accurately process alphanumeric details like phone numbers across multiple languages. It significantly improves reasoning and instruction-following (achieving 82.8% on the BigBench Audio benchmark and 30.5% on MultiChallenge) and boasts enhanced function calling that is more reliable, timely, and accurate (scoring 66.5% on ComplexFuncBench). The model supports asynchronous tool invocation so conversations remain fluid even during long-running calls. The Realtime API also offers innovative capabilities such as image input support, SIP phone network integration, remote MCP server connection, and reusable conversation prompts.
    Starting Price: $20 per month
  • 41
    Qwen3.5

    Qwen3.5

    Alibaba

    Qwen3.5 is a next-generation open-weight multimodal large language model designed to power native vision-language agents. The flagship release, Qwen3.5-397B-A17B, combines a hybrid linear attention architecture with sparse mixture-of-experts, activating only 17 billion parameters per forward pass out of 397 billion total to maximize efficiency. It delivers strong benchmark performance across reasoning, coding, multilingual understanding, visual reasoning, and agent-based tasks. The model expands language support from 119 to 201 languages and dialects while introducing a 1M-token context window in its hosted version, Qwen3.5-Plus. Built for multimodal tasks, it processes text, images, and video with advanced spatial reasoning and tool integration. Qwen3.5 also incorporates scalable reinforcement learning environments to improve general agent capabilities. Designed for developers and enterprises, it enables efficient, tool-augmented, multimodal AI workflows.
  • 42
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models, ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI: companies that have given early feedback and are excited to build with Llama 2.
  • 43
    Phi-4-reasoning-plus
    Phi-4-reasoning-plus is a 14-billion parameter open-weight reasoning model that builds upon Phi-4-reasoning's capabilities. It is further trained with reinforcement learning to utilize more inference-time compute, using 1.5x more tokens than Phi-4-reasoning, to deliver higher accuracy. Despite its significantly smaller size, Phi-4-reasoning-plus achieves better performance than OpenAI o1-mini and DeepSeek-R1 on most benchmarks, including mathematical reasoning and Ph.D.-level science questions. It surpasses the full DeepSeek-R1 model (with 671 billion parameters) on the AIME 2025 test, the 2025 qualifier for the USA Math Olympiad. Phi-4-reasoning-plus is available on Azure AI Foundry and Hugging Face.
  • 44
    Ferret

    Ferret

    Apple

    An end-to-end MLLM that accepts any-form referring and grounds anything in response. Ferret Model - Hybrid Region Representation + Spatial-aware Visual Sampler enable fine-grained and open-vocabulary referring and grounding in MLLM. GRIT Dataset (~1.1M) - a large-scale, hierarchical, robust ground-and-refer instruction tuning dataset. Ferret-Bench - a multimodal evaluation benchmark that jointly requires referring/grounding, semantics, knowledge, and reasoning.
  • 45
    Devstral

    Devstral

    Mistral AI

    Devstral is an open source, agentic large language model (LLM) developed by Mistral AI in collaboration with All Hands AI, specifically designed for software engineering tasks. It excels at navigating complex codebases, editing multiple files, and resolving real-world issues, outperforming all open source models on the SWE-Bench Verified benchmark with a score of 46.8%. Devstral is fine-tuned from Mistral-Small-3.1 and features a long context window of up to 128,000 tokens. It is optimized for local deployment on high-end hardware, such as a Mac with 32GB RAM or an Nvidia RTX 4090 GPU, and is compatible with inference frameworks like vLLM, Transformers, and Ollama. Released under the Apache 2.0 license, Devstral is available for free and can be accessed via Hugging Face, Ollama, Kaggle, Unsloth, and LM Studio.
    Starting Price: $0.1 per million input tokens
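Since Devstral is compatible with Ollama, one quick way to try it locally is through Ollama's OpenAI-compatible HTTP server, which by default listens at http://localhost:11434/v1. A sketch under those assumptions; the "devstral" model tag is illustrative, so verify the exact name with `ollama list`:

```python
# Parsing helper for an OpenAI-style chat response, plus a sketch of
# querying Devstral through Ollama's local OpenAI-compatible server.
# The default endpoint and the "devstral" model tag are assumptions.
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def extract_reply(completion: dict) -> str:
    """Pull the assistant text out of an OpenAI-style chat completion."""
    return completion["choices"][0]["message"]["content"]

# Offline check of the parsing logic:
sample = {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}
print(extract_reply(sample))  # → ok

# Live call (requires `ollama pull devstral` and the openai package):
#   from openai import OpenAI
#   client = OpenAI(base_url=OLLAMA_BASE_URL, api_key="ollama")
#   resp = client.chat.completions.create(
#       model="devstral",
#       messages=[{"role": "user", "content": "Explain this stack trace"}])
#   print(extract_reply(resp.model_dump()))
```

The same client setup works for the other inference frameworks named above that expose an OpenAI-style endpoint, with only the base URL changed.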
  • 46
    DeepSeek-V4

    DeepSeek-V4

    DeepSeek

    DeepSeek-V4 is a next-generation open large language model built for efficient reasoning, complex problem solving, and advanced agentic behavior. It introduces DeepSeek Sparse Attention (DSA), a long-context attention mechanism that significantly reduces computational overhead while maintaining strong performance. The model is trained using a scalable reinforcement learning framework to achieve results competitive with leading frontier models. It also incorporates a large-scale agent task synthesis pipeline to generate structured reasoning and tool-use demonstrations during post-training. An updated chat template includes enhanced tool-calling logic and an optional developer role to support agent workflows. DeepSeek-V4 delivers elite reasoning performance across both research and applied AI use cases.
  • 47
    DeepCoder

    DeepCoder

    Agentica Project

    DeepCoder is a fully open source code-reasoning and generation model released by Agentica Project in collaboration with Together AI. It is fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning, achieving 60.6% accuracy on LiveCodeBench (an 8% improvement over the base model), a performance level that matches proprietary models such as o3-mini (2025-01-31, low) and o1 while using only 14 billion parameters. It was trained over 2.5 weeks on 32 H100 GPUs with a curated dataset of roughly 24,000 coding problems drawn from verified sources (including TACO-Verified, PrimeIntellect SYNTHETIC-1, and LiveCodeBench submissions), each problem requiring a verifiable solution and at least five unit tests to ensure reliability for RL training. To handle long-range context, DeepCoder employs techniques such as iterative context lengthening and overlong filtering.
  • 48
    Olmo 3
    Olmo 3 is a fully open model family spanning 7-billion and 32-billion parameter variants that delivers not only high-performing base, reasoning, instruction, and reinforcement-learning models, but also full exposure of the model flow, including raw training data, intermediate checkpoints, training code, long-context support (a 65,536-token window), and provenance tooling. Starting with the Dolma 3 dataset (≈9 trillion tokens) and its disciplined mix of web text, scientific PDFs, code, and long-form documents, the pre-training, mid-training, and long-context phases shape the base models, which are then post-trained via supervised fine-tuning, direct preference optimization, and RL with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is described as the strongest fully open reasoning model to date, competitively close to closed-weight peers in math, code, and complex reasoning.
  • 49
    Qwen-Image-2.0
    Qwen-Image 2.0 is the latest AI image generation and editing model in the Qwen family, combining generation and editing in a single unified architecture and delivering high-quality visuals with professional-grade typography and layout capabilities directly from natural-language prompts. It supports text-to-image and image-editing workflows with a lightweight 7-billion-parameter model that runs quickly, produces native 2048x2048 resolution outputs, and handles long, detailed instructions of up to about 1,000 tokens, so creators can generate complex infographics, posters, slides, comics, and photorealistic scenes with accurate, well-rendered text in English and other languages embedded in the visuals. The unified model design means users don’t need separate tools for creating and modifying images, making it easier to iterate on ideas and refine compositions.
  • 50
    DeepSeek R1

    DeepSeek R1

    DeepSeek

    DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API, it excels at complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture-of-experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
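The sparsity figures above imply that only a small fraction of R1's weights participate in any single forward pass; a quick illustrative check using only the numbers quoted in this entry:

```python
# DeepSeek-R1's MoE routing activates 37B of its 671B total parameters
# per token; compute the active fraction those figures imply.
TOTAL_PARAMS = 671e9
ACTIVE_PARAMS = 37e9

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"{active_fraction:.1%} of parameters active per token")  # → 5.5%
```

This per-token sparsity is what lets an MoE model of this scale keep inference cost closer to that of a much smaller dense model.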