Alternatives to Xgen-small
Compare Xgen-small alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Xgen-small in 2025. Compare features, ratings, user reviews, pricing, and more from Xgen-small competitors and alternatives in order to make an informed decision for your business.
1
Phi-4-mini-flash-reasoning
Microsoft
Phi-4-mini-flash-reasoning is a 3.8 billion‑parameter open model in Microsoft’s Phi family, purpose‑built for edge, mobile, and other resource‑constrained environments where compute, memory, and latency are tightly limited. It introduces the SambaY decoder‑hybrid‑decoder architecture with Gated Memory Units (GMUs) interleaved alongside Mamba state‑space and sliding‑window attention layers, delivering up to 10× higher throughput and a 2–3× reduction in latency compared to its predecessor without sacrificing advanced math and logic reasoning performance. Supporting a 64K‑token context length and fine‑tuned on high‑quality synthetic data, it excels at long‑context retrieval, reasoning tasks, and real‑time inference, all deployable on a single GPU. Phi-4-mini-flash-reasoning is available today via Azure AI Foundry, NVIDIA API Catalog, and Hugging Face, enabling developers to build fast, scalable, logic‑intensive applications.
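Since the model is distributed via Hugging Face, here is a minimal loading sketch with transformers; the `microsoft/Phi-4-mini-flash-reasoning` checkpoint ID is taken from the public listing, and `trust_remote_code=True` is an assumption since the hybrid SambaY architecture likely ships custom modeling code.

```python
# Minimal sketch: loading Phi-4-mini-flash-reasoning from Hugging Face.
# Checkpoint ID assumed from the model's public listing; trust_remote_code
# is assumed necessary for the custom SambaY hybrid architecture.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-flash-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Solve: if 3x + 5 = 20, what is x? Think step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```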
2
GPT-4.1 mini
OpenAI
GPT-4.1 mini is a compact version of OpenAI’s powerful GPT-4.1 model, designed to provide high performance while significantly reducing latency and cost. With a smaller size and optimized architecture, GPT-4.1 mini still delivers impressive results in tasks such as coding, instruction following, and long-context processing. It supports up to 1 million tokens of context, making it an efficient solution for applications that require fast responses without sacrificing accuracy or depth.
Starting Price: $0.40 per 1M tokens (input)
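A minimal call sketch through the OpenAI Python SDK; `gpt-4.1-mini` is the model name as OpenAI lists it, and an `OPENAI_API_KEY` environment variable is assumed.

```python
# Minimal sketch: calling GPT-4.1 mini through the OpenAI API.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of long-context models in three bullets."}
    ],
)
print(response.choices[0].message.content)
```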
3
Yi-Lightning
Yi-Lightning
Yi-Lightning, developed by 01.AI under the leadership of Kai-Fu Lee, represents the latest advancement in large language models with a focus on high performance and cost-efficiency. It boasts a maximum context length of 16K tokens and is priced at $0.14 per million tokens for both input and output, making it remarkably competitive. Yi-Lightning leverages an enhanced Mixture-of-Experts (MoE) architecture, incorporating fine-grained expert segmentation and advanced routing strategies, which contribute to its efficiency in training and inference. This model has excelled in various domains, achieving top rankings in categories like Chinese, math, coding, and hard prompts on the chatbot arena, where it secured the 6th position overall and 9th in style control. Its development included comprehensive pre-training, supervised fine-tuning, and reinforcement learning from human feedback, ensuring both performance and safety, with optimizations in memory usage and inference speed.
4
Mistral NeMo
Mistral AI
Mistral NeMo, our new best small model. A state-of-the-art 12B model with 128k context length, and released under the Apache 2.0 license. Mistral NeMo is a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications. It is trained on function calling and has a large context window. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
Starting Price: Free
5
Kimi K2
Moonshot AI
Kimi K2 is a state-of-the-art open source large language model series built on a mixture-of-experts (MoE) architecture, featuring 1 trillion total parameters and 32 billion activated parameters for task-specific efficiency. Trained with the Muon optimizer on over 15.5 trillion tokens and stabilized by MuonClip’s attention-logit clamping, it delivers exceptional performance in frontier knowledge, reasoning, mathematics, coding, and general agentic workflows. Moonshot AI provides two variants, Kimi-K2-Base for research-level fine-tuning and Kimi-K2-Instruct pre-trained for immediate chat and tool-driven interactions, enabling both custom development and drop-in agentic capabilities. Benchmarks show it outperforms leading open source peers and rivals top proprietary models in coding tasks and complex task breakdowns, while offering a 128K-token context length, tool-calling API compatibility, and support for industry-standard inference engines.
Starting Price: Free
6
Llama 4 Scout
Meta
Llama 4 Scout is a powerful 17 billion active parameter multimodal AI model that excels in both text and image processing. With an industry-leading context length of 10 million tokens, it outperforms its predecessors, including Llama 3, in tasks such as multi-document summarization and parsing large codebases. Llama 4 Scout is designed to handle complex reasoning tasks while maintaining high efficiency, making it perfect for use cases requiring long-context comprehension and image grounding. It offers cutting-edge performance in image-related tasks and is particularly well-suited for applications requiring both text and visual understanding.
Starting Price: Free
7
LongLLaMA
LongLLaMA
This repository contains the research preview of LongLLaMA, a large language model capable of handling long contexts of 256k tokens or even more. LongLLaMA is built upon the foundation of OpenLLaMA and fine-tuned using the Focused Transformer (FoT) method. LongLLaMA code is built upon the foundation of Code Llama. We release a smaller 3B base variant (not instruction tuned) of the LongLLaMA model on a permissive license (Apache 2.0) and inference code supporting longer contexts on Hugging Face. Our model weights can serve as a drop-in replacement for LLaMA in existing implementations (for short contexts up to 2048 tokens). Additionally, we provide evaluation results and comparisons against the original OpenLLaMA models.
Starting Price: Free
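A minimal loading sketch following the pattern the LongLLaMA release documents; the `syzymon/long_llama_3b` checkpoint ID and the need for `trust_remote_code` (the FoT attention lives in the repo's custom modeling code) are assumptions based on that release.

```python
# Minimal sketch: loading the 3B LongLLaMA base checkpoint.
# Checkpoint ID and trust_remote_code requirement are assumptions
# based on the LongLLaMA research-preview release.
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b",
    torch_dtype=torch.float32,
    trust_remote_code=True,  # FoT layers are defined in the repo's code
)

inputs = tokenizer("My favourite animal is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```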
8
Command A Reasoning
Cohere AI
Command A Reasoning is Cohere’s most advanced enterprise-ready language model, engineered for high-stakes reasoning tasks and seamless integration into AI agent workflows. The model delivers exceptional reasoning performance, efficiency, and controllability, scaling across multi-GPU setups with support for up to 256,000-token context windows, ideal for handling long documents and multi-step agentic tasks. Organizations can fine-tune output precision and latency through a token budget, allowing a single model to flexibly serve both high-accuracy and high-throughput use cases. It powers Cohere’s North platform with leading benchmark performance and excels in multilingual contexts across 23 languages. Designed with enterprise safety in mind, it balances helpfulness with robust safeguards against harmful outputs. A lightweight deployment option allows running the model securely on a single H100 or A100 GPU, simplifying private, scalable use.
9
Evo 2
Arc Institute
Evo 2 is a genomic foundation model capable of generalist prediction and design tasks across DNA, RNA, and proteins. It utilizes a frontier deep learning architecture to model biological sequences at single-nucleotide resolution, achieving near-linear scaling of compute and memory relative to context length. Trained with 40 billion parameters and a 1 megabase context length, Evo 2 processes over 9 trillion nucleotides from diverse eukaryotic and prokaryotic genomes. This extensive training enables Evo 2 to perform zero-shot function prediction across multiple biological modalities, including DNA, RNA, and proteins, and to generate novel sequences with plausible genomic architecture. The model's capabilities have been demonstrated in tasks such as designing functional CRISPR systems and predicting disease-causing mutations in human genes. Evo 2 is publicly accessible via Arc's GitHub repository and is integrated into the NVIDIA BioNeMo framework.
10
DeepSeek-V3.2-Exp
DeepSeek
Introducing DeepSeek-V3.2-Exp, our latest experimental model built on V3.1-Terminus, debuting DeepSeek Sparse Attention (DSA) for faster and more efficient inference and training on long contexts. DSA enables fine-grained sparse attention with minimal loss in output quality, boosting performance for long-context tasks while reducing compute costs. Benchmarks indicate that V3.2-Exp performs on par with V3.1-Terminus despite these efficiency gains. The model is now live across app, web, and API. Alongside this, the DeepSeek API prices have been cut by over 50% immediately to make access more affordable. For a transitional period, users can still access V3.1-Terminus via a temporary API endpoint until October 15, 2025. DeepSeek welcomes feedback on DSA via its feedback portal. In conjunction with the release, DeepSeek-V3.2-Exp has been open-sourced: the model weights and supporting technology (including key GPU kernels in TileLang and CUDA) are available on Hugging Face.
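The DeepSeek API is OpenAI-compatible, so a minimal call sketch looks like the following; the base URL and the `deepseek-chat` model alias are taken from DeepSeek's public API documentation, and a `DEEPSEEK_API_KEY` environment variable is assumed.

```python
# Minimal sketch: calling the DeepSeek API via its OpenAI-compatible endpoint.
# Base URL and model alias per DeepSeek's public docs; key assumed in env.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Explain sparse attention in two sentences."}],
)
print(response.choices[0].message.content)
```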
11
Jamba
AI21 Labs
Jamba is the most powerful & efficient long context model, open for builders and built for the enterprise. Jamba's latency outperforms all leading models of comparable sizes. Jamba's 256k context window is the longest openly available. Jamba's Mamba-Transformer MoE architecture is designed for cost & efficiency gains. Jamba offers key features out of the box, including function calls, JSON mode output, document objects, and citation mode. Jamba 1.5 models maintain high performance across the full length of their context window. Jamba 1.5 models achieve top scores across common quality benchmarks. Secure deployment that suits your enterprise. Seamlessly start using Jamba on our production-grade SaaS platform. The Jamba model family is available for deployment across our strategic partners. We offer VPC & on-prem deployments for enterprises that require custom solutions. For enterprises that have unique, bespoke requirements, we offer hands-on management, continuous pre-training, etc.
12
Reka Flash 3
Reka
Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
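A minimal sketch of the 4-bit loading path that gets the footprint down from ~39GB to ~11GB, using the standard transformers + bitsandbytes route; the `RekaAI/reka-flash-3` checkpoint ID is an assumption, and a CUDA GPU plus the bitsandbytes package are required.

```python
# Minimal sketch: loading Reka Flash 3 in 4-bit (~11GB) instead of fp16 (~39GB).
# Checkpoint ID is assumed; requires bitsandbytes and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "RekaAI/reka-flash-3"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16, store weights in 4-bit
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```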
13
DeepSeek-V2
DeepSeek
DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
Starting Price: Free
14
Yi-Large
01.AI
Yi-Large is a proprietary large language model developed by 01.AI, offering a 32k context length with both input and output costs at $2 per million tokens. It stands out with its advanced capabilities in natural language processing, common-sense reasoning, and multilingual support, performing on par with leading models like GPT-4 and Claude 3 in various benchmarks. Yi-Large is designed for tasks requiring complex inference, prediction, and language understanding, making it suitable for applications like knowledge search, data classification, and creating human-like chatbots. Its architecture is based on a decoder-only transformer with enhancements such as pre-normalization and Group Query Attention, and it has been trained on a vast, high-quality multilingual dataset. This model's versatility and cost-efficiency make it a strong contender in the AI market, particularly for enterprises aiming to deploy AI solutions globally.
Starting Price: $0.19 per 1M input tokens
15
GPT-4.1
OpenAI
GPT-4.1 is an advanced AI model from OpenAI, designed to enhance performance across key tasks such as coding, instruction following, and long-context comprehension. With a large context window of up to 1 million tokens, GPT-4.1 can process and understand extensive datasets, making it ideal for tasks like software development, document analysis, and AI agent workflows. Available through the API, GPT-4.1 offers significant improvements over previous models, excelling at real-world applications where efficiency and accuracy are crucial.
Starting Price: $2 per 1M tokens (input)
16
MiniMax-M1
MiniMax
MiniMax‑M1 is a large‑scale hybrid‑attention reasoning model released by MiniMax AI under the Apache 2.0 license. It supports an unprecedented 1 million‑token context window and up to 80,000-token outputs, enabling extended reasoning across long documents. Trained using large‑scale reinforcement learning with a novel CISPO algorithm, MiniMax‑M1 completed full training on 512 H800 GPUs in about three weeks. It achieves state‑of‑the‑art performance on benchmarks in mathematics, coding, software engineering, tool usage, and long‑context understanding, matching or outperforming leading models. Two model variants are available (40K and 80K thinking budgets), with weights and deployment scripts provided via GitHub and Hugging Face.
17
Llama 2
Meta
The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
Starting Price: Free
18
GPT-5
OpenAI
GPT-5 is OpenAI’s most advanced AI model, delivering smarter, faster, and more useful responses across a wide range of topics including math, science, finance, and law. It features built-in thinking capabilities that allow it to provide expert-level answers and perform complex reasoning. GPT-5 can handle long context lengths and generate detailed outputs, making it ideal for coding, research, and creative writing. The model includes a ‘verbosity’ parameter for customizable response length and improved personality control. It integrates with business tools like Google Drive and SharePoint to provide context-aware answers while respecting security permissions. Available to everyone, GPT-5 empowers users to collaborate with an AI assistant that feels like a knowledgeable colleague.
Starting Price: $1.25 per 1M tokens
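The description mentions a ‘verbosity’ parameter; a sketch of how that is exposed through OpenAI's Responses API follows, with the caveat that the exact `text={"verbosity": ...}` shape is an assumption based on OpenAI's GPT-5 release notes.

```python
# Sketch: requesting a terse answer from GPT-5 via the Responses API.
# The text={"verbosity": ...} parameter shape is assumed from OpenAI's
# GPT-5 announcement; check current API docs before relying on it.
from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-5",
    input="What is the Black-Scholes formula used for?",
    text={"verbosity": "low"},  # assumed values: "low" | "medium" | "high"
)
print(response.output_text)
```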
19
FLUX.1 Krea
Krea
FLUX.1 Krea is an open source, guidance-distilled 12 billion-parameter diffusion transformer released by Krea in collaboration with Black Forest Labs, engineered to deliver superior aesthetic control and photorealism while eschewing the generic “AI look.” Fully compatible with the FLUX.1-dev ecosystem, it starts from a raw, untainted base model (flux-dev-raw) rich in world knowledge and employs a two-phase post-training pipeline, supervised fine-tuning on a hand-curated mix of high-quality and synthetic samples, followed by reinforcement learning from human feedback using opinionated preference data, to bias outputs toward a distinct style. By leveraging negative prompts during pre-training, custom loss functions for classifier-free guidance, and targeted preference labels, it achieves significant quality improvements with under one million examples, all without extensive prompting or additional LoRA modules.
Starting Price: Free
20
Devstral
Mistral AI
Devstral is an open source, agentic large language model (LLM) developed by Mistral AI in collaboration with All Hands AI, specifically designed for software engineering tasks. It excels at navigating complex codebases, editing multiple files, and resolving real-world issues, outperforming all open source models on the SWE-Bench Verified benchmark with a score of 46.8%. Devstral is fine-tuned from Mistral-Small-3.1 and features a long context window of up to 128,000 tokens. It is optimized for local deployment on high-end hardware, such as a Mac with 32GB RAM or an Nvidia RTX 4090 GPU, and is compatible with inference frameworks like vLLM, Transformers, and Ollama. Released under the Apache 2.0 license, Devstral is available for free and can be accessed via Hugging Face, Ollama, Kaggle, Unsloth, and LM Studio.
Starting Price: $0.1 per million input tokens
21
Qwen3-Coder
Qwen
Qwen3‑Coder is an agentic code model available in multiple sizes, led by the 480B‑parameter Mixture‑of‑Experts variant (35B active) that natively supports 256K‑token contexts (extendable to 1M) and achieves state‑of‑the‑art results comparable to Claude Sonnet 4. Pre‑training on 7.5T tokens (70% code) and synthetic data cleaned via Qwen2.5‑Coder optimized both coding proficiency and general abilities, while post‑training employs large‑scale, execution‑driven reinforcement learning, scaling test‑case generation for diverse coding challenges, and long‑horizon RL across 20,000 parallel environments to excel on multi‑turn software‑engineering benchmarks like SWE‑Bench Verified without test‑time scaling. Alongside the model, the open source Qwen Code CLI (forked from Gemini Code) unleashes Qwen3‑Coder in agentic workflows with customized prompts, function calling protocols, and seamless integration with Node.js, OpenAI SDKs, and environment variables.
Starting Price: Free
22
NVIDIA NeMo Megatron
NVIDIA
NVIDIA NeMo Megatron is an end-to-end framework for training and deploying LLMs with billions and trillions of parameters. NVIDIA NeMo Megatron, part of the NVIDIA AI platform, offers an easy, efficient, and cost-effective containerized framework to build and deploy LLMs. Designed for enterprise application development, it builds upon the most advanced technologies from NVIDIA research and provides an end-to-end workflow for automated distributed data processing, training large-scale customized GPT-3, T5, and multilingual T5 (mT5) models, and deploying models for inference at scale. Harnessing the power of LLMs is made easy through validated and converged recipes with predefined configurations for training and inference. Customizing models is simplified by the hyperparameter tool, which automatically searches for the best hyperparameter configurations and performance for training and inference on any given distributed GPU cluster configuration.
23
Ministral 8B
Mistral AI
Mistral AI has introduced two advanced models for on-device computing and edge applications, named "les Ministraux": Ministral 3B and Ministral 8B. These models excel in knowledge, commonsense reasoning, function-calling, and efficiency within the sub-10B parameter range. They support up to 128k context length and are designed for various applications, including on-device translation, offline smart assistants, local analytics, and autonomous robotics. Ministral 8B features an interleaved sliding-window attention pattern for faster and more memory-efficient inference. Both models can function as intermediaries in multi-step agentic workflows, handling tasks like input parsing, task routing, and API calls based on user intent with low latency and cost. Benchmark evaluations indicate that les Ministraux consistently outperform comparable models across multiple tasks. As of October 16, 2024, both models are available, with Ministral 8B priced at $0.1 per million tokens.
Starting Price: Free
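A minimal sketch of the task-routing use case through Mistral's hosted API, using the mistralai v1 Python SDK; the `ministral-8b-latest` model alias is assumed from Mistral's model listing, and a `MISTRAL_API_KEY` environment variable is required.

```python
# Minimal sketch: calling Ministral 8B through Mistral's hosted API.
# Model alias assumed from Mistral's listing; uses the mistralai v1 SDK.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
response = client.chat.complete(
    model="ministral-8b-latest",
    messages=[
        {"role": "user", "content": "Classify this request as BILLING, SUPPORT, or SALES: 'cancel my order'"}
    ],
)
print(response.choices[0].message.content)
```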
24
Command A Translate
Cohere AI
Command A Translate is Cohere’s enterprise-grade machine translation model crafted to deliver secure, high-quality translation across 23 business-relevant languages. Built on a powerful 111-billion-parameter architecture with an 8K-input / 8K-output context window, it achieves industry-leading performance that surpasses models like GPT-5, DeepSeek-V3, DeepL Pro, and Google Translate across a broad suite of benchmarks. The model supports private deployments for sensitive workflows, allowing enterprises full control over their data, and introduces an innovative “Deep Translation” workflow, an agentic, multi-step refinement process that iteratively enhances translation quality for complex use cases. External validation from RWS Group confirms its excellence in challenging translation tasks. Additionally, the model’s weights are available for research via Hugging Face under a CC-BY-NC license, enabling deep customization, fine-tuning, and private deployment flexibility.
25
Syn
Upstage AI
Syn is a next‑generation Japanese large language model co‑developed by Upstage and Karakuri, featuring under 14 billion parameters and optimized for enterprise use in finance, manufacturing, legal, and healthcare. It delivers top‑tier benchmark performance on the Weights & Biases Nejumi Leaderboard, achieving industry‑leading scores for accuracy and alignment, while maintaining cost efficiency through a lightweight architecture derived from Solar Mini. Syn excels in Japanese “truthfulness” and safety, understanding nuanced expressions and industry‑specific terminology, and offers flexible fine‑tuning to integrate proprietary data and domain knowledge. Built for scalable deployment, it supports on‑premises, AWS Marketplace, and cloud environments, with security and compliance safeguards tailored to enterprise requirements. Leveraging AWS Trainium, Syn reduces training costs by approximately 50 percent compared to traditional GPU setups, enabling rapid customization of use cases.
Starting Price: $0.1 per 1M tokens
26
Voxtral
Mistral AI
Voxtral models are frontier open source speech‑understanding systems available in two sizes—a 24B variant for production‑scale applications and a 3B variant for local and edge deployments, both released under the Apache 2.0 license. They combine high‑accuracy transcription with native semantic understanding, supporting long‑form context (up to 32K tokens), built‑in Q&A and structured summarization, automatic language detection across major languages, and direct function‑calling to trigger backend workflows from voice. Retaining the text capabilities of their Mistral Small 3.1 backbone, Voxtral handles audio up to 30 minutes for transcription or 40 minutes for understanding and outperforms leading open source and proprietary models on benchmarks such as LibriSpeech, Mozilla Common Voice, and FLEURS. Accessible via download on Hugging Face, API endpoint, or private on‑premises deployment, Voxtral also offers domain‑specific fine‑tuning and advanced enterprise features.
27
Command R+
Cohere AI
Command R+ is Cohere's newest large language model, optimized for conversational interaction and long-context tasks. It aims to be extremely performant, enabling companies to move beyond proof of concept and into production. We recommend Command R+ for workflows that lean on complex RAG functionality and multi-step tool use (agents). Command R, on the other hand, is great for simpler retrieval augmented generation (RAG) and single-step tool use tasks, as well as applications where price is a major consideration.
Starting Price: Free
28
CodeQwen
Alibaba
CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a transformer-based, decoder-only language model pre-trained on a large corpus of code. It offers strong code generation capabilities and competitive performance across a series of benchmarks, supporting long-context understanding and generation with a context length of 64K tokens. CodeQwen supports 92 coding languages and provides excellent performance in text-to-SQL, bug fixing, and more. You can chat with CodeQwen in just a few lines of code with transformers: build the tokenizer and the model from pretrained checkpoints, then use the generate method to chat, with the help of the chat template provided by the tokenizer. Chat models follow the ChatML template, consistent with the team's previous practice. The model completes code snippets according to the given prompts, without any additional formatting.
Starting Price: Free
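A minimal sketch of the chat flow the description outlines — tokenizer and model from pretrained checkpoints, ChatML chat template applied via the tokenizer, then `generate()`; the `Qwen/CodeQwen1.5-7B-Chat` checkpoint ID is an assumption based on the release naming.

```python
# Sketch of the documented chat flow: pretrained tokenizer + model,
# chat template from the tokenizer, then generate(). Checkpoint ID assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a SQL query returning the top 5 customers by total spend."}
]
# The tokenizer carries the ChatML template the description refers to.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```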
29
Solar Mini
Upstage AI
Solar Mini is a pre‑trained large language model that delivers GPT‑3.5‑comparable responses with 2.5× faster inference while staying under 30 billion parameters. It achieved first place on the Hugging Face Open LLM Leaderboard in December 2023 by combining a 32‑layer Llama 2 architecture, initialized with high‑quality Mistral 7B weights, with an innovative “depth up‑scaling” (DUS) approach that deepens the model efficiently without adding complex modules. After DUS, continued pretraining restores and enhances performance, and instruction tuning in a QA format, especially for Korean, refines its ability to follow user prompts, while alignment tuning ensures its outputs meet human or advanced AI preferences. Solar Mini outperforms competitors such as Llama 2, Mistral 7B, Ko‑Alpaca, and KULLM across a variety of benchmarks, proving that compact size need not sacrifice capability.
Starting Price: $0.1 per 1M tokens
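A toy sketch of the depth up-scaling idea as published for SOLAR: duplicate the layer stack, drop the top k layers from one copy and the bottom k from the other, and concatenate into a deeper model that is then continually pretrained. Plain integers stand in for transformer blocks here; the 32-layer base and overlap of 8 follow the SOLAR paper, and how Solar Mini itself parameterizes DUS is not stated in this description.

```python
# Toy sketch of depth up-scaling (DUS): two copies of a 32-layer stack,
# trimmed at opposite ends and concatenated into a 48-layer stack.
def depth_up_scale(layers, k):
    lower = layers[:-k]   # first copy minus its top k layers
    upper = layers[k:]    # second copy minus its bottom k layers
    return lower + upper

base = list(range(32))            # stand-ins for 32 transformer blocks
scaled = depth_up_scale(base, 8)
print(len(scaled))                # 48 layers; continued pretraining follows
```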
30
Qwen2.5-1M
Alibaba
Qwen2.5-1M is an open-source language model developed by the Qwen team, designed to handle context lengths of up to one million tokens. This release includes two model variants, Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M, marking the first time Qwen models have been upgraded to support such extensive context lengths. To facilitate efficient deployment, the team has also open-sourced an inference framework based on vLLM, integrated with sparse attention methods, enabling processing of 1M-token inputs with a 3x to 7x speed improvement. Comprehensive technical details, including design insights and ablation experiments, are available in the accompanying technical report.
Starting Price: Free
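A minimal serving sketch with stock vLLM for the 7B variant; the `max_model_len` shown is illustrative rather than the full 1M, and the Qwen team's patched vLLM build with sparse attention may expose additional options not shown here.

```python
# Minimal sketch: serving Qwen2.5-7B-Instruct-1M with vLLM.
# max_model_len here is illustrative; scale it toward 1M as memory allows.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    max_model_len=131072,
    tensor_parallel_size=1,
)
params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Summarize the following document: ..."], params)
print(outputs[0].outputs[0].text)
```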
31
Baichuan-13B
Baichuan Intelligent Technology
Baichuan-13B is an open source, commercially usable large language model with 13 billion parameters, developed by Baichuan Intelligent as the successor to Baichuan-7B. It achieves the best results of any model of its size on authoritative Chinese and English benchmarks. This release contains two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Larger size, more data: Baichuan-13B expands the parameter count to 13 billion on the basis of Baichuan-7B and is trained on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open source model with the largest amount of training data at the 13B size. It supports Chinese and English bilingually, uses ALiBi positional encoding, and has a context window length of 4096.
Starting Price: Free
32
Phi-4-reasoning
Microsoft
Phi-4-reasoning is a 14-billion parameter transformer-based language model optimized for complex reasoning tasks, including math, coding, algorithmic problem solving, and planning. Trained via supervised fine-tuning of Phi-4 on carefully curated "teachable" prompts and reasoning demonstrations generated using o3-mini, it generates detailed reasoning chains that effectively leverage inference-time compute. Phi-4-reasoning incorporates outcome-based reinforcement learning to produce longer reasoning traces. It outperforms significantly larger open-weight models such as DeepSeek-R1-Distill-Llama-70B and approaches the performance levels of the full DeepSeek-R1 model across a wide range of reasoning tasks. Phi-4-reasoning is designed for environments with constrained computing or latency. Fine-tuned with synthetic data generated by DeepSeek-R1, it provides high-quality, step-by-step problem solving.
33
AudioLM
Google
AudioLM is a pure audio language model that generates high‑fidelity, long‑term coherent speech and piano music by learning from raw audio alone, without requiring any text transcripts or symbolic representations. It represents audio hierarchically using two types of discrete tokens, semantic tokens extracted from a self‑supervised model to capture phonetic or melodic structure and global context, and acoustic tokens from a neural codec to preserve speaker characteristics and fine waveform details, and chains three Transformer stages to predict first semantic tokens for high‑level structure, then coarse and finally fine acoustic tokens for detailed synthesis. The resulting pipeline allows AudioLM to condition on a few seconds of input audio and produce seamless continuations that retain voice identity, prosody, and recording conditions in speech or melody, harmony, and rhythm in music. Human evaluations show that synthetic continuations are nearly indistinguishable from real recordings.
34
Falcon-40B
Technology Innovation Institute (TII)
Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery. It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions. ⚠️ This is a raw, pretrained model, which should be further fine-tuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-40B-Instruct.
Starting Price: Free
35
Phi-2
Microsoft
We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
36
Gemini 2.5 Pro Deep Think
Google
Gemini 2.5 Pro Deep Think is a cutting-edge AI model designed to enhance the reasoning capabilities of machine learning models, offering improved performance and accuracy. This advanced version of the Gemini 2.5 series incorporates a feature called "Deep Think," allowing the model to reason through its thoughts before responding. It excels in coding, handling complex prompts, and multimodal tasks, offering smarter, more efficient execution. Whether for coding tasks, visual reasoning, or handling long-context input, Gemini 2.5 Pro Deep Think provides unparalleled performance. It also introduces features like native audio for more expressive conversations and optimizations that make it faster and more accurate than previous versions.
37
SWE-1.5
Cognition
SWE-1.5 is the latest agent-model release by Cognition, purpose-built for software engineering and characterized by a “frontier-size” architecture comprising hundreds of billions of parameters and optimized end-to-end (model, inference engine, and agent harness) for both speed and intelligence. It achieves near-state-of-the-art coding performance and sets a new benchmark in latency, delivering inference speeds up to 950 tokens/second, roughly six times faster than Claude Haiku 4.5 and thirteen times faster than Claude Sonnet 4.5. The model was trained using extensive reinforcement learning in realistic coding-agent environments with multi-turn workflows, unit tests, quality rubrics, and browser-based agentic execution; it also benefits from tightly integrated software tooling and high-throughput hardware (including thousands of GB200 NVL72 chips and a custom hypervisor infrastructure).
38
Codestral Mamba
Mistral AI
As a tribute to Cleopatra, whose glorious destiny ended in tragic snake circumstances, we are proud to release Codestral Mamba, a Mamba2 language model specialized in code generation, available under an Apache 2.0 license. Codestral Mamba is another step in our effort to study and provide new architectures. It is available for free use, modification, and distribution, and we hope it will open new perspectives in architecture research. Mamba models offer the advantage of linear time inference and the theoretical ability to model sequences of infinite length. It allows users to engage with the model extensively with quick responses, irrespective of the input length. This efficiency is especially relevant for code productivity use cases, which is why we trained this model with advanced code and reasoning capabilities, enabling it to perform on par with SOTA transformer-based models.
Starting Price: Free
39
QwQ-32B
Alibaba
QwQ-32B is an advanced reasoning model developed by Alibaba Cloud's Qwen team, designed to enhance AI's problem-solving capabilities. With 32 billion parameters, it achieves performance comparable to state-of-the-art models like DeepSeek's R1, which has 671 billion parameters. This efficiency is achieved through optimized parameter utilization, allowing QwQ-32B to perform complex tasks such as mathematical reasoning, coding, and general problem-solving with fewer resources. The model supports a context length of up to 32,000 tokens, enabling it to process extensive input data effectively. QwQ-32B is accessible via Alibaba's chatbot service, Qwen Chat, and is open sourced under the Apache 2.0 license, promoting collaboration and further development within the AI community.
Starting Price: Free
40
MPT-7B
MosaicML
Introducing MPT-7B, the latest entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Now you can train, finetune, and deploy your own private MPT models, either starting from one of our checkpoints or training from scratch. For inspiration, we are also releasing three finetuned models in addition to the base MPT-7B: MPT-7B-Instruct, MPT-7B-Chat, and MPT-7B-StoryWriter-65k+, the last of which uses a context length of 65k tokens!
Starting Price: Free
41
OPT
Meta
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
42
Gemini Enterprise
Google
Gemini Enterprise is a comprehensive AI platform built by Google Cloud designed to bring the full power of Google’s advanced AI models, agent-creation tools, and enterprise-grade data access into everyday workflows. The solution offers a unified chat interface that lets employees interact with internal documents, applications, data sources, and custom AI agents. At its core, Gemini Enterprise comprises six key components: the Gemini family of large multimodal models, an agent orchestration workbench (formerly Google Agentspace), pre-built starter agents, robust data-integration connectors to business systems, extensive security and governance controls, and a partner ecosystem for tailored integrations. It is engineered to scale across departments and enterprises, enabling users to build no-code or low-code agents that automate tasks, such as research synthesis, customer support response, code assist, contract analysis, and more, while operating within corporate compliance standards.
Starting Price: $21 per month
43
Wan2.2
Alibaba
Wan2.2 is a major upgrade to the Wan suite of open video foundation models, introducing a Mixture‑of‑Experts (MoE) architecture that splits the diffusion denoising process across high‑noise and low‑noise expert paths to dramatically increase model capacity without raising inference cost. It harnesses meticulously labeled aesthetic data, covering lighting, composition, contrast, and color tone, to enable precise, controllable cinematic‑style video generation. Trained on over 65% more images and 83% more videos than its predecessor, Wan2.2 delivers top performance in motion, semantic, and aesthetic generalization. The release includes a compact, high‑compression TI2V‑5B model built on an advanced VAE with a 16×16×4 compression ratio, capable of text‑to‑video and image‑to‑video synthesis at 720p/24 fps on consumer GPUs such as the RTX 4090. Prebuilt checkpoints for the T2V‑A14B, I2V‑A14B, and TI2V‑5B stack enable seamless integration.
Starting Price: Free
44
StarCoder
BigCode
StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including from 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned the StarCoderBase model on 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant.
Starting Price: Free
45
Gemini 2.5 Flash-Lite
Google
Gemini 2.5 is Google DeepMind’s latest generation AI model family, designed to deliver advanced reasoning and native multimodality with a long context window. It improves performance and accuracy by reasoning through its thoughts before responding. The model offers different versions tailored for complex coding tasks, fast everyday performance, and cost-efficient high-volume workloads. Gemini 2.5 supports multiple data types including text, images, video, audio, and PDFs, enabling versatile AI applications. It features adaptive thinking budgets and fine-grained control for developers to balance cost and output quality. Available via Google AI Studio and Gemini API, Gemini 2.5 powers next-generation AI experiences.
46
DeepSeekMath
DeepSeek
DeepSeekMath is a specialized 7B parameter language model developed by DeepSeek-AI, designed to push the boundaries of mathematical reasoning in open-source language models. It starts from the DeepSeek-Coder-v1.5 7B model and undergoes further pre-training with 120B math-related tokens sourced from Common Crawl, alongside natural language and code data. DeepSeekMath has demonstrated remarkable performance, achieving a 51.7% score on the competition-level MATH benchmark without external tools or voting techniques, closely competing with the likes of Gemini-Ultra and GPT-4. The model's capabilities are enhanced by a meticulous data selection pipeline and the introduction of Group Relative Policy Optimization (GRPO), which optimizes both mathematical reasoning and memory usage. DeepSeekMath is available in base, instruct, and RL versions, supporting both research and commercial use, and is aimed at those looking to explore or apply advanced mathematical problem-solving in AI contexts.
Starting Price: Free
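A minimal sketch of GRPO's core idea as described in the DeepSeekMath paper: instead of a learned value-function baseline, each sampled answer's advantage is its reward normalized against the other answers sampled for the same problem, which is part of what reduces memory usage. The function below is an illustrative reduction, not the full training loop.

```python
# Sketch of GRPO's group-relative advantage: normalize each sampled
# completion's reward against the group mean and std, with no critic model.
from statistics import mean, pstdev

def grpo_advantages(group_rewards, eps=1e-8):
    mu = mean(group_rewards)
    sigma = pstdev(group_rewards)  # population std; eps guards all-equal groups
    return [(r - mu) / (sigma + eps) for r in group_rewards]

# e.g. four sampled solutions to one math problem, graded 1 (correct) / 0 (wrong)
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # correct answers get positive advantage
```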
47
Megatron-Turing
NVIDIA
Megatron-Turing Natural Language Generation model (MT-NLG) is the largest and most powerful monolithic transformer English language model, with 530 billion parameters. This 105-layer, transformer-based MT-NLG improves upon the prior state-of-the-art models in zero-, one-, and few-shot settings. It demonstrates unmatched accuracy in a broad set of natural language tasks such as completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. With the intent of accelerating research on the largest English language model to date and enabling customers to experiment with, employ, and apply such a large language model on downstream language tasks, NVIDIA is pleased to announce an Early Access program for its managed API service to the MT-NLG model.
48
DeepSeek-V3
DeepSeek
DeepSeek-V3 is a state-of-the-art AI model designed to deliver unparalleled performance in natural language understanding, advanced reasoning, and decision-making tasks. Leveraging next-generation neural architectures, it integrates extensive datasets and fine-tuned algorithms to tackle complex challenges across diverse domains such as research, development, business intelligence, and automation. With a focus on scalability and efficiency, DeepSeek-V3 provides developers and enterprises with cutting-edge tools to accelerate innovation and achieve transformative outcomes.
Starting Price: Free
49
Gemini 1.5 Pro
Google
The Gemini 1.5 Pro AI model is a state-of-the-art language model designed to deliver highly accurate, context-aware, and human-like responses across a variety of applications. Built with cutting-edge neural architecture, it excels in natural language understanding, generation, and reasoning tasks. The model is fine-tuned for versatility, supporting tasks like content creation, code generation, data analysis, and complex problem-solving. Its advanced algorithms ensure nuanced comprehension, enabling it to adapt to different domains and conversational styles seamlessly. With a focus on scalability and efficiency, the Gemini 1.5 Pro is optimized for both small-scale implementations and enterprise-level integrations, making it a powerful tool for enhancing productivity and innovation.
50
Mistral Small 3.1
Mistral
Mistral Small 3.1 is a state-of-the-art, multimodal, and multilingual AI model released under the Apache 2.0 license. Building upon Mistral Small 3, this enhanced version offers improved text performance and advanced multimodal understanding, and supports an expanded context window of up to 128,000 tokens. It outperforms comparable models like Gemma 3 and GPT-4o Mini, delivering inference speeds of 150 tokens per second. Designed for versatility, Mistral Small 3.1 excels in tasks such as instruction following, conversational assistance, image understanding, and function calling, making it suitable for both enterprise and consumer-grade AI applications. Its lightweight architecture allows it to run efficiently on a single RTX 4090 or a Mac with 32GB RAM, facilitating on-device deployments. It is available for download on Hugging Face, accessible via Mistral AI's developer playground, and integrated into platforms like Google Cloud Vertex AI, with availability on NVIDIA NIM.
Starting Price: Free