Alternatives to NVIDIA Nemotron

Compare NVIDIA Nemotron alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to NVIDIA Nemotron in 2026. Compare features, ratings, user reviews, pricing, and more from NVIDIA Nemotron competitors and alternatives in order to make an informed decision for your business.

  • 1
    Llama 3.3
    Llama 3.3 is the latest iteration in the Llama series of language models, developed to push the boundaries of AI-powered understanding and communication. With enhanced contextual reasoning, improved language generation, and advanced fine-tuning capabilities, Llama 3.3 is designed to deliver highly accurate, human-like responses across diverse applications. This version features a larger training dataset, refined algorithms for nuanced comprehension, and reduced biases compared to its predecessors. Llama 3.3 excels in tasks such as natural language understanding, creative writing, technical explanation, and multilingual communication, making it an indispensable tool for businesses, developers, and researchers. Its modular architecture allows for customizable deployment in specialized domains, ensuring versatility and performance at scale.
  • 2
    Nemotron 3 Super
    Nemotron-3 Super is part of NVIDIA’s Nemotron 3 family of open models designed to enable advanced agentic AI systems that can reason, plan, and execute multi-step workflows across complex environments. The model introduces a hybrid Mamba-Transformer Mixture-of-Experts architecture that combines the efficiency of state-space Mamba layers with the contextual understanding of transformer attention, allowing it to process long sequences and complex reasoning tasks with high accuracy and throughput. This architecture activates only a subset of model parameters for each token, improving computational efficiency while maintaining strong reasoning capabilities and enabling scalable inference for large workloads. Nemotron-3 Super contains roughly 120 billion parameters with around 12 billion active during inference, accelerating multi-step reasoning and collaborative agent interactions across large contexts.
  • 3
    Nemotron 3 Nano
    Nemotron 3 Nano is a compact, open large language model in NVIDIA’s Nemotron 3 family, designed for efficient agentic reasoning, conversational AI, and coding tasks. It uses a hybrid Mixture-of-Experts Mamba-Transformer architecture that activates only a small subset of parameters per token, enabling low-latency inference while maintaining strong accuracy and reasoning performance. It has approximately 31.6 billion total parameters with around 3.2 billion active (3.6 billion including embeddings), allowing it to achieve higher accuracy than previous Nemotron 2 Nano while using less computation per forward pass. Nemotron 3 Nano supports long-context processing of up to one million tokens, enabling it to handle large documents, multi-step workflows, and extended reasoning chains in a single pass. It is designed for high-throughput, real-time execution, excelling in multi-turn conversations, tool calling, and agent-based workflows where tasks require planning, reasoning, and more.
  • 4
    Nemotron 3
    NVIDIA Nemotron 3 is a family of open large language models developed by NVIDIA to power advanced reasoning, conversational AI, and autonomous AI agents. The Nemotron 3 series includes three models designed for different scales of AI workloads while maintaining high efficiency and accuracy. These models focus on “agentic AI” capabilities, meaning they can perform multi-step reasoning, coordinate with tools, and operate as components within multi-agent systems used in automation, research, and enterprise applications. The architecture uses a hybrid mixture-of-experts (MoE) design combined with transformer-based techniques, allowing the model to activate only a subset of parameters for each task, which improves performance while reducing computational cost. Nemotron 3 models are built to deliver strong reasoning, conversational, and planning abilities while maintaining high throughput for large-scale deployment.
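    The "activate only a subset of parameters" idea behind these MoE models can be illustrated with a toy top-k router. This is a plain-NumPy sketch of generic mixture-of-experts routing, not NVIDIA's implementation; all sizes (8 experts, top-2 routing, 16-dimensional tokens) are chosen purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    E, k, d = 8, 2, 16  # experts, experts used per token, hidden size

    experts = [rng.standard_normal((d, d)) for _ in range(E)]  # one weight matrix per expert
    router = rng.standard_normal((d, E))                       # routing matrix

    def moe_forward(x):
        """Route token vector x to its top-k experts and mix their outputs."""
        logits = x @ router
        topk = np.argsort(logits)[-k:]        # indices of the k highest-scoring experts
        weights = np.exp(logits[topk])
        weights /= weights.sum()              # softmax over the chosen experts only
        return sum(w * (x @ experts[i]) for w, i in zip(weights, topk)), topk

    x = rng.standard_normal(d)
    y, used = moe_forward(x)
    print(f"experts used per token: {len(used)} of {E}")
    print(f"active expert parameters: {k * d * d} of {E * d * d}")
    ```

    Because only k of E expert matrices are multiplied per token, the per-token compute scales with the active parameters rather than the total, which is the efficiency claim these entries repeat.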
  • 5
    NVIDIA Llama Nemotron
    NVIDIA Llama Nemotron is a family of advanced language models optimized for reasoning and a diverse set of agentic AI tasks. These models excel in graduate-level scientific reasoning, advanced mathematics, coding, instruction following, and tool calling. Designed for deployment across various platforms, from data centers to PCs, they offer the flexibility to toggle reasoning capabilities on or off, reducing inference costs when deep reasoning isn't required. The Llama Nemotron family includes models tailored for different deployment needs. Built upon Llama models and enhanced by NVIDIA through post-training, these models demonstrate improved accuracy of up to 20% over their base models and optimized inference speeds, achieving up to five times the performance of other leading open reasoning models. This efficiency enables handling of more complex reasoning tasks, enhances decision-making capabilities, and reduces operational costs for enterprises.
  • 6
    Nemotron 3 Nano
    Nemotron 3 Nano is the smallest model in the NVIDIA Nemotron 3 family, built for agentic AI applications with strong reasoning, conversational ability, and cost-efficient inference. It is a hybrid Mamba-Transformer Mixture-of-Experts model with 3.2 billion active parameters (3.6 billion including embeddings) and 31.6 billion total parameters. NVIDIA describes it as more accurate than the previous Nemotron 2 Nano while activating less than half as many parameters per forward pass, improving efficiency without sacrificing performance. The model is positioned as more accurate than GPT-OSS-20B and Qwen3-30B-A3B-Thinking-2507 on popular benchmarks across different categories. In an 8K-input, 16K-output setting on a single H200, it delivers inference throughput 3.3 times higher than Qwen3-30B-A3B and 2.2 times higher than GPT-OSS-20B. Nemotron 3 Nano supports context lengths up to 1 million tokens and is likewise reported to outperform GPT-OSS-20B and Qwen3-30B-A3B-Instruct-2507.
  • 7
    NVIDIA NeMo
    NVIDIA NeMo LLM is a service that provides a fast path to customizing and using large language models trained on several frameworks. Developers can deploy enterprise AI applications using NeMo LLM on private and public clouds. They can also experience Megatron 530B, one of the largest language models, through the cloud API or experiment via the LLM service. Customize your choice of NVIDIA or community-developed models that work best for your AI applications. Within minutes to hours, get better responses by providing context for specific use cases using prompt-learning techniques. Models for drug discovery are also available through the cloud API and the NVIDIA BioNeMo framework.
  • 8
    NVIDIA NeMo Megatron
    NVIDIA NeMo Megatron is an end-to-end framework for training and deploying LLMs with billions and trillions of parameters. NVIDIA NeMo Megatron, part of the NVIDIA AI platform, offers an easy, efficient, and cost-effective containerized framework to build and deploy LLMs. Designed for enterprise application development, it builds upon the most advanced technologies from NVIDIA research and provides an end-to-end workflow for automated distributed data processing, training large-scale customized GPT-3, T5, and multilingual T5 (mT5) models, and deploying models for inference at scale. Harnessing the power of LLMs is made easy through validated and converged recipes with predefined configurations for training and inference. Customizing models is simplified by the hyperparameter tool, which automatically searches for the best hyperparameter configurations and performance for training and inference on any given distributed GPU cluster configuration.
  • 9
    GPT-NeoX

    GPT-NeoX

    EleutherAI

    An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library. This repository houses EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training.
  • 10
    NLP Cloud

    NLP Cloud

    NLP Cloud

    Fast and accurate AI models suited for production. Highly available inference API leveraging the most advanced NVIDIA GPUs. We selected the best open-source natural language processing (NLP) models from the community and deployed them for you. Fine-tune your own models, including GPT-J, or upload your in-house custom models from your dashboard and use them straight away in production, without worrying about deployment considerations like RAM usage, high availability, or scalability. You can upload and deploy as many models as you want to production.
    Starting Price: $29 per month
  • 11
    Mercury Coder

    Mercury Coder

    Inception Labs

    Mercury, the latest innovation from Inception Labs, is the first commercial-scale diffusion large language model (dLLM), offering a 10x speed increase and significantly lower costs compared to traditional autoregressive models. Built for high-performance reasoning, coding, and structured text generation, Mercury processes over 1000 tokens per second on NVIDIA H100 GPUs, making it one of the fastest LLMs available. Unlike conventional models that generate text one token at a time, Mercury refines responses using a coarse-to-fine diffusion approach, improving accuracy and reducing hallucinations. With Mercury Coder, a specialized coding model, developers can experience cutting-edge AI-driven code generation with superior speed and efficiency.
  • 12
    MAI-1-preview

    MAI-1-preview

    Microsoft

    MAI-1 Preview is Microsoft AI’s first end-to-end trained foundation model, built entirely in-house as a mixture-of-experts architecture. Pre-trained and post-trained on approximately 15,000 NVIDIA H100 GPUs, it is designed to follow instructions and generate helpful, responsive text for everyday user queries, representing a prototype of future Copilot capabilities. Now available for public testing on LMArena, MAI-1 Preview delivers an early glimpse into the platform’s trajectory, with plans to roll out select text-based applications within Copilot over the coming weeks to gather user feedback and refine performance. Microsoft reinforces that it will continue combining its own models, partner models, and developments from the open-source community to flexibly power experiences across millions of unique interactions each day.
  • 13
    Grok 3
    Grok-3, developed by xAI, represents a significant advancement in the field of artificial intelligence, aiming to set new benchmarks in AI capabilities. It is designed to be a multimodal AI, capable of processing and understanding data from various sources including text, images, and audio, which allows for a more integrated and comprehensive interaction with users. Grok-3 is built on an unprecedented scale, with training involving ten times more computational resources than its predecessor, leveraging 100,000 NVIDIA H100 GPUs on the Colossus supercomputer. This extensive computational power is expected to enhance Grok-3's performance in areas like reasoning, coding, and real-time analysis of current events through direct access to X posts. The model is anticipated to outperform not only its earlier versions but also compete with other leading AI models in the generative AI landscape.
  • 14
    Megatron-Turing
    The Megatron-Turing Natural Language Generation model (MT-NLG) is the largest and most powerful monolithic transformer English language model, with 530 billion parameters. This 105-layer, transformer-based MT-NLG improves upon the prior state-of-the-art models in zero-, one-, and few-shot settings. It demonstrates unmatched accuracy in a broad set of natural language tasks such as completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word-sense disambiguation. With the intent of accelerating research on the largest English language model to date and enabling customers to experiment with, employ, and apply such a large language model on downstream language tasks, NVIDIA is pleased to announce an Early Access program for its managed API service to the MT-NLG model.
  • 15
    Mistral NeMo

    Mistral NeMo

    Mistral AI

    Mistral NeMo, our new best small model. A state-of-the-art 12B model with 128k context length, released under the Apache 2.0 license. Mistral NeMo is a 12B model built in collaboration with NVIDIA. Mistral NeMo offers a large context window of up to 128k tokens. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption for researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications and is trained on function calling. Compared to Mistral 7B, it is much better at following precise instructions, reasoning, and handling multi-turn conversations.
  • 16
    NVIDIA Morpheus
    NVIDIA Morpheus is a GPU-accelerated, end-to-end AI framework that enables developers to create optimized applications for filtering, processing, and classifying large volumes of streaming cybersecurity data. Morpheus incorporates AI to reduce the time and cost associated with identifying, capturing, and acting on threats, bringing a new level of security to the data center, cloud, and edge. Morpheus also extends human analysts’ capabilities with generative AI by automating real-time analysis and responses, producing synthetic data to train AI models that identify risks accurately and run what-if scenarios. Morpheus is available as open-source software on GitHub for developers interested in using the latest pre-release features and who want to build from source. Get unlimited usage on all clouds, access to NVIDIA AI experts, and long-term support for production deployments with a purchase of NVIDIA AI Enterprise.
  • 17
    Mistral Large 3
    Mistral Large 3 is a next-generation, open multimodal AI model built with a powerful sparse Mixture-of-Experts architecture featuring 41B active parameters out of 675B total. Designed from scratch on NVIDIA H200 GPUs, it delivers frontier-level reasoning, multilingual performance, and advanced image understanding while remaining fully open-weight under the Apache 2.0 license. The model achieves top-tier results on modern instruction benchmarks, positioning it among the strongest permissively licensed foundation models available today. With native support across vLLM, TensorRT-LLM, and major cloud providers, Mistral Large 3 offers exceptional accessibility and performance efficiency. Its design enables enterprise-grade customization, letting teams fine-tune or adapt the model for domain-specific workflows and proprietary applications. Mistral Large 3 represents a major advancement in open AI, offering frontier intelligence without sacrificing transparency or control.
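    Several entries in this list quote sparse Mixture-of-Experts parameter counts: 41B active of 675B total here, roughly 12B of 120B for Nemotron 3 Super, and 3.2B of 31.6B for Nemotron 3 Nano. A back-of-the-envelope check of the active fractions those figures imply (all numbers taken from the entries themselves):

    ```python
    # Active-parameter fractions implied by the MoE figures quoted in this list.
    # Only this fraction of the weights participates in each token's forward pass.
    models = {
        "Mistral Large 3":  (41e9, 675e9),
        "Nemotron 3 Super": (12e9, 120e9),
        "Nemotron 3 Nano":  (3.2e9, 31.6e9),
    }
    fractions = {name: active / total for name, (active, total) in models.items()}
    for name, frac in fractions.items():
        print(f"{name}: {frac:.1%} of parameters active per token")
    ```

    The per-token compute saved scales roughly with this fraction, which is why these models can claim near-frontier accuracy at a fraction of a dense model's inference cost.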
  • 18
    Smaug-72B
    Smaug-72B is a powerful open-source large language model (LLM) known for several key features. High performance: it currently holds the top spot on the Hugging Face Open LLM leaderboard, surpassing models like GPT-3.5 in various benchmarks, meaning it excels at understanding, responding to, and generating human-like text. Open source: unlike many other advanced LLMs, Smaug-72B is freely available for anyone to use and modify, fostering collaboration and innovation in the AI community. Focus on reasoning and math: it specifically shines in reasoning and mathematical tasks, a strength attributed to unique fine-tuning techniques developed by Abacus AI, the creators of Smaug-72B. Based on Qwen-72B: it is technically a fine-tuned version of another powerful LLM, Qwen-72B, released by Alibaba, further improving upon its capabilities. Overall, Smaug-72B represents a significant step forward in open-source AI.
  • 19
    NVIDIA Isaac GR00T
    NVIDIA Isaac GR00T (Generalist Robot 00 Technology) is a research-driven platform for developing general-purpose humanoid robot foundation models and data pipelines. It includes models like Isaac GR00T-N, along with synthetic motion blueprints such as GR00T-Mimic, for augmenting demonstrations, and GR00T-Dreams, for generating novel synthetic trajectories, to accelerate humanoid robotics development. Recently, the open source Isaac GR00T N1 foundation model debuted, featuring a dual-system cognitive architecture: a fast-reacting "System 1" action model and a deliberative, language-enabled "System 2" reasoning model. The updated GR00T N1.5 introduces enhancements such as improved vision-language grounding, better language command following, few-shot adaptability, and new robot embodiment support. Together with tools like Isaac Sim, Isaac Lab, and Omniverse, GR00T empowers developers to train, simulate, post-train, and deploy adaptable humanoid agents using both real and synthetic data.
  • 20
    Mistral Small 3.1
    Mistral Small 3.1 is a state-of-the-art, multimodal, and multilingual AI model released under the Apache 2.0 license. Building upon Mistral Small 3, this enhanced version offers improved text performance and advanced multimodal understanding, and supports an expanded context window of up to 128,000 tokens. It outperforms comparable models like Gemma 3 and GPT-4o Mini, delivering inference speeds of 150 tokens per second. Designed for versatility, Mistral Small 3.1 excels in tasks such as instruction following, conversational assistance, image understanding, and function calling, making it suitable for both enterprise and consumer-grade AI applications. Its lightweight architecture allows it to run efficiently on a single RTX 4090 or a Mac with 32GB of RAM, facilitating on-device deployments. It is available for download on Hugging Face, accessible via Mistral AI's developer playground, and integrated into platforms like Google Cloud Vertex AI and NVIDIA NIM.
  • 21
    IBM Granite
    IBM® Granite™ is a family of artificial intelligence (AI) models purpose-built for business, engineered from scratch to help ensure trust and scalability in AI-driven applications. Open source Granite models are available today. We make AI as accessible as possible for as many developers as possible. That's why we have open-sourced core Granite Code, Time Series, Language, and GeoSpatial models and made them available on Hugging Face under a permissive Apache 2.0 license that enables broad, unencumbered commercial usage. All Granite models are trained on carefully curated data, with industry-leading levels of transparency about the data that went into them. We have also open-sourced the tools we use to ensure the data is high quality and up to the standards that enterprise-grade applications demand.
  • 22
    Cerebras-GPT
    State-of-the-art language models are extremely challenging to train; they require huge compute budgets, complex distributed compute techniques, and deep ML expertise. As a result, few organizations train large language models (LLMs) from scratch. And increasingly, those that have the resources and expertise are not open sourcing the results, marking a significant change from even a few months ago. At Cerebras, we believe in fostering open access to the most advanced models. With this in mind, we are proud to announce the release to the open source community of Cerebras-GPT, a family of seven GPT models ranging from 111 million to 13 billion parameters. Trained using the Chinchilla formula, these models provide the highest accuracy for a given compute budget. Cerebras-GPT has faster training times, lower training costs, and consumes less energy than any publicly available model to date.
  • 23
    Open R1

    Open R1

    Open R1

    Open R1 is a community-driven, open-source initiative aimed at replicating the advanced AI capabilities of DeepSeek-R1 through transparent methodologies. You can try the Open R1 model, or chat with DeepSeek R1 free online, on the Open R1 site. The project offers a comprehensive implementation of DeepSeek-R1's reasoning-optimized training pipeline, including tools for GRPO training, SFT fine-tuning, and synthetic data generation, all under the MIT license. While the original training data remains proprietary, Open R1 provides the complete toolchain for users to develop and fine-tune their own models.
  • 24
    Reka Flash 3
    Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
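    The memory figures quoted for Reka Flash 3 follow from simple arithmetic: 21 billion parameters at 2 bytes each is 42 GB, i.e. about 39 GiB, and 4-bit weights cut that to roughly 10 GiB (real 4-bit formats add scale and zero-point overhead, which is where a quoted ~11GB comes from). A small sketch, assuming weights-only storage:

    ```python
    # Rough model-memory arithmetic (weights only, ignoring activations,
    # KV cache, and framework overhead).
    GIB = 2**30

    def weight_gib(params, bits):
        """Approximate weight storage in GiB for `params` parameters at `bits` bits each."""
        return params * bits / 8 / GIB

    fp16 = weight_gib(21e9, 16)   # full precision, 2 bytes per parameter
    int4 = weight_gib(21e9, 4)    # 4-bit quantized, before quantization overhead
    print(f"fp16: {fp16:.1f} GiB, 4-bit: {int4:.1f} GiB")
    ```

    The same one-liner gives a quick feasibility check for any model in this list against a target GPU's memory.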
  • 25
    GigaChat 3 Ultra
    GigaChat 3 Ultra is a 702-billion-parameter Mixture-of-Experts model built from scratch to deliver frontier-level reasoning, multilingual capability, and deep Russian-language fluency. It activates just 36 billion parameters per token, enabling massive scale with practical inference speeds. The model was trained on a 14-trillion-token corpus combining natural, multilingual, and high-quality synthetic data to strengthen reasoning, math, coding, and linguistic performance. Unlike modified foreign checkpoints, GigaChat 3 Ultra is entirely original—giving developers full control, modern alignment, and a dataset free of inherited limitations. Its architecture leverages MoE, MTP, and MLA to match open-source ecosystems and integrate easily with popular inference and fine-tuning tools. With leading results on Russian benchmarks and competitive performance on global tasks, GigaChat 3 Ultra represents one of the largest and most capable open-source LLMs in the world.
  • 26
    OpenELM

    OpenELM

    Apple

    OpenELM is an open-source language model family developed by Apple. It uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy compared to existing open language models of similar size. OpenELM is trained on publicly available datasets and achieves state-of-the-art performance for its size.
  • 27
    OpenGPT-X

    OpenGPT-X

    OpenGPT-X

    OpenGPT-X is a German initiative focused on developing large AI language models tailored to European needs, emphasizing versatility, trustworthiness, multilingual capabilities, and open-source accessibility. The project brings together a consortium of partners to cover the entire generative AI value chain, from scalable, GPU-based infrastructure and data for training large language models to model design and practical applications through prototypes and proofs of concept. OpenGPT-X aims to advance cutting-edge research with a strong focus on business applications, thereby accelerating the adoption of generative AI in the German economy. The project also emphasizes responsible AI development, ensuring that the models are trustworthy and align with European values and regulations. The project provides resources such as the LLM Workbook, and a three-part reference guide with resources and examples to help users understand the key features of large AI language models.
  • 28
    R1 1776

    R1 1776

    Perplexity AI

    Perplexity AI has open-sourced R1 1776, a large language model (LLM) based on DeepSeek R1 designed to enhance transparency and foster community collaboration in AI development. This release allows researchers and developers to access the model's architecture and codebase, enabling them to contribute to its improvement and adaptation for various applications. By sharing R1 1776 openly, Perplexity AI aims to promote innovation and ethical practices within the AI community.
  • 29
    Linker Vision

    Linker Vision

    Linker Vision

    Linker VisionAI Platform is a comprehensive, end-to-end solution for vision AI, encompassing simulation, training, and deployment to empower smart cities and enterprises. It comprises three core components: Mirra, for synthetic data generation using NVIDIA Omniverse and NVIDIA Cosmos; DataVerse, facilitating data curation, annotation, and model training with NVIDIA NeMo and NVIDIA TAO; and Observ, enabling large-scale Vision Language Model (VLM) deployment with NVIDIA NIM. This integrated approach allows for the seamless transition from data simulation to real-world application, ensuring that AI models are robust and adaptable. Linker VisionAI Platform supports a range of applications, including traffic and transportation management, worker safety, disaster response, and more, by leveraging urban camera networks and AI to drive responsive decisions.
  • 30
    Baichuan-13B

    Baichuan-13B

    Baichuan Intelligent Technology

    Baichuan-13B is an open source, commercially usable large language model with 13 billion parameters, developed by Baichuan Intelligent Technology as the successor to Baichuan-7B. It achieves the best results among models of its size on authoritative Chinese and English benchmarks. This release contains two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Larger size, more data: Baichuan-13B expands the parameter count to 13 billion on the basis of Baichuan-7B and is trained on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open source 13B-scale model with the largest amount of training data to date. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4,096 tokens.
  • 31
    Olmo 2
    Olmo 2 is a family of fully open language models developed by the Allen Institute for AI (AI2), designed to provide researchers and developers with transparent access to training data, open-source code, reproducible training recipes, and comprehensive evaluations. These models are trained on up to 5 trillion tokens and are competitive with leading open-weight models like Llama 3.1 on English academic benchmarks. Olmo 2 emphasizes training stability, implementing techniques to prevent loss spikes during long training runs, and utilizes staged training interventions during late pretraining to address capability deficiencies. The models incorporate state-of-the-art post-training methodologies from AI2's Tülu 3, resulting in the creation of Olmo 2-Instruct models. An actionable evaluation framework, the Open Language Modeling Evaluation System (OLMES), was established to guide improvements through development stages, consisting of 20 evaluation benchmarks assessing core capabilities.
  • 32
    NVIDIA Cosmos
    NVIDIA Cosmos is a developer-first platform of state-of-the-art generative World Foundation Models (WFMs), advanced video tokenizers, guardrails, and an accelerated data processing and curation pipeline designed to supercharge physical AI development. It enables developers working on autonomous vehicles, robotics, and video analytics AI agents to generate photorealistic, physics-aware synthetic video data, trained on an immense dataset including 20 million hours of real-world and simulated video, to rapidly simulate future scenarios, train world models, and fine-tune custom behaviors. It includes three core WFM types: Cosmos Predict, capable of generating up to 30 seconds of continuous video from multimodal inputs; Cosmos Transfer, which adapts simulations across environments and lighting for versatile domain augmentation; and Cosmos Reason, a vision-language model that applies structured reasoning to interpret spatial-temporal data for planning and decision-making.
  • 33
    Stable LM

    Stable LM

    Stability AI

    Stable LM: Stability AI Language Models. The release of Stable LM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2. Stable LM is trained on a new experimental dataset built on The Pile, but three times larger with 1.5 trillion tokens of content. We will release details on the dataset in due course. The richness of this dataset gives Stable LM surprisingly high performance in conversational and coding tasks, despite its small size of 3 to 7 billion parameters (by comparison, GPT-3 has 175 billion parameters). Stable LM 3B is a compact language model designed to operate on portable digital devices like handhelds and laptops, and we’re excited about its capabilities and portability.
  • 34
    Phi-4

    Phi-4

    Microsoft

    Phi-4 is a 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing. Phi-4 is the latest member of our Phi family of small language models and demonstrates what's possible as we continue to probe the boundaries of SLMs. Phi-4 is currently available on Azure AI Foundry under a Microsoft Research License Agreement (MSRLA) and will be available on Hugging Face. Phi-4 outperforms comparable and larger models on math-related reasoning due to advancements throughout the processes, including the use of high-quality synthetic datasets, curation of high-quality organic data, and post-training innovations. Phi-4 continues to push the frontier of size vs. quality.
  • 35
    Aya

    Aya

    Cohere AI

    Aya is a new state-of-the-art, open-source, massively multilingual, generative large language research model (LLM) covering 101 different languages, more than double the number of languages covered by existing open-source models. Aya helps researchers unlock the powerful potential of LLMs for dozens of languages and cultures largely ignored by most advanced models on the market today. We are open-sourcing both the Aya model and the largest multilingual instruction fine-tuned dataset to date, with a size of 513 million entries covering 114 languages. This data collection includes rare annotations from native and fluent speakers all around the world, ensuring that AI technology can effectively serve a broad global audience that has had limited access to date.
  • 36
    OpenEuroLLM

    OpenEuroLLM

    OpenEuroLLM

    OpenEuroLLM is a collaborative initiative among Europe's leading AI companies and research institutions to develop a series of open-source foundation models for transparent AI in Europe. The project emphasizes transparency by openly sharing data, documentation, training, testing code, and evaluation metrics, fostering community involvement. It ensures compliance with EU regulations, aiming to provide performant large language models that align with European standards. A key focus is on linguistic and cultural diversity, extending multilingual capabilities to encompass all EU official languages and beyond. The initiative seeks to enhance access to foundational models ready for fine-tuning across various applications, expand evaluation results in multiple languages, and increase the availability of training datasets and benchmarks. Transparency is maintained throughout the training processes by sharing tools, methodologies, and intermediate results.
  • 37
    NVIDIA Isaac Sim
    NVIDIA Isaac Sim is an open source reference robotics simulation application built on NVIDIA Omniverse, enabling developers to design, simulate, test, and train AI-driven robots in physically realistic virtual environments. It is built atop Universal Scene Description (OpenUSD), offering full extensibility so developers can create custom simulators or seamlessly integrate Isaac Sim's capabilities into existing validation pipelines. The platform supports three essential workflows: large-scale synthetic data generation for training foundation models with photorealistic rendering and automatic ground truth labeling; software-in-the-loop testing, which connects actual robot software with simulated hardware to validate control and perception systems; and robot learning through NVIDIA’s Isaac Lab, which accelerates training of behaviors in simulation before real-world deployment. Isaac Sim delivers GPU-accelerated physics (via NVIDIA PhysX) and RTX-enabled sensor simulation.
  • 38
    GPT4All

    Nomic AI

    GPT4All is an ecosystem to train and deploy powerful and customized large language models that run locally on consumer-grade CPUs. The goal is simple: be the best instruction-tuned, assistant-style language model that any person or enterprise can freely use, distribute, and build on. A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to enforce quality and security, alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models. Data is one of the most important ingredients in successfully building a powerful, general-purpose large language model. The GPT4All community has built the GPT4All open source data lake as a staging ground for contributing instruction and assistant tuning data for future GPT4All model training runs.
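The quoted 3GB - 8GB file sizes follow from simple weight-quantization arithmetic; a minimal sketch, assuming roughly 4 bits per weight (a simplification of real on-disk formats, which mix quantization levels and add metadata):

```python
def quantized_file_size_gib(n_params: float, bits_per_weight: float = 4.0) -> float:
    """Rough size of a quantized model file: parameters x bits per weight, in GiB."""
    return n_params * bits_per_weight / 8 / 1024**3

# A 7B model at ~4 bits per weight lands near the bottom of the quoted range,
# and a 13B model near the middle.
size_7b = quantized_file_size_gib(7e9)    # roughly 3.3 GiB
size_13b = quantized_file_size_gib(13e9)  # roughly 6.1 GiB
```

This also explains why such models fit in ordinary desktop RAM for CPU inference.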
  • 39
    Tülu 3
    Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
  • 40
    RedPajama

    RedPajama

    Foundation models such as GPT-4 have driven rapid improvement in AI. However, the most powerful models are closed commercial models or only partially open. RedPajama is a project to create a set of leading, fully open-source models. Today, we are excited to announce the completion of the first step of this project: the reproduction of the LLaMA training dataset of over 1.2 trillion tokens. The most capable foundation models today are closed behind commercial APIs, which limits research, customization, and their use with sensitive data. Fully open-source models hold the promise of removing these limitations, if the open community can close the quality gap between open and closed models. Recently, there has been much progress along this front. In many ways, AI is having its Linux moment. Stable Diffusion showed that open-source can not only rival the quality of commercial offerings like DALL-E but can also lead to incredible creativity from broad participation by communities.
  • 41
    OPT

    Meta

    Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
  • 42
    PygmalionAI

    PygmalionAI

    PygmalionAI is a community dedicated to creating open-source projects based on EleutherAI's GPT-J 6B and Meta's LLaMA models. In simple terms, Pygmalion makes AI models fine-tuned for chatting and roleplaying purposes. The currently supported Pygmalion AI model is the 7B variant, based on Meta AI's LLaMA model. With only 18GB (or less) of VRAM required, Pygmalion offers better chat capability than much larger language models with relatively minimal resources. Our curated dataset of high-quality roleplaying data ensures that your bot will be the optimal RP partner. Both the model weights and the code used to train them are completely open source, and you can modify or redistribute them for whatever purpose you want. Language models, including Pygmalion, generally run on GPUs, since they need access to fast memory and massive processing power to output coherent text at an acceptable speed.
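The 18GB VRAM figure is consistent with back-of-the-envelope weight-size arithmetic; a rough sketch, assuming 16-bit weights and ignoring KV cache and activation overhead (which is why practical usage sits somewhat above the raw weight size):

```python
def fp16_weights_gib(n_params: float) -> float:
    """VRAM needed just to hold model weights at 2 bytes (fp16) per parameter."""
    return n_params * 2 / 1024**3

# Pygmalion-7B's weights alone occupy about 13 GiB in fp16; inference
# overhead (KV cache, activations) pushes real usage toward the quoted 18 GB.
weights = fp16_weights_gib(7e9)  # roughly 13.0 GiB
```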
  • 43
    Cohere

    Cohere AI

    Cohere is an enterprise AI platform that enables developers and businesses to build powerful language-based applications. Specializing in large language models (LLMs), Cohere provides solutions for text generation, summarization, and semantic search. Their model offerings include the Command family for high-performance language tasks and Aya Expanse for multilingual applications across 23 languages. Focused on security and customization, Cohere allows flexible deployment across major cloud providers, private cloud environments, or on-premises setups to meet diverse enterprise needs. The company collaborates with industry leaders like Oracle and Salesforce to integrate generative AI into business applications, improving automation and customer engagement. Additionally, Cohere For AI, their research lab, advances machine learning through open-source projects and a global research community.
  • 44
    Hermes 3

    Nous Research

    Hermes 3 is a series of instruct and tool-use models with strong reasoning and creative abilities, built to experiment with and push the boundaries of individual alignment, artificial consciousness, open-source software, and decentralization in ways that monolithic companies and governments are too afraid to try. Hermes 3 features advanced long-term context retention and multi-turn conversation capability, complex roleplaying and internal monologue abilities, and enhanced agentic function calling. Our training data aggressively encourages the model to follow the system and instruction prompts exactly and in an adaptive manner. Hermes 3 was created by fine-tuning Llama 3.1 8B, 70B, and 405B on a dataset of primarily synthetically generated responses. The model delivers performance comparable or superior to Llama 3.1 while unlocking deeper capabilities in reasoning and creativity.
  • 45
    Qwen-7B

    Alibaba

    Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbreviation of Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, and code. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. The Qwen-7B series is trained with high-quality pretraining data: we pretrained Qwen-7B on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens, spanning plain text and code and covering a wide range of general and professional domains. It also delivers strong performance: compared with models of similar size, Qwen-7B outperforms competitors on a series of benchmark datasets evaluating natural language understanding, mathematics, coding, and more.
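The 2.2 trillion token figure can be put in perspective with one division; a back-of-the-envelope sketch (the roughly 20 tokens-per-parameter "Chinchilla-optimal" guideline used below is an outside reference point, not a claim from this listing):

```python
tokens = 2.2e12   # pretraining tokens quoted above
params = 7e9      # model parameters

tokens_per_param = tokens / params  # about 314

# Training far past the often-cited ~20 tokens-per-parameter guideline
# trades extra training compute for a stronger small model that is
# cheaper to serve at inference time.
```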
  • 46
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models, ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI, companies that have given early feedback and are excited to build with Llama 2.
  • 47
    Orpheus TTS

    Canopy Labs

    Canopy Labs has introduced Orpheus, a family of state-of-the-art speech large language models (LLMs) designed for human-level speech generation. These models are built on the Llama-3 architecture and trained on over 100,000 hours of English speech data, enabling them to produce natural intonation, emotion, and rhythm that surpass current state-of-the-art closed-source models. Orpheus supports zero-shot voice cloning, allowing users to replicate voices without prior fine-tuning, and offers guided emotion and intonation control through simple tags. The models achieve low latency, with approximately 200ms streaming latency for real-time applications, reducible to around 100ms with input streaming. Canopy Labs has released both pre-trained and fine-tuned 3B-parameter models under the permissive Apache 2.0 license, with plans to release smaller models of 1B, 400M, and 150M parameters for use on resource-constrained devices.
  • 48
    GLM-5

    Zhipu AI

    GLM-5 is Z.ai’s latest large language model built for complex systems engineering and long-horizon agentic tasks. It scales significantly beyond GLM-4.5, increasing total parameters and training data while integrating DeepSeek Sparse Attention to reduce deployment costs without sacrificing long-context capacity. The model combines enhanced pre-training with a new asynchronous reinforcement learning infrastructure called slime, improving training efficiency and post-training refinement. GLM-5 achieves best-in-class performance among open-source models across reasoning, coding, and agent benchmarks, narrowing the gap with leading frontier models. It ranks highly on evaluations such as Vending Bench 2, demonstrating strong long-term planning and operational capabilities. The model is open-sourced under the MIT License.
  • 49
    NVIDIA Picasso
    NVIDIA Picasso is a cloud service for building generative AI–powered visual applications. Enterprises, software creators, and service providers can run inference on their models, train NVIDIA Edify foundation models on proprietary data, or start from pre-trained models to generate image, video, and 3D content from text prompts. The Picasso service is fully optimized for GPUs and streamlines training, optimization, and inference on NVIDIA DGX Cloud. Organizations and developers can train NVIDIA’s Edify models on their proprietary data or get started with models pre-trained with our premier partners. Its capabilities include an expert denoising network that generates photorealistic 4K images; temporal layers and a novel video denoiser that generate high-fidelity videos with temporal consistency; and a novel optimization framework for generating 3D objects and meshes with high-quality geometry.
  • 50
    Sarvam AI

    Sarvam AI

    We are developing efficient large language models for India's diverse linguistic culture and enabling new GenAI applications through bespoke enterprise models. We are building an enterprise-grade platform that lets you develop and evaluate your company’s GenAI apps. We believe in the power of open source to accelerate AI innovation and will be contributing open-source models and datasets, as well as leading efforts for large-scale data curation in the public-good space. We are a dynamic and close-knit team of AI pioneers, blending expertise in research, engineering, product design, and business operations. Our diverse backgrounds unite under a shared commitment to excellence in science and the creation of societal impact. We foster an environment where tackling complex tech challenges is not just a job, but a passion.