Alternatives to OpenELM

Compare OpenELM alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to OpenELM in 2025. Compare features, ratings, user reviews, pricing, and more from OpenELM competitors and alternatives in order to make an informed decision for your business.

  • 1
    Cerebras-GPT
    State-of-the-art language models are extremely challenging to train; they require huge compute budgets, complex distributed compute techniques, and deep ML expertise. As a result, few organizations train large language models (LLMs) from scratch. And increasingly, those that have the resources and expertise are not open sourcing the results, marking a significant change from even a few months back. At Cerebras, we believe in fostering open access to the most advanced models. With this in mind, we are proud to announce the release to the open source community of Cerebras-GPT, a family of seven GPT models ranging from 111 million to 13 billion parameters. Trained using the Chinchilla formula, these models provide the highest accuracy for a given compute budget; a worked sketch of the token budget this implies follows below. Cerebras-GPT has faster training times, lower training costs, and consumes less energy than any publicly available model to date.
    Starting Price: Free
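
    As context for the "Chinchilla formula" mentioned above, here is a minimal sketch of the roughly-20-tokens-per-parameter rule it implies. The seven family sizes are taken from the entry; the exact token counts Cerebras used may differ.

        # Rough sketch of the Chinchilla rule of thumb (~20 training tokens
        # per parameter). Family sizes are from the entry above; the exact
        # token counts used in training may differ.
        PARAMS = [111e6, 256e6, 590e6, 1.3e9, 2.7e9, 6.7e9, 13e9]

        for n in PARAMS:
            tokens = 20 * n  # compute-optimal token budget under the heuristic
            print(f"{n / 1e9:6.3f}B params -> ~{tokens / 1e9:5.0f}B tokens")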
  • 2
    Qwen

    Alibaba

    Qwen LLM refers to a family of large language models (LLMs) developed by Alibaba Cloud's Damo Academy. These models are trained on a massive dataset of text and code, allowing them to understand and generate human-like text, translate languages, write different kinds of creative content, and answer questions in an informative way. Key features of the Qwen LLMs include: a variety of sizes, with the series ranging from 1.8 billion to 72 billion parameters to suit different needs and performance levels; open source releases, with the code of some versions publicly available for anyone to use and modify; multilingual support, with the ability to understand and translate multiple languages, including English, Chinese, and French; and diverse capabilities, with the models usable beyond generation and translation for tasks such as question answering, text summarization, and code generation.
  • 3
    Phi-4

    Microsoft

    Phi-4 is a 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing. Phi-4 is the latest member of our Phi family of small language models and demonstrates what’s possible as we continue to probe the boundaries of SLMs. Phi-4 is currently available on Azure AI Foundry under a Microsoft Research License Agreement (MSRLA) and will be available on Hugging Face. Phi-4 outperforms comparable and larger models on math related reasoning due to advancements throughout the processes, including the use of high-quality synthetic datasets, curation of high-quality organic data, and post-training innovations. Phi-4 continues to push the frontier of size vs quality.
  • 4
    Gemma 2

    Google

    A family of state-of-the-art, lightweight open models created from the same research and technology used to create the Gemini models. These models incorporate comprehensive security measures and help ensure responsible and reliable AI solutions through selected data sets and rigorous tuning. Gemma models achieve exceptional benchmark results in their 2B, 7B, 9B, and 27B sizes, even outperforming some larger open models. With Keras 3.0, enjoy seamless compatibility with JAX, TensorFlow, and PyTorch, allowing you to effortlessly choose and change frameworks based on the task. Redesigned to deliver outstanding performance and unmatched efficiency, Gemma 2 is optimized for incredibly fast inference on a variety of hardware. The Gemma family offers different models optimized for specific use cases that adapt to your needs. Gemma models are lightweight, decoder-only, text-to-text large language models, trained on a huge set of text data, code, and mathematical content.
  • 5
    Aya

    Cohere AI

    Aya is a new state-of-the-art, open-source, massively multilingual, generative large language research model (LLM) covering 101 different languages, more than double the number covered by existing open-source models. Aya helps researchers unlock the powerful potential of LLMs for dozens of languages and cultures largely ignored by most advanced models on the market today. We are open-sourcing both the Aya model and the largest multilingual instruction fine-tuned dataset to date, with 513 million examples covering 114 languages. This data collection includes rare annotations from native and fluent speakers all around the world, ensuring that AI technology can effectively serve a broad global audience that has had limited access to date.
  • 6
    Stable LM

    Stability AI

    Stable LM: Stability AI Language Models. The release of Stable LM builds on our experience in open-sourcing earlier language models with EleutherAI, a nonprofit research hub. These language models include GPT-J, GPT-NeoX, and the Pythia suite, which were trained on The Pile open-source dataset. Many recent open-source language models continue to build on these efforts, including Cerebras-GPT and Dolly-2. Stable LM is trained on a new experimental dataset built on The Pile, but three times larger with 1.5 trillion tokens of content. We will release details on the dataset in due course. The richness of this dataset gives Stable LM surprisingly high performance in conversational and coding tasks, despite its small size of 3 to 7 billion parameters (by comparison, GPT-3 has 175 billion parameters). Stable LM 3B is a compact language model designed to operate on portable digital devices like handhelds and laptops, and we’re excited about its capabilities and portability.
    Starting Price: Free
  • 7
    Megatron-Turing
    Megatron-Turing Natural Language Generation model (MT-NLG) is the largest and most powerful monolithic transformer English language model, with 530 billion parameters. This 105-layer, transformer-based MT-NLG improves upon the prior state-of-the-art models in zero-, one-, and few-shot settings. It demonstrates unmatched accuracy in a broad set of natural language tasks such as completion prediction, reading comprehension, commonsense reasoning, natural language inference, and word sense disambiguation. With the intent of accelerating research on the largest English language model to date and enabling customers to experiment with, employ, and apply such a large language model on downstream language tasks, NVIDIA is pleased to announce an Early Access program for its managed API service for the MT-NLG model.
  • 8
    LLaVA

    LLaVA

    LLaVA (Large Language-and-Vision Assistant) is an innovative multimodal model that integrates a vision encoder with the Vicuna language model to facilitate comprehensive visual and language understanding. Through end-to-end training, LLaVA exhibits impressive chat capabilities, emulating the multimodal functionalities of models like GPT-4. Notably, LLaVA-1.5 has achieved state-of-the-art performance across 11 benchmarks, utilizing publicly available data and completing training in approximately one day on a single 8-A100 node, surpassing methods that rely on billion-scale datasets. The development of LLaVA involved the creation of a multimodal instruction-following dataset, generated using language-only GPT-4. This dataset comprises 158,000 unique language-image instruction-following samples, including conversations, detailed descriptions, and complex reasoning tasks. This data has been instrumental in training LLaVA to perform a wide array of visual and language tasks effectively.
    Starting Price: Free
  • 9
    InstructGPT
    InstructGPT is a family of language models from OpenAI fine-tuned to follow instructions. Starting from GPT-3, the models are trained with supervised fine-tuning on human-written demonstrations and then with reinforcement learning from human feedback (RLHF), using a reward model trained on human preference rankings of model outputs. The result is a model that follows a user's intent far more reliably than the raw pre-trained GPT-3: in OpenAI's evaluations, outputs from the 1.3B-parameter InstructGPT model were preferred by labelers to outputs from the 175B GPT-3, despite having over 100x fewer parameters. InstructGPT models also show improvements in truthfulness and reductions in toxic output generation while incurring minimal performance regressions on public NLP datasets.
    Starting Price: $0.0200 per 1000 tokens
  • 10
    Gemma

    Google

    Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.” Accompanying our model weights, we’re also releasing tools to support developer innovation, foster collaboration, and guide the responsible use of Gemma models. Gemma models share technical and infrastructure components with Gemini, our largest and most capable AI model widely available today. This enables Gemma 2B and 7B to achieve best-in-class performance for their sizes compared to other open models. And Gemma models are capable of running directly on a developer laptop or desktop computer. Notably, Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.
  • 11
    OLMo 2
    OLMo 2 is a family of fully open language models developed by the Allen Institute for AI (AI2), designed to provide researchers and developers with transparent access to training data, open-source code, reproducible training recipes, and comprehensive evaluations. These models are trained on up to 5 trillion tokens and are competitive with leading open-weight models like Llama 3.1 on English academic benchmarks. OLMo 2 emphasizes training stability, implementing techniques to prevent loss spikes during long training runs, and utilizes staged training interventions during late pretraining to address capability deficiencies. The models incorporate state-of-the-art post-training methodologies from AI2's Tülu 3, resulting in the creation of OLMo 2-Instruct models. An actionable evaluation framework, the Open Language Modeling Evaluation System (OLMES), was established to guide improvements through development stages, consisting of 20 evaluation benchmarks assessing core capabilities.
  • 12
    Qwen-7B

    Alibaba

    Qwen-7B is the 7B-parameter version of the large language model series Qwen (abbreviated from Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model pretrained on a large volume of data, including web texts, books, code, etc. Additionally, based on the pretrained Qwen-7B, we release Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. The features of the Qwen-7B series include: trained with high-quality pretraining data, as we have pretrained Qwen-7B on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens that spans plain texts and code across a wide range of general and professional domains; and strong performance, as in comparison with models of similar size, Qwen-7B outperforms competitors on a series of benchmark datasets evaluating natural language understanding, mathematics, coding, and more. A sketch of loading Qwen-7B-Chat with Hugging Face transformers follows below.
    Starting Price: Free
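
    As referenced above, a minimal sketch of loading Qwen-7B-Chat with Hugging Face transformers. The model ID is the public Hugging Face release; the chat() helper is provided by Qwen's bundled remote code, so verify exact usage against the model card.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Qwen ships custom modeling code, hence trust_remote_code=True.
        tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
        model = AutoModelForCausalLM.from_pretrained(
            "Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True
        ).eval()

        # chat() is a convenience method added by Qwen's remote code.
        response, history = model.chat(tokenizer, "What does alignment training do?", history=None)
        print(response)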
  • 13
    Orpheus TTS

    Canopy Labs

    Canopy Labs has introduced Orpheus, a family of state-of-the-art speech large language models (LLMs) designed for human-level speech generation. These models are built on the Llama-3 architecture and are trained on over 100,000 hours of English speech data, enabling them to produce natural intonation, emotion, and rhythm that surpasses current state-of-the-art closed source models. Orpheus supports zero-shot voice cloning, allowing users to replicate voices without prior fine-tuning, and offers guided emotion and intonation control through simple tags. The models achieve low latency, with approximately 200ms streaming latency for real-time applications, reducible to around 100ms with input streaming. Canopy Labs has released both pre-trained and fine-tuned 3B-parameter models under the permissive Apache 2.0 license, with plans to release smaller models of 1B, 400M, and 150M parameters for use on resource-constrained devices.
  • 14
    DeepSeek-V2

    DeepSeek

    DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by economical training and efficient inference. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures such as Multi-head Latent Attention (MLA), which enables efficient inference by compressing the Key-Value (KV) cache, and DeepSeekMoE, which enables cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, saving 42.5% in training costs, reducing the KV cache by 93.3% (a back-of-envelope estimate of what that saves follows below), and boosting generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
    Starting Price: Free
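
    As referenced above, a back-of-envelope estimate of what the reported 93.3% KV-cache reduction buys at a 128K context. The layer and head dimensions below are hypothetical placeholders, not DeepSeek-V2's actual configuration.

        # Hypothetical dims chosen only to illustrate the scaling; NOT the
        # real DeepSeek-V2 configuration.
        layers, kv_heads, head_dim = 60, 32, 128
        seq_len, bytes_per_val = 128_000, 2          # 128K context at fp16/bf16

        # Standard attention caches one key and one value per layer/head/position.
        naive = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_val
        mla = naive * (1 - 0.933)                    # reported 93.3% reduction

        print(f"naive KV cache: {naive / 2**30:.1f} GiB")
        print(f"with MLA:       {mla / 2**30:.1f} GiB")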
  • 15
    Granite Code
    We introduce the Granite series of decoder-only code models for code-generative tasks (e.g., fixing bugs, explaining code, documenting code), trained with code written in 116 programming languages. A comprehensive evaluation of the Granite Code model family on diverse tasks demonstrates that our models consistently reach state-of-the-art performance among available open source code LLMs. The key advantages of Granite Code models include: All-rounder Code LLM: Granite Code models achieve competitive or state-of-the-art performance on different kinds of code-related tasks, including code generation, explanation, fixing, editing, translation, and more, demonstrating their ability to solve diverse coding tasks. Trustworthy Enterprise-Grade LLM: All our models are trained on license-permissible data collected following IBM's AI Ethics principles and guided by IBM's Corporate Legal team for trustworthy enterprise usage.
    Starting Price: Free
  • 16
    Jamba

    AI21 Labs

    Jamba is the most powerful and efficient long-context model, open for builders and built for the enterprise. Jamba's latency outperforms all leading models of comparable sizes. Jamba's 256K context window is the longest openly available. Jamba's Mamba-Transformer MoE architecture is designed for cost and efficiency gains. Jamba offers key features out of the box, including function calling, JSON-mode output, document objects, and citation mode. Jamba 1.5 models maintain high performance across the full length of their context window and achieve top scores across common quality benchmarks. Secure deployment that suits your enterprise: seamlessly start using Jamba on our production-grade SaaS platform, or deploy the Jamba model family across our strategic partners. We offer VPC and on-prem deployments for enterprises that require custom solutions, and for enterprises with unique, bespoke requirements we offer hands-on management, continuous pre-training, and more. A sketch of loading the open Jamba checkpoint follows below.
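
    A minimal sketch of loading the open Jamba base checkpoint, assuming the ai21labs/Jamba-v0.1 release on Hugging Face; verify the arguments against the model card (early releases required trust_remote_code=True, while newer transformers versions support Jamba natively).

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("ai21labs/Jamba-v0.1")
        model = AutoModelForCausalLM.from_pretrained(
            "ai21labs/Jamba-v0.1", torch_dtype=torch.bfloat16, device_map="auto"
        )

        inputs = tokenizer("A hybrid Mamba-Transformer layout means", return_tensors="pt").to(model.device)
        print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))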
  • 17
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models, ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today's AI, companies that have given early feedback and are excited to build with Llama 2.
    Starting Price: Free
  • 18
    ERNIE 3.0 Titan
    Pre-trained language models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge-enhanced models, and a model with 10 billion parameters was trained with it. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. In order to explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable texts.
  • 19
    ChatGLM

    Zhipu AI

    ChatGLM-6B is an open-source, Chinese-English bilingual dialogue language model based on the General Language Model (GLM) architecture with 6.2 billion parameters. Combined with model quantization technology, users can deploy it locally on consumer-grade graphics cards (only 6GB of video memory is required at the INT4 quantization level); a deployment sketch follows below. ChatGLM-6B uses technology similar to ChatGPT, optimized for Chinese Q&A and dialogue. After training on about 1T tokens of bilingual Chinese and English text, supplemented by supervised fine-tuning, feedback bootstrapping, and reinforcement learning from human feedback, the 6.2-billion-parameter ChatGLM-6B is able to generate answers that align well with human preferences.
    Starting Price: Free
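
    As referenced above, a sketch of the low-VRAM deployment path, following the usage documented in the ChatGLM-6B repository; verify the exact quantization call against the THUDM/chatglm-6b model card.

        from transformers import AutoModel, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
        # INT4 quantization is the ~6GB-of-VRAM level mentioned above.
        model = (
            AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
            .quantize(4)
            .half()
            .cuda()
            .eval()
        )

        # chat() is a helper provided by ChatGLM's bundled remote code.
        response, history = model.chat(tokenizer, "Hello", history=[])
        print(response)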
  • 20
    Phi-2

    Microsoft

    We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters. On complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation. With its compact size, Phi-2 is an ideal playground for researchers, including for exploration around mechanistic interpretability, safety improvements, or fine-tuning experimentation on a variety of tasks. We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.
  • 21
    Tülu 3
    Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
    Starting Price: Free
  • 22
    QwQ-32B

    Alibaba

    QwQ-32B is an advanced reasoning model developed by Alibaba Cloud's Qwen team, designed to enhance AI's problem-solving capabilities. With 32 billion parameters, it achieves performance comparable to state-of-the-art models like DeepSeek's R1, which has 671 billion parameters. This efficiency comes from optimized parameter utilization, allowing QwQ-32B to perform complex tasks such as mathematical reasoning, coding, and general problem-solving with fewer resources. The model supports a context length of up to 32,000 tokens, enabling it to process extensive input data effectively. QwQ-32B is accessible via Alibaba's chatbot service, Qwen Chat, and is open-sourced under the Apache 2.0 license, promoting collaboration and further development within the AI community.
    Starting Price: Free
  • 23
    Baichuan-13B

    Baichuan Intelligent Technology

    Baichuan-13B is an open source, commercially usable large-scale language model containing 13 billion parameters, developed by Baichuan Intelligent Technology as the successor to Baichuan-7B. It achieves the best results among models of the same size on authoritative Chinese and English benchmarks. This release contains two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Larger size, more data: Baichuan-13B expands the parameter count to 13 billion on the basis of Baichuan-7B and is trained on 1.4 trillion tokens of high-quality corpus, 40% more than LLaMA-13B, making it the open source 13B-size model with the largest amount of training data to date. It supports both Chinese and English, uses ALiBi positional encoding, and has a context window length of 4,096.
    Starting Price: Free
  • 24
    DBRX

    Databricks

    Today, we are excited to introduce DBRX, an open, general-purpose LLM created by Databricks. Across a range of standard benchmarks, DBRX sets a new state-of-the-art for established open LLMs. Moreover, it provides the open community and enterprises building their own LLMs with capabilities that were previously limited to closed model APIs; according to our measurements, it surpasses GPT-3.5, and it is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized models like CodeLLaMA-70B in programming, in addition to its strength as a general-purpose LLM. This state-of-the-art quality comes with marked improvements in training and inference performance. DBRX advances the state-of-the-art in efficiency among open models thanks to its fine-grained mixture-of-experts (MoE) architecture. Inference is up to 2x faster than LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in terms of both total and active parameter counts.
  • 25
    PanGu-Σ

    Huawei

    Significant advancements in the field of natural language processing, understanding, and generation have been achieved through the expansion of large language models. This study introduces a system which utilizes Ascend 910 AI processors and the MindSpore framework to train a language model with over a trillion parameters, specifically 1.085T, named PanGu-Σ. This model, which builds upon the foundation laid by PanGu-α, takes the traditionally dense Transformer model and transforms it into a sparse one using a concept known as Random Routed Experts (RRE). The model was efficiently trained on a dataset of 329 billion tokens using a technique called Expert Computation and Storage Separation (ECSS), leading to a 6.3-fold increase in training throughput via heterogeneous computing. Experimentation indicates that PanGu-Σ sets a new standard in zero-shot learning for various downstream Chinese NLP tasks.
  • 26
    Ai2 OLMoE

    The Allen Institute for Artificial Intelligence

    Ai2 OLMoE is a fully open source mixture-of-experts language model that is capable of running completely on-device, allowing you to try our model privately and securely. Our app is intended to help researchers explore how to make on-device intelligence better and to enable developers to quickly prototype new AI experiences, all with no cloud connectivity required. OLMoE is a highly efficient mixture-of-experts version of the Ai2 OLMo family of models. Experience the real-world tasks that state-of-the-art local models are capable of. Research how to improve small AI models. Test your own models locally using our open-source codebase. Integrate OLMoE into other iOS applications. The Ai2 OLMoE app provides privacy and security by operating completely on-device. Easily share the output of your conversations with friends or colleagues. The OLMoE model and the application code are fully open source.
    Starting Price: Free
  • 27
    DeepSeek-V3

    DeepSeek

    DeepSeek-V3 is a state-of-the-art AI model designed to deliver unparalleled performance in natural language understanding, advanced reasoning, and decision-making tasks. Leveraging next-generation neural architectures, it integrates extensive datasets and fine-tuned algorithms to tackle complex challenges across diverse domains such as research, development, business intelligence, and automation. With a focus on scalability and efficiency, DeepSeek-V3 provides developers and enterprises with cutting-edge tools to accelerate innovation and achieve transformative outcomes.
  • 28
    Falcon 3

    Technology Innovation Institute (TII)

    Falcon 3 is an open-source large language model (LLM) developed by the Technology Innovation Institute (TII) to make advanced AI accessible to a broader audience. Designed for efficiency, it operates seamlessly on lightweight devices, including laptops, without compromising performance. The Falcon 3 ecosystem comprises four scalable models, each tailored to diverse applications, and supports multiple languages while optimizing resource usage. This latest iteration in TII's LLM series achieves state-of-the-art results in reasoning, language understanding, instruction following, code, and mathematics tasks. By combining high performance with resource efficiency, Falcon 3 aims to democratize access to AI, empowering users across various sectors to leverage advanced technology without the need for extensive computational resources.
    Starting Price: Free
  • 29
    OPT

    Meta

    Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
  • 30
    Llama

    Meta

    Llama (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. We are making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a Llama model card that details how we built the model in keeping with our approach to Responsible AI practices.
  • 31
    Mistral NeMo

    Mistral AI

    Mistral NeMo is our new best small model: a state-of-the-art 12B model built in collaboration with NVIDIA, offering a context window of up to 128k tokens and released under the Apache 2.0 license. Its reasoning, world knowledge, and coding accuracy are state-of-the-art in its size category. As it relies on a standard architecture, Mistral NeMo is easy to use and a drop-in replacement in any system using Mistral 7B; a sketch of the swap follows below. We have released pre-trained base and instruction-tuned checkpoints under the Apache 2.0 license to promote adoption by researchers and enterprises. Mistral NeMo was trained with quantization awareness, enabling FP8 inference without any performance loss. The model is designed for global, multilingual applications: it is trained on function calling, has a large context window, and, compared to Mistral 7B, is much better at following precise instructions, reasoning, and handling multi-turn conversations.
    Starting Price: Free
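
    To illustrate the drop-in-replacement claim referenced above, a minimal sketch in which an existing Mistral 7B pipeline only has its model ID swapped; both IDs are the public Hugging Face releases, and the chat-style pipeline call assumes a recent transformers version.

        from transformers import pipeline

        # model_id = "mistralai/Mistral-7B-Instruct-v0.3"   # previous setup
        model_id = "mistralai/Mistral-Nemo-Instruct-2407"    # drop-in swap

        chat = pipeline("text-generation", model=model_id, device_map="auto")
        messages = [{"role": "user", "content": "Summarize FP8 inference in one sentence."}]
        print(chat(messages, max_new_tokens=60)[0]["generated_text"][-1]["content"])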
  • 32
    GPT-J

    EleutherAI

    GPT-J is a cutting-edge language model created by the research organization EleutherAI. In terms of performance, GPT-J exhibits a level of proficiency comparable to that of OpenAI's renowned GPT-3 model on a range of zero-shot tasks, and it has demonstrated the ability to surpass GPT-3 in tasks related to generating code. The latest iteration of this language model, known as GPT-J-6B, is built upon a linguistic dataset referred to as The Pile. This publicly available dataset encompasses a substantial volume of 825 gibibytes of language data, organized into 22 distinct subsets. While GPT-J shares certain capabilities with ChatGPT, it is not designed to operate as a chatbot; rather, its primary function is to predict text, as the sketch below illustrates. In a significant development in March 2023, Databricks introduced Dolly, an instruction-following model released under an Apache license.
    Starting Price: Free
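
    As referenced above, a minimal text-completion sketch using the public EleutherAI/gpt-j-6b checkpoint; the half-precision load keeps the ~6B weights near 12GB.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
        model = AutoModelForCausalLM.from_pretrained(
            "EleutherAI/gpt-j-6b", torch_dtype=torch.float16, device_map="auto"
        )

        # GPT-J simply continues a prefix; there is no chat format.
        inputs = tokenizer("The Pile is an 825 GiB corpus composed of", return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=30)
        print(tokenizer.decode(output[0], skip_special_tokens=True))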
  • 33
    Vicuna

    lmsys.org

    Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows Vicuna-13B achieves more than 90%* quality of OpenAI ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in more than 90%* of cases. The cost of training Vicuna-13B is around $300. The code and weights, along with an online demo, are publicly available for non-commercial use.
    Starting Price: Free
  • 34
    NVIDIA Nemotron
    NVIDIA Nemotron is a family of open-source models developed by NVIDIA, designed to generate synthetic data for training large language models (LLMs) for commercial applications. The Nemotron-4 340B model, in particular, is a significant release by NVIDIA, offering developers a powerful tool to generate high-quality data and filter it based on various attributes using a reward model.
  • 35
    Falcon Mamba 7B

    Technology Innovation Institute (TII)

    Falcon Mamba 7B is the first open-source State Space Language Model (SSLM), introducing a groundbreaking architecture for Falcon models. Recognized as the top-performing open-source SSLM worldwide by Hugging Face, it sets a new benchmark in AI efficiency. Unlike traditional transformers, SSLMs operate with minimal memory requirements and can generate extended text sequences without additional overhead. Falcon Mamba 7B surpasses leading transformer-based models, including Meta’s Llama 3.1 8B and Mistral’s 7B, showcasing superior performance. This innovation underscores Abu Dhabi’s commitment to advancing AI research and development on a global scale.
    Starting Price: Free
  • 36
    OpenEuroLLM

    OpenEuroLLM

    OpenEuroLLM is a collaborative initiative among Europe's leading AI companies and research institutions to develop a series of open-source foundation models for transparent AI in Europe. The project emphasizes transparency by openly sharing data, documentation, training, testing code, and evaluation metrics, fostering community involvement. It ensures compliance with EU regulations, aiming to provide performant large language models that align with European standards. A key focus is on linguistic and cultural diversity, extending multilingual capabilities to encompass all EU official languages and beyond. The initiative seeks to enhance access to foundational models ready for fine-tuning across various applications, expand evaluation results in multiple languages, and increase the availability of training datasets and benchmarks. Transparency is maintained throughout the training processes by sharing tools, methodologies, and intermediate results.
  • 37
    IBM Granite
    IBM® Granite™ is a family of artificial intelligence (AI) models purpose-built for business, engineered from scratch to help ensure trust and scalability in AI-driven applications. Open source Granite models are available today. We make AI as accessible as possible for as many developers as possible. That's why we have open-sourced core Granite Code, Time Series, Language, and GeoSpatial models and made them available on Hugging Face under the permissive Apache 2.0 license, which enables broad, unencumbered commercial usage. All Granite models are trained on carefully curated data, with industry-leading levels of transparency about the data that went into them. We have also open-sourced the tools we use to ensure the data is high quality and up to the standards that enterprise-grade applications demand.
    Starting Price: Free
  • 38
    XLNet

    XLNet

    XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
    Starting Price: Free
  • 39
    StarCoder

    BigCode

    StarCoder and StarCoderBase are Large Language Models for Code (Code LLMs) trained on permissively licensed data from GitHub, including 80+ programming languages, Git commits, GitHub issues, and Jupyter notebooks. Similar to LLaMA, we trained a ~15B parameter model for 1 trillion tokens. We fine-tuned the StarCoderBase model on 35B Python tokens, resulting in a new model that we call StarCoder. We found that StarCoderBase outperforms existing open Code LLMs on popular programming benchmarks and matches or surpasses closed models such as code-cushman-001 from OpenAI (the original Codex model that powered early versions of GitHub Copilot). With a context length of over 8,000 tokens, the StarCoder models can process more input than any other open LLM, enabling a wide range of interesting applications. For example, by prompting the StarCoder models with a series of dialogues, we enabled them to act as a technical assistant. A fill-in-the-middle prompting sketch follows below.
    Starting Price: Free
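
    As referenced above, a fill-in-the-middle prompting sketch. The FIM special tokens follow the BigCode release; the bigcode/starcoder checkpoint is gated on Hugging Face, so access must be granted first.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
        model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", device_map="auto")

        # FIM: give the code before and after a gap; the model fills the middle.
        prompt = "<fim_prefix>def mean(xs):\n    return <fim_suffix>\n<fim_middle>"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))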
  • 40
    Reka Flash 3
    Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization; the arithmetic behind these figures is sketched below.
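
    The arithmetic behind the quoted memory figures, as referenced above; a rough estimate that ignores activation memory and quantization overhead.

        params = 21e9               # parameter count quoted above

        fp16_bytes = params * 2.0   # 2 bytes per weight at fp16
        int4_bytes = params * 0.5   # 4 bits per weight

        # 42e9 bytes is ~39.1 GiB, matching the quoted "39GB" if read as GiB;
        # the 4-bit figure lands near the quoted 11GB once overhead is added.
        print(f"fp16: ~{fp16_bytes / 2**30:.1f} GiB, 4-bit: ~{int4_bytes / 2**30:.1f} GiB")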
  • 41
    PygmalionAI

    PygmalionAI

    PygmalionAI is a community dedicated to creating open-source projects based on EleutherAI's GPT-J 6B and Meta's LLaMA models. In simple terms, Pygmalion makes AI fine-tuned for chatting and roleplaying purposes. The current actively supported Pygmalion AI model is the 7B variant, based on Meta AI's LLaMA model. With only 18GB (or less) VRAM required, Pygmalion offers better chat capability than much larger language models with relatively minimal resources. Our curated dataset of high-quality roleplaying data ensures that your bot will be the optimal RP partner. Both the model weights and the code used to train it are completely open-source, and you can modify/re-distribute it for whatever purpose you want. Language models, including Pygmalion, generally run on GPUs since they need access to fast memory and massive processing power in order to output coherent text at an acceptable speed.
    Starting Price: Free
  • 42
    Mistral Saba

    Mistral AI

    Mistral Saba is a 24-billion-parameter model trained on meticulously curated datasets from across the Middle East and South Asia. The model provides more accurate and relevant responses than models that are over five times its size while being significantly faster and lower cost. It can also serve as a strong base to train highly specific regional adaptations. Mistral Saba is available as an API and can be deployed locally within customers' security premises. Like the recently released Mistral Small 3, the model is lightweight and can be deployed on single-GPU systems, responding at speeds of over 150 tokens per second. In keeping with the rich cultural cross-pollination between the Middle East and South Asia, Mistral Saba supports Arabic and many Indian-origin languages and is particularly strong in South Indian-origin languages such as Tamil. This capability enhances its versatility in multinational use across these interconnected regions.
    Starting Price: Free
  • 43
    Dolly

    Databricks

    Dolly is a cheap-to-build LLM that exhibits a surprising degree of the instruction following capabilities exhibited by ChatGPT. Whereas the work from the Alpaca team showed that state-of-the-art models could be coaxed into high quality instruction-following behavior, we find that even years-old open source models with much earlier architectures exhibit striking behaviors when fine tuned on a small corpus of instruction training data. Dolly works by taking an existing open source 6 billion parameter model from EleutherAI and modifying it ever so slightly to elicit instruction following capabilities such as brainstorming and text generation not present in the original model, using data from Alpaca.
    Starting Price: Free
  • 44
    Falcon-7B

    Technology Innovation Institute (TII)

    Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-7B? It outperforms comparable open-source models (e.g., MPT-7B, StableLM, RedPajama), thanks to being trained on 1,500B tokens of RefinedWeb enhanced with curated corpora; see the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions.
    Starting Price: Free
  • 45
    Jurassic-1

    AI21 Labs

    Jurassic-1 models come in two sizes, where the Jumbo version, at 178B parameters, is the largest and most sophisticated language model ever released for general use by developers. AI21 Studio is currently in open beta, allowing anyone to sign up and immediately start querying Jurassic-1 using our API and interactive web environment. Our mission at AI21 Labs is to fundamentally reimagine the way humans read and write by introducing machines as thought partners, and the only way we can achieve this is if we take on this challenge together. We’ve been researching language models since our Mesozoic Era (aka 2017 😉). Jurassic-1 builds on this research, and it is the first generation of models we’re making available for widespread use.
  • 46
    Teuken 7B

    OpenGPT-X

    Teuken-7B is a multilingual, open source language model developed under the OpenGPT-X initiative, specifically designed to cater to Europe's diverse linguistic landscape. It has been trained on a dataset comprising over 50% non-English texts, encompassing all 24 official languages of the European Union, ensuring robust performance across these languages. A key innovation in Teuken-7B is its custom multilingual tokenizer, optimized for European languages, which enhances training efficiency and reduces inference costs compared to standard monolingual tokenizers. The model is available in two versions, Teuken-7B-Base, the foundational pre-trained model, and Teuken-7B-Instruct, which has undergone instruction tuning for improved performance in following user prompts. Both versions are accessible on Hugging Face, promoting transparency and collaboration within the AI community. The development of Teuken-7B underscores a commitment to creating AI models that reflect Europe's diversity.
    Starting Price: Free
  • 47
    Hermes 3

    Nous Research

    Experiment and push the boundaries of individual alignment, artificial consciousness, open-source software, and decentralization in ways that monolithic companies and governments are too afraid to try. Hermes 3 contains advanced long-term context retention and multi-turn conversation capability, complex roleplaying and internal monologue abilities, and enhanced agentic function-calling. Our training data aggressively encourages the model to follow the system and instruction prompts exactly and in an adaptive manner. Hermes 3 was created by fine-tuning Llama 3.1 8B, 70B, and 405B, and training on a dataset of primarily synthetically generated responses. The model boasts comparable or superior performance to Llama 3.1 while unlocking deeper capabilities in reasoning and creativity. Hermes 3 is a series of instruct and tool-use models with strong reasoning and creative abilities.
    Starting Price: Free
  • 48
    GPT-4

    OpenAI

    GPT-4 (Generative Pre-trained Transformer 4) is a large-scale multimodal language model released by OpenAI in March 2023. GPT-4 is the successor to GPT-3 and GPT-3.5 in the GPT-n series of natural language processing models, and it accepts both text and image inputs while producing text outputs with human-like generation and understanding capabilities. Unlike most earlier NLP models, GPT-4 does not require additional training data for specific tasks; instead, it can generate text or answer questions from a prompt alone. GPT-4 has been shown to perform a wide variety of tasks without any task-specific training data, such as translation, summarization, question answering, sentiment analysis, and more.
    Starting Price: $0.0200 per 1000 tokens
  • 49
    Llama 3.2
    The open-source AI model you can fine-tune, distill, and deploy anywhere is now available in more versions. Choose from 1B, 3B, 11B, or 90B, or continue building with Llama 3.1. Llama 3.2 is a collection of large language models (LLMs) pretrained and fine-tuned in 1B and 3B sizes that are multilingual and text-only, and in 11B and 90B sizes that take both text and image inputs and output text. Develop highly performant and efficient applications from our latest release. Use our 1B or 3B models for on-device applications such as summarizing a discussion from your phone or calling on-device tools like the calendar; a sketch of the 1B summarization use case follows below. Use our 11B or 90B models for image use cases such as transforming an existing image into something new or getting more information from an image of your surroundings.
    Starting Price: Free
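
    As referenced above, a sketch of the 1B on-device summarization use case; the checkpoint is gated on Hugging Face behind Meta's license, and the chat-style pipeline call assumes a recent transformers version.

        from transformers import pipeline

        pipe = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

        messages = [{"role": "user",
                     "content": "Summarize this discussion: we agreed to ship v2 on Friday; QA starts Wednesday."}]
        print(pipe(messages, max_new_tokens=60)[0]["generated_text"][-1]["content"])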
  • 50
    Chinchilla

    Google DeepMind

    Chinchilla is a large language model developed by DeepMind. Chinchilla uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher. The compute arithmetic behind the same-budget claim is sketched below.
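
    A quick check of the same-compute-budget claim above, using the training-FLOPs approximation C ≈ 6ND from the Chinchilla paper (N parameters, D training tokens; Gopher's 300B-token budget is from its paper):

        % Training compute approximation: C \approx 6ND
        C_{\text{Gopher}}     \approx 6 \cdot (280 \times 10^{9}) \cdot (300 \times 10^{9})  \approx 5.0 \times 10^{23}\ \text{FLOPs}
        C_{\text{Chinchilla}} \approx 6 \cdot (70  \times 10^{9}) \cdot (1.4 \times 10^{12}) \approx 5.9 \times 10^{23}\ \text{FLOPs}

    The two budgets agree to within roughly 17%, which is the sense in which Chinchilla reuses Gopher's compute while training a 4x smaller model on more data.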