Alternatives to Sonar

Compare Sonar alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Sonar in 2025. Compare features, ratings, user reviews, pricing, and more from Sonar competitors and alternatives in order to make an informed decision for your business.

  • 1
    Exa

    Exa.ai

    The Exa API retrieves the best content on the web using embeddings-based search. Exa understands meaning, giving results search engines can’t. Exa uses a novel link prediction transformer to predict links that match the meaning of a prompt. For queries that need semantic understanding, search with our SOTA web embeddings model over our custom index. For all other queries, we offer keyword-based search. Stop learning how to web scrape or parse HTML. Get the clean, full text of any page in our index, or intelligent embeddings-ranked highlights related to a query. Select any date range, include or exclude any domain, select a custom data vertical, or get up to 10 million results.
    Starting Price: $100 per month
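    Below is a minimal, hedged sketch of how such a query might look from Python. It assumes the exa_py SDK and an EXA_API_KEY environment variable; the method and parameter names (search_and_contents, include_domains, start_published_date, text) are assumptions drawn from the feature list above rather than a verified API reference.

    ```python
    # Hedged sketch of an embeddings-based Exa query with domain/date filters and
    # clean full-text retrieval. Assumes the exa_py SDK; names are unverified.
    import os

    from exa_py import Exa

    exa = Exa(os.environ["EXA_API_KEY"])

    results = exa.search_and_contents(
        "open-source models for long-context reasoning",
        type="neural",                      # semantic search; "keyword" for everything else
        num_results=5,
        include_domains=["arxiv.org"],      # restrict to a custom domain list
        start_published_date="2024-01-01",  # date-range filtering
        text=True,                          # return the clean, full page text
    )

    for r in results.results:
        print(r.title, r.url)
    ```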
  • 2
    Gemini 2.5 Pro Deep Think
    Gemini 2.5 Pro Deep Think is a cutting-edge AI model designed to enhance reasoning, offering improved performance and accuracy. This advanced version of the Gemini 2.5 series incorporates a feature called "Deep Think," allowing the model to reason through its thoughts before responding. It excels in coding, handling complex prompts, and multimodal tasks, offering smarter, more efficient execution. Whether for coding tasks, visual reasoning, or handling long-context input, Gemini 2.5 Pro Deep Think provides unparalleled performance. It also introduces features like native audio for more expressive conversations and optimizations that make it faster and more accurate than previous versions.
  • 3
    NVIDIA Llama Nemotron
    NVIDIA Llama Nemotron is a family of advanced language models optimized for reasoning and a diverse set of agentic AI tasks. These models excel in graduate-level scientific reasoning, advanced mathematics, coding, instruction following, and tool calls. Designed for deployment across various platforms, from data centers to PCs, they offer the flexibility to toggle reasoning capabilities on or off, reducing inference costs when deep reasoning isn't required. The Llama Nemotron family includes models tailored for different deployment needs. Built upon Llama models and enhanced by NVIDIA through post-training, these models demonstrate improved accuracy, up to 20% over base models, and optimized inference speeds, achieving up to five times the performance of other leading open reasoning models. This efficiency enables handling more complex reasoning tasks, enhances decision-making capabilities, and reduces operational costs for enterprises.
  • 4
    Llama 3.3
    Llama 3.3 is the latest iteration in the Llama series of language models, developed to push the boundaries of AI-powered understanding and communication. With enhanced contextual reasoning, improved language generation, and advanced fine-tuning capabilities, Llama 3.3 is designed to deliver highly accurate, human-like responses across diverse applications. This version features a larger training dataset, refined algorithms for nuanced comprehension, and reduced biases compared to its predecessors. Llama 3.3 excels in tasks such as natural language understanding, creative writing, technical explanation, and multilingual communication, making it an indispensable tool for businesses, developers, and researchers. Its modular architecture allows for customizable deployment in specialized domains, ensuring versatility and performance at scale.
    Starting Price: Free
  • 5
    Hermes 3

    Nous Research

    Hermes 3 is a series of instruct and tool-use models with strong reasoning and creative abilities, built to experiment and push the boundaries of individual alignment, artificial consciousness, open-source software, and decentralization in ways that monolithic companies and governments are too afraid to try. Hermes 3 contains advanced long-term context retention and multi-turn conversation capability, complex roleplaying and internal monologue abilities, and enhanced agentic function calling. Its training data aggressively encourages the model to follow the system and instruction prompts exactly and in an adaptive manner. Hermes 3 was created by fine-tuning Llama 3.1 8B, 70B, and 405B and training on a dataset of primarily synthetically generated responses. The model delivers performance comparable to, and in places superior to, Llama 3.1 while unlocking deeper capabilities in reasoning and creativity.
    Starting Price: Free
  • 6
    Tülu 3
    Tülu 3 is an advanced instruction-following language model developed by the Allen Institute for AI (Ai2), designed to enhance capabilities in areas such as knowledge, reasoning, mathematics, coding, and safety. Built upon the Llama 3 Base, Tülu 3 employs a comprehensive four-stage post-training process: meticulous prompt curation and synthesis, supervised fine-tuning on a diverse set of prompts and completions, preference tuning using both off- and on-policy data, and a novel reinforcement learning approach to bolster specific skills with verifiable rewards. This open-source model distinguishes itself by providing full transparency, including access to training data, code, and evaluation tools, thereby closing the performance gap between open and proprietary fine-tuning methods. Evaluations indicate that Tülu 3 outperforms other open-weight models of similar size, such as Llama 3.1-Instruct and Qwen2.5-Instruct, across various benchmarks.
    Starting Price: Free
  • 7
    Perplexity Research

    Perplexity AI

    Perplexity Research is an advanced AI-powered deep research tool designed to conduct thorough analyses across a wide range of complex subjects. By emulating human-like research processes, it iteratively searches, reads, and evaluates documents, refining its approach to develop a comprehensive understanding of the topic. Upon completing its analysis, Perplexity Research synthesizes the gathered information into clear, detailed reports, which users can export as PDFs or shareable web pages. This tool excels in various domains, including finance, marketing, technology, health, and travel planning, enabling users to perform expert-level research efficiently. Perplexity Research is currently accessible on the web, with plans to expand to iOS, Android, and Mac platforms, and is available for free, offering unlimited queries to Pro subscribers and a limited number of daily answers to non-subscribers.
    Starting Price: Free
  • 8
    Mistral Small 3.1
    Mistral Small 3.1 is a state-of-the-art, multimodal, and multilingual AI model released under the Apache 2.0 license. Building upon Mistral Small 3, this enhanced version offers improved text performance and advanced multimodal understanding, and supports an expanded context window of up to 128,000 tokens. It outperforms comparable models like Gemma 3 and GPT-4o Mini, delivering inference speeds of 150 tokens per second. Designed for versatility, Mistral Small 3.1 excels in tasks such as instruction following, conversational assistance, image understanding, and function calling, making it suitable for both enterprise and consumer-grade AI applications. Its lightweight architecture allows it to run efficiently on a single RTX 4090 or a Mac with 32 GB of RAM, facilitating on-device deployments. It is available for download on Hugging Face, accessible via Mistral AI's developer playground, and integrated into platforms such as Google Cloud Vertex AI and NVIDIA NIM.
    Starting Price: Free
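    A minimal, text-only sketch of running the model locally via Hugging Face transformers is shown below; the repository id is an assumption and the image-understanding path is omitted.

    ```python
    # Hedged, text-only sketch with transformers; the repo id below is assumed and
    # the multimodal (image) pipeline is not shown.
    from transformers import pipeline

    chat = pipeline(
        "text-generation",
        model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",  # assumed repo id
        device_map="auto",
        torch_dtype="auto",
    )

    messages = [{"role": "user", "content": "Explain function calling in one paragraph."}]
    print(chat(messages, max_new_tokens=200)[0]["generated_text"][-1]["content"])
    ```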
  • 9
    OpenAI o4-mini
    The o4-mini model is a compact and efficient version of the o3 model, released following the launch of GPT-4.1. It offers enhanced reasoning capabilities, with improved performance in tasks that require complex reasoning and problem-solving. The o4-mini is designed to meet the growing demand for advanced AI solutions, serving as a more efficient alternative while maintaining the capabilities of its predecessor. This model is part of OpenAI's strategy to refine and advance their AI technologies ahead of the anticipated GPT-5 launch.
  • 10
    OpenAI o1
    OpenAI o1 represents a new series of AI models designed by OpenAI, focusing on enhanced reasoning capabilities. These models, including o1-preview and o1-mini, are trained using a novel reinforcement learning approach to spend more time "thinking" through problems before providing answers. This approach allows o1 to excel in complex problem-solving tasks in areas like coding, mathematics, and science, outperforming previous models like GPT-4o in certain benchmarks. The o1 series aims to tackle challenges that require deeper thought processes, marking a significant step towards AI systems that can reason more like humans, although it's still in the preview stage with ongoing improvements and evaluations.
  • 11
    Hunyuan T1

    Tencent

    Hunyuan T1 is Tencent's deep-thinking AI model, now fully open to all users through the Tencent Yuanbao platform. This model excels in understanding multiple dimensions and potential logical relationships, making it suitable for handling complex tasks. Users can experience various AI models on the platform, including DeepSeek-R1 and Tencent Hunyuan Turbo. The official version of the Tencent Hunyuan T1 model will also be launched soon, providing external API access and other services. Built upon Tencent's Hunyuan large language model, Yuanbao excels in Chinese language understanding, logical reasoning, and task execution. It offers AI-based search, summaries, and writing capabilities, enabling users to analyze documents and engage in prompt-based interactions.
  • 12
    OpenAI o3
    OpenAI o3 is an advanced AI model designed to enhance reasoning capabilities by breaking down complex instructions into smaller, more manageable steps. It offers significant improvements over previous AI iterations, excelling in coding tasks, competitive programming, and achieving high scores in mathematics and science benchmarks. Available for widespread use, OpenAI o3 supports advanced AI-driven problem-solving and decision-making processes. The model incorporates deliberative alignment techniques to ensure its responses align with established safety and ethical guidelines, making it a powerful tool for developers, researchers, and enterprises seeking sophisticated AI solutions.
    Starting Price: $2 per 1 million tokens
  • 13
    Grok 3 Think
    Grok 3 Think, the latest iteration of xAI's AI model, is designed to enhance reasoning capabilities using advanced reinforcement learning. It can think through complex problems for extended periods, from seconds to minutes, improving its answers by backtracking, exploring alternatives, and refining its approach. This model, trained on an unprecedented scale, delivers remarkable performance in tasks such as mathematics, coding, and world knowledge, showing impressive results in competitions like the American Invitational Mathematics Examination. Grok 3 Think not only provides accurate solutions but also offers transparency by allowing users to inspect the reasoning behind its decisions, setting a new standard for AI problem-solving.
  • 14
    Grok 4
    Grok 4 is the latest AI model from Elon Musk’s xAI, marking a significant advancement in AI reasoning and natural language understanding. Developed on the Colossus supercomputer, Grok 4 supports multimodal inputs including text and images, with plans to add video capabilities soon. It features enhanced precision in language tasks and has demonstrated superior performance in scientific reasoning and visual problem-solving compared to other leading AI models. Designed for developers, researchers, and technical users, Grok 4 offers powerful tools for complex tasks. The model incorporates improved moderation to address previous concerns about biased or problematic outputs. Grok 4 represents a major leap forward in AI’s ability to understand and generate human-like responses.
  • 15
    Qwen2.5-VL-32B
    Qwen2.5-VL-32B is a state-of-the-art AI model designed for multimodal tasks, offering advanced capabilities in both text and image reasoning. It builds upon the earlier Qwen2.5-VL series, improving response quality with more human-like, formatted answers. The model excels in mathematical reasoning, fine-grained image understanding, and complex, multi-step reasoning tasks, such as those found in MathVista and MMMU benchmarks. Its superior performance has been demonstrated in comparison to other models, outperforming the larger Qwen2-VL-72B in certain areas. With improved image parsing and visual logic deduction, Qwen2.5-VL-32B provides a detailed, accurate analysis of images and can generate responses based on complex visual inputs. It has been optimized for both text and image tasks, making it ideal for applications requiring sophisticated reasoning and understanding across different media.
  • 16
    OpenAI o3-mini
    OpenAI o3-mini is a lightweight version of the advanced o3 AI model, offering powerful reasoning capabilities in a more efficient and accessible package. Designed to break down complex instructions into smaller, manageable steps, o3-mini excels in coding tasks, competitive programming, and problem-solving in mathematics and science. This compact model provides the same high-level precision and logic as its larger counterpart but with reduced computational requirements, making it ideal for use in resource-constrained environments. With built-in deliberative alignment, o3-mini ensures safe, ethical, and context-aware decision-making, making it a versatile tool for developers, researchers, and businesses seeking a balance between performance and efficiency.
  • 17
    ERNIE X1
    ERNIE X1 is an advanced conversational AI model developed by Baidu as part of their ERNIE (Enhanced Representation through Knowledge Integration) series. Unlike previous versions, ERNIE X1 is designed to be more efficient in understanding and generating human-like responses. It incorporates cutting-edge machine learning techniques to handle complex queries, making it capable of not only processing text but also generating images and engaging in multimodal communication. ERNIE X1 is often used in natural language processing applications such as chatbots, virtual assistants, and enterprise automation, offering significant improvements in accuracy, contextual understanding, and response quality.
    Starting Price: $0.28 per 1M tokens
  • 18
    Qwen3

    Alibaba

    Qwen3, the latest iteration of the Qwen family of large language models, introduces groundbreaking features that enhance performance across coding, math, and general capabilities. With models like the Qwen3-235B-A22B and Qwen3-30B-A3B, Qwen3 achieves impressive results compared to top-tier models, thanks to its hybrid thinking modes that allow users to control the balance between deep reasoning and quick responses. The platform supports 119 languages and dialects, making it an ideal choice for global applications. Its pre-training process, which uses 36 trillion tokens, enables robust performance, and advanced reinforcement learning (RL) techniques continue to refine its capabilities. Available on platforms like Hugging Face and ModelScope, Qwen3 offers a powerful tool for developers and researchers working in diverse fields.
    Starting Price: Free
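    The hybrid thinking modes are exposed through the chat template. The sketch below assumes the Qwen/Qwen3-30B-A3B checkpoint and the enable_thinking flag from Qwen's published usage examples; both are assumptions rather than verified details.

    ```python
    # Hedged sketch of toggling Qwen3's hybrid thinking modes with transformers.
    # The repo id and the enable_thinking chat-template flag are assumptions based
    # on Qwen's published usage examples.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-30B-A3B"  # assumed repo id for the MoE variant named above
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "How many primes are there below 30?"}]

    # enable_thinking=True -> deep step-by-step reasoning; False -> quick responses.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
    ```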
  • 19
    GPT-4.5

    OpenAI

    GPT-4.5 is a powerful AI model that improves upon its predecessor by scaling unsupervised learning, enhancing reasoning abilities, and offering improved collaboration capabilities. Designed to better understand human intent and collaborate in more natural, intuitive ways, GPT-4.5 delivers higher accuracy and lower hallucination rates across a broad range of topics. Its advanced capabilities enable it to generate creative and insightful content, solve complex problems, and assist with tasks in writing, design, and even space exploration. With improved AI-human interactions, GPT-4.5 is optimized for practical applications, making it more accessible and reliable for businesses and developers.
    Starting Price: $75.00 / 1M tokens
  • 20
    PaLM 2

    Google

    PaLM 2 is our next generation large language model that builds on Google’s legacy of breakthrough research in machine learning and responsible AI. It excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency, and natural language generation better than our previous state-of-the-art LLMs, including PaLM. It can accomplish these tasks because of the way it was built – bringing together compute-optimal scaling, an improved dataset mixture, and model architecture improvements. PaLM 2 is grounded in Google’s approach to building and deploying AI responsibly. It was evaluated rigorously for its potential harms and biases, capabilities and downstream uses in research and in-product applications. It’s being used in other state-of-the-art models, like Med-PaLM 2 and Sec-PaLM, and is powering generative AI features and tools at Google, like Bard and the PaLM API.
  • 21
    DeepSeek-Coder-V2
    DeepSeek-Coder-V2 is an open source code language model designed to excel in programming and mathematical reasoning tasks. It features a Mixture-of-Experts (MoE) architecture with 236 billion total parameters and 21 billion activated parameters per token, enabling efficient processing and high performance. The model was trained on an extensive dataset of 6 trillion tokens, enhancing its capabilities in code generation and mathematical problem-solving. DeepSeek-Coder-V2 supports over 300 programming languages and has demonstrated superior performance on coding and math benchmarks, surpassing other leading models. It is available in multiple variants, including DeepSeek-Coder-V2-Instruct, optimized for instruction-based tasks; DeepSeek-Coder-V2-Base, suitable for general text generation; and lightweight versions like DeepSeek-Coder-V2-Lite-Base and DeepSeek-Coder-V2-Lite-Instruct, designed for environments with limited computational resources.
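    For illustration, here is a hedged sketch of loading the lightweight instruct variant for a small code-generation prompt; the repository id and the trust_remote_code requirement are assumptions based on DeepSeek's published checkpoints.

    ```python
    # Hedged sketch: code generation with the Lite-Instruct variant via transformers.
    # Repo id and trust_remote_code are assumptions based on DeepSeek's releases.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, trust_remote_code=True, torch_dtype=torch.bfloat16, device_map="auto"
    )

    messages = [{"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```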
  • 22
    Smaug-72B
    Smaug-72B is a powerful open-source large language model (LLM) known for several key features. High performance: it currently holds the top spot on the Hugging Face Open LLM leaderboard, surpassing models like GPT-3.5 in various benchmarks, which means it excels at understanding, responding to, and generating human-like text. Open source: unlike many other advanced LLMs, Smaug-72B is freely available for anyone to use and modify, fostering collaboration and innovation in the AI community. Focus on reasoning and math: it particularly shines in reasoning and mathematical tasks, a strength attributed to unique fine-tuning techniques developed by Abacus AI, the creators of Smaug-72B. Based on Qwen-72B: it is technically a fine-tuned version of another powerful LLM, Qwen-72B, released by Alibaba, further improving upon its capabilities. Overall, Smaug-72B represents a significant step forward in open-source AI.
    Starting Price: Free
  • 23
    OpenAI o1-pro
    OpenAI o1-pro is the enhanced version of OpenAI's o1 model, designed to tackle more complex and demanding tasks with greater reliability. It features significant performance improvements over its predecessor, the o1 preview, with a notable 34% reduction in major errors and the ability to think 50% faster. This model excels in areas like math, physics, and coding, where it can provide detailed and accurate solutions. Additionally, the o1-pro mode can process multimodal inputs, including text and images, and is particularly adept at reasoning tasks that require deep thought and problem-solving. It's accessible through a ChatGPT Pro subscription, offering unlimited usage and enhanced capabilities for users needing advanced AI assistance.
    Starting Price: $200/month
  • 24
    DeepSeekMath
    DeepSeekMath is a specialized 7B parameter language model developed by DeepSeek-AI, designed to push the boundaries of mathematical reasoning in open-source language models. It starts from the DeepSeek-Coder-v1.5 7B model and undergoes further pre-training with 120B math-related tokens sourced from Common Crawl, alongside natural language and code data. DeepSeekMath has demonstrated remarkable performance, achieving a 51.7% score on the competition-level MATH benchmark without external tools or voting techniques, closely competing with the likes of Gemini-Ultra and GPT-4. The model's capabilities are enhanced by a meticulous data selection pipeline and the introduction of Group Relative Policy Optimization (GRPO), which optimizes both mathematical reasoning and memory usage. DeepSeekMath is available in base, instruct, and RL versions, supporting both research and commercial use, and is aimed at those looking to explore or apply advanced mathematical problem-solving in AI contexts.
    Starting Price: Free
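    The core idea behind GRPO is to score a group of sampled answers per problem and normalize each reward against the group's own statistics instead of training a separate value model, which is where the memory savings come from. A minimal sketch of that group-relative advantage follows.

    ```python
    # Minimal sketch of GRPO's group-relative advantage: sample several completions
    # for one prompt, score them, and normalize each reward against the group mean
    # and standard deviation (no learned critic).
    from statistics import mean, pstdev

    def group_relative_advantages(rewards: list[float]) -> list[float]:
        mu = mean(rewards)
        sigma = pstdev(rewards) or 1.0  # guard against a zero-variance group
        return [(r - mu) / sigma for r in rewards]

    # Four sampled answers to one math problem, scored 1.0 if correct else 0.0.
    print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # [1.0, -1.0, -1.0, 1.0]
    ```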
  • 25
    Grok 3
    Grok-3, developed by xAI, represents a significant advancement in the field of artificial intelligence, aiming to set new benchmarks in AI capabilities. It is designed to be a multimodal AI, capable of processing and understanding data from various sources including text, images, and audio, which allows for a more integrated and comprehensive interaction with users. Grok-3 is built on an unprecedented scale, with training involving ten times more computational resources than its predecessor, leveraging 100,000 Nvidia H100 GPUs on the Colossus supercomputer. This extensive computational power is expected to enhance Grok-3's performance in areas like reasoning, coding, and real-time analysis of current events through direct access to X posts. The model is anticipated to outperform not only its earlier versions but also compete with other leading AI models in the generative AI landscape.
  • 26
    Yi-Large
    Yi-Large is a proprietary large language model developed by 01.AI, offering a 32k context length with both input and output costs at $2 per million tokens. It stands out with its advanced capabilities in natural language processing, common-sense reasoning, and multilingual support, performing on par with leading models like GPT-4 and Claude3 in various benchmarks. Yi-Large is designed for tasks requiring complex inference, prediction, and language understanding, making it suitable for applications like knowledge search, data classification, and creating human-like chatbots. Its architecture is based on a decoder-only transformer with enhancements such as pre-normalization and Group Query Attention, and it has been trained on a vast, high-quality multilingual dataset. This model's versatility and cost-efficiency make it a strong contender in the AI market, particularly for enterprises aiming to deploy AI solutions globally.
    Starting Price: $0.19 per 1M input token
  • 27
    DataGemma
    DataGemma represents a pioneering effort by Google to enhance the accuracy and reliability of large language models (LLMs) when dealing with statistical and numerical data. Launched as a set of open models, DataGemma leverages Google's Data Commons, a vast repository of public statistical data, to ground its responses in real-world facts. This initiative employs two innovative approaches: Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG). The RIG method integrates real-time data checks during the generation process to ensure factual accuracy, while RAG retrieves relevant information before generating responses, thereby reducing the likelihood of AI hallucinations. By doing so, DataGemma aims to provide users with more trustworthy and factually grounded answers, marking a significant step towards mitigating the issue of misinformation in AI-generated content.
  • 28
    Mistral 7B

    Mistral AI

    Mistral 7B is a 7.3-billion-parameter language model that outperforms larger models like Llama 2 13B across various benchmarks. It employs Grouped-Query Attention (GQA) for faster inference and Sliding Window Attention (SWA) to efficiently handle longer sequences. Released under the Apache 2.0 license, Mistral 7B is accessible for deployment across diverse platforms, including local environments and major cloud services. Additionally, a fine-tuned version, Mistral 7B Instruct, demonstrates enhanced performance in instruction-following tasks, surpassing models like Llama 2 13B Chat.
    Starting Price: Free
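    To illustrate the Sliding Window Attention mechanism mentioned above: each token attends only to the previous W tokens rather than the whole prefix, which is what keeps long sequences cheap (Mistral 7B uses a 4,096-token window; a tiny window is used below for readability). This is an explanatory toy sketch, not Mistral's implementation.

    ```python
    # Toy illustration of the causal sliding-window mask behind SWA:
    # token i may attend to token j only if j <= i and j > i - window.
    import numpy as np

    def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
        i = np.arange(seq_len)[:, None]
        j = np.arange(seq_len)[None, :]
        return (j <= i) & (j > i - window)

    # 6 tokens, window of 3: each row shows which earlier tokens that position sees.
    print(sliding_window_mask(seq_len=6, window=3).astype(int))
    ```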
  • 29
    EXAONE Deep
    EXAONE Deep is a series of reasoning-enhanced language models developed by LG AI Research, featuring parameter sizes of 2.4 billion, 7.8 billion, and 32 billion. These models demonstrate superior capabilities in various reasoning tasks, including math and coding benchmarks. Notably, EXAONE Deep 2.4B outperforms other models of comparable size, EXAONE Deep 7.8B surpasses both open-weight models of similar scale and the proprietary reasoning model OpenAI o1-mini, and EXAONE Deep 32B shows competitive performance against leading open-weight models. The repository provides comprehensive documentation covering performance evaluations, quickstart guides for using EXAONE Deep models with Transformers, explanations of quantized EXAONE Deep weights in AWQ and GGUF formats, and instructions for running EXAONE Deep models locally using frameworks like llama.cpp and Ollama.
    Starting Price: Free
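    A hedged Transformers quickstart sketch is shown below; the repository id and the need for trust_remote_code are assumptions based on LG AI Research's released checkpoints, and the quantized AWQ/GGUF builds run through llama.cpp or Ollama would take the same chat-style prompt.

    ```python
    # Hedged quickstart sketch with transformers; repo id and trust_remote_code are
    # assumptions. Quantized AWQ/GGUF builds follow the same chat format.
    from transformers import pipeline

    chat = pipeline(
        "text-generation",
        model="LGAI-EXAONE/EXAONE-Deep-7.8B",  # assumed repo id
        trust_remote_code=True,
        device_map="auto",
        torch_dtype="auto",
    )

    messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x? Show your steps."}]
    print(chat(messages, max_new_tokens=512)[0]["generated_text"][-1]["content"])
    ```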
  • 30
    GPT-5

    OpenAI

    GPT-5 is the anticipated next iteration of OpenAI's Generative Pre-trained Transformer, a large language model (LLM) still under development. LLMs are trained on massive amounts of text data and are able to generate realistic and coherent text, translate languages, write different kinds of creative content, and answer questions in an informative way. OpenAI has not announced a release date. GPT-5 is expected to be even more powerful than its predecessor, GPT-4, taking those abilities further with better reasoning, improved factual accuracy, and a stronger ability to follow instructions.
    Starting Price: $0.0200 per 1000 tokens
  • 31
    Llama 4 Maverick
    Llama 4 Maverick is one of the most advanced multimodal AI models from Meta, featuring 17 billion active parameters and 128 experts. It surpasses its competitors like GPT-4o and Gemini 2.0 Flash in a broad range of benchmarks, especially in tasks related to coding, reasoning, and multilingual capabilities. Llama 4 Maverick combines image and text understanding, enabling it to deliver industry-leading results in image-grounding tasks and precise, high-quality output. With its efficient performance at a reduced parameter size, Maverick offers exceptional value, especially in general assistant and chat applications.
    Starting Price: Free
  • 32
    Teuken 7B

    OpenGPT-X

    Teuken-7B is a multilingual, open source language model developed under the OpenGPT-X initiative, specifically designed to cater to Europe's diverse linguistic landscape. It has been trained on a dataset comprising over 50% non-English texts, encompassing all 24 official languages of the European Union, ensuring robust performance across these languages. A key innovation in Teuken-7B is its custom multilingual tokenizer, optimized for European languages, which enhances training efficiency and reduces inference costs compared to standard monolingual tokenizers. The model is available in two versions, Teuken-7B-Base, the foundational pre-trained model, and Teuken-7B-Instruct, which has undergone instruction tuning for improved performance in following user prompts. Both versions are accessible on Hugging Face, promoting transparency and collaboration within the AI community. The development of Teuken-7B underscores a commitment to creating AI models that reflect Europe's diversity.
    Starting Price: Free
  • 33
    Falcon Mamba 7B

    Technology Innovation Institute (TII)

    Falcon Mamba 7B is the first open-source State Space Language Model (SSLM), introducing a groundbreaking architecture for Falcon models. Recognized as the top-performing open-source SSLM worldwide by Hugging Face, it sets a new benchmark in AI efficiency. Unlike traditional transformers, SSLMs operate with minimal memory requirements and can generate extended text sequences without additional overhead. Falcon Mamba 7B surpasses leading transformer-based models, including Meta’s Llama 3.1 8B and Mistral’s 7B, showcasing superior performance. This innovation underscores Abu Dhabi’s commitment to advancing AI research and development on a global scale.
    Starting Price: Free
  • 34
    Phi-4-mini-reasoning
    Phi-4-mini-reasoning is a 3.8-billion parameter transformer-based language model optimized for mathematical reasoning and step-by-step problem solving in environments with constrained computing or latency. Fine-tuned with synthetic data generated by the DeepSeek-R1 model, it balances efficiency with advanced reasoning ability. Trained on over one million diverse math problems spanning multiple levels of difficulty from middle school to Ph.D. level, Phi-4-mini-reasoning outperforms its base model on long sentence generation across various evaluations and surpasses larger models like OpenThinker-7B, Llama-3.2-3B-instruct, and DeepSeek-R1. It features a 128K-token context window and supports function calling, enabling integration with external tools and APIs. Phi-4-mini-reasoning can be quantized using Microsoft Olive or Apple MLX Framework for deployment on edge devices such as IoT, laptops, and mobile devices.
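    A hedged sketch of a step-by-step math prompt with transformers follows; the repository id is an assumption, and quantized Olive or MLX builds for edge devices would use the same chat structure.

    ```python
    # Hedged sketch: step-by-step math prompt via transformers; repo id assumed.
    from transformers import pipeline

    reasoner = pipeline(
        "text-generation",
        model="microsoft/Phi-4-mini-reasoning",  # assumed repo id
        device_map="auto",
        torch_dtype="auto",
    )

    messages = [{"role": "user", "content": "A train covers 180 km in 2.5 hours. What is its average speed in km/h?"}]
    print(reasoner(messages, max_new_tokens=512)[0]["generated_text"][-1]["content"])
    ```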
  • 35
    Falcon-40B

    Technology Innovation Institute (TII)

    Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license. Why use Falcon-40B? It is the best open-source model currently available. Falcon-40B outperforms LLaMA, StableLM, RedPajama, MPT, etc. See the OpenLLM Leaderboard. It features an architecture optimized for inference, with FlashAttention and multiquery attention. It is made available under a permissive Apache 2.0 license allowing for commercial use, without any royalties or restrictions. ⚠️ This is a raw, pretrained model, which should be further fine-tuned for most use cases. If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at Falcon-40B-Instruct.
    Starting Price: Free
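    A hedged generation sketch with the raw pretrained checkpoint follows; as the card notes, Falcon-40B-Instruct is the better starting point for chat-style prompts.

    ```python
    # Hedged sketch: raw completion from the pretrained model; swap in
    # "tiiuae/falcon-40b-instruct" for instruction-style prompts.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="tiiuae/falcon-40b",
        torch_dtype="auto",
        device_map="auto",
    )

    print(generator("The three most promising uses of open-source LLMs are", max_new_tokens=60)[0]["generated_text"])
    ```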
  • 36
    Claude Sonnet 3.7
    Claude Sonnet 3.7, developed by Anthropic, is a cutting-edge AI model that combines rapid response with deep reflective reasoning. This innovative model allows users to toggle between quick, efficient responses and more thoughtful, reflective answers, making it ideal for complex problem-solving. By allowing Claude to self-reflect before answering, it excels at tasks that require high-level reasoning and nuanced understanding. With its ability to engage in deeper thought processes, Claude Sonnet 3.7 enhances tasks such as coding, natural language processing, and critical thinking applications. Available across various platforms, it offers a powerful tool for professionals and organizations seeking a high-performance, adaptable AI.
  • 37
    Claude Haiku 3.5
    Our fastest model, delivering advanced coding, tool use, and reasoning at an accessible price. Claude Haiku 3.5 is the next generation of our fastest model. For a similar speed to Claude Haiku 3, Claude Haiku 3.5 improves across every skill set and surpasses Claude Opus 3, the largest model in our previous generation, on many intelligence benchmarks. Claude Haiku 3.5 is available across our first-party API, Amazon Bedrock, and Google Cloud’s Vertex AI, initially as a text-only model and with image input to follow.
  • 38
    ERNIE X1 Turbo
    ERNIE X1 Turbo, developed by Baidu, is an advanced deep reasoning AI model introduced at the Baidu Create 2025 conference. Designed to handle complex multi-step tasks such as problem-solving, literary creation, and code generation, this model outperforms competitors like DeepSeek R1 in terms of reasoning abilities. With a focus on multimodal capabilities, ERNIE X1 Turbo supports text, audio, and image processing, making it an incredibly versatile AI solution. Despite its cutting-edge technology, it is priced at just a fraction of the cost of other top-tier models, offering a high-value solution for businesses and developers.
    Starting Price: $0.14 per 1M tokens
  • 39
    DBRX

    Databricks

    Today, we are excited to introduce DBRX, an open, general-purpose LLM created by Databricks. Across a range of standard benchmarks, DBRX sets a new state-of-the-art for established open LLMs. Moreover, it provides the open community and enterprises building their own LLMs with capabilities that were previously limited to closed model APIs; according to our measurements, it surpasses GPT-3.5, and it is competitive with Gemini 1.0 Pro. It is an especially capable code model, surpassing specialized models like CodeLLaMA-70B in programming, in addition to its strength as a general-purpose LLM. This state-of-the-art quality comes with marked improvements in training and inference performance. DBRX advances the state-of-the-art in efficiency among open models thanks to its fine-grained mixture-of-experts (MoE) architecture. Inference is up to 2x faster than LLaMA2-70B, and DBRX is about 40% of the size of Grok-1 in terms of both total and active parameter counts.
  • 40
    Upstage AI

    Upstage.ai

    Upstage AI builds powerful large language models and document processing engines designed to transform workflows across industries like insurance, healthcare, and finance. Their enterprise-grade AI technology delivers high accuracy and performance, enabling businesses to automate complex tasks such as claims processing, underwriting, and clinical document analysis. Key products include Solar Pro 2, a fast and grounded enterprise language model, Document Parse for converting PDFs and scans into machine-readable text, and Information Extract for precise data extraction from contracts and invoices. Upstage’s AI solutions help companies save time and reduce manual work by providing instant, accurate answers from large document sets. The platform supports flexible deployment options including cloud, on-premises, and hybrid, meeting strict compliance requirements. Trusted by global clients, Upstage continues to advance AI innovation with top conference publications and industry awards.
    Starting Price: $0.5 per 1M tokens
  • 41
    Mistral Saba

    Mistral AI

    Mistral Saba is a 24-billion-parameter model trained on meticulously curated datasets from across the Middle East and South Asia. The model provides more accurate and relevant responses than models that are over five times its size while being significantly faster and lower cost. It can also serve as a strong base to train highly specific regional adaptations. Mistral Saba is available as an API and can be deployed locally within customers' security premises. Like the recently released Mistral Small 3, the model is lightweight and can be deployed on single-GPU systems, responding at speeds of over 150 tokens per second. In keeping with the rich cultural cross-pollination between the Middle East and South Asia, Mistral Saba supports Arabic and many Indian-origin languages and is particularly strong in South Indian-origin languages such as Tamil. This capability enhances its versatility in multinational use across these interconnected regions.
    Starting Price: Free
  • 42
    R1 1776

    Perplexity AI

    Perplexity AI has open-sourced R1 1776, a large language model (LLM) based on DeepSeek R1 designed to enhance transparency and foster community collaboration in AI development. This release allows researchers and developers to access the model's architecture and codebase, enabling them to contribute to its improvement and adaptation for various applications. By sharing R1 1776 openly, Perplexity AI aims to promote innovation and ethical practices within the AI community.
    Starting Price: Free
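    Since the full weights are far too large to download casually, one hedged way to look at the released architecture is to pull only the configuration from the Hub; the repository id below is an assumption based on the release name.

    ```python
    # Hedged sketch: inspect the open-sourced architecture via its config only
    # (no weight download). Repo id and trust_remote_code are assumptions.
    from transformers import AutoConfig

    config = AutoConfig.from_pretrained("perplexity-ai/r1-1776", trust_remote_code=True)
    print(config.model_type, config.num_hidden_layers, config.hidden_size)
    ```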
  • 43
    Gemini 2.0 Flash Thinking
    Gemini 2.0 Flash Thinking is an advanced AI model developed by Google DeepMind, designed to enhance reasoning capabilities by explicitly displaying its thought processes. This transparency allows the model to tackle complex problems more effectively and provides users with clear explanations of its decision-making steps. By showcasing its internal reasoning, Gemini 2.0 Flash Thinking not only improves performance but also offers greater explainability, making it a valuable tool for applications requiring deep understanding and trust in AI-driven solutions.
  • 44
    Palmyra LLM
    Palmyra is a suite of Large Language Models (LLMs) engineered for precise, dependable performance in enterprise applications. These models excel in tasks such as question-answering, image analysis, and support for over 30 languages, with fine-tuning available for industries like healthcare and finance. Notably, Palmyra models have achieved top rankings in benchmarks like Stanford HELM and PubMedQA, and Palmyra-Fin is the first model to pass the CFA Level III exam. Writer ensures data privacy by not using client data to train or modify their models, adopting a zero data retention policy. The Palmyra family includes specialized models such as Palmyra X 004, featuring tool-calling capabilities; Palmyra Med, tailored for healthcare; Palmyra Fin, designed for finance; and Palmyra Vision, which offers advanced image and video processing. These models are available through Writer's full-stack generative AI platform, which integrates graph-based Retrieval Augmented Generation (RAG).
    Starting Price: $18 per month
  • 45
    Inception Labs

    Inception Labs is pioneering the next generation of AI with diffusion-based large language models (dLLMs), a breakthrough in AI that offers 10x faster performance and 5-10x lower cost than traditional autoregressive models. Inspired by the success of diffusion models in image and video generation, Inception’s dLLMs introduce enhanced reasoning, error correction, and multimodal capabilities, allowing for more structured and accurate text generation. With applications spanning enterprise AI, research, and content generation, Inception’s approach sets a new standard for speed, efficiency, and control in AI-driven workflows.
  • 46
    Claude Sonnet 4
    Claude Sonnet 4, the latest evolution of Anthropic’s language models, offers a significant upgrade in coding, reasoning, and performance. Designed for diverse use cases, Sonnet 4 builds upon the success of its predecessor, Claude Sonnet 3.7, delivering more precise responses and better task execution. With a state-of-the-art 72.7% performance on the SWE-bench, it stands out in agentic scenarios, offering enhanced steerability and clear reasoning capabilities. Whether handling software development, multi-feature app creation, or complex problem-solving, Claude Sonnet 4 ensures higher code quality, reduced errors, and a smoother development process.
    Starting Price: $3 / 1 million tokens (input)
  • 47
    GPT-4 Turbo
    GPT-4 Turbo is a large multimodal model (accepting text or image inputs and outputting text) that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities. GPT-4 Turbo is available in the OpenAI API to paying customers. Like gpt-3.5-turbo, it is optimized for chat but works well for traditional completions tasks using the Chat Completions API. GPT-4 Turbo is the latest GPT-4 model, with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. It returns a maximum of 4,096 output tokens. This preview model is not yet suited for production traffic.
    Starting Price: $0.0200 per 1000 tokens
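    A minimal Chat Completions sketch with the official openai Python SDK is shown below, assuming an OPENAI_API_KEY environment variable and the gpt-4-turbo model alias; JSON mode is enabled via response_format as described above.

    ```python
    # Minimal Chat Completions call with JSON mode; assumes OPENAI_API_KEY is set
    # and that "gpt-4-turbo" is the desired model alias.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": "Reply in JSON."},
            {"role": "user", "content": "Give two strengths of multimodal models as a JSON object with key 'strengths'."},
        ],
        response_format={"type": "json_object"},  # JSON mode
        max_tokens=200,
    )
    print(response.choices[0].message.content)
    ```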
  • 48
    Grok 3 DeepSearch
    Grok 3 DeepSearch is an advanced model and research agent designed to improve reasoning and problem-solving abilities in AI, with a strong focus on deep search and iterative reasoning. Unlike traditional models that rely solely on pre-trained knowledge, Grok 3 DeepSearch can explore multiple avenues, test hypotheses, and correct errors in real-time by analyzing vast amounts of information and engaging in chain-of-thought processes. It is designed for tasks that require critical thinking, such as complex mathematical problems, coding challenges, and intricate academic inquiries. Grok 3 DeepSearch is a cutting-edge AI tool capable of providing accurate and thorough solutions by using its unique deep search capabilities, making it ideal for both STEM and creative fields.
    Starting Price: $30/month
  • 49
    Gemini-Exp-1206
    Gemini-Exp-1206 is an experimental AI model now available for preview to Gemini Advanced subscribers. This model significantly enhances performance in complex tasks such as coding, mathematics, reasoning, and following detailed instructions. It's designed to assist users in navigating intricate challenges with greater ease. As an early preview, some features may not function as expected, and it currently lacks access to real-time information. Users can access Gemini-Exp-1206 through the Gemini model drop-down on desktop and mobile web platforms.
  • 50
    Mistral Large 2
    Mistral AI has launched the Mistral Large 2, an advanced AI model designed to excel in code generation, multilingual capabilities, and complex reasoning tasks. The model features a 128k context window, supporting dozens of languages including English, French, Spanish, and Arabic, as well as over 80 programming languages. Mistral Large 2 is tailored for high-throughput single-node inference, making it ideal for large-context applications. Its improved performance on benchmarks like MMLU and its enhanced code generation and reasoning abilities ensure accuracy and efficiency. The model also incorporates better function calling and retrieval, supporting complex business applications.
    Starting Price: Free