Alternatives to LexVec

Compare LexVec alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to LexVec in 2026. Compare features, ratings, user reviews, pricing, and more from LexVec competitors and alternatives in order to make an informed decision for your business.

  • 1
    Vertex AI
    Build, deploy, and scale machine learning (ML) models faster, with fully managed ML tools for any use case. Through Vertex AI Workbench, Vertex AI is natively integrated with BigQuery, Dataproc, and Spark. You can use BigQuery ML to create and execute machine learning models in BigQuery using standard SQL queries on existing business intelligence tools and spreadsheets, or you can export datasets from BigQuery directly into Vertex AI Workbench and run your models from there. Use Vertex Data Labeling to generate highly accurate labels for your data collection. Vertex AI Agent Builder enables developers to create and deploy enterprise-grade generative AI applications. It offers both no-code and code-first approaches, allowing users to build AI agents using natural language instructions or by leveraging frameworks like LangChain and LlamaIndex.
  • 2
    GloVe

    Stanford NLP

    GloVe (Global Vectors for Word Representation) is an unsupervised learning algorithm developed by the Stanford NLP Group to obtain vector representations for words. It constructs word embeddings by analyzing global word-word co-occurrence statistics from a given corpus, resulting in vector spaces where the geometric relationships reflect semantic similarities and differences among words. A notable feature of GloVe is its ability to capture linear substructures within the word vector space, enabling vector arithmetic to express relationships. The model is trained on the non-zero entries of a global word-word co-occurrence matrix, which records how frequently pairs of words appear together in a corpus. This approach efficiently leverages statistical information by focusing on significant co-occurrences, leading to meaningful word representations. Pre-trained word vectors are available for various corpora, including Wikipedia 2014.
    Starting Price: Free
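    To make the vector-arithmetic property described above concrete, here is a minimal sketch. It assumes the gensim package and its downloadable copy of the pre-trained Wikipedia/Gigaword GloVe vectors, neither of which is part of GloVe itself:

      import gensim.downloader as api

      # Download pre-trained 100-dimensional GloVe vectors (Wikipedia 2014 + Gigaword 5).
      glove = api.load("glove-wiki-gigaword-100")

      # If the linear substructure holds, "king" - "man" + "woman" lands near "queen".
      print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=3))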
  • 3
    fastText

    fastText is an open source, free, and lightweight library developed by Facebook's AI Research (FAIR) lab for efficient learning of word representations and text classification. It supports both unsupervised learning of word vectors and supervised learning for text classification tasks. A key feature of fastText is its ability to capture subword information by representing words as bags of character n-grams, which enhances the handling of morphologically rich languages and out-of-vocabulary words. The library is optimized for performance and capable of training on large datasets quickly, and the resulting models can be reduced in size for deployment on mobile devices. Pre-trained word vectors are available for 157 languages, trained on Common Crawl and Wikipedia data, and can be downloaded for immediate use. fastText also offers aligned word vectors for 44 languages, facilitating cross-lingual natural language processing tasks.
    Starting Price: Free
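    A minimal sketch of the subword behavior described above, assuming the open source fasttext Python package and a local plain-text corpus file (corpus.txt is a placeholder):

      import fasttext

      # Unsupervised skip-gram training; minn/maxn control the character n-gram range.
      model = fasttext.train_unsupervised("corpus.txt", model="skipgram", dim=100, minn=3, maxn=6)

      # Because words are represented as bags of character n-grams, even an
      # out-of-vocabulary or misspelled token still receives a composed vector.
      vec = model.get_word_vector("unseenmisspeling")
      print(vec.shape)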
  • 4
    voyage-code-3
    Voyage AI introduces voyage-code-3, a next-generation embedding model optimized for code retrieval. It outperforms OpenAI-v3-large and CodeSage-large by an average of 13.80% and 16.81% on a suite of 32 code retrieval datasets, respectively. It supports embeddings of 2048, 1024, 512, and 256 dimensions and offers multiple embedding quantization options, including float (32-bit), int8 (8-bit signed integer), uint8 (8-bit unsigned integer), binary (bit-packed int8), and ubinary (bit-packed uint8). With a 32K-token context length, it surpasses OpenAI's 8K and CodeSage Large's 1K context lengths. Voyage-code-3 employs Matryoshka learning to create embeddings with a nested family of various lengths within a single vector. This allows users to vectorize documents into a 2048-dimensional vector and later use shorter versions (e.g., 256, 512, or 1024 dimensions) without re-invoking the embedding model.
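    The Matryoshka property described above means a long embedding can be shortened after the fact. A minimal, library-agnostic sketch (the vector here is a random stand-in, not a real voyage-code-3 output):

      import numpy as np

      full = np.random.randn(2048)              # stand-in for a 2048-dimensional embedding
      full /= np.linalg.norm(full)

      short = full[:256].copy()                 # keep only the leading 256 dimensions
      short /= np.linalg.norm(short)            # re-normalize before cosine-similarity search
      print(short.shape)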
  • 5
    word2vec

    Google

    Word2Vec is a neural network-based technique for learning word embeddings, developed by researchers at Google. It transforms words into continuous vector representations in a multi-dimensional space, capturing semantic relationships based on context. Word2Vec uses two main architectures: Skip-gram, which predicts surrounding words given a target word, and Continuous Bag-of-Words (CBOW), which predicts a target word based on surrounding words. By training on large text corpora, Word2Vec generates word embeddings where similar words are positioned closely, enabling tasks like semantic similarity, analogy solving, and text clustering. The model was influential in advancing NLP by introducing efficient training techniques such as hierarchical softmax and negative sampling. Though newer embedding models like BERT and Transformer-based methods have surpassed it in complexity and performance, Word2Vec remains a foundational method in natural language processing and machine learning research.
    Starting Price: Free
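    A minimal sketch of the two architectures described above, using gensim's Word2Vec implementation (the toy corpus is a placeholder; real training needs a large corpus):

      from gensim.models import Word2Vec

      sentences = [["the", "cat", "sat", "on", "the", "mat"],
                   ["the", "dog", "barked", "at", "the", "cat"]]

      # sg=1 selects Skip-gram with negative sampling; sg=0 selects CBOW (here with hierarchical softmax).
      skipgram = Word2Vec(sentences, vector_size=50, window=3, sg=1, negative=5, min_count=1)
      cbow     = Word2Vec(sentences, vector_size=50, window=3, sg=0, hs=1, min_count=1)

      print(skipgram.wv.most_similar("cat", topn=2))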
  • 6
    voyage-3-large
    Voyage AI has unveiled voyage-3-large, a cutting-edge general-purpose and multilingual embedding model that leads across eight evaluated domains, including law, finance, and code, outperforming OpenAI-v3-large and Cohere-v3-English by averages of 9.74% and 20.71%, respectively. Enabled by Matryoshka learning and quantization-aware training, it supports embeddings of 2048, 1024, 512, and 256 dimensions, along with multiple quantization options such as 32-bit floating point, signed and unsigned 8-bit integer, and binary precision, significantly reducing vector database costs with minimal impact on retrieval quality. Notably, voyage-3-large offers a 32K-token context length, surpassing OpenAI's 8K and Cohere's 512 tokens. Evaluations across 100 datasets in diverse domains demonstrate its superior performance, with flexible precision and dimensionality options enabling substantial storage savings without compromising quality.
  • 7
    Codestral Embed
    Codestral Embed is Mistral AI's first embedding model, specialized for code, optimized for high-performance code retrieval and semantic understanding. It significantly outperforms leading code embedders in the market today, such as Voyage Code 3, Cohere Embed v4.0, and OpenAI’s large embedding model. Codestral Embed can output embeddings with different dimensions and precisions; for instance, with a dimension of 256 and int8 precision, it still performs better than any model from competitors. The dimensions of the embeddings are ordered by relevance, allowing users to choose the first n dimensions for a smooth trade-off between quality and cost. It excels in retrieval use cases on real-world code data, particularly in benchmarks like SWE-Bench, which is based on real-world GitHub issues and corresponding fixes, and Text2Code (GitHub), relevant for providing context for code completion or editing.
  • 8
    BERT

    Google

    BERT is a large language model and a method of pre-training language representations. Pre-training refers to how BERT is first trained on a large source of text, such as Wikipedia. You can then apply the training results to other Natural Language Processing (NLP) tasks, such as question answering and sentiment analysis. With BERT and AI Platform Training, you can train a variety of NLP models in about 30 minutes.
  • 9
    Gemini Embedding 2
    Gemini Embedding models, including the newer Gemini Embedding 2, are part of Google’s Gemini AI ecosystem and are designed to convert text, phrases, sentences, and code into numerical vector representations that capture their semantic meaning. Unlike generative models that produce new content, the embedding model transforms input data into dense vectors that represent meaning in a mathematical format, allowing computers to compare and analyze information based on conceptual similarity rather than exact wording. These embeddings enable applications such as semantic search, recommendation systems, document retrieval, clustering, classification, and retrieval-augmented generation pipelines. The model can process input in more than 100 languages and supports up to 2048 tokens per request, allowing it to embed longer pieces of text or code while maintaining strong contextual understanding.
    Starting Price: Free
  • 10
    Gensim

    Radim Řehůřek

    Gensim is a free, open source Python library designed for unsupervised topic modeling and natural language processing, focusing on large-scale semantic modeling. It enables the training of models like Word2Vec, FastText, Latent Semantic Analysis (LSA), and Latent Dirichlet Allocation (LDA), facilitating the representation of documents as semantic vectors and the discovery of semantically related documents. Gensim is optimized for performance with highly efficient implementations in Python and Cython, allowing it to process arbitrarily large corpora using data streaming and incremental algorithms without loading the entire dataset into RAM. It is platform-independent, running on Linux, Windows, and macOS, and is licensed under the GNU LGPL, promoting both personal and commercial use. The library is widely adopted, with thousands of companies utilizing it daily, over 2,600 academic citations, and more than 1 million downloads per week.
    Starting Price: Free
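    A minimal sketch of the streaming behavior described above: the corpus is read lazily from disk, so the full dataset never has to fit in RAM (docs.txt, one document per line, is a placeholder):

      from gensim import corpora, models

      class StreamedCorpus:
          """Yields one bag-of-words document at a time instead of loading everything."""
          def __init__(self, path, dictionary):
              self.path, self.dictionary = path, dictionary
          def __iter__(self):
              with open(self.path, encoding="utf-8") as f:
                  for line in f:
                      yield self.dictionary.doc2bow(line.lower().split())

      dictionary = corpora.Dictionary(
          line.lower().split() for line in open("docs.txt", encoding="utf-8"))
      lda = models.LdaModel(StreamedCorpus("docs.txt", dictionary),
                            id2word=dictionary, num_topics=10)
      print(lda.print_topics(3))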
  • 11
    Voyage AI

    MongoDB

    Voyage AI provides best-in-class embedding models and rerankers designed to supercharge search and retrieval for unstructured data. Its technology powers high-quality Retrieval-Augmented Generation (RAG) by improving how relevant context is retrieved before responses are generated. Voyage AI offers general-purpose, domain-specific, and company-specific models to support a wide range of use cases. The models are optimized for accuracy, low latency, and reduced costs through shorter vector dimensions. With long-context support of up to 32K tokens, Voyage AI enables deeper understanding of complex documents. The platform is modular and integrates easily with any vector database or large language model. Voyage AI is trusted by industry leaders to deliver reliable, factual AI outputs at scale.
  • 12
    E5 Text Embeddings
    E5 Text Embeddings, developed by Microsoft, are advanced models designed to convert textual data into meaningful vector representations, enhancing tasks like semantic search and information retrieval. These models are trained using weakly-supervised contrastive learning on a vast dataset of over one billion text pairs, enabling them to capture intricate semantic relationships across multiple languages. The E5 family includes models of varying sizes—small, base, and large—offering a balance between computational efficiency and embedding quality. Additionally, multilingual versions of these models have been fine-tuned to support diverse languages, ensuring broad applicability in global contexts. Comprehensive evaluations demonstrate that E5 models achieve performance on par with state-of-the-art, English-only models of similar sizes.
    Starting Price: Free
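    A minimal usage sketch via the sentence-transformers package; E5 models expect "query: " and "passage: " prefixes at inference time (the model name here is one of the published multilingual checkpoints, not mentioned above):

      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("intfloat/multilingual-e5-base")

      # E5 was trained with these prefixes; omitting them degrades retrieval quality.
      q = model.encode("query: how do I reset my password", normalize_embeddings=True)
      p = model.encode("passage: To reset your password, open Settings and choose Security.",
                       normalize_embeddings=True)
      print(util.cos_sim(q, p))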
  • 13
    Cohere Embed
    Cohere's Embed is a leading multimodal embedding platform designed to transform text, images, or a combination of both into high-quality vector representations. These embeddings are optimized for semantic search, retrieval-augmented generation, classification, clustering, and agentic AI applications.​ The latest model, embed-v4.0, supports mixed-modality inputs, allowing users to combine text and images into a single embedding. It offers Matryoshka embeddings with configurable dimensions of 256, 512, 1024, or 1536, enabling flexibility in balancing performance and resource usage. With a context length of up to 128,000 tokens, embed-v4.0 is well-suited for processing large documents and complex data structures. It also supports compressed embedding types, including float, int8, uint8, binary, and ubinary, facilitating efficient storage and faster retrieval in vector databases. Multilingual support spans over 100 languages, making it a versatile tool for global applications.
    Starting Price: $0.47 per image
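    A hedged sketch of calling embed-v4.0 through Cohere's Python SDK; the parameter and response field names follow the SDK as I recall it and may differ by version:

      import cohere

      co = cohere.ClientV2()   # reads the CO_API_KEY environment variable
      resp = co.embed(
          model="embed-v4.0",
          texts=["Quarterly revenue grew 12% year over year."],
          input_type="search_document",
          embedding_types=["float"],   # int8 / uint8 / binary / ubinary are also offered
      )
      print(resp.embeddings)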
  • 14
    Universal Sentence Encoder
    The Universal Sentence Encoder (USE) encodes text into high-dimensional vectors that can be utilized for tasks such as text classification, semantic similarity, and clustering. It offers two model variants: one based on the Transformer architecture and another on Deep Averaging Network (DAN), allowing a balance between accuracy and computational efficiency. The Transformer-based model captures context-sensitive embeddings by processing the entire input sequence simultaneously, while the DAN-based model computes embeddings by averaging word embeddings, followed by a feedforward neural network. These embeddings facilitate efficient semantic similarity calculations and enhance performance on downstream tasks with minimal supervised training data. The USE is accessible via TensorFlow Hub, enabling seamless integration into various applications.
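    A minimal sketch of loading the Transformer-based variant from TensorFlow Hub and comparing two sentences, assuming tensorflow and tensorflow_hub are installed:

      import numpy as np
      import tensorflow_hub as hub

      embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")
      vecs = embed(["How old are you?", "What is your age?"]).numpy()

      # USE vectors are approximately unit-length, so the inner product behaves like cosine similarity.
      print(np.inner(vecs[0], vecs[1]))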
  • 15
    Gemini Embedding
    Gemini Embedding's first text model (gemini-embedding-001) is now generally available via the Gemini API and Vertex AI, having held a top spot on the Massive Text Embedding Benchmark Multilingual leaderboard since its experimental launch in March, thanks to superior performance across retrieval, classification, and other embedding tasks compared to both legacy Google and external proprietary models. Exceptionally versatile, it supports over 100 languages with a 2,048-token input limit and employs the Matryoshka Representation Learning (MRL) technique to let developers choose output dimensions of 3072, 1536, or 768 for optimal quality, performance, and storage efficiency. Developers can access it through the existing embed_content endpoint in the Gemini API, and while legacy experimental versions will be deprecated later in 2025, migration requires no re-embedding of existing content.
    Starting Price: $0.15 per 1M input tokens
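    A hedged sketch of the embed_content call mentioned above, using the google-generativeai Python package; the exact model string and parameter names may differ by SDK version:

      import google.generativeai as genai

      genai.configure(api_key="YOUR_API_KEY")   # placeholder
      result = genai.embed_content(
          model="models/gemini-embedding-001",
          content="What is retrieval-augmented generation?",
          output_dimensionality=768,   # MRL: trade a little quality for 4x less storage vs. 3072
      )
      print(len(result["embedding"]))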
  • 16
    Azure OpenAI Service
    Apply advanced coding and language models to a variety of use cases. Leverage large-scale, generative AI models with a deep understanding of language and code to enable new reasoning and comprehension capabilities for building cutting-edge applications. Apply these coding and language models to a variety of use cases, such as writing assistance, code generation, and reasoning over data. Detect and mitigate harmful use with built-in responsible AI and access enterprise-grade Azure security. Gain access to generative models that have been pretrained with trillions of words. Apply them to new scenarios including language, code, reasoning, inferencing, and comprehension. Customize generative models with labeled data for your specific scenario using a simple REST API. Fine-tune your model's hyperparameters to increase the accuracy of outputs. Use the few-shot learning capability to provide the API with examples and achieve more relevant results.
    Starting Price: $0.0004 per 1000 tokens
  • 17
    EmbeddingGemma
    EmbeddingGemma is a 308-million-parameter multilingual text embedding model, lightweight yet powerful, optimized to run entirely on everyday devices such as phones, laptops, and tablets, enabling fast, offline embedding generation that protects user privacy. Built on the Gemma 3 architecture, it supports over 100 languages, processes up to 2,000 input tokens, and leverages Matryoshka Representation Learning (MRL) to offer flexible embedding dimensions (768, 512, 256, or 128) for tailored speed, storage, and precision. Its GPU- and EdgeTPU-accelerated inference delivers embeddings in milliseconds, under 15 ms for 256 tokens on EdgeTPU, while quantization-aware training keeps memory usage under 200 MB without compromising quality. This makes it ideal for real-time, on-device tasks such as semantic search, retrieval-augmented generation (RAG), classification, clustering, and similarity detection, whether for personal file search, mobile chatbots, or custom domain use.
  • 18
    Modeller

    Paragon Business Solutions

    Over thirty years of credit risk modelling expertise wrapped into model building software for today’s age of machine learning. Modeller is a feature-rich, flexible, interactive and transparent tool that helps organizations get the best from their analytical teams. It supports a choice of techniques, the rapid development of powerful models, full explainability and the advancement of less experienced team members. Choose from numerous modeling techniques, including machine learning, for optimal predictive accuracy – especially on datasets with multicollinearity and complex interrelationships. Create industry-standard continuous and binary target models at the click of a button. Use decision tree modeling with CART and CHAID trees. Choose from logistic regression, elastic net models, survival analysis (Cox PH), random forests, XGBoost, stochastic gradient descent and more. Export options for implementation in other scoring and decisioning software include SAS, SQL, PMML and Python.
  • 19
    DreamFusion

    Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D assets and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pre-trained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss. The resulting 3D model of the given text can be viewed from any angle, relit by arbitrary illumination, or composited into any 3D environment.
  • 20
    Mixedbread

    Mixedbread is a fully-managed AI search engine that allows users to build production-ready AI search and Retrieval-Augmented Generation (RAG) applications. It offers a complete AI search stack, including vector stores, embedding and reranking models, and document parsing. Users can transform raw data into intelligent search experiences that power AI agents, chatbots, and knowledge systems without the complexity. It integrates with tools like Google Drive, SharePoint, Notion, and Slack. Its vector stores enable users to build production search engines in minutes, supporting over 100 languages. Mixedbread's embedding and reranking models have achieved over 50 million downloads and outperform OpenAI in semantic search and RAG tasks while remaining open-source and cost-effective. The document parser extracts text, tables, and layouts from PDFs, images, and complex documents, providing clean, AI-ready content without manual preprocessing.
  • 21
    Nomic Embed
    Nomic Embed is a suite of open source, high-performance embedding models designed for various applications, including multilingual text, multimodal content, and code. The ecosystem includes models like Nomic Embed Text v2, which utilizes a Mixture-of-Experts (MoE) architecture to support over 100 languages with efficient inference using 305M active parameters. Nomic Embed Text v1.5 offers variable embedding dimensions (64 to 768) through Matryoshka Representation Learning, enabling developers to balance performance and storage needs. For multimodal applications, Nomic Embed Vision v1.5 aligns with the text models to provide a unified latent space for text and image data, facilitating seamless multimodal search. Additionally, Nomic Embed Code delivers state-of-the-art performance on code embedding tasks across multiple programming languages.
    Starting Price: Free
  • 22
    Aquarium

    Aquarium's embedding technology surfaces the biggest problems in your model performance and finds the right data to solve them. Unlock the power of neural network embeddings without worrying about maintaining infrastructure or debugging embedding models. Automatically find the most critical patterns of model failures in your dataset. Understand the long tail of edge cases and triage which issues to solve first. Trawl through massive unlabeled datasets to find edge-case scenarios. Bootstrap new classes with a handful of examples using few-shot learning technology. The more data you have, the more value we offer. Aquarium reliably scales to datasets containing hundreds of millions of data points. Aquarium offers solutions engineering resources, customer success syncs, and user training to help customers get value. We also offer an anonymous mode for organizations who want to use Aquarium without exposing any sensitive data.
    Starting Price: $1,250 per month
  • 23
    Microsoft Cognitive Toolkit
    The Microsoft Cognitive Toolkit (CNTK) is an open-source toolkit for commercial-grade distributed deep learning. It describes neural networks as a series of computational steps via a directed graph. CNTK allows the user to easily realize and combine popular model types such as feed-forward DNNs, convolutional neural networks (CNNs) and recurrent neural networks (RNNs/LSTMs). CNTK implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. CNTK can be included as a library in your Python, C#, or C++ programs, or used as a standalone machine-learning tool through its own model description language (BrainScript). In addition you can use the CNTK model evaluation functionality from your Java programs. CNTK supports 64-bit Linux or 64-bit Windows operating systems. To install you can either choose pre-compiled binary packages, or compile the toolkit from the source provided in GitHub.
  • 24
    ZeroEntropy

    ZeroEntropy is a search and retrieval platform built to deliver faster, more accurate, human-level search experiences. It provides cutting-edge rerankers, embeddings, and hybrid retrieval models that go beyond traditional lexical and vector search. ZeroEntropy focuses on understanding context, nuance, and domain-specific meaning rather than just keywords. Its models consistently outperform leading alternatives on industry benchmarks. Developers can integrate ZeroEntropy quickly using a simple, production-ready API. The platform is optimized for low latency, high accuracy, and cost efficiency. ZeroEntropy enables teams to ship search systems that actually return the right answers.
  • 25
    txtai

    NeuML

    txtai is an all-in-one open source embeddings database designed for semantic search, large language model orchestration, and language model workflows. It unifies vector indexes (both sparse and dense), graph networks, and relational databases, providing a robust foundation for vector search and serving as a powerful knowledge source for LLM applications. With txtai, users can build autonomous agents, implement retrieval augmented generation processes, and develop multi-modal workflows. Key features include vector search with SQL support, object storage integration, topic modeling, graph analysis, and multimodal indexing capabilities. It supports the creation of embeddings for various data types, including text, documents, audio, images, and video. Additionally, txtai offers pipelines powered by language models that handle tasks such as LLM prompting, question-answering, labeling, transcription, translation, and summarization.
    Starting Price: Free
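    A minimal sketch of the vector search with SQL support described above, following txtai's documented usage as I recall it (the embedding model path is an assumption, not something the entry specifies):

      from txtai import Embeddings

      # content=True stores the original text alongside the vectors so SQL can return it.
      embeddings = Embeddings(path="sentence-transformers/all-MiniLM-L6-v2", content=True)
      embeddings.index([(0, "US tops 5 million confirmed virus cases", None),
                        (1, "Beijing mobilises invasion craft along coast", None)])

      print(embeddings.search("health pandemic", 1))
      print(embeddings.search("SELECT id, text, score FROM txtai WHERE similar('military')", 1))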
  • 26
    Llama 3.1
    The open source AI model you can fine-tune, distill and deploy anywhere. Our latest instruction-tuned model is available in 8B, 70B and 405B versions. Using our open ecosystem, build faster with a selection of differentiated product offerings to support your use cases. Choose from real-time inference or batch inference services. Download model weights to further optimize cost per token. Adapt for your application, improve with synthetic data and deploy on-prem or in the cloud. Use Llama system components and extend the model using zero-shot tool use and RAG to build agentic behaviors. Leverage the 405B model's high-quality data to improve specialized models for specific use cases.
    Starting Price: Free
  • 27
    Llama 3.3
    Llama 3.3 is the latest iteration in the Llama series of language models, developed to push the boundaries of AI-powered understanding and communication. With enhanced contextual reasoning, improved language generation, and advanced fine-tuning capabilities, Llama 3.3 is designed to deliver highly accurate, human-like responses across diverse applications. This version features a larger training dataset, refined algorithms for nuanced comprehension, and reduced biases compared to its predecessors. Llama 3.3 excels in tasks such as natural language understanding, creative writing, technical explanation, and multilingual communication, making it an indispensable tool for businesses, developers, and researchers. Its modular architecture allows for customizable deployment in specialized domains, ensuring versatility and performance at scale.
    Starting Price: Free
  • 28
    Llama 3.2
    The open-source AI model you can fine-tune, distill and deploy anywhere is now available in more versions. Choose from 1B, 3B, 11B or 90B, or continue building with Llama 3.1. Llama 3.2 is a collection of large language models (LLMs) pretrained and fine-tuned in 1B and 3B sizes that are multilingual text only, and 11B and 90B sizes that take both text and image inputs and output text. Develop highly performant and efficient applications from our latest release. Use our 1B or 3B models for on-device applications such as summarizing a discussion from your phone or calling on-device tools like calendar. Use our 11B or 90B models for image use cases such as transforming an existing image into something new or getting more information from an image of your surroundings.
    Starting Price: Free
  • 29
    NVIDIA NeMo Retriever
    NVIDIA NeMo Retriever is a collection of microservices for building multimodal extraction, reranking, and embedding pipelines with high accuracy and maximum data privacy. It delivers quick, context-aware responses for AI applications like advanced retrieval-augmented generation (RAG) and agentic AI workflows. As part of the NVIDIA NeMo platform and built with NVIDIA NIM, NeMo Retriever allows developers to flexibly leverage these microservices to connect AI applications to large enterprise datasets wherever they reside and fine-tune them to align with specific use cases. NeMo Retriever provides components for building data extraction and information retrieval pipelines. The pipeline extracts structured and unstructured data (e.g., text, charts, tables), converts it to text, and filters out duplicates. A NeMo Retriever embedding NIM converts the chunks into embeddings and stores them in a vector database, accelerated by NVIDIA cuVS, for enhanced performance and speed of indexing.
  • 30
    Alpa

    Alpa aims to automate large-scale distributed training and serving with just a few lines of code. Alpa was initially developed by researchers in the Sky Lab at UC Berkeley, and some of the advanced techniques it uses are described in a paper published at OSDI 2022. The Alpa community is growing, with new contributors from Google. A language model is a probability distribution over sequences of words; it predicts the next word based on all the previous words, and is useful for a variety of AI applications, such as the auto-completion in your email or a chatbot service. For more information, check out the language model Wikipedia page. GPT-3 is a very large language model, with 175 billion parameters, that uses deep learning to produce human-like text. Many researchers and news articles have described GPT-3 as "one of the most interesting and important AI systems ever produced". GPT-3 is gradually being used as a backbone in the latest NLP research and applications.
    Starting Price: Free
  • 31
    Plagius

    GH Software

    Plagius - Plagiarism Detector is a plagiarism checker software that analyzes documents thoroughly for occurrences of suspected plagiarism. This tool is designed to aid in enhancing the quality of academic work, as even those who are prone to "copy and paste" can now revise and enhance their work by utilizing Plagius for prior analysis. With the ability to examine documents in various formats, such as Word, PDF, OpenOffice, HTML, and plain text, Plagius generates comprehensive reports detailing the references found, the frequency of occurrences on the internet or local files, and the percentage of suspected plagiarism. Compared to other tools for detecting plagiarism, Plagius boasts several key advantages, including user-friendly operation, superior performance, and efficiency. Thanks to its simplicity and speed, Plagius provides an exceptional plagiarism detection experience for its users.
    Starting Price: $5.90 per month
  • 32
    Spellbook
    Draft contracts 3x faster with AI. Spellbook uses GPT-3 to review and suggest language for your contracts, right in Microsoft Word. Trained on billions of lines of legal text, Spellbook suggests language for your contracts instantly. Close better deals, faster. Spellbook gets the small stuff out of the way so that you can focus on adding strategic value. Spellbook doesn't just understand contracts. It has been trained on Wikipedia, books and the Internet, so that it can reference facts about the world. Spellbook is powered by OpenAI's GPT-3: a massive large language model providing incredible performance. Spellbook is the only GPT-3 powered tool that has been tuned for contracting & integrated with Word.
  • 33
    spaCy

    spaCy is designed to help you do real work, build real products, or gather real insights. The library respects your time and tries to avoid wasting it. It's easy to install, and its API is simple and productive. spaCy excels at large-scale information extraction tasks. It's written from the ground up in carefully memory-managed Cython. If your application needs to process entire web dumps, spaCy is the library you want to be using. Since its release in 2015, spaCy has become an industry standard with a huge ecosystem. Choose from a variety of plugins, integrate with your machine learning stack, and build custom components and workflows. Components for named entity recognition, part-of-speech tagging, dependency parsing, sentence segmentation, text classification, lemmatization, morphological analysis, entity linking, and more. Easily extensible with custom components and attributes. Easy model packaging, deployment, and workflow management.
    Starting Price: Free
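    A minimal sketch of the pipeline components listed above, assuming spaCy and its small English model (installed via python -m spacy download en_core_web_sm):

      import spacy

      nlp = spacy.load("en_core_web_sm")
      doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

      print([(ent.text, ent.label_) for ent in doc.ents])      # named entity recognition
      print([(tok.text, tok.pos_, tok.dep_) for tok in doc])   # POS tags and dependency labels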
  • 34
    Qwen-7B

    Alibaba

    Qwen-7B is the 7B-parameter version of the large language model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-7B is a Transformer-based large language model, pretrained on a large volume of data including web texts, books, code, and more. Additionally, based on the pretrained Qwen-7B, Alibaba releases Qwen-7B-Chat, a large-model-based AI assistant trained with alignment techniques. The Qwen-7B series is trained with high-quality pretraining data: Qwen-7B is pretrained on a self-constructed, large-scale, high-quality dataset of over 2.2 trillion tokens that includes plain text and code and covers a wide range of general and professional domains. It also delivers strong performance: in comparison with models of similar size, Qwen-7B outperforms competitors on a series of benchmark datasets evaluating natural language understanding, mathematics, coding, and more.
    Starting Price: Free
  • 35
    Neum AI

    No one wants their AI to respond to a customer with out-of-date information. Neum AI helps companies keep accurate and up-to-date context in their AI applications. Use built-in connectors for data sources like Amazon S3 and Azure Blob Storage, and vector stores like Pinecone and Weaviate, to set up your data pipelines in minutes. Supercharge your data pipeline by transforming and embedding your data with built-in connectors for embedding models like OpenAI and Replicate, and serverless functions like Azure Functions and AWS Lambda. Leverage role-based access controls to make sure only the right people can access specific vectors. Bring your own embedding models, vector stores, and sources. Ask us about how you can even run Neum AI in your own cloud.
  • 36
    BGE

    BGE (BAAI General Embedding) is a comprehensive retrieval toolkit designed for search and Retrieval-Augmented Generation (RAG) applications. It offers inference, evaluation, and fine-tuning capabilities for embedding models and rerankers, facilitating the development of advanced information retrieval systems. The toolkit includes components such as embedders and rerankers, which can be integrated into RAG pipelines to enhance search relevance and accuracy. BGE supports various retrieval methods, including dense retrieval, multi-vector retrieval, and sparse retrieval, providing flexibility to handle different data types and retrieval scenarios. The models are available through platforms like Hugging Face, and the toolkit provides tutorials and APIs to assist users in implementing and customizing their retrieval systems. By leveraging BGE, developers can build robust and efficient search solutions tailored to their specific needs.
    Starting Price: Free
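    A hedged sketch of dense retrieval with a BGE embedder through the FlagEmbedding package; the model name is one of the published BAAI checkpoints, and the API follows the project's examples as I recall them:

      from FlagEmbedding import FlagModel

      model = FlagModel("BAAI/bge-base-en-v1.5")

      queries = model.encode_queries(["what is a vector database"])
      passages = model.encode(["A vector database stores embeddings for similarity search."])

      # Inner product of the normalized embeddings serves as the relevance score.
      print(queries @ passages.T)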
  • 37
    TextBlob

    TextBlob is a Python library for processing textual data, offering a simple API to perform common natural language processing tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, and classification. It stands on the giant shoulders of NLTK and Pattern, and plays nicely with both. Key features include tokenization (splitting text into words and sentences), word and phrase frequencies, parsing, n-grams, word inflection (pluralization and singularization), lemmatization, spelling correction, and WordNet integration. TextBlob is compatible with Python 2.7+ and 3.5+. It is actively developed on GitHub and is licensed under the MIT License. Comprehensive documentation, including a quick start guide and tutorials, is available to assist users in implementing various NLP tasks.
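    A minimal sketch of the API described above, assuming the textblob package and its corpora (installed via python -m textblob.download_corpora):

      from textblob import TextBlob

      blob = TextBlob("TextBlob makes common NLP tasks remarkably simple.")
      print(blob.tags)            # part-of-speech tagging
      print(blob.noun_phrases)    # noun phrase extraction
      print(blob.sentiment)       # polarity and subjectivity

      print(TextBlob("I havv goood speling").correct())   # spelling correction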
  • 38
    OpenAI

    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. Apply our API to any language task — semantic search, summarization, sentiment analysis, content generation, translation, and more — with only a few examples or by specifying your task in English. One simple integration gives you access to our constantly-improving AI technology. Explore how you integrate with the API with these sample completions.
  • 39
    Olmo 3
    Olmo 3 is a fully open model family spanning 7 billion and 32 billion parameter variants that delivers not only high-performing base, reasoning, instruction, and reinforcement-learning models, but also exposure of the entire model flow, including raw training data, intermediate checkpoints, training code, long-context support (65,536-token window), and provenance tooling. Starting with the Dolma 3 dataset (≈9 trillion tokens) and its disciplined mix of web text, scientific PDFs, code, and long-form documents, the pre-training, mid-training, and long-context phases shape the base models, which are then post-trained via supervised fine-tuning, direct preference optimisation, and RL with verifiable rewards to yield the Think and Instruct variants. The 32B Think model is described as the strongest fully open reasoning model to date, competitively close to closed-weight peers in math, code, and complex reasoning.
    Starting Price: Free
  • 40
    DeepSeek-V2

    DeepSeek

    DeepSeek-V2 is a state-of-the-art Mixture-of-Experts (MoE) language model introduced by DeepSeek-AI, characterized by its economical training and efficient inference capabilities. With a total of 236 billion parameters, of which only 21 billion are active per token, it supports a context length of up to 128K tokens. DeepSeek-V2 employs innovative architectures like Multi-head Latent Attention (MLA) for efficient inference by compressing the Key-Value (KV) cache and DeepSeekMoE for cost-effective training through sparse computation. This model significantly outperforms its predecessor, DeepSeek 67B, by saving 42.5% in training costs, reducing the KV cache by 93.3%, and enhancing generation throughput by 5.76 times. Pretrained on an 8.1 trillion token corpus, DeepSeek-V2 excels in language understanding, coding, and reasoning tasks, making it a top-tier performer among open-source models.
    Starting Price: Free
  • 41
    Llama 2
    The next generation of our open source large language model. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length than Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Llama 2 outperforms other open source language models on many external benchmarks, including reasoning, coding, proficiency, and knowledge tests. Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2.
    Starting Price: Free
  • 42
    voyage-4-large
    The Voyage 4 model family from Voyage AI is a new generation of text embedding models designed to produce high-quality semantic vectors. Its industry-first shared embedding space lets different models in the series generate compatible embeddings, so developers can mix and match models for document and query embedding to optimize accuracy, latency, and cost trade-offs. It includes voyage-4-large (a flagship model using a mixture-of-experts architecture delivering state-of-the-art retrieval accuracy at about 40% lower serving cost than comparable dense models), voyage-4 (balancing quality and efficiency), voyage-4-lite (high-quality embeddings with fewer parameters and lower compute cost), and the open-weight voyage-4-nano (ideal for local development and prototyping with an Apache 2.0 license). All four models in the series operate in a single shared embedding space, so embeddings generated by different variants are interchangeable, enabling asymmetric retrieval strategies.
  • 43
    Llama

    Meta

    Llama (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in this subfield of AI. Smaller, more performant models such as Llama enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like Llama is desirable in the large language model space because it requires far less computing power and resources to test new approaches, validate others’ work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning for a variety of tasks. We are making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a Llama model card that details how we built the model in keeping with our approach to Responsible AI practices.
  • 44
    Context Data

    Context Data is an enterprise data infrastructure built to accelerate the development of data pipelines for Generative AI applications. The platform automates the process of setting up internal data processing and transformation flows using an easy-to-use connectivity framework where developers and enterprises can quickly connect to all of their internal data sources, embedding models and vector database targets without having to set up expensive infrastructure or engineers. The platform also allows developers to schedule recurring data flows for refreshed and up-to-date data.
    Starting Price: $99 per month
  • 45
    Awario

    The web is talking about your business. Find these conversations and join in. Instant mentions from all around the Web and social media. At Awario, we want you to be the first to know when important conversations spark up online. Unlike other monitoring apps that rely on a single third-party data provider, we crawl over 13 billion web pages daily, in addition to the APIs. So when we say Awario is the most immediate way to learn about new mentions, we aren't messing around. Find important discussions, join them in real time, and spread the word about your product, service, or content across social and the Web to reach new audiences and trigger word-of-mouth marketing. Social media is full of invaluable market information and consumer intelligence - but it's also full of noise. Sieve out meaningful customer insights with Awario to understand your audience better and inform your marketing campaigns.
    Starting Price: $39.00 per month
  • 46
    ERNIE 3.0 Titan
    Pre-trained language models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 was recently proposed for pre-training large-scale knowledge enhanced models and trained a model with 10 billion parameters. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. In order to explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan with up to 260 billion parameters on the PaddlePaddle platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss to make ERNIE 3.0 Titan generate credible and controllable texts.
  • 47
    Datos

    Datos is a global clickstream data provider focused on licensing anonymized, at-scale, privacy-compliant datasets to ensure its clients and partners are safe in an otherwise perilous marketplace. Datos offers access to the desktop and mobile browsing clickstream for tens of millions of users across the globe, packaged into clean, easy-to-understand data feeds. Datos' mission is to provide clickstream data built on trust and driven by tangible results. Major firms around the globe trust Datos to provide the data they need to stop operating blindly in an ever-changing digital landscape. Datos offers a range of products, including the Datos Activity Feed, which provides visibility into the full conversion funnel by tracking every page visit and understanding diverse user behaviors. The Datos Behavior Feed offers detailed data on user tendencies.
  • 48
    NLP Cloud

    Fast and accurate AI models suited for production. Highly-available inference API leveraging the most advanced NVIDIA GPUs. We selected the best open-source natural language processing (NLP) models from the community and deployed them for you. Fine-tune your own models - including GPT-J - or upload your in-house custom models, and deploy them easily to production. Upload or Train/Fine-Tune your own AI models - including GPT-J - from your dashboard, and use them straight away in production without worrying about deployment considerations like RAM usage, high-availability, scalability... You can upload and deploy as many models as you want to production.
    Starting Price: $29 per month
  • 49
    Hippocratic AI

    Hippocratic AI is a new state-of-the-art (SOTA) model that outperforms GPT-4 on 105 of 114 healthcare exams and certifications, beating it by a margin of five percent or more on 74 of the certifications and by ten percent or more on 43 of them. Most language models pre-train on the common crawl of the Internet, which may include incorrect and misleading information. Unlike these LLMs, Hippocratic AI is investing heavily in legally acquiring evidence-based healthcare content. We're conducting a unique Reinforcement Learning with Human Feedback process using healthcare professionals to train and validate the model's readiness for deployment; we call this RLHF-HP. Hippocratic AI will not release the model until a large number of these licensed professionals deem it safe.
  • 50
    Step 3.5 Flash
    Step 3.5 Flash is an advanced open source foundation language model engineered for frontier reasoning and agentic capabilities with exceptional efficiency, built on a sparse Mixture of Experts (MoE) architecture that selectively activates only about 11 billion of its ~196 billion parameters per token to deliver high-density intelligence and real-time responsiveness. Its 3-way Multi-Token Prediction (MTP-3) enables generation throughput in the hundreds of tokens per second for complex multi-step reasoning chains and task execution, and it supports efficient long contexts with a hybrid sliding window attention approach that reduces computational overhead across large datasets or codebases. It demonstrates robust performance on benchmarks for reasoning, coding, and agentic tasks, rivaling or exceeding many larger proprietary models, and includes a scalable reinforcement learning framework for consistent self-improvement.
    Starting Price: Free