Alternatives to voyage-3-large

Compare voyage-3-large alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to voyage-3-large in 2026. Compare features, ratings, user reviews, pricing, and more from voyage-3-large competitors and alternatives in order to make an informed decision for your business.

  • 1
    Yardi Voyager

    Yardi Systems

    Yardi Voyager is a web-based, fully integrated end-to-end platform with mobile access for larger portfolios to manage operations, execute leasing, run analytics, and provide innovative resident, tenant, and investor services. With a solution and best-of-breed product suite designed for every real estate market including commercial (office, retail, industrial), multifamily, affordable, senior, PHA and military housing, Voyager helps you meet all your property management and accounting needs using a single database to run your entire business. Voyager automates workflows and provides system-wide transparency that enables you to work more productively and collaboratively than ever before. Using any browser and mobile device, Voyager gives you instant access to your data. And as a SaaS platform, Voyager frees you from managing your software — so you can focus on your business.
  • 2
    voyage-code-3
    Voyage AI introduces voyage-code-3, a next-generation embedding model optimized for code retrieval. It outperforms OpenAI-v3-large and CodeSage-large by an average of 13.80% and 16.81%, respectively, on a suite of 32 code retrieval datasets. It supports embeddings of 2048, 1024, 512, and 256 dimensions and offers multiple embedding quantization options, including float (32-bit), int8 (8-bit signed integer), uint8 (8-bit unsigned integer), binary (bit-packed int8), and ubinary (bit-packed uint8). With a 32K-token context length, it surpasses OpenAI's 8K and CodeSage Large's 1K context lengths. voyage-code-3 employs Matryoshka learning to create embeddings with a nested family of various lengths within a single vector. This allows users to vectorize documents into a 2048-dimensional vector and later use shorter versions (e.g., 256, 512, or 1024 dimensions) without re-invoking the embedding model.
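    The Matryoshka property described above can be consumed entirely client-side. As a rough sketch (the helper below is illustrative, not part of Voyage AI's SDK), a stored 2048-dimensional vector is truncated to its first 256 components and re-normalized:

```python
import math
import random

def truncate_and_normalize(vec, dim):
    """Keep the first `dim` components of a Matryoshka-style embedding
    and re-scale to unit length so cosine similarity still works."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Stand-in for a real 2048-dimensional embedding returned by the API.
random.seed(0)
full = [random.gauss(0.0, 1.0) for _ in range(2048)]

short = truncate_and_normalize(full, 256)
print(len(short))  # 256
```

    The same stored vector can later be cut to 512 or 1024 dimensions the same way, without calling the embedding model again.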
  • 3
    Voyage AI

    MongoDB

    Voyage AI provides best-in-class embedding models and rerankers designed to supercharge search and retrieval for unstructured data. Its technology powers high-quality Retrieval-Augmented Generation (RAG) by improving how relevant context is retrieved before responses are generated. Voyage AI offers general-purpose, domain-specific, and company-specific models to support a wide range of use cases. The models are optimized for accuracy, low latency, and reduced costs through shorter vector dimensions. With long-context support of up to 32K tokens, Voyage AI enables deeper understanding of complex documents. The platform is modular and integrates easily with any vector database or large language model. Voyage AI is trusted by industry leaders to deliver reliable, factual AI outputs at scale.
  • 4
    voyage-4-large
    The Voyage 4 model family from Voyage AI is a new generation of text embedding models designed to produce high-quality semantic vectors. Its industry-first shared embedding space lets different models in the series generate compatible embeddings, so developers can mix and match models for document and query embedding to optimize accuracy, latency, and cost trade-offs. The family includes voyage-4-large (a flagship model using a mixture-of-experts architecture that delivers state-of-the-art retrieval accuracy at about 40% lower serving cost than comparable dense models), voyage-4 (balancing quality and efficiency), voyage-4-lite (high-quality embeddings with fewer parameters and lower compute cost), and the open-weight voyage-4-nano (ideal for local development and prototyping, under an Apache 2.0 license). Because all four models operate in a single shared embedding space, embeddings generated by different variants are interchangeable, enabling asymmetric retrieval strategies such as embedding a document corpus with a larger model and queries with a lighter one.
  • 5
    Cohere Embed
    Cohere's Embed is a leading multimodal embedding platform designed to transform text, images, or a combination of both into high-quality vector representations. These embeddings are optimized for semantic search, retrieval-augmented generation, classification, clustering, and agentic AI applications. The latest model, embed-v4.0, supports mixed-modality inputs, allowing users to combine text and images into a single embedding. It offers Matryoshka embeddings with configurable dimensions of 256, 512, 1024, or 1536, enabling flexibility in balancing performance and resource usage. With a context length of up to 128,000 tokens, embed-v4.0 is well-suited for processing large documents and complex data structures. It also supports compressed embedding types, including float, int8, uint8, binary, and ubinary, facilitating efficient storage and faster retrieval in vector databases. Multilingual support spans over 100 languages, making it a versatile tool for global applications.
    Starting Price: $0.47 per image
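    As an illustration of the compressed types mentioned above, ubinary keeps one bit per dimension, packed eight dimensions to a byte. The sketch below uses the common sign-thresholding convention; the exact quantizer is model-specific, so treat this as an assumption:

```python
def to_ubinary(vec):
    """Quantize a float embedding to 'ubinary': one bit per dimension
    (1 if the component is positive), packed 8 dimensions per byte.
    Sign thresholding is an assumption; real quantizers may differ."""
    assert len(vec) % 8 == 0
    out = bytearray()
    for i in range(0, len(vec), 8):
        byte = 0
        for j, x in enumerate(vec[i:i + 8]):
            if x > 0:
                byte |= 1 << (7 - j)
        out.append(byte)
    return bytes(out)

def hamming(a, b):
    """Bitwise distance between two packed binary embeddings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

# Two tiny 8-dimensional "embeddings" (real ones have 256-1536 dims).
v1 = [0.3, -0.1, 0.7, 0.2, -0.5, 0.9, -0.2, 0.4]
v2 = [0.2, 0.1, 0.6, -0.3, -0.4, 0.8, -0.1, 0.5]
print(len(to_ubinary(v1)))  # 1 byte instead of 8 floats
```

    Distances between ubinary vectors can then be computed with cheap Hamming distance rather than floating-point dot products.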
  • 6
    Codestral Embed
    Codestral Embed is Mistral AI's first embedding model, specialized for code, optimized for high-performance code retrieval and semantic understanding. It significantly outperforms leading code embedders in the market today, such as Voyage Code 3, Cohere Embed v4.0, and OpenAI’s large embedding model. Codestral Embed can output embeddings with different dimensions and precisions; for instance, with a dimension of 256 and int8 precision, it still performs better than any model from competitors. The dimensions of the embeddings are ordered by relevance, allowing users to choose the first n dimensions for a smooth trade-off between quality and cost. It excels in retrieval use cases on real-world code data, particularly in benchmarks like SWE-Bench, which is based on real-world GitHub issues and corresponding fixes, and Text2Code (GitHub), relevant for providing context for code completion or editing.
  • 7
    EmbeddingGemma
    EmbeddingGemma is a 308-million-parameter multilingual text embedding model, lightweight yet powerful, optimized to run entirely on everyday devices such as phones, laptops, and tablets, enabling fast, offline embedding generation that protects user privacy. Built on the Gemma 3 architecture, it supports over 100 languages, processes up to 2,000 input tokens, and leverages Matryoshka Representation Learning (MRL) to offer flexible embedding dimensions (768, 512, 256, or 128) for tailored speed, storage, and precision. Its GPU- and EdgeTPU-accelerated inference delivers embeddings in milliseconds, under 15 ms for 256 tokens on EdgeTPU, while quantization-aware training keeps memory usage under 200 MB without compromising quality. This makes it ideal for real-time, on-device tasks such as semantic search, retrieval-augmented generation (RAG), classification, clustering, and similarity detection, whether for personal file search, mobile chatbots, or custom domain use.
  • 8
    Gemini Embedding
    Gemini Embedding's first text model (gemini-embedding-001) is now generally available via the Gemini API and Vertex AI. It has held a top spot on the Massive Text Embedding Benchmark Multilingual leaderboard since its experimental launch in March, thanks to superior performance across retrieval, classification, and other embedding tasks compared to both legacy Google and external proprietary models. Exceptionally versatile, it supports over 100 languages with a 2,048-token input limit and employs the Matryoshka Representation Learning (MRL) technique to let developers choose output dimensions of 3072, 1536, or 768 for optimal quality, performance, and storage efficiency. Developers can access it through the existing embed_content endpoint in the Gemini API, and while legacy experimental versions will be deprecated later in 2025, migration requires no re-embedding of existing content.
    Starting Price: $0.15 per 1M input tokens
  • 9
    Cohere

    Cohere AI

    Cohere is an enterprise AI platform that enables developers and businesses to build powerful language-based applications. Specializing in large language models (LLMs), Cohere provides solutions for text generation, summarization, and semantic search. Their model offerings include the Command family for high-performance language tasks and Aya Expanse for multilingual applications across 23 languages. Focused on security and customization, Cohere allows flexible deployment across major cloud providers, private cloud environments, or on-premises setups to meet diverse enterprise needs. The company collaborates with industry leaders like Oracle and Salesforce to integrate generative AI into business applications, improving automation and customer engagement. Additionally, Cohere For AI, their research lab, advances machine learning through open-source projects and a global research community.
  • 10
    Gemini Embedding 2
    Gemini Embedding models, including the newer Gemini Embedding 2, are part of Google’s Gemini AI ecosystem and are designed to convert text, phrases, sentences, and code into numerical vector representations that capture their semantic meaning. Unlike generative models that produce new content, the embedding model transforms input data into dense vectors that represent meaning in a mathematical format, allowing computers to compare and analyze information based on conceptual similarity rather than exact wording. These embeddings enable applications such as semantic search, recommendation systems, document retrieval, clustering, classification, and retrieval-augmented generation pipelines. The model can process input in more than 100 languages and supports up to 2048 tokens per request, allowing it to embed longer pieces of text or code while maintaining strong contextual understanding.
  • 11
    Arctic Embed 2.0
    Snowflake's Arctic Embed 2.0 introduces multilingual capabilities to its text embedding models, enhancing global-scale retrieval without compromising English performance or scalability. Building upon the robust foundation of previous releases, Arctic Embed 2.0 supports multiple languages, enabling developers to build retrieval and semantic search pipelines that serve users across languages from a single model. The model leverages Matryoshka Representation Learning (MRL) for efficient embedding storage, allowing for significant compression with minimal quality degradation. This advancement ensures that enterprises can handle demanding retrieval workloads, such as semantic search, RAG, and large-scale document indexing, across diverse languages and regions.
    Starting Price: $2 per credit
  • 12
    word2vec

    Google

    Word2Vec is a neural network-based technique for learning word embeddings, developed by researchers at Google. It transforms words into continuous vector representations in a multi-dimensional space, capturing semantic relationships based on context. Word2Vec uses two main architectures: Skip-gram, which predicts surrounding words given a target word, and Continuous Bag-of-Words (CBOW), which predicts a target word based on surrounding words. By training on large text corpora, Word2Vec generates word embeddings where similar words are positioned closely, enabling tasks like semantic similarity, analogy solving, and text clustering. The model was influential in advancing NLP by introducing efficient training techniques such as hierarchical softmax and negative sampling. Though newer embedding models like BERT and Transformer-based methods have surpassed it in complexity and performance, Word2Vec remains a foundational method in natural language processing and machine learning research.
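    The analogy-solving property mentioned above falls out of vector arithmetic: the vector for king - man + woman lands near queen. A toy sketch with hand-picked 4-dimensional vectors (real word2vec embeddings are learned from corpora and typically have 100-300 dimensions):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hand-picked 4-d toy vectors standing in for learned word2vec
# embeddings; the axes loosely encode royalty/male/female/food.
emb = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.1, 0.8, 0.0],
    "man":   [0.1, 0.9, 0.1, 0.1],
    "woman": [0.1, 0.1, 0.9, 0.1],
    "apple": [0.0, 0.2, 0.2, 0.9],
}
# king - man + woman should land near queen in the embedding space
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max((word for word in emb if word not in ("king", "man", "woman")),
           key=lambda word: cosine(emb[word], target))
print(best)  # queen
```

    With real learned embeddings the same nearest-neighbor search is run over the full vocabulary, excluding the three query words.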
  • 13
    Nomic Embed
    Nomic Embed is a suite of open source, high-performance embedding models designed for various applications, including multilingual text, multimodal content, and code. The ecosystem includes models like Nomic Embed Text v2, which utilizes a Mixture-of-Experts (MoE) architecture to support over 100 languages with efficient inference using 305M active parameters. Nomic Embed Text v1.5 offers variable embedding dimensions (64 to 768) through Matryoshka Representation Learning, enabling developers to balance performance and storage needs. For multimodal applications, Nomic Embed Vision v1.5 aligns with the text models to provide a unified latent space for text and image data, facilitating seamless multimodal search. Additionally, Nomic Embed Code delivers state-of-the-art performance on code embedding tasks across multiple programming languages.
  • 14
    Ex Libris Voyager
    Voyager® is the integrated library solution chosen by many of the world’s leading libraries to serve as the backbone of their service systems. Voyager has an intuitive graphical interface, is standards-based, and built on open systems technology. This allows Voyager to interoperate with existing library systems and scale to accommodate future library needs. Voyager integrates and interoperates smoothly with existing library systems as well as with new technologies. Core technologies, standards, and language support have been carefully chosen to ensure that Voyager meets the ever-evolving needs of your library. Voyager client/server software supports the control of Web-based public access cataloging and authority control as well as acquisitions, serials, circulation and course reserves modules. Sophisticated reporting and system administration are all part of the out-of-the-box product offering.
  • 15
    Phi-4-mini-reasoning
    Phi-4-mini-reasoning is a 3.8-billion parameter transformer-based language model optimized for mathematical reasoning and step-by-step problem solving in environments with constrained computing or latency. Fine-tuned with synthetic data generated by the DeepSeek-R1 model, it balances efficiency with advanced reasoning ability. Trained on over one million diverse math problems spanning multiple levels of difficulty from middle school to Ph.D. level, Phi-4-mini-reasoning outperforms its base model on long sentence generation across various evaluations and surpasses larger models like OpenThinker-7B, Llama-3.2-3B-instruct, and DeepSeek-R1. It features a 128K-token context window and supports function calling, enabling integration with external tools and APIs. Phi-4-mini-reasoning can be quantized using Microsoft Olive or Apple MLX Framework for deployment on edge devices such as IoT, laptops, and mobile devices.
  • 16
    DeePhi Quantization Tool

    DeePhi Quantization Tool

    This is a model quantization tool for convolutional neural networks (CNNs). The tool can quantize both weights/biases and activations from 32-bit floating-point (FP32) format to 8-bit integer (INT8) format or any other bit depth. With this tool, you can boost inference performance and efficiency significantly while maintaining accuracy. It supports common layer types in neural networks, including convolution, pooling, fully-connected, and batch normalization. The quantization tool does not require retraining of the network or labeled datasets; only one batch of pictures is needed. Processing time ranges from a few seconds to several minutes depending on the size of the neural network, which makes rapid model updates possible. The tool is collaboratively optimized for the DeePhi DPU and can generate the INT8-format model files required by DNNC.
    Starting Price: $0.90 per hour
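    A common form of the FP32-to-INT8 conversion such tools perform is symmetric quantization: a single scale, derived from the calibration batch, maps values onto [-127, 127]. DeePhi's exact scheme is not documented here, so the sketch below shows only the generic technique:

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization of FP32 weights to INT8."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 values for accuracy checks."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.02, 0.9]
q, s = quantize_int8(w)
print(q)  # [50, -127, 2, 90]
```

    Activations are handled the same way, except their scale comes from the value ranges observed on the calibration batch of pictures rather than from stored weights.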
  • 17
    FileVoyager

    FileVoyager

    FileVoyager is a freeware Orthodox file manager (OFM) for Microsoft Windows. OFMs are file managers that use two panels of disk browsers. This dual-pane layout makes transferring files or folders between sources and destinations very easy. FileVoyager contains a large collection of tools and functionality. Browse disks, folders (real or virtual), shares, archives, and FTP/FTPS in one unified way, in various display modes (such as report or thumbnail views). Perform the usual file operations (rename, copy, move, link, delete, recycle) in the containers listed above and even between them. Pack and unpack ZIP, 7Zip, GZip, BZip2, XZ, Tar, and WIM formats, and unpack ARJ, CAB, XAR, Z, RAR, LZH, LZMA, ISO, WIM, and many others (FileVoyager wraps 7-Zip for both). Play virtually any audio or video format (FileVoyager relies on installed codecs, WMP, and VLC). Compare files or folders. Synchronize folders.
  • 18
    LexVec

    Alexandre Salle

    LexVec is a word embedding model that achieves state-of-the-art results in multiple natural language processing tasks by factorizing the Positive Pointwise Mutual Information (PPMI) matrix using stochastic gradient descent. This approach assigns heavier penalties for errors on frequent co-occurrences while accounting for negative co-occurrences. Pre-trained vectors are available, including a common crawl dataset with 58 billion tokens and 2 million words in 300 dimensions, and an English Wikipedia 2015 + NewsCrawl dataset with 7 billion tokens and 368,999 words in 300 dimensions. Evaluations demonstrate that LexVec matches or outperforms other models like word2vec in terms of word similarity and analogy tasks. The implementation is open source under the MIT License and is available on GitHub.
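    The PPMI matrix that LexVec factorizes is simple to derive from co-occurrence counts; the sketch below computes it for a toy vocabulary (the SGD factorization with its penalty weighting is omitted):

```python
import math

def ppmi(counts):
    """Positive pointwise mutual information from a dict mapping
    (word, context) pairs to co-occurrence counts."""
    total = sum(counts.values())
    w_tot, c_tot = {}, {}
    for (w, c), n in counts.items():
        w_tot[w] = w_tot.get(w, 0) + n
        c_tot[c] = c_tot.get(c, 0) + n
    return {
        (w, c): max(0.0, math.log(n * total / (w_tot[w] * c_tot[c])))
        for (w, c), n in counts.items()
    }

# Toy co-occurrence counts from a hypothetical corpus.
counts = {("ship", "sea"): 8, ("ship", "port"): 4,
          ("cat", "sea"): 1, ("cat", "mat"): 7}
m = ppmi(counts)
print(m[("cat", "sea")])  # 0.0: rarer than chance, clipped to zero
```

    Pairs that co-occur more often than chance get positive scores; the rest are clipped to zero, which is what LexVec's loss then reconstructs.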
  • 19
    Voyager

    Voyager

    Voyager offers investors best execution, data, wallet, and custody services through its institutional-grade open architecture platform. Voyager was founded by established Wall Street and Silicon Valley entrepreneurs who teamed up to bring a better, more transparent, and cost-efficient alternative for trading crypto assets to the marketplace. Voyager supports Bitcoin, top DeFi coins, stablecoins, and a wide variety of altcoins. We offer something for every investor. Honesty and transparency are our top priorities. Voyager is audited to ensure every asset is accounted for in our secure system. Rest assured knowing our advanced technology is preventing hackers and fraud, always securing your funds. We are insured, so the cash you hold on Voyager is protected and always safe with us. Build and grow your crypto portfolio the easy way. Take your assets on the go, never miss a trade, and always have the crypto market in reach. Sign up and start investing in 3 minutes or less.
  • 20
    ORX Travel Management

    NDC Solutions Inc.

    VoyagePro elevates corporate travel management by offering an all-in-one platform with NDC and GDS fare integration. It provides custom pricing, airline rate management, and tools for efficient corporate travel. Key features include branded agent booking portals, PCI-compliant credit card vaults, and extensive customization options. VoyagePro maximizes profitability and operational efficiency, supports hybrid event planning, and offers AI-powered travel assistance. Enhance your corporate travel operations and revenue growth with VoyagePro.
  • 21
    Voyage 2.0

    Futuristic Software Consultancy

    VOYAGE 2.0 is a single desktop solution for tour operators. VOYAGE can be used for both inbound and outbound tour operations. VOYAGE takes on your operations from registering enquiries for FITs/GITs and proposing itineraries. Once confirmed, these enquiries can be operated as files just as you have been doing so far, but with a more efficient and chaos-free execution methodology. VOYAGE can take you from the enquiry-handling phase through to final invoice generation. Once the file is operated, you can also use the details for future CRM practices to generate repurchase/repeat business. VOYAGE has been designed keeping in mind the distinguished needs of various tour operators. The basic ideology driving the design of the system is to enable users to focus on data and its usage rather than maintaining and compiling the data. VOYAGE takes care of all your operational needs, be it daily, weekly, monthly, or even annual processes.
  • 22
    Voyager

    Voyager

    Voyager is a Laravel admin package that includes BREAD (CRUD) operations, a media manager, a menu builder, and much more. Voyager will take care of your administrative tasks so you can focus on what you do best, which is building the next kick-ass app! Voyager can save you so much time and will make building applications even more fun! Baked right in like a fresh loaf of BREAD, Voyager's admin interface allows you to add CRUD or BREAD (Browse, Read, Edit, Add, and Delete) functionality to your posts, pages, or any other table in your database. Voyager has a fully functional media manager that allows you to view/edit/delete files from your storage; all files in your application are easily accessible and live in a single place, compatible with local or S3 file storage. You can easily build menus for your site; in fact, the menu in the Voyager admin is built using the menu builder, and you can add/edit/delete menu items from any menu.
  • 23
    CO2 Emissions, CII & EU ETS
    • Our CO2 estimator provides an accurate estimate of fuel consumption and CO2 emissions based on measured voyage sequence and event breakdown, thanks to AXSMarine Trade Flows and our proprietary speed and consumption curves.
    • Calculate CO2 emissions and the potential EUA cost associated with an individual voyage with the voyage estimator.
    • Rank tonnage lists based on CO2 emissions, TCE, and voyage cost for a specific cargo in Shiplist.
    • Analyse historical and year-to-date CO2 emissions, CII, CII rating, EEOI, and EUA financial exposure for a vessel or an entire fleet with the emissions dashboard.
    • Visualize CO2 emissions, CII, CII rating, EEOI, and EUA financial exposure since 2013 for each vessel.
    • Get a detailed view of all voyages performed and their impact on emissions and ratings.
    • Get access to AXSMarine's unique and accurate methodology for CO2 estimation.
    • Quick access to CO2 calculations within a multiple-vessel grid.
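    At its simplest, a leg-level CO2 estimate multiplies fuel burned by a carbon factor (3.114 t CO2 per tonne of fuel for heavy fuel oil under IMO guidelines). The sketch below uses a flat daily consumption figure where AXSMarine's proprietary speed and consumption curves would apply:

```python
def voyage_co2(distance_nm, speed_kn, cons_tpd, cf=3.114):
    """Rough leg-level estimate: sailing days x daily fuel consumption
    (tonnes/day) x carbon factor (t CO2 per t fuel, HFO default)."""
    days = distance_nm / (speed_kn * 24.0)
    fuel_t = days * cons_tpd
    return fuel_t * cf

# 4,800 nm at 12 knots burning 30 t/day of HFO
co2 = voyage_co2(distance_nm=4800, speed_kn=12.0, cons_tpd=30.0)
print(round(co2, 1))  # 1557.0 t CO2
```

    EUA exposure then follows by multiplying the in-scope share of those tonnes by the prevailing allowance price.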
  • 24
    Action Seas Software
    The software is designed and supported by a highly qualified and experienced team of programmers with experience in shipping companies. The voyage module was designed to calculate and estimate voyages quickly and easily, and it supports all types of voyage estimation. It applies the FIFO or average method for calculating the cost of supplied fuel, and it provides reports analyzing voyages and comparing voyage estimations against actual calculations. The Crew module is designed to cover the flexible management of human resources on board. It monitors certificates and their validity for each vessel and issues appropriate reminders before their expiration. It updates the crew list of each ship and tracks who has been proposed or rejected and when each crew member is available for his next embarkation. We apply best practices to adapt, and wherever necessary re-engineer, existing processes to ensure our solutions deliver competitive advantage and enable effective cost control.
  • 25
    E5 Text Embeddings
    E5 Text Embeddings, developed by Microsoft, are advanced models designed to convert textual data into meaningful vector representations, enhancing tasks like semantic search and information retrieval. These models are trained using weakly-supervised contrastive learning on a vast dataset of over one billion text pairs, enabling them to capture intricate semantic relationships across multiple languages. The E5 family includes models of varying sizes—small, base, and large—offering a balance between computational efficiency and embedding quality. Additionally, multilingual versions of these models have been fine-tuned to support diverse languages, ensuring broad applicability in global contexts. Comprehensive evaluations demonstrate that E5 models achieve performance on par with state-of-the-art, English-only models of similar sizes.
  • 26
    Voyager Infinity

    Voyager Software

    Voyager Infinity is the smart CRM for permanent, contract, and temporary recruitment. Voyager recruitment software now comes with FREE skills testing, giving you a true competitive edge by helping you source and place the best talent faster. Voyager Infinity, the only solution that comes with free Online Skills Testing, allows you to recruit smarter and to test, process, and score an ever-increasing number of candidates faster at no extra cost. It's intuitive, efficient, and automates the mundane tasks, so you can focus on what you do best: placing the best talent.
    Starting Price: $80 per month
  • 27
    Universal Sentence Encoder
    The Universal Sentence Encoder (USE) encodes text into high-dimensional vectors that can be utilized for tasks such as text classification, semantic similarity, and clustering. It offers two model variants: one based on the Transformer architecture and another on Deep Averaging Network (DAN), allowing a balance between accuracy and computational efficiency. The Transformer-based model captures context-sensitive embeddings by processing the entire input sequence simultaneously, while the DAN-based model computes embeddings by averaging word embeddings, followed by a feedforward neural network. These embeddings facilitate efficient semantic similarity calculations and enhance performance on downstream tasks with minimal supervised training data. The USE is accessible via TensorFlow Hub, enabling seamless integration into various applications.
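    The DAN variant's speed comes from its first stage being a plain average of word embeddings; the feedforward layers that follow are omitted in this toy sketch:

```python
def dan_embed(tokens, word_vecs):
    """First stage of a Deep Averaging Network: average the word
    embeddings of the known tokens. The real USE-DAN then feeds this
    average through feedforward layers, omitted here."""
    known = [t for t in tokens if t in word_vecs]  # skip OOV tokens
    dims = len(next(iter(word_vecs.values())))
    avg = [0.0] * dims
    for t in known:
        for i, x in enumerate(word_vecs[t]):
            avg[i] += x
    return [x / len(known) for x in avg]

# Toy 2-d word vectors; real models use hundreds of dimensions.
vecs = {"fast": [1.0, 0.0], "ship": [0.0, 1.0], "vessel": [0.2, 0.9]}
print(dan_embed(["fast", "ship"], vecs))  # [0.5, 0.5]
```

    Averaging ignores word order, which is exactly the accuracy-for-speed trade-off the Transformer variant avoids by processing the whole sequence.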
  • 28
    Voyager

    Recursion Software

    Voyager™ is a best-in-class middleware platform enabling the development of state-of-the-art mobile applications for the enterprise – applications that facilitate communication and collaboration through reliable, real-time, and secure sharing and distribution of information and content. Voyager™ provides simpler and better Service Oriented Architecture, allowing developers to solve problems without wasting time learning overly complex SOA code and configurations, and thereby carving out a distinct position for itself among all middleware tools and SOA products. The driving purpose of Voyager™ is to increase design flexibility, reduce complexity, and accelerate the development of collaborative mobile applications across the enterprise, leveraging all connected device assets and facilitating M2M communications.
  • 29
    EXAONE Deep
    EXAONE Deep is a series of reasoning-enhanced language models developed by LG AI Research, featuring parameter sizes of 2.4 billion, 7.8 billion, and 32 billion. These models demonstrate superior capabilities in various reasoning tasks, including math and coding benchmarks. Notably, EXAONE Deep 2.4B outperforms other models of comparable size, EXAONE Deep 7.8B surpasses both open-weight models of similar scale and the proprietary reasoning model OpenAI o1-mini, and EXAONE Deep 32B shows competitive performance against leading open-weight models. The repository provides comprehensive documentation covering performance evaluations, quickstart guides for using EXAONE Deep models with Transformers, explanations of quantized EXAONE Deep weights in AWQ and GGUF formats, and instructions for running EXAONE Deep models locally using frameworks like llama.cpp and Ollama.
  • 30
    BitNet

    Microsoft

    The BitNet b1.58 2B4T is a cutting-edge 1-bit Large Language Model (LLM) developed by Microsoft, designed to enhance computational efficiency while maintaining high performance. This model, built with approximately 2 billion parameters and trained on 4 trillion tokens, uses innovative quantization techniques to optimize memory usage, energy consumption, and latency. The platform supports multiple modalities and is particularly valuable for applications in AI-powered text generation, offering substantial efficiency gains compared to full-precision models.
  • 31
    txtai

    NeuML

    txtai is an all-in-one open source embeddings database designed for semantic search, large language model orchestration, and language model workflows. It unifies vector indexes (both sparse and dense), graph networks, and relational databases, providing a robust foundation for vector search and serving as a powerful knowledge source for LLM applications. With txtai, users can build autonomous agents, implement retrieval augmented generation processes, and develop multi-modal workflows. Key features include vector search with SQL support, object storage integration, topic modeling, graph analysis, and multimodal indexing capabilities. It supports the creation of embeddings for various data types, including text, documents, audio, images, and video. Additionally, txtai offers pipelines powered by language models that handle tasks such as LLM prompting, question-answering, labeling, transcription, translation, and summarization.
  • 32
    Exa

    Exa.ai

    The Exa API retrieves the best content on the web using embeddings-based search. Exa understands meaning, giving results search engines can't. Exa uses a novel link-prediction transformer to predict links that match the meaning of a prompt. For queries that need semantic understanding, search with our SOTA web embeddings model over our custom index. For all other queries, we offer keyword-based search. Stop learning how to web scrape or parse HTML. Get the clean, full text of any page in our index, or intelligent embeddings-ranked highlights related to a query. Select any date range, include or exclude any domain, select a custom data vertical, or get up to 10 million results.
    Starting Price: $100 per month
  • 33
    Voyager IoT Management
    From the moment you choose Datablaze, we don’t stop working for you. We have everything you need to manage your IoT projects, but we know not all solutions are perfect for everyone. No worries, we’ll ensure you have everything you need, even if that means customizing the IoT Management solution to you. Voyager™ IoT Management is the proprietary IoT software, developed by Datablaze for Datablaze customers. Take control of your billing, data usage, and anything else you could ever need with our IoT solutions. Voyager™ IoT management software keeps you in control. Voyager™ IoT Management gives you complete visibility to all of your connections in real-time. From data usage to current charges, you have control over your wireless connections at any time, from anywhere.
  • 34
    NVIDIA NeMo Retriever
    NVIDIA NeMo Retriever is a collection of microservices for building multimodal extraction, reranking, and embedding pipelines with high accuracy and maximum data privacy. It delivers quick, context-aware responses for AI applications like advanced retrieval-augmented generation (RAG) and agentic AI workflows. As part of the NVIDIA NeMo platform and built with NVIDIA NIM, NeMo Retriever allows developers to flexibly leverage these microservices to connect AI applications to large enterprise datasets wherever they reside and fine-tune them to align with specific use cases. NeMo Retriever provides components for building data extraction and information retrieval pipelines. The pipeline extracts structured and unstructured data (e.g., text, charts, tables), converts it to text, and filters out duplicates. A NeMo Retriever embedding NIM converts the chunks into embeddings and stores them in a vector database, accelerated by NVIDIA cuVS, for enhanced performance and speed of indexing.
  • 35
    Seametrix

    Seanergix

    Seametrix serves over 30,000 ports and terminals, offering fully customisable sea routing combinations with the most accurate sea distance results by far. Rhumbline and great-circle navigation methods are calculated on the fly by different servers, providing our users with real-world nautical distances by sea. Our voyage estimation module is designed to be easy, quick, and sleek to work with. By fully modifying your desired sea routing, you can export your customised itinerary to our voyage estimation module and accurately estimate sea freights, voyage costs, and ship expenses in less than a minute! Seametrix offers by far the most detailed and accurate sea distance and sea routing API, with extensive navigational parameterisations such as great-circle and rhumbline navigation, SECA avoidance, piracy avoidance, many on/off passages, Indonesian Archipelagic Sea Lanes compliance, and the use of either ports or coordinates.
  • 36
    GovCIO Voyager
    GovCIO’s Voyager product suite delivers complete situational awareness and seamless integration to our federal, law enforcement (federal, state, and local), and commercial customers. Voyager Query offers law enforcement officials immediate access to critical criminal justice data, using its nationwide cloud-based infrastructure to obtain critical Criminal Justice Information Services (CJIS) compliant data over any wireless data network. Voyager Victim Notification allows law enforcement officials to complete traditionally paper-based victim forms using a smartphone or tablet. Place your organization’s situational awareness mission into the hands of your mobile workforce. By providing a 360° view of system activity on an edge-to-edge map, Atlas gives users the data they need to make critical decisions quickly. Command Tracker provides Motorola GPS-enabled radios with flexible and easy-to-implement personnel incident and asset management capabilities.
  • 37
    BGE

    BGE (BAAI General Embedding) is a comprehensive retrieval toolkit designed for search and Retrieval-Augmented Generation (RAG) applications. It offers inference, evaluation, and fine-tuning capabilities for embedding models and rerankers, facilitating the development of advanced information retrieval systems. The toolkit includes components such as embedders and rerankers, which can be integrated into RAG pipelines to enhance search relevance and accuracy. BGE supports various retrieval methods, including dense retrieval, multi-vector retrieval, and sparse retrieval, providing flexibility to handle different data types and retrieval scenarios. The models are available through platforms like Hugging Face, and the toolkit provides tutorials and APIs to assist users in implementing and customizing their retrieval systems. By leveraging BGE, developers can build robust and efficient search solutions tailored to their specific needs.
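Dense retrieval, the first method listed, reduces to ranking documents by cosine similarity between embedding vectors. A minimal sketch with toy 2-d vectors; a real pipeline would obtain the embeddings from a BGE embedder via Hugging Face rather than hard-coding them:

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=2):
    """Return the indices and scores of the k most similar document embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity of each doc to the query
    top = np.argsort(-scores)[:k]        # highest-scoring indices first
    return [(int(i), float(scores[i])) for i in top]

# Toy "embeddings" for illustration only
docs = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
query = np.array([1.0, 0.0])
print(cosine_top_k(query, docs))
```

In practice a reranker would then rescore this shortlist to sharpen the final ordering, which is the embedder-plus-reranker pattern the toolkit is built around.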
  • 38
    SOS VOYAGER

    Elesteshary Information Systems

SIS has a special interest in developing systems that support the management of cargo transport means in general and maritime cargo transport in particular. It has developed several maritime decision support systems under the name “Shipping Optimization Systems (SOS)”. Three SOS systems are developed to support decisions for three shipping activities: SOS Voyager to optimize the outcome of each ship voyage, SOS Allocator to optimally allocate existing ships to cargo trade areas, and SOS Appraiser to appraise the purchasing, building, and chartering of new ships. To understand the concepts and the information systems behind SOS, download:
    Starting Price: $10000.00/one-time
  • 39
    GloVe

    Stanford NLP

    GloVe (Global Vectors for Word Representation) is an unsupervised learning algorithm developed by the Stanford NLP Group to obtain vector representations for words. It constructs word embeddings by analyzing global word-word co-occurrence statistics from a given corpus, resulting in vector spaces where the geometric relationships reflect semantic similarities and differences among words. A notable feature of GloVe is its ability to capture linear substructures within the word vector space, enabling vector arithmetic to express relationships. The model is trained on the non-zero entries of a global word-word co-occurrence matrix, which records how frequently pairs of words appear together in a corpus. This approach efficiently leverages statistical information by focusing on significant co-occurrences, leading to meaningful word representations. Pre-trained word vectors are available for various corpora, including Wikipedia 2014.
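The linear substructure described above is what makes analogies like king − man + woman ≈ queen work as vector arithmetic. A toy illustration with hand-made 3-dimensional vectors; real GloVe vectors span 50-300 dimensions and are learned from co-occurrence statistics, not set by hand:

```python
import numpy as np

# Toy vectors for illustration only (dimension 1 ≈ "royalty", 2 ≈ "male", 3 ≈ "female")
vecs = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
    "apple": np.array([0.1, 0.5, 0.5]),
}

def nearest(target, exclude):
    """Word whose vector has the highest cosine similarity to `target`."""
    cos = lambda a, b: np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# king - man + woman lands exactly on queen in this toy space
analogy = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(analogy, {"king", "man", "woman"}))  # → queen
```

With real pre-trained vectors the arithmetic is approximate rather than exact, but the nearest neighbor of the result is still typically the analogical answer.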
  • 40
    Kimi K2 Thinking

    Moonshot AI

Kimi K2 Thinking is an advanced open source reasoning model developed by Moonshot AI, designed specifically for long-horizon, multi-step workflows where the system interleaves chain-of-thought processes with tool invocation across hundreds of sequential tasks. The model uses a mixture-of-experts architecture with a total of 1 trillion parameters, yet only about 32 billion parameters are activated per inference pass, optimizing efficiency while maintaining vast capacity. It supports a context window of up to 256,000 tokens, enabling the handling of extremely long inputs and reasoning chains without losing coherence. Native INT4 quantization is built in, which reduces inference latency and memory usage without performance degradation. Kimi K2 Thinking is explicitly built for agentic workflows; it can autonomously call external tools, manage sequential logic steps (typically 200-300 tool calls in a single chain), and maintain consistent reasoning.
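The interleaved reason-then-call pattern described above can be sketched as a simple loop. Here `call_model`, the message format, and the tool registry are hypothetical stand-ins for illustration, not Moonshot AI's actual API:

```python
# Minimal sketch of an agentic tool-call loop under assumed interfaces.
def run_agent(task, tools, call_model, max_steps=300):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # chains of this kind typically span 200-300 tool calls
        reply = call_model(history)
        if reply.get("tool") is None:          # no tool requested: final answer
            return reply["content"]
        result = tools[reply["tool"]](**reply["args"])  # invoke the requested tool
        history.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")
```

The key property the blurb highlights is that the model, not the harness, decides when to call a tool and when to stop, so the loop body stays this simple even for very long chains.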
  • 41
    ZeroEntropy

    ZeroEntropy is a search and retrieval platform built to deliver faster, more accurate, human-level search experiences. It provides cutting-edge rerankers, embeddings, and hybrid retrieval models that go beyond traditional lexical and vector search. ZeroEntropy focuses on understanding context, nuance, and domain-specific meaning rather than just keywords. Its models consistently outperform leading alternatives on industry benchmarks. Developers can integrate ZeroEntropy quickly using a simple, production-ready API. The platform is optimized for low latency, high accuracy, and cost efficiency. ZeroEntropy enables teams to ship search systems that actually return the right answers.
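ZeroEntropy's internals are not public, but hybrid retrieval of the kind described is commonly built on rank fusion. A sketch of standard reciprocal rank fusion (RRF) for combining a lexical ranking with a vector ranking; this is the generic technique, not ZeroEntropy's method:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Standard RRF: score(d) = sum over lists of 1 / (k + rank(d))."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc_a", "doc_b", "doc_c"]   # e.g. a BM25 keyword ordering
semantic = ["doc_b", "doc_c", "doc_a"]  # e.g. an embedding-similarity ordering
print(reciprocal_rank_fusion([lexical, semantic]))  # → ['doc_b', 'doc_a', 'doc_c']
```

Fusing ranks rather than raw scores sidesteps the problem that lexical and vector scores live on incomparable scales; a reranker can then rescore the fused shortlist.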
  • 42
    DailyRoads Voyager

DailyRoads Voyager is an Android dashcam app designed for continuous video recording from vehicles, functioning as a video black box, auto DVR, or dashcam. It records everything but only retains the footage the user deems important, triggered by events like sudden speed changes or manual interaction. It captures videos with timestamped and geotagged GPS data, including speed, elevation, and coordinates, which can be displayed within the app. It also features background recording and offers an option to protect videos manually or automatically. DailyRoads Voyager records continuously, deleting the oldest footage to make room for new data when the storage fills up. It’s highly customizable, providing settings for power users, and is designed to run seamlessly alongside other applications like navigation. It has been downloaded by millions of drivers since its release in 2009 and is used worldwide for protection against insurance fraud, accident disputes, and scams.
  • 43
    Neum AI

No one wants their AI to respond with out-of-date information to a customer. Neum AI helps companies have accurate and up-to-date context in their AI applications. Use built-in connectors for data sources like Amazon S3 and Azure Blob Storage, and vector stores like Pinecone and Weaviate, to set up your data pipelines in minutes. Supercharge your data pipeline by transforming and embedding your data with built-in connectors for embedding models like OpenAI and Replicate, and serverless functions like Azure Functions and AWS Lambda. Leverage role-based access controls to make sure only the right people can access specific vectors. Bring your own embedding models, vector stores, and sources. Ask us about how you can even run Neum AI in your own cloud.
  • 44
    Reka Flash 3
Reka Flash 3 is a 21-billion-parameter multimodal AI model developed by Reka AI, designed to excel in general chat, coding, instruction following, and function calling. It processes and reasons with text, images, video, and audio inputs, offering a compact, general-purpose solution for various applications. Trained from scratch on diverse datasets, including publicly accessible and synthetic data, Reka Flash 3 underwent instruction tuning on curated, high-quality data to optimize performance. The final training stage involved reinforcement learning using REINFORCE Leave One-Out (RLOO) with both model-based and rule-based rewards, enhancing its reasoning capabilities. With a context length of 32,000 tokens, Reka Flash 3 performs competitively with proprietary models like OpenAI's o1-mini, making it suitable for low-latency or on-device deployments. The model's full precision requires 39GB (fp16), but it can be compressed to as small as 11GB using 4-bit quantization.
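The quoted memory figures follow from simple arithmetic on the parameter count; a quick sanity check:

```python
def model_memory_gb(n_params, bits_per_param):
    """Approximate weight storage: parameters x bits / 8 bytes, in binary GB (GiB)."""
    return n_params * bits_per_param / 8 / 2**30

params = 21e9  # Reka Flash 3 parameter count
print(round(model_memory_gb(params, 16), 1))  # fp16 weights → 39.1
print(round(model_memory_gb(params, 4), 1))   # 4-bit weights → 9.8
```

The fp16 figure matches the quoted 39GB; the raw 4-bit figure lands below the quoted 11GB because real quantized checkpoints also store quantization metadata (scales, zero points) and may keep some layers at higher precision.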
  • 45
    VoyageX AI

    VoyageX AI is an AI-powered maritime software designed to streamline ship management operations. The platform offers key modules such as Crew Management for hiring, certifications, and payroll; Planned Maintenance for automating maintenance schedules and tracking spare parts; and Safety Management for handling safety protocols and incident reports. It includes Vessel Performance Tracking to optimize fuel consumption, Budgeting & Financial Management for operational costs, and Compliance tools to meet maritime regulations. Additional features like Inventory Management, Carbon Emissions Tracking, and real-time data analytics enhance sustainability and efficiency. With customizable dashboards, mobile access, and cloud-based infrastructure, VoyageX AI ensures operational excellence, safety, and cost savings for maritime businesses.
  • 46
    Meii AI

Meii AI is a global leader in AI solutions, offering industry-trained Large Language Models that can be tuned with company-specific data and hosted privately or in your cloud. Our RAG (Retrieval-Augmented Generation) based AI approach uses an embedding model and retrieved context (semantic search) while processing a conversational query to curate insightful responses specific to an enterprise. Blending our unique skills with the decade of experience we have gained in data analytics solutions, we combine LLMs and ML algorithms to offer great solutions for mid-level enterprises. We are engineering a future that allows people, businesses, and governments to seamlessly leverage technology. With a vision to make AI accessible to everyone on the planet, our team is constantly breaking down the barriers between machines and humans.
  • 47
    Mixedbread

    Mixedbread is a fully-managed AI search engine that allows users to build production-ready AI search and Retrieval-Augmented Generation (RAG) applications. It offers a complete AI search stack, including vector stores, embedding and reranking models, and document parsing. Users can transform raw data into intelligent search experiences that power AI agents, chatbots, and knowledge systems without the complexity. It integrates with tools like Google Drive, SharePoint, Notion, and Slack. Its vector stores enable users to build production search engines in minutes, supporting over 100 languages. Mixedbread's embedding and reranking models have achieved over 50 million downloads and outperform OpenAI in semantic search and RAG tasks while remaining open-source and cost-effective. The document parser extracts text, tables, and layouts from PDFs, images, and complex documents, providing clean, AI-ready content without manual preprocessing.
  • 48
    Yardi Energy Solution
    Meet all your energy management and sustainability needs. This comprehensive solution for utility billing, utility expense management, ENERGY STAR® benchmarking and full-service submeter installation and maintenance is built into Yardi Voyager and backed by 24/7 live customer service.
  • 49
    Llama 3.1
The open source AI model you can fine-tune, distill, and deploy anywhere. Our latest instruction-tuned model is available in 8B, 70B, and 405B versions. Using our open ecosystem, build faster with a selection of differentiated product offerings to support your use cases. Choose from real-time inference or batch inference services. Download model weights to further optimize cost per token. Adapt for your application, improve with synthetic data, and deploy on-prem or in the cloud. Use Llama system components and extend the model using zero-shot tool use and RAG to build agentic behaviors. Leverage the 405B model to generate high-quality data for improving specialized models for specific use cases.
  • 50
    Voyager SDK

    Axelera AI

    The Voyager SDK is purpose‑built for Computer Vision at the Edge and enables customers to solve their AI business requirements by effortlessly deploying AI on edge devices. Customers use the SDK to bring their applications into the Metis AI platform and run them on Axelera’s powerful Metis AI Processing Unit (AIPU), whether the application is developed using proprietary or standard industry models. The Voyager SDK offers end‑to‑end integration and is API‑compatible with de facto industry standards, unleashing the potential of the Metis AIPU, delivering high‑performance AI that can be deployed quickly and easily. Developers describe their end‑to‑end application pipelines in a simple, human‑readable, high‑level declarative language, YAML, with one or more neural networks and corresponding pre‑ & post‑processing tasks, including sophisticated image processing operations.