11 Integrations with NVIDIA NeMo

View a list of NVIDIA NeMo integrations and software that integrates with NVIDIA NeMo below. Compare the best NVIDIA NeMo integrations as well as features, ratings, user reviews, and pricing of software that integrates with NVIDIA NeMo. Here are the current NVIDIA NeMo integrations in 2026:

  • 1
    NVIDIA FLARE
    NVIDIA FLARE (Federated Learning Application Runtime Environment) is an open source, extensible SDK designed to facilitate federated learning across diverse industries, including healthcare, finance, and automotive. It enables secure, privacy-preserving AI model training by allowing multiple parties to collaboratively train models without sharing raw data. FLARE supports machine learning frameworks such as PyTorch, TensorFlow, RAPIDS, and XGBoost, making it adaptable to existing workflows. Its componentized architecture allows for customization and scalability, supporting both horizontal and vertical federated learning. It is suitable for applications requiring data privacy and regulatory compliance, such as medical imaging and financial analytics, and is available for download via the NVIDIA NVFlare GitHub repository and PyPI.
    Starting Price: Free
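    The collaboration pattern FLARE orchestrates can be illustrated in a few lines. Below is a toy sketch of federated averaging (FedAvg) in plain Python, not the FLARE SDK API; the gradients and site sizes are made up for illustration, and a real deployment would run actual training code under FLARE's server and client components.

    ```python
    # Toy sketch of federated averaging (FedAvg), the collaboration pattern
    # that federated-learning runtimes like FLARE orchestrate: each site
    # trains locally, and only model weights (never raw data) are shared.
    # This is NOT the FLARE API; values below are made up for illustration.

    def local_update(weights, site_gradient, lr=0.1):
        """One simulated local training step at a participating site."""
        return [w - lr * g for w, g in zip(weights, site_gradient)]

    def fed_avg(site_weights, site_sizes):
        """Server-side average of site weights, weighted by local dataset size."""
        total = sum(site_sizes)
        n_params = len(site_weights[0])
        return [
            sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
            for i in range(n_params)
        ]

    # Two sites start from the same global model and train locally...
    global_model = [1.0, 2.0]
    site_a = local_update(global_model, site_gradient=[1.0, -1.0])
    site_b = local_update(global_model, site_gradient=[-1.0, 1.0])

    # ...then the server aggregates, weighting by each site's dataset size.
    new_global = fed_avg([site_a, site_b], site_sizes=[100, 300])
    print([round(w, 6) for w in new_global])  # [1.05, 1.95]
    ```

    In FLARE itself, the local step would be a real training loop in a framework like PyTorch, and the aggregation would run inside a FLARE server workflow.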
  • 2
    NVIDIA Blueprints
    NVIDIA Blueprints are reference workflows for agentic and generative AI use cases. Enterprises can build and operationalize custom AI applications, creating data-driven AI flywheels, using Blueprints along with NVIDIA AI and Omniverse libraries, SDKs, and microservices. Blueprints also include partner microservices, reference code, customization documentation, and a Helm chart for deployment at scale. With NVIDIA Blueprints, developers benefit from a unified experience across the NVIDIA stack, from cloud and data centers to NVIDIA RTX AI PCs and workstations. Use NVIDIA Blueprints to create AI agents that use sophisticated reasoning and iterative planning to solve complex problems. Check out new NVIDIA Blueprints, which equip millions of enterprise developers with reference workflows for building and deploying generative AI applications. Connect AI applications to enterprise data using industry-leading embedding and reranking models for information retrieval at scale.
  • 3
    NVIDIA NIM
    Explore the latest optimized AI models, connect AI agents to data with NVIDIA NeMo, and deploy anywhere with NVIDIA NIM microservices. NVIDIA NIM is a set of easy-to-use inference microservices that facilitate the deployment of foundation models across any cloud or data center, ensuring data security and streamlined AI integration. NVIDIA AI also provides access to the Deep Learning Institute (DLI), which offers technical training for in-demand skills, hands-on experience, and expert knowledge in AI, data science, and accelerated computing.
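    Because NIM microservices expose an OpenAI-compatible HTTP API, calling a deployed model is a standard chat-completions request. The sketch below only builds the request body; the endpoint URL and model id are placeholder assumptions for a hypothetical local deployment, so check them against your own NIM instance.

    ```python
    # Build (but do not send) an OpenAI-style chat-completions request for a
    # NIM microservice. The URL and model id are hypothetical placeholders.
    import json

    NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local NIM

    payload = {
        "model": "meta/llama-3.1-8b-instruct",  # example model id, not guaranteed
        "messages": [
            {"role": "user", "content": "Summarize what an inference microservice does."}
        ],
        "max_tokens": 64,
        "temperature": 0.2,
    }

    body = json.dumps(payload)
    print(json.loads(body)["model"])  # meta/llama-3.1-8b-instruct
    ```

    A real call would POST `body` to `NIM_URL` with an HTTP client such as `urllib.request` or `requests`.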
  • 4
    NVIDIA AI Foundations
    Impacting virtually every industry, generative AI unlocks a new frontier of opportunities for knowledge and creative workers to solve today’s most important challenges. NVIDIA powers generative AI through a suite of cloud services, pre-trained foundation models, cutting-edge frameworks, optimized inference engines, and APIs that bring intelligence to your enterprise applications. NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with the NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud, the AI supercomputer. Typical use cases include marketing copy, storyline creation, global translation across many languages, and synthesis of news, email, and meeting minutes.
  • 5
    Accenture AI Refinery
    Accenture's AI Refinery is a comprehensive platform designed to help organizations rapidly build and deploy AI agents to enhance their workforce and address industry-specific challenges. The platform offers a collection of industry agent solutions, each codified with business workflows and industry expertise, enabling companies to customize these agents with their own data. This approach reduces the time to build and derive value from AI agents from months or weeks to days. AI Refinery integrates digital twins, robotics, and domain-specific models to optimize manufacturing, logistics, and quality through advanced AI, simulations, and collaboration in Omniverse, enabling autonomy, efficiency, and cost reduction across operations and engineering processes. The platform is built with NVIDIA AI Enterprise software, including NVIDIA NeMo, NVIDIA NIM microservices, and NVIDIA AI Blueprints, such as video search, summarization, and digital human.
  • 6
    Globant Enterprise AI
    Globant Enterprise AI is an AI Accelerator Platform designed to effortlessly create customized AI agents and assistants tailored to your business requirements. It enables the definition of various types of artificial intelligence assistants that can interact with documents, APIs, databases, or directly with large language models. These assistants can be integrated using the platform's REST API, regardless of the programming language employed. The platform seamlessly integrates with existing technology stacks, prioritizing security, privacy, and scalability. It incorporates NVIDIA's robust frameworks and libraries for managing LLMs, enhancing its capabilities. Additionally, the platform offers advanced security and privacy features, including integrated access control models and the inclusion of NVIDIA NeMo Guardrails, underscoring its commitment to secure and responsible AI application development.
  • 7
    AI-Q NVIDIA Blueprint
    Create AI agents that reason, plan, reflect, and refine to produce high-quality reports based on source materials of your choice. An AI research agent, informed by many data sources, can synthesize hours of research in minutes. The AI-Q NVIDIA Blueprint enables developers to build AI agents that use reasoning and connect to many data sources and tools to distill in-depth source materials with efficiency and precision. Using AI-Q, agents summarize large data sets, generating tokens 5x faster and ingesting petabyte-scale data 15x faster with better semantic accuracy. The blueprint provides multimodal PDF data extraction and retrieval with NVIDIA NeMo Retriever, 15x faster ingestion of enterprise data, 3x lower retrieval latency, multilingual and cross-lingual retrieval, reranking to further improve accuracy, and GPU-accelerated index creation and search.
  • 8
    NVIDIA AI Data Platform
    NVIDIA's AI Data Platform is a comprehensive solution designed to accelerate enterprise storage and optimize AI workloads, facilitating the development of agentic AI applications. It integrates NVIDIA Blackwell GPUs, BlueField-3 DPUs, Spectrum-X networking, and NVIDIA AI Enterprise software to enhance performance and accuracy in AI workflows. The platform optimizes workload distribution across GPUs and nodes, leveraging intelligent routing, load balancing, and advanced caching to enable scalable, complex AI processes. This infrastructure supports the deployment and scaling of AI agents across hybrid data centers, transforming raw data into actionable insights in real time. With the platform, enterprises can process and extract insights from structured and unstructured data across all available sources, including text, PDFs, images, and video.
  • 9
    NVIDIA Llama Nemotron
    NVIDIA Llama Nemotron is a family of advanced language models optimized for reasoning and a diverse set of agentic AI tasks. These models excel in graduate-level scientific reasoning, advanced mathematics, coding, instruction following, and tool calling. Designed for deployment across various platforms, from data centers to PCs, they offer the flexibility to toggle reasoning capabilities on or off, reducing inference costs when deep reasoning isn't required. The Llama Nemotron family includes models tailored for different deployment needs. Built upon Llama models and enhanced by NVIDIA through post-training, these models demonstrate improved accuracy, up to 20% over the base models, and optimized inference speeds, up to five times those of other leading open reasoning models. This efficiency enables handling more complex reasoning tasks, enhances decision-making, and reduces operational costs for enterprises.
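    The on/off reasoning toggle described above is, per NVIDIA's model cards for several Llama Nemotron variants, driven through the system prompt. The exact prompt string and the model id in this sketch are assumptions to verify against the card of the specific model you deploy.

    ```python
    # Sketch of toggling Llama Nemotron reasoning via the system prompt.
    # The "detailed thinking on/off" string and the model id are assumptions
    # taken from public model cards; verify them for your model version.

    def build_request(question, reasoning=True):
        mode = "detailed thinking on" if reasoning else "detailed thinking off"
        return {
            "model": "nvidia/llama-3.1-nemotron-70b-instruct",  # example id
            "messages": [
                {"role": "system", "content": mode},
                {"role": "user", "content": question},
            ],
        }

    # Cheap path: skip deep reasoning for a trivial question.
    fast = build_request("What is 2 + 2?", reasoning=False)
    # Expensive path: enable reasoning for a multi-step problem.
    deep = build_request("Plan a three-step data migration.", reasoning=True)
    print(fast["messages"][0]["content"])  # detailed thinking off
    print(deep["messages"][0]["content"])  # detailed thinking on
    ```

    Keeping reasoning off for simple queries is what lets the same deployment serve both low-latency and deep-reasoning traffic.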
  • 10
    Linker Vision
    Linker VisionAI Platform is a comprehensive, end-to-end solution for vision AI, encompassing simulation, training, and deployment to empower smart cities and enterprises. It comprises three core components: Mirra, for synthetic data generation using NVIDIA Omniverse and NVIDIA Cosmos; DataVerse, for data curation, annotation, and model training with NVIDIA NeMo and NVIDIA TAO; and Observ, for large-scale Vision Language Model (VLM) deployment with NVIDIA NIM. This integrated approach allows for a seamless transition from data simulation to real-world application, ensuring that AI models are robust and adaptable. The platform supports a range of applications, including traffic and transportation management, worker safety, disaster response, and more, by leveraging urban camera networks and AI to drive responsive decisions.
  • 11
    NVIDIA NeMo Retriever
    NVIDIA NeMo Retriever is a collection of microservices for building multimodal extraction, reranking, and embedding pipelines with high accuracy and maximum data privacy. It delivers quick, context-aware responses for AI applications like advanced retrieval-augmented generation (RAG) and agentic AI workflows. As part of the NVIDIA NeMo platform and built with NVIDIA NIM, NeMo Retriever allows developers to flexibly leverage these microservices to connect AI applications to large enterprise datasets wherever they reside and fine-tune them to align with specific use cases. NeMo Retriever provides components for building data extraction and information retrieval pipelines. The pipeline extracts structured and unstructured data (e.g., text, charts, tables), converts it to text, and filters out duplicates. A NeMo Retriever embedding NIM converts the chunks into embeddings and stores them in a vector database, accelerated by NVIDIA cuVS, for enhanced performance and speed of indexing.
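    The extract, embed, and retrieve flow described above can be illustrated end to end with stand-ins: the tiny vocabulary-count embedding below substitutes for a NeMo Retriever embedding NIM, and a plain list substitutes for the cuVS-accelerated vector index. It shows the shape of the pipeline, not NeMo Retriever's API.

    ```python
    # Toy extract -> embed -> retrieve pipeline. The vocabulary-count
    # "embedding" stands in for a NeMo Retriever embedding NIM, and the
    # list `index` stands in for a cuVS-accelerated vector database.
    import math

    VOCAB = ["federated", "learning", "retrieval", "embedding", "gpu", "privacy"]

    def embed(text):
        """Map text to a unit vector of vocabulary-word counts."""
        words = [w.strip(".,?") for w in text.lower().split()]
        vec = [float(words.count(w)) for w in VOCAB]
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def cosine(a, b):
        """Cosine similarity of two already-normalized vectors."""
        return sum(x * y for x, y in zip(a, b))

    # "Extracted" document chunks, embedded and stored in the toy index.
    docs = [
        "Federated learning trains models without sharing raw data.",
        "Embedding models map text chunks into vectors for retrieval.",
        "GPU acceleration speeds up vector index creation and search.",
    ]
    index = [(doc, embed(doc)) for doc in docs]

    # Retrieval: embed the query and rank chunks by similarity.
    query = embed("How does retrieval with embedding vectors work?")
    best = max(index, key=lambda item: cosine(query, item[1]))
    print(best[0])  # Embedding models map text chunks into vectors for retrieval.
    ```

    In the real pipeline, an extraction step would produce the chunks from PDFs and other documents, an embedding NIM would replace `embed`, and a cuVS-backed vector database would replace the linear scan.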