15 Integrations with NVIDIA DGX Cloud

Below is a list of NVIDIA DGX Cloud integrations and software that integrates with NVIDIA DGX Cloud. Compare the best NVIDIA DGX Cloud integrations by features, ratings, user reviews, and pricing. Here are the current NVIDIA DGX Cloud integrations in 2026:

  • 1
    Google Cloud Platform
    Google Cloud is a cloud-based service that allows you to create anything from simple websites to complex applications for businesses of all sizes. New customers get $300 in free credits to run, test, and deploy workloads. All customers can use 25+ products for free, up to monthly usage limits. Use Google's core infrastructure, data analytics & machine learning. Secure and fully featured for all enterprises. Tap into big data to find answers faster and build better products. Grow from prototype to production to planet-scale, without having to think about capacity, reliability or performance. From virtual machines with proven price/performance advantages to a fully managed app development platform. Scalable, resilient, high performance object storage and databases for your applications. State-of-the-art software-defined networking products on Google’s private fiber network. Fully managed data warehousing, batch and stream processing, data exploration, Hadoop/Spark, and messaging.
    Starting Price: Free ($300 in free credits)
  • 2
    Amazon Web Services (AWS)
    Amazon Web Services (AWS) is the world’s most comprehensive cloud platform, trusted by millions of customers across industries. From startups to global enterprises and government agencies, AWS provides on-demand solutions for compute, storage, networking, AI, analytics, and more. The platform empowers organizations to innovate faster, reduce costs, and scale globally with unmatched flexibility and reliability. With services like Amazon EC2 for compute, Amazon S3 for storage, SageMaker for AI/ML, and CloudFront for content delivery, AWS covers nearly every business and technical need. Its global infrastructure spans 120 availability zones across 38 regions, ensuring resilience, compliance, and security. Backed by the largest community of customers, partners, and developers, AWS continues to lead the cloud industry in innovation and operational expertise.
  • 3
    Microsoft Azure
    Microsoft Azure is a cloud computing platform for rapid, secure application development, testing, and management. Turn ideas into solutions with more than 100 services to build, deploy, and manage applications in the cloud, on-premises, and at the edge, using the tools and frameworks of your choice. Continuous innovation from Microsoft supports your development today and your product visions for tomorrow. With a commitment to open source and support for all languages and frameworks, build how you want and deploy where you want, whether on-premises, in the cloud, or at the edge. Integrate and manage your environments with services designed for hybrid cloud. Get security from the ground up, backed by a team of experts, and proactive compliance trusted by enterprises, governments, and startups. The cloud you can trust, with the numbers to prove it.
  • 4
    NVIDIA Isaac GR00T
    NVIDIA Isaac GR00T (Generalist Robot 00 Technology) is a research-driven platform for developing general-purpose humanoid robot foundation models and data pipelines. It includes models like Isaac GR00T-N, along with synthetic motion blueprints such as GR00T-Mimic for augmenting demonstrations and GR00T-Dreams for generating novel synthetic trajectories, to accelerate humanoid robotics development. Recently, the open source Isaac GR00T N1 foundation model debuted, featuring a dual-system cognitive architecture: a fast-reacting “System 1” action model and a deliberative, language-enabled “System 2” reasoning model. The updated GR00T N1.5 introduces enhancements such as improved vision-language grounding, better language command following, few-shot adaptability, and new robot embodiment support. Together with tools like Isaac Sim, Isaac Lab, and Omniverse, GR00T empowers developers to train, simulate, post-train, and deploy adaptable humanoid agents using both real and synthetic data.
    Starting Price: Free
  • 5
    NVIDIA Isaac
    NVIDIA Isaac is an AI robot development platform that comprises NVIDIA CUDA-accelerated libraries, application frameworks, and AI models to expedite the creation of AI robots, including autonomous mobile robots, robotic arms, and humanoids. The platform features NVIDIA Isaac ROS, a collection of CUDA-accelerated computing packages and AI models built on the open source ROS 2 framework, designed to streamline the development of advanced AI robotics applications. Isaac Manipulator, built on Isaac ROS, enables the development of AI-powered robotic arms that can seamlessly perceive, understand, and interact with their environments. Isaac Perceptor facilitates the rapid development of advanced AMRs capable of operating in unstructured environments like warehouses or factories. For humanoid robotics, NVIDIA Isaac GR00T serves as a research initiative and development platform for general-purpose robot foundation models and data pipelines.
  • 6
    NVIDIA Base Command
    NVIDIA Base Command™ is a software service for enterprise-class AI training that enables businesses and their data scientists to accelerate AI development. Part of the NVIDIA DGX™ platform, Base Command Platform provides centralized, hybrid control of AI training projects. It works with NVIDIA DGX Cloud and NVIDIA DGX SuperPOD. Base Command Platform, in combination with NVIDIA-accelerated AI infrastructure, provides a cloud-hosted solution for AI development, so users can avoid the overhead and pitfalls of deploying and running a do-it-yourself platform. Base Command Platform efficiently configures and manages AI workloads, delivers integrated dataset management, and executes workloads on right-sized resources ranging from a single GPU to large-scale, multi-node clusters in the cloud or on-premises. Because NVIDIA’s own engineers and researchers rely on it every day, the platform receives continuous software enhancements.
  • 7
    NVIDIA AI Enterprise
    The software layer of the NVIDIA AI platform, NVIDIA AI Enterprise accelerates the data science pipeline and streamlines development and deployment of production AI including generative AI, computer vision, speech AI and more. With over 50 frameworks, pretrained models and development tools, NVIDIA AI Enterprise is designed to accelerate enterprises to the leading edge of AI, while also simplifying AI to make it accessible to every enterprise. The adoption of artificial intelligence and machine learning has gone mainstream, and is core to nearly every company’s competitive strategy. One of the toughest challenges for enterprises is the struggle with siloed infrastructure across the cloud and on-premises data centers. AI requires their environments to be managed as a common platform, instead of islands of compute.
  • 8
    NVIDIA AI Foundations
    Impacting virtually every industry, generative AI unlocks a new frontier of opportunities for knowledge and creative workers to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, cutting-edge frameworks, optimized inference engines, and APIs that bring intelligence to your enterprise applications. NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud, the AI supercomputer. Use cases span marketing copy, storyline creation, and global translation across many languages, as well as summarization for news, email, meeting minutes, and information synthesis.
  • 9
    NVIDIA Picasso
    NVIDIA Picasso is a cloud service for building and deploying generative AI–powered image, video, and 3D applications. Enterprises, software creators, and service providers can run inference on their models, train NVIDIA Edify foundation models on proprietary data, or start from models pre-trained with NVIDIA’s premier partners to generate image, video, and 3D content from text prompts. The Picasso service is fully optimized for GPUs and streamlines training, optimization, and inference on NVIDIA DGX Cloud. An expert denoising network generates photorealistic 4K images; temporal layers and a novel video denoiser generate high-fidelity videos with temporal consistency; and a novel optimization framework produces 3D objects and meshes with high-quality geometry.
  • 10
    Fugatto
    Using text and audio as inputs, a new generative AI model from NVIDIA can create any combination of music, voices, and sounds. A team of generative AI researchers created a Swiss Army knife for sound, one that allows users to control the audio output simply using text. While some AI models can compose a song or modify a voice, none have the dexterity of the new offering. Called Fugatto, it generates or transforms any mix of music, voices, and sounds described with prompts using any combination of text and audio files. For example, it can create a music snippet based on a text prompt, remove or add instruments from an existing song, change the accent or emotion in a voice, and even let people produce sounds never heard before. Supporting numerous audio generation and transformation tasks, Fugatto is the first foundational generative AI model that showcases emergent properties.
  • 11
    NVIDIA Base Command Manager
    NVIDIA Base Command Manager offers fast deployment and end-to-end management for heterogeneous AI and high-performance computing clusters at the edge, in the data center, and in multi- and hybrid-cloud environments. It automates the provisioning and administration of clusters ranging in size from a couple of nodes to hundreds of thousands, supports NVIDIA GPU-accelerated and other systems, and integrates with Kubernetes for workload orchestration. The platform also offers tools for infrastructure monitoring, workload management, and resource allocation. Base Command Manager is optimized for accelerated computing environments, making it suitable for diverse HPC and AI workloads. It is available with NVIDIA DGX systems and as part of the NVIDIA AI Enterprise software suite. High-performance Linux clusters can be quickly built and managed with Base Command Manager, supporting HPC, machine learning, and analytics applications.
  • 12
    Azure Marketplace
    Azure Marketplace is a comprehensive online store that provides access to thousands of certified, ready-to-use software applications, services, and solutions from Microsoft and third-party vendors. It enables businesses to discover, purchase, and deploy software directly within the Azure cloud environment. The marketplace offers a wide range of products, including virtual machine images, AI and machine learning models, developer tools, security solutions, and industry-specific applications. With flexible pricing options like pay-as-you-go, free trials, and subscription models, Azure Marketplace simplifies the procurement process and centralizes billing through a single Azure invoice. It supports seamless integration with Azure services, enabling organizations to enhance their cloud infrastructure, streamline workflows, and accelerate digital transformation initiatives.
  • 13
    NVIDIA Llama Nemotron
    NVIDIA Llama Nemotron is a family of advanced language models optimized for reasoning and a diverse set of agentic AI tasks. These models excel in graduate-level scientific reasoning, advanced mathematics, coding, instruction following, and tool calling. Designed for deployment across various platforms, from data centers to PCs, they offer the flexibility to toggle reasoning capabilities on or off, reducing inference costs when deep reasoning isn't required. The Llama Nemotron family includes models tailored for different deployment needs. Built upon Llama models and enhanced by NVIDIA through post-training, these models demonstrate improved accuracy, up to 20% over the base models, and optimized inference speeds, up to five times those of other leading open reasoning models. This efficiency enables handling more complex reasoning tasks, enhances decision-making capabilities, and reduces operational costs for enterprises.
  • 14
    NVIDIA DGX Cloud Serverless Inference
    NVIDIA DGX Cloud Serverless Inference is a high-performance, serverless AI inference solution that accelerates AI innovation with auto-scaling, cost-efficient GPU utilization, multi-cloud flexibility, and seamless scalability. With NVIDIA DGX Cloud Serverless Inference, you can scale down to zero instances during periods of inactivity to optimize resource utilization and reduce costs. There's no extra cost for cold-boot start times, and the system is optimized to minimize them. NVIDIA DGX Cloud Serverless Inference is powered by NVIDIA Cloud Functions (NVCF), which offers robust observability features. It allows you to integrate your preferred monitoring tools, such as Splunk, for comprehensive insights into your AI workloads. NVCF offers flexible deployment options for NIM microservices while allowing you to bring your own containers, models, and Helm charts.
  • 15
    NVIDIA Cloud Functions
    NVIDIA Cloud Functions (NVCF) is a serverless API designed to deploy and manage AI workloads on GPUs, providing security, scalability, and reliability. It supports HTTP polling, HTTP streaming, and gRPC protocols for accessing workloads. Cloud Functions is primarily suited for shorter-running, preemptable workloads such as inferencing and fine-tuning. It supports two function types, "Container" and "Helm Chart", allowing users to define functions based on their specific requirements. Workloads are ephemeral and preemptable, so users should ensure they save their work appropriately. Models, containers, Helm charts, and additional resources are hosted in and pulled from the NGC Private Registry. To get started, users can follow the functions quickstart guide, which provides an end-to-end workflow for creating and deploying a container-based function using the fastapi_echo_sample container.
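    The HTTP polling access pattern described above can be sketched in Python. This is a minimal, hedged illustration only: the invocation path, header names, and the placeholder function ID and API key below are assumptions for the sake of the example, not a definitive client; consult the NVCF documentation for the actual API.

    ```python
    # Hypothetical sketch of invoking an NVCF function over HTTP polling.
    # The endpoint path, header names, function ID, and API key are
    # illustrative assumptions; check the NVCF docs for the real API.
    import json
    import urllib.request

    NVCF_BASE = "https://api.nvcf.nvidia.com"  # assumed base URL


    def build_invoke_request(function_id: str, api_key: str, payload: dict):
        """Build (but do not send) a POST request invoking one function."""
        url = f"{NVCF_BASE}/v2/nvcf/pexec/functions/{function_id}"
        return urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
            method="POST",
        )


    # Example: an echo payload for a container-based function such as the
    # quickstart's fastapi_echo_sample (the function ID is a placeholder).
    req = build_invoke_request(
        "00000000-0000-0000-0000-000000000000",
        "nvapi-PLACEHOLDER",
        {"message": "hello"},
    )
    print(req.full_url)
    ```

    Sending the request with `urllib.request.urlopen(req)` would then poll until the ephemeral workload returns a response; because NVCF workloads are preemptable, a real client should also handle retry and timeout.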