2 Integrations with NVIDIA NetQ
View a list of NVIDIA NetQ integrations below, and compare features, ratings, user reviews, and pricing of software that integrates with NVIDIA NetQ. Here are the current NVIDIA NetQ integrations in 2026:
1. SONiC (NVIDIA Networking)
NVIDIA offers pure SONiC, a community-developed, open-source, Linux-based network operating system that has been hardened in the data centers of some of the largest cloud service providers. Pure SONiC through NVIDIA removes distribution limitations and lets enterprises take full advantage of the benefits of open networking, backed by the NVIDIA expertise, experience, training, documentation, professional services, and support that best guarantee success. NVIDIA provides support for Free Range Routing (FRR), SONiC, the Switch Abstraction Interface (SAI), systems, and application-specific integrated circuits (ASICs), all in one place. Unlike a distribution, pure SONiC does not tie you to a single vendor for roadmap additions, bug fixes, or security patches. With SONiC, you can achieve unified management with your existing management tools across the data center.
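To give a flavor of the "unified management with existing tools" point: SONiC keeps switch state in a Redis-backed configuration database, typically seeded from `/etc/sonic/config_db.json`, which standard automation tooling can render and push. A minimal, illustrative `PORT` entry is sketched below; the port name, alias, lanes, and speed values are hypothetical and will differ per platform:

```json
{
  "PORT": {
    "Ethernet0": {
      "alias": "etp1",
      "lanes": "0,1,2,3",
      "speed": "100000",
      "admin_status": "up"
    }
  }
}
```

Because the whole switch configuration is one declarative JSON document, it slots naturally into the same templating and review workflows used for server configuration.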
2. NVIDIA Magnum IO (NVIDIA)
NVIDIA Magnum IO is the architecture for parallel, intelligent data center I/O. It maximizes storage, network, and multi-node, multi-GPU communications for the world's most important applications, including large language models, recommender systems, imaging, simulation, and scientific research. Magnum IO combines storage I/O, network I/O, in-network compute, and I/O management to simplify and accelerate data movement, access, and management in multi-GPU, multi-node systems. It supports NVIDIA CUDA-X libraries and makes the best use of a range of NVIDIA GPU and networking hardware topologies to achieve high throughput and low latency. In multi-GPU, multi-node systems, slow single-threaded CPU performance sits in the critical path of data access from local or remote storage devices. With storage I/O acceleration, the GPU bypasses the CPU and system memory and accesses remote storage directly via 8x 200 Gb/s NICs, achieving up to 1.6 Tb/s of raw storage bandwidth.
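As a back-of-the-envelope check of the bandwidth figure quoted above: 8 NICs at 200 Gb/s each aggregate to 1.6 Tb/s of raw line rate, which is 200 GB/s after converting bits to bytes. The NIC count and per-NIC rate come from the text; the helper function below is our own illustrative naming:

```python
def aggregate_bandwidth_gbps(nic_count: int, per_nic_gbps: float) -> float:
    """Total raw line rate across all NICs, in gigabits per second."""
    return nic_count * per_nic_gbps

# 8 NICs x 200 Gb/s each, as quoted for the storage I/O acceleration path.
total_gbps = aggregate_bandwidth_gbps(8, 200)

# Convert: 1000 Gb = 1 Tb; 8 bits = 1 byte.
print(f"{total_gbps / 1000} Tb/s raw, {total_gbps / 8} GB/s")
# prints "1.6 Tb/s raw, 200.0 GB/s"
```

Note that this is raw line rate; delivered storage throughput depends on protocol overhead and the storage targets on the far end.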