Alternatives to NVIDIA DRIVE

Compare NVIDIA DRIVE alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to NVIDIA DRIVE in 2026. Compare features, ratings, user reviews, pricing, and more from NVIDIA DRIVE competitors and alternatives in order to make an informed decision for your business.

  • 1
    Mobileye
    Mobileye’s offerings range from a variety of ADAS solutions to self-driving systems for autonomous public transport and goods delivery, all the way to consumer AVs. By developing everything from the silicon through to the self-driving system in-house, numerous efficiencies and synergies are unlocked, allowing Mobileye to reach AVs at scale. From the beginning, Mobileye has developed hardware and software in-house, paving the way for highly efficient hardware, software, and algorithmic stacks at a superior cost-performance ratio. Everything Mobileye develops is safe by design, with a distinct strategy so that the technology can reach the mass market.
  • 2
    NVIDIA DRIVE Map
    NVIDIA DRIVE® Map is a multi-modal mapping platform designed to enable the highest levels of autonomy while improving safety. It combines the accuracy of ground truth mapping with the freshness and scale of AI-based fleet-sourced mapping. With four localization layers—camera, lidar, radar, and GNSS—DRIVE Map provides the redundancy and versatility required by the most advanced AI drivers. Designed for the highest level of accuracy, the ground truth map engine creates DRIVE Maps using rich sensors—cameras, radars, lidars, and differential GNSS/IMU—on NVIDIA DRIVE Hyperion data collection vehicles. It achieves better than 5 cm accuracy for higher levels of autonomy (L3/L4) in selected environments, such as highways and urban areas. DRIVE Map is designed for near real-time operation and global scalability. Based on both ground truth and fleet-sourced data, it represents the collective memory of millions of vehicles.
  • 3
    DriveMod
    DriveMod is Cyngn’s full-stack autonomous driving solution. It integrates with off-the-shelf sensing and computing hardware to enable industrial vehicles to perceive the world, make decisions, and take action. DriveMod has been engineered to integrate smoothly into your existing workflows, enabling you to easily program vehicle routes, loops, and missions. In short, if a human driver can do it, so can DriveMod. Safely bring autonomous capabilities to any commercially available vehicle through a retrofit. DriveMod’s flexibility ensures heterogeneous fleets operate smoothly regardless of vehicle type or manufacturer. Our AI software combines with leading sensors and computing hardware to create capabilities that far exceed those of human drivers. DriveMod can detect thousands of objects, propose thousands of candidate paths, and then navigate the optimal route — all in fractions of a second.
  • 4
    Kodiak Driver
    Kodiak AI’s technology centers on the Kodiak Driver, a unified autonomous driving platform that combines advanced AI-powered software with modular, vehicle-agnostic hardware to enable scalable, real-world autonomy for trucks and ground vehicles. Designed to integrate seamlessly across different vehicle types and operating conditions, the system uses a suite of sensors, housed in field-swappable SensorPods for full 360° perception, deep-learning based perception models to interpret complex environments, forward planning to anticipate changes in the road ahead, and redundant compute, power, steering, and braking systems engineered for safety and reliability in demanding use cases. It supports deployment in commercial long-haul trucking, industrial logistics, and defense ground vehicles, with connectivity and telematics enabling over-the-air updates, remote fleet management, and Assisted Autonomy capabilities that allow human oversight.
  • 5
    NVIDIA TensorRT
    NVIDIA TensorRT is an ecosystem of APIs for high-performance deep learning inference, encompassing an inference runtime and model optimizations that deliver low latency and high throughput for production applications. Built on the CUDA parallel programming model, TensorRT optimizes neural network models trained on all major frameworks, calibrating them for lower precision with high accuracy, and deploying them across hyperscale data centers, workstations, laptops, and edge devices. It employs techniques such as quantization, layer and tensor fusion, and kernel tuning on all types of NVIDIA GPUs, from edge devices to PCs to data centers. The ecosystem includes TensorRT-LLM, an open source library that accelerates and optimizes inference performance of recent large language models on the NVIDIA AI platform, enabling developers to experiment with new LLMs for high performance and quick customization through a simplified Python API.
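    The lower-precision calibration described above can be illustrated with a toy symmetric INT8 quantizer. This is a plain-Python sketch of the general technique, not TensorRT’s API; the function names and the fixed `amax` calibration value are illustrative assumptions.

    ```python
    def quantize_int8(values, amax):
        """Symmetric INT8 quantization: map [-amax, amax] onto [-127, 127]."""
        scale = amax / 127.0
        return [max(-127, min(127, round(v / scale))) for v in values]

    def dequantize_int8(qvalues, amax):
        """Recover approximate real values from the INT8 codes."""
        scale = amax / 127.0
        return [q * scale for q in qvalues]

    weights = [0.5, -1.0, 0.25, 0.99]
    amax = 1.0  # a real calibrator derives this from representative data
    q = quantize_int8(weights, amax)
    restored = dequantize_int8(q, amax)  # close to the originals, small error
    ```

    In practice, a calibrator chooses `amax` from representative activation data so that the rounding error introduced by the 8-bit codes stays small on real inputs.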
  • 6
    Apollo Autonomous Vehicle Platform
    Various sensors, such as LiDAR, cameras, and radar, collect environmental data surrounding the vehicle. Using sensor fusion technology, perception algorithms determine in real time the type, location, velocity, and orientation of objects on the road. This autonomous perception system is backed by Baidu’s big data and deep learning technologies, a vast collection of real-world labeled driving data, and a large-scale deep-learning platform running on GPU clusters. Simulation provides the ability to virtually drive millions of kilometers daily using an array of real-world traffic and autonomous driving data. Through the simulation service, partners gain access to a large number of autonomous driving scenes to quickly test, validate, and optimize models with comprehensive coverage in a way that is safe and efficient.
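    As a hedged illustration of what sensor fusion buys you, the sketch below fuses two noisy estimates of the same distance by inverse-variance weighting, a textbook technique; it is not Apollo’s actual algorithm, and the sensor numbers are made up.

    ```python
    def fuse(est_a, var_a, est_b, var_b):
        """Fuse two noisy estimates of one quantity by inverse-variance
        weighting; the fused variance is smaller than either input's."""
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
        fused_var = 1.0 / (w_a + w_b)
        return fused, fused_var

    # e.g. lidar reports the obstacle at 10.0 m (low noise) while radar
    # reports 10.4 m (higher noise): the fused estimate leans toward lidar
    pos, var = fuse(10.0, 0.01, 10.4, 0.04)
    ```

    The same weighting idea generalizes to Kalman-filter-style fusion of many sensors over time.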
  • 7
    Waymo
    Waymo is an autonomous driving technology company that develops self-driving vehicles and operates fully driverless transportation services. Originally created as Google’s self-driving car project in 2009, the company later became an independent subsidiary of Alphabet with the goal of making transportation safer, more accessible, and more efficient through autonomous mobility. Its core technology, known as the Waymo Driver, combines artificial intelligence, high-resolution cameras, radar, lidar sensors, and detailed digital maps to allow vehicles to perceive their surroundings and navigate roads without human intervention. It continuously analyzes traffic signals, pedestrians, other vehicles, and road conditions to determine safe driving actions in real time. Before operating in a new area, Waymo vehicles map roads in extreme detail, identifying lane markings, signs, and intersections, and then combine this information with real-time sensor data to maintain precise positioning.
  • 8
    Applied Intuition Vehicle OS
    Applied Intuition Vehicle OS is a scalable, modular platform that enables automakers, commercial fleets, and defense integrators to develop, deploy, and update comprehensive vehicle software, hardware, and AI applications across all domains, from ADAS and infotainment to autonomy and digital services. The on-board SDK provides embedded real-time OS, drivers, middleware, and reference compute architecture for safety-critical and consumer‑facing functions, while the off-board platform supports cloud-based data logging, remote diagnostics, OTA vehicle updates, and digital twin management. Developers work within a unified Workbench environment featuring integrated build and testing tools, CI pipelines, and automated validation workflows. It bridges vehicle intelligence across ecosystems by combining autonomy stacks, simulation suites including vehicle dynamics and sensor simulation, and a vibrant developer toolchain.
  • 9
    Aurora Driver
    Created from industry-leading hardware and software, the Aurora Driver is designed to adapt to a variety of vehicle types and use cases, allowing us to deliver the benefits of self-driving across several industries, including long-haul trucking, local goods delivery, and people movement. The Aurora Driver consists of sensors that perceive the world, software that plans a safe path through it, and the computer that powers and integrates them both with the vehicle. The Aurora Driver was designed to operate any vehicle type, from a sedan to a Class 8 truck. The Aurora Computer is the central hub that connects our hardware and autonomy software and enables the Aurora Driver to seamlessly integrate with every vehicle type. Our custom-designed sensor suite—including FirstLight Lidar, long-range imaging radar, and high-resolution cameras—work together to build a 3D representation of the world, giving the Aurora Driver a 360˚ view of what’s happening around the vehicle in real time.
  • 10
    MORAI
    MORAI offers a digital twin simulation platform that accelerates the development and testing of autonomous vehicles, urban air mobility, and maritime autonomous surface ships. Built with high-definition maps and a powerful physics engine, it bridges the gap between real-world and simulation test environments, providing all key elements for verifying autonomous systems, including autonomous driving, unmanned aerial vehicles, and unmanned ship systems. It provides a variety of sensor models, including cameras, LiDAR, GPS, radar, and Inertial Measurement Units (IMUs). Users can generate complex and diverse test scenarios from real-world data, including log-based scenarios and edge case scenarios. MORAI's cloud simulation allows for safe, cost-effective, and scalable testing, enabling multiple simulations to run concurrently and evaluate different scenarios in parallel.
  • 11
    AutonomouStuff
    The world’s premier automated platform provider. Starting with a customizable R&D vehicle platform can accelerate your work on advanced driver assistance systems (ADAS), advanced algorithm development, and automated driving initiatives — or help take your driverless efforts to the next level. Specify your own R&D vehicle platform step by step, from the vehicle to sensors, software, and storage. When you purchase a platform from AutonomouStuff, we’re part of your team. A knowledgeable and experienced project manager stays in touch with you on a regular basis to keep you current on platform updates and to make sure we’re meeting your needs.
  • 12
    Oxbotica Selenium
    Selenium is our flagship product: a full-stack autonomy system and the product of over 500 person-years of effort. It is an on-vehicle suite of software which, given a drive-by-wire interface and very modest compute hardware, brings full autonomy to a land-based vehicle. Selenium can transform any suitable vehicle platform into an autonomous vehicle, both at prototype volume and at scale. It is a collection of interoperable software modules that allow the vehicle to answer three key questions: Where am I? What’s around me? What do I do next? Selenium spans the technological spectrum, from low-level device drivers through calibration, 4-modal localization, mapping, perception, machine learning, and planning, and its remarkable vertical integration even covers user interface and data export systems. It does not need GPS or HD maps (although these can still be utilized if available).
  • 13
    Qualcomm Snapdragon Ride
    The Qualcomm® Snapdragon Ride™ Platform is one of the automotive industry’s most advanced, scalable, and fully customizable automated driving platforms. It gives automotive suppliers and automakers the flexibility to deploy the safety, convenience, and autonomous driving features in demand today, with the ability to scale in the future. Reliable, extreme, auto-ready performance at low power, with more simplicity and higher automotive safety. And unlike other autonomous driving solutions that require liquid cooling, the Snapdragon Ride Platform is passively or air-cooled. Our comprehensive, customizable platform features multi-ECU aggregation, allowing it to easily scale from active safety to convenience to full self-driving across a wide range of vehicles. In addition to our high-performance, energy-efficient hardware, the new Snapdragon Ride Autonomous Stack combines with the hardware to provide one of the most robust vehicle perception and driving brains available.
  • 14
    PRODRIVER
    PRODRIVER is Embotech’s solution to the problem of motion planning for autonomous or highly automated vehicles. It is an essential component of the autonomous driving software stack, within the so-called ‘decision making’ layer. As a motion planner, PRODRIVER is responsible for generating drivable trajectories or direct actuator commands such as steering, accelerating, and braking, computed from information about the surrounding environment. PRODRIVER does so by continuously making predictions and solving an optimization problem in real time. Its most important inputs are information about the drivable space, the obstacles within it, and a goal (which could be a position or an objective such as making progress along a route). Its outputs can be used directly to control the vehicle or to provide set-points for the vehicle’s low-level controllers to track. The diagram below gives a schematic overview of how PRODRIVER is integrated within a typical autonomous vehicle software stack.
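    The “predict and optimize in real time” loop can be caricatured in one dimension: each cycle, pick the control that minimizes a cost over a short horizon, then re-plan. A minimal sketch, assuming a 1 s horizon, a made-up quadratic cost, and a grid search standing in for a real solver (not PRODRIVER’s actual formulation):

    ```python
    def plan_accel(speed, goal_speed, accel_limit=3.0, step=0.5):
        """Pick the acceleration (m/s^2) whose one-second rollout minimizes
        a cost combining goal-speed tracking and comfort (penalizing |a|)."""
        best_a, best_cost = 0.0, float("inf")
        a = -accel_limit
        while a <= accel_limit + 1e-9:
            next_speed = speed + a            # simple 1 s rollout
            cost = (next_speed - goal_speed) ** 2 + 0.1 * a * a
            if cost < best_cost:
                best_a, best_cost = a, cost
            a += step
        return best_a

    # below the goal speed the planner accelerates; above it, it brakes
    cmd_slow = plan_accel(10.0, 13.0)   # positive acceleration
    cmd_fast = plan_accel(15.0, 13.0)   # negative (braking)
    ```

    A production planner replaces the grid search with a constrained optimizer over full trajectories, but the receding-horizon structure is the same.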
  • 15
    CUDA
    CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs. In GPU-accelerated applications, the sequential part of the workload runs on the CPU – which is optimized for single-threaded performance – while the compute intensive portion of the application runs on thousands of GPU cores in parallel. When using CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB and express parallelism through extensions in the form of a few basic keywords. The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, development tools and the CUDA runtime.
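    The CPU/GPU split described above hinges on CUDA’s thread-indexing model. The sketch below emulates a 1-D kernel launch sequentially in plain Python (on a GPU every per-thread call runs in parallel on its own core); `launch` and `saxpy` are illustrative names, not CUDA APIs.

    ```python
    def launch(kernel, grid_dim, block_dim, *args):
        """Emulate a 1-D CUDA launch sequentially: iterate over every
        (blockIdx, threadIdx) pair that the GPU would run concurrently."""
        for block_idx in range(grid_dim):
            for thread_idx in range(block_dim):
                kernel(block_idx, block_dim, thread_idx, *args)

    def saxpy(block_idx, block_dim, thread_idx, a, x, y, out):
        """out[i] = a*x[i] + y[i], one element per 'thread', using CUDA's
        classic global index i = blockIdx.x * blockDim.x + threadIdx.x."""
        i = block_idx * block_dim + thread_idx
        if i < len(x):            # bounds guard: grid may exceed data size
            out[i] = a * x[i] + y[i]

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [10.0] * 5
    out = [0.0] * 5
    launch(saxpy, 2, 4, 2.0, x, y, out)  # 2 blocks x 4 threads cover 5 items
    ```

    In real CUDA C++ the same kernel body is written once and the hardware supplies `blockIdx`, `blockDim`, and `threadIdx` to thousands of threads at once.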
  • 16
    NVIDIA Isaac
    NVIDIA Isaac is an AI robot development platform that comprises NVIDIA CUDA-accelerated libraries, application frameworks, and AI models to expedite the creation of AI robots, including autonomous mobile robots, robotic arms, and humanoids. The platform features NVIDIA Isaac ROS, a collection of CUDA-accelerated computing packages and AI models built on the open source ROS 2 framework, designed to streamline the development of advanced AI robotics applications. Isaac Manipulator, built on Isaac ROS, enables the development of AI-powered robotic arms that can seamlessly perceive, understand, and interact with their environments. Isaac Perceptor facilitates the rapid development of advanced AMRs capable of operating in unstructured environments like warehouses or factories. For humanoid robotics, NVIDIA Isaac GR00T serves as a research initiative and development platform for general-purpose robot foundation models and data pipelines.
  • 17
    Helm.ai
    We license AI software throughout the L2-L4 autonomous driving stack: perception, intent modeling, path planning, and vehicle control. Highest-accuracy perception and intent prediction lead to safer autonomous driving systems. Unsupervised learning and mathematical modeling, instead of supervised learning, allow learning from huge datasets. Our technologies are up to several orders of magnitude more capital-efficient, enabling much lower development costs. Demonstrations include full-scene vision-based semantic segmentation fused with Lidar SLAM output from Ouster; L2+ autonomous driving across highways 280, 92, and 101 with lane-keeping, ACC, and lane changes; pedestrian segmentation with key-point prediction; rain lane detection corner cases; Lidar-vision fusion; and full-scene semantic segmentation of Botts’ dots and faded lane markings.
  • 18
    Wayve
    Wayve is an autonomous driving technology platform that develops AI foundation models to power next-generation self-driving vehicles through its Embodied AI approach. Wayve’s core innovation is a self-learning “AI driver” that enables vehicles to perceive, predict, and navigate complex real-world environments by learning from experience rather than relying on hand-coded rules or high-definition maps. Using primarily camera data and deep learning, the system builds a general-purpose driving intelligence that can adapt to new roads, cities, and vehicles with minimal retraining. Wayve’s mapless, hardware-agnostic architecture allows automakers to deploy advanced driver assistance and autonomous capabilities through software upgrades, supporting automation levels from L2+ to L4. It is designed to learn continuously from real-world and simulated data, enabling safe, natural driving behavior and improved handling of unexpected situations.
  • 19
    NVIDIA Isaac Sim
    NVIDIA Isaac Sim is an open source reference robotics simulation application built on NVIDIA Omniverse, enabling developers to design, simulate, test, and train AI-driven robots in physically realistic virtual environments. It is built atop Universal Scene Description (OpenUSD), offering full extensibility so developers can create custom simulators or seamlessly integrate Isaac Sim's capabilities into existing validation pipelines. The platform supports three essential workflows: large-scale synthetic data generation for training foundation models with photorealistic rendering and automatic ground truth labeling; software-in-the-loop testing, which connects actual robot software with simulated hardware to validate control and perception systems; and robot learning through NVIDIA’s Isaac Lab, which accelerates training of behaviors in simulation before real-world deployment. Isaac Sim delivers GPU-accelerated physics (via NVIDIA PhysX) and RTX-enabled sensor simulation.
  • 20
    Cognata
    Cognata delivers full product lifecycle simulation for ADAS and autonomous vehicle developers. Automatically-generated 3D environments and realistic AI-driven traffic agents for AV simulation. Autonomous vehicles ready-to-use scenario library and simple authoring to create millions of AV edge cases. Closed-loop testing with painless integration. Configurable rules and visualization for autonomous simulation. Measured and tracked performance. Digital twin grade 3D environments of roads, buildings, and infrastructure that are accurate down to the last lane marking, surface material, and traffic light. A global, cost-effective, and efficient architecture built for the cloud from the beginning. Closed-loop simulation or integration with your CI/CD environment are a few clicks away. Enables engineers to easily combine control, fusion, and vehicle models with Cognata’s environment, scenario, and sensor modeling capabilities.
  • 21
    NVIDIA DeepStream SDK
    NVIDIA's DeepStream SDK is a comprehensive streaming analytics toolkit based on GStreamer, designed for AI-based multi-sensor processing, including video, audio, and image understanding. It enables developers to create stream-processing pipelines that incorporate neural networks and complex tasks like tracking, video encoding/decoding, and rendering, facilitating real-time analytics on various data types. DeepStream is integral to NVIDIA Metropolis, a platform for building end-to-end services that transform pixel and sensor data into actionable insights. The SDK offers a powerful and flexible environment suitable for a wide range of industries, supporting multiple programming options such as C/C++, Python, and Graph Composer's intuitive UI. It allows for real-time insights by understanding rich, multi-modal sensor data at the edge and supports managed AI services through deployment in cloud-native containers orchestrated with Kubernetes.
  • 22
    NVIDIA Holoscan
    NVIDIA® Holoscan is a domain-agnostic AI computing platform that delivers the accelerated, full-stack infrastructure required for scalable, software-defined, and real-time processing of streaming data running at the edge or in the cloud. Holoscan supports a camera serial interface and front-end sensors for video capture, ultrasound research, data acquisition, and connection to legacy medical devices. Use the NVIDIA Holoscan SDK’s data transfer latency tool to measure complete, end-to-end latency for video processing applications. Access AI reference pipelines for radar, high-energy light sources, endoscopy, ultrasound, and other streaming video applications. NVIDIA Holoscan includes optimized libraries for network connectivity, data processing, and AI, as well as examples to create and run low-latency data-streaming applications using either C++, Python, or Graph Composer.
  • 23
    NVIDIA Iray
    NVIDIA® Iray® is an intuitive physically based rendering technology that generates photorealistic imagery for interactive and batch rendering workflows. Leveraging AI denoising, CUDA®, NVIDIA OptiX™, and Material Definition Language (MDL), Iray delivers world-class performance and impeccable visuals—in record time—when paired with the newest NVIDIA RTX™-based hardware. The latest version of Iray adds support for RTX, which includes dedicated ray-tracing-acceleration hardware support (RT Cores) and an advanced acceleration structure to enable real-time ray tracing in your graphics applications. In the 2019 release of the Iray SDK, all render modes utilize NVIDIA RTX technology. In combination with AI denoising, this enables you to create photorealistic rendering in seconds instead of minutes. Using Tensor Cores on the newest NVIDIA hardware brings the power of deep learning to both final-frame and interactive photorealistic renderings.
  • 24
    Aptiv
    Aptiv is a global technology company that develops safer, greener and more connected solutions enabling the future of mobility. Aptiv is focused on developing and commercializing autonomous vehicles and systems that enable point-to-point mobility via large fleets of autonomous vehicles in challenging urban driving environments. With talented teams working across the globe, from Boston to Singapore, Aptiv is the first company to deploy a commercial, autonomous ride-hailing service based in Las Vegas. Aptiv has provided over 100,000 public passenger rides, with 98% of passengers rating their Aptiv self-driving experience 5-out-of-5 stars. At Aptiv, we believe that our mobility solutions have the power to change the world.
  • 25
    Momenta
    Momenta is a leading autonomous driving technology company. Momenta is dedicated to reshaping the future of mobility by offering solutions that enable multiple levels of driving autonomy. It has pioneered a unique, scalable path toward full autonomous driving by combining a data-driven approach with iterating algorithms, referred to as its “flywheel approach”, as well as a “two-leg” product strategy focusing on both Mpilot, its mass-production-ready highly autonomous driving solutions, and MSD (Momenta Self-Driving), its driving solution targeting full autonomy. Mpilot is a purpose-built, mass-production-ready, highly automated driving software solution for private vehicles. Its core product includes Mpilot X, which provides a highly autonomous end-to-end driving experience with full driving-scenario coverage and key functions including Mpilot Highway, Mpilot Urban, and Mpilot Parking.
  • 26
    Carziqo
    Carziqo is an innovative technology company focused on autonomous driving and smart mobility services. We are committed to transforming the way people travel and earn through intelligent transportation technology. As a global leader in self-driving car rental, Carziqo provides individuals and enterprises with high-performance, intelligent, and safe autonomous vehicles, enabling every user to effortlessly embrace the future of technology. We offer more than just a car — we provide a complete intelligent mobility solution. Through the Carziqo platform, users can rent autonomous vehicles for logistics delivery or ride-hailing services, easily generating additional income. Whether for individual entrepreneurs or business clients, Carziqo empowers users to achieve more efficient, eco-friendly, and cost-effective smart transportation.
  • 27
    NVIDIA Picasso
    NVIDIA Picasso is a cloud service for building generative AI–powered visual applications. Enterprises, software creators, and service providers can run inference on their models, train NVIDIA Edify foundation models on proprietary data, or start from pre-trained models to generate image, video, and 3D content from text prompts. The Picasso service is fully optimized for GPUs and streamlines training, optimization, and inference on NVIDIA DGX Cloud. Organizations and developers can train NVIDIA’s Edify models on their proprietary data or get started with models pre-trained with our premier partners. Capabilities include an expert denoising network for generating photorealistic 4K images, temporal layers and a novel video denoiser for high-fidelity videos with temporal consistency, and a novel optimization framework for generating 3D objects and meshes with high-quality geometry.
  • 28
    NVIDIA Parabricks
    NVIDIA® Parabricks® is the only GPU-accelerated suite of genomic analysis applications that delivers fast and accurate analysis of genomes and exomes for sequencing centers, clinical teams, genomics researchers, and high-throughput sequencing instrument developers. NVIDIA Parabricks provides GPU-accelerated versions of tools used every day by computational biologists and bioinformaticians—enabling significantly faster runtimes, workflow scalability, and lower compute costs. From FastQ to Variant Call Format (VCF), NVIDIA Parabricks accelerates runtimes across a series of hardware configurations with NVIDIA A100 Tensor Core GPUs. Genomic researchers can experience acceleration across every step of their analysis workflows, from alignment to sorting to variant calling. When more GPUs are used, a near-linear scaling in compute time is observed compared to CPU-only systems, allowing up to 107X acceleration.
  • 29
    NVIDIA Brev
    NVIDIA Brev is a cloud-based platform that provides instant access to fully configured GPU environments optimized for AI and machine learning development. Its Launchables feature offers prebuilt, customizable compute setups that let developers start projects quickly without complex setup or configuration. Users can create Launchables by specifying GPU resources, Docker images, and project files, then share them easily with collaborators. The platform also offers prebuilt Launchables featuring the latest AI frameworks, microservices, and NVIDIA Blueprints to jumpstart development. NVIDIA Brev provides a seamless GPU sandbox with support for CUDA, Python, and Jupyter Lab accessible via browser or CLI. This enables developers to fine-tune, train, and deploy AI models with minimal friction and maximum flexibility.
    Starting Price: $0.04 per hour
  • 30
    RTMaps (Intempora)
    RTMaps (Real-Time Multisensor Applications) is a highly optimized, component-based development and execution middleware. With RTMaps, developers can design complex real-time systems and perception algorithms for autonomous applications such as mobile robots, railway, and defense, as well as ADAS and highly automated driving. RTMaps is a versatile Swiss-army-knife tool for developing and executing your application, offering multiple key benefits: asynchronous data acquisition; optimized performance; synchronous recording and playback; comprehensive component libraries with over 600 I/O software components; flexible algorithm development with sharing and collaboration; multi-platform processing; cross-platform compatibility and scalability, from PCs and embedded targets to the cloud; rapid prototyping and testing; integration with dSPACE tools; time and resource savings; reduced development risk, errors, and effort; and ISO 26262 ASIL-B certification on demand.
  • 31
    NVIDIA DGX Cloud Lepton
    NVIDIA DGX Cloud Lepton is an AI platform that connects developers to a global network of GPU compute across multiple cloud providers through a single platform. It offers a unified experience to discover and utilize GPU resources, along with integrated AI services to streamline the deployment lifecycle across multiple clouds. Developers can start building with instant access to NVIDIA’s accelerated APIs, including serverless endpoints, prebuilt NVIDIA Blueprints, and GPU-backed compute. When it’s time to scale, DGX Cloud Lepton powers seamless customization and deployment across a global network of GPU cloud providers. It enables frictionless deployment across any GPU cloud, allowing AI applications to be deployed across multi-cloud and hybrid environments with minimal operational burden, leveraging integrated services for inference, testing, and training workloads.
  • 32
    NVIDIA Confidential Computing
    NVIDIA Confidential Computing secures data in use, protecting AI models and workloads as they execute, by leveraging hardware-based trusted execution environments built into NVIDIA Hopper and Blackwell architectures and supported platforms. It enables enterprises to deploy AI training and inference, whether on-premises, in the cloud, or at the edge, with no changes to model code, while ensuring the confidentiality and integrity of both data and models. Key features include zero-trust isolation of workloads from the host OS or hypervisor, device attestation to verify that only legitimate NVIDIA hardware is running the code, and full compatibility with shared or remote infrastructure for ISVs, enterprises, and multi-tenant environments. By safeguarding proprietary AI models, inputs, weights, and inference activities, NVIDIA Confidential Computing enables high-performance AI without compromising security or performance.
  • 33
    NVIDIA HPC SDK
    The NVIDIA HPC Software Development Kit (SDK) includes the proven compilers, libraries and software tools essential to maximizing developer productivity and the performance and portability of HPC applications. The NVIDIA HPC SDK C, C++, and Fortran compilers support GPU acceleration of HPC modeling and simulation applications with standard C++ and Fortran, OpenACC® directives, and CUDA®. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable systems programming. Performance profiling and debugging tools simplify porting and optimization of HPC applications, and containerization tools enable easy deployment on-premises or in the cloud. With support for NVIDIA GPUs and Arm, OpenPOWER, or x86-64 CPUs running Linux, the HPC SDK provides the tools you need to build NVIDIA GPU-accelerated HPC applications.
  • 34
    vLLM
    vLLM is a high-performance library designed to facilitate efficient inference and serving of Large Language Models (LLMs). Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry. It offers state-of-the-art serving throughput by efficiently managing attention key and value memory through its PagedAttention mechanism. It supports continuous batching of incoming requests and utilizes optimized CUDA kernels, including integration with FlashAttention and FlashInfer, to enhance model execution speed. Additionally, vLLM provides quantization support for GPTQ, AWQ, INT4, INT8, and FP8, as well as speculative decoding capabilities. Users benefit from seamless integration with popular Hugging Face models, support for various decoding algorithms such as parallel sampling and beam search, and compatibility with NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, and more.
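    PagedAttention’s memory management can be sketched as block-table bookkeeping: each sequence maps its logical token positions onto fixed-size physical blocks allocated on demand, instead of reserving one large contiguous slab per sequence. A toy sketch (class and method names are illustrative, not vLLM’s API):

    ```python
    class PagedKVCache:
        """Toy bookkeeping for a paged KV cache: memory is handed out in
        fixed-size blocks, so short sequences waste almost no space."""

        def __init__(self, block_size=4):
            self.block_size = block_size
            self.next_free = 0      # next unused physical block id
            self.tables = {}        # sequence id -> list of physical blocks
            self.lengths = {}       # sequence id -> number of cached tokens

        def append_token(self, seq_id):
            """Reserve a slot for one more token; allocate a fresh block
            only when the current one is full. Returns (block, offset)."""
            n = self.lengths.get(seq_id, 0)
            table = self.tables.setdefault(seq_id, [])
            if n % self.block_size == 0:        # current block is full
                table.append(self.next_free)
                self.next_free += 1
            self.lengths[seq_id] = n + 1
            return table[n // self.block_size], n % self.block_size

    cache = PagedKVCache(block_size=4)
    for _ in range(5):
        cache.append_token("seq-A")   # 5 tokens span physical blocks 0 and 1
    cache.append_token("seq-B")       # a second sequence gets its own block
    ```

    In the real system the blocks hold attention keys and values on the GPU, and the same indirection enables continuous batching of many sequences.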
  • 35
    NVIDIA Blueprints
    NVIDIA Blueprints are reference workflows for agentic and generative AI use cases. Enterprises can build and operationalize custom AI applications, creating data-driven AI flywheels, using Blueprints along with NVIDIA AI and Omniverse libraries, SDKs, and microservices. Blueprints also include partner microservices, reference code, customization documentation, and a Helm chart for deployment at scale. With NVIDIA Blueprints, developers benefit from a unified experience across the NVIDIA stack, from cloud and data centers to NVIDIA RTX AI PCs and workstations. Use NVIDIA Blueprints to create AI agents that use sophisticated reasoning and iterative planning to solve complex problems. Check out new NVIDIA Blueprints, which equip millions of enterprise developers with reference workflows for building and deploying generative AI applications. Connect AI applications to enterprise data using industry-leading embedding and reranking models for information retrieval at scale.
  • 36
    NVIDIA Triton Inference Server
NVIDIA Triton™ Inference Server delivers fast and scalable AI in production. Open-source inference serving software, Triton streamlines AI inference by enabling teams to deploy trained AI models from any framework (TensorFlow, NVIDIA TensorRT®, PyTorch, ONNX, XGBoost, Python, custom, and more) on any GPU- or CPU-based infrastructure (cloud, data center, or edge). Triton runs models concurrently on GPUs to maximize throughput and utilization, supports x86 and Arm CPU-based inferencing, and offers features like dynamic batching, model analyzer, model ensembles, and audio streaming, helping developers deliver high-performance inference at scale. Triton integrates with Kubernetes for orchestration and scaling, exports Prometheus metrics for monitoring, supports live model updates, and can be used in all major public cloud machine learning (ML) and managed Kubernetes platforms. Triton helps standardize model deployment in production.
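Triton's HTTP endpoint implements the KServe v2 community inference protocol, so a client POSTs a JSON body describing named input tensors to `/v2/models/<model_name>/infer`. The sketch below only builds such a request body with the standard library; the input name, datatype, and shape are hypothetical placeholders for whatever the deployed model's configuration declares.

```python
# Minimal sketch of a KServe v2 inference request body, the protocol
# Triton's HTTP endpoint speaks. Tensor name/shape here are placeholders.

import json

def build_infer_request(input_name, datatype, shape, data):
    return {
        "inputs": [
            {
                "name": input_name,
                "datatype": datatype,   # e.g. "FP32", "INT64", "BYTES"
                "shape": shape,
                "data": data,           # flattened row-major values
            }
        ]
    }

payload = build_infer_request("input__0", "FP32", [1, 4], [0.1, 0.2, 0.3, 0.4])
body = json.dumps(payload)
# This body would be POSTed to http://<host>:8000/v2/models/<model_name>/infer
print(json.loads(body)["inputs"][0]["shape"])  # -> [1, 4]
```

The response mirrors the same schema with an `outputs` array, which is what lets one client code path talk to models from any of the frameworks Triton hosts.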
  • 37
    VMware Private AI Foundation
    VMware Private AI Foundation is a joint, on‑premises generative AI platform built on VMware Cloud Foundation (VCF) that enables enterprises to run retrieval‑augmented generation workflows, fine‑tune and customize large language models, and perform inference in their own data centers, addressing privacy, choice, cost, performance, and compliance requirements. It integrates the Private AI Package (including vector databases, deep learning VMs, data indexing and retrieval services, and AI agent‑builder tools) with NVIDIA AI Enterprise (comprising NVIDIA microservices like NIM, NVIDIA’s own LLMs, and third‑party/open source models from places like Hugging Face). It supports full GPU virtualization, monitoring, live migration, and efficient resource pooling on NVIDIA‑certified HGX servers with NVLink/NVSwitch acceleration. Deployable via GUI, CLI, and API, it offers unified management through self‑service provisioning, model store governance, and more.
  • 38
    NVIDIA AI Foundations
Impacting virtually every industry, generative AI unlocks a new frontier of opportunities for knowledge and creative workers to solve today’s most important challenges. NVIDIA is powering generative AI through an impressive suite of cloud services, pre-trained foundation models, cutting-edge frameworks, optimized inference engines, and APIs that bring intelligence to your enterprise applications. NVIDIA AI Foundations is a set of cloud services that advance enterprise-level generative AI and enable customization across use cases in areas such as text (NVIDIA NeMo™), visual content (NVIDIA Picasso), and biology (NVIDIA BioNeMo™). Unleash the full potential with the NeMo, Picasso, and BioNeMo cloud services, powered by NVIDIA DGX™ Cloud, the AI supercomputer. Use cases include marketing copy, storyline creation, and global translation in many languages, as well as synthesis of news, email, and meeting minutes.
  • 39
    Conigital

    Conigital Group

    Connecting city infrastructures with digital assets, transforming traditional industries to create profit, social impact, and sustainability. We are a deep tech, AI driverless vehicle company developing our full-stack “lift and shift” driverless vehicle platform ConICAV™ for any vehicle type. We retrofit or custom-build driverless vehicles for industrial and commercial fleets, for a variety of different use cases. Utilizing AI and machine learning techniques, our ConICAV™ integrated solution improves asset management, customer experience, and operational efficiency. Conigital has secured a groundbreaking £500 million Series A+ funding offer from a global private equity firm with £150 billion under management.
  • 40
    RightNow AI

RightNow AI is an AI-powered platform designed to automatically profile CUDA kernels, detect bottlenecks, and optimize them for peak performance. It supports all major NVIDIA architectures, including Ampere, Hopper, Ada Lovelace, and Blackwell GPUs. It enables users to generate optimized CUDA kernels instantly from natural language prompts, eliminating the need for deep GPU expertise. With serverless GPU profiling, users can identify performance issues without relying on local hardware. RightNow AI replaces complex legacy optimization tools with a streamlined solution, offering features such as inference-time scaling and performance benchmarking. Trusted by leading AI and HPC teams worldwide, including NVIDIA, Adobe, and Samsung, RightNow AI has demonstrated performance improvements ranging from 2x to 20x over standard implementations.
    Starting Price: $20 per month
  • 41
    Carver21

    DeepScale

AI Building Blocks for Intelligent Cars. Carver21 is purpose-built to scale to your perception needs, whether enabling safety features or delivering autonomous driving functions.
  • 42
    NVIDIA Magnum IO
NVIDIA Magnum IO is the architecture for parallel, intelligent data center I/O. It maximizes storage, network, and multi-node, multi-GPU communications for the world’s most important applications, including large language models, recommender systems, imaging, simulation, and scientific research. Magnum IO utilizes storage I/O, network I/O, in-network compute, and I/O management to simplify and speed up data movement, access, and management for multi-GPU, multi-node systems. It supports NVIDIA CUDA-X libraries and makes the best use of a range of NVIDIA GPU and networking hardware topologies to achieve optimal throughput and low latency. In multi-GPU, multi-node systems, slow CPU single-thread performance sits in the critical path of data access from local or remote storage devices. With storage I/O acceleration, the GPU bypasses the CPU and system memory and accesses remote storage via 8x 200 Gb/s NICs, achieving up to 1.6 Tb/s of raw storage bandwidth.
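The aggregate bandwidth figure follows directly from the NIC count: eight 200 Gb/s NICs total 1,600 gigabits per second, i.e. 1.6 terabits per second, or 200 gigabytes per second. A quick arithmetic check:

```python
# Aggregate storage bandwidth of 8 NICs at 200 Gb/s each.

nics = 8
per_nic_gbps = 200                         # gigabits per second per NIC
total_tbps = nics * per_nic_gbps / 1000    # terabits per second
total_gbytes_ps = nics * per_nic_gbps / 8  # gigabytes per second (8 bits/byte)
print(total_tbps)        # -> 1.6
print(total_gbytes_ps)   # -> 200.0
```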
  • 43
    NVIDIA NeMo Megatron
    NVIDIA NeMo Megatron is an end-to-end framework for training and deploying LLMs with billions and trillions of parameters. NVIDIA NeMo Megatron, part of the NVIDIA AI platform, offers an easy, efficient, and cost-effective containerized framework to build and deploy LLMs. Designed for enterprise application development, it builds upon the most advanced technologies from NVIDIA research and provides an end-to-end workflow for automated distributed data processing, training large-scale customized GPT-3, T5, and multilingual T5 (mT5) models, and deploying models for inference at scale. Harnessing the power of LLMs is made easy through validated and converged recipes with predefined configurations for training and inference. Customizing models is simplified by the hyperparameter tool, which automatically searches for the best hyperparameter configurations and performance for training and inference on any given distributed GPU cluster configuration.
  • 44
    Tronis

    TWT GmbH Science & Innovation

Tronis is an environment for virtual prototyping and for safeguarding driver assistance systems, e.g. for highly automated or autonomous driving. Based on a modern 3D game engine, real driving situations and traffic scenarios can be efficiently mapped and used for testing, e.g. for camera- and radar-based environment detection. This can significantly accelerate the development of such systems and reduce the need for real prototypes.
  • 45
    NVIDIA DGX Cloud Serverless Inference
    NVIDIA DGX Cloud Serverless Inference is a high-performance, serverless AI inference solution that accelerates AI innovation with auto-scaling, cost-efficient GPU utilization, multi-cloud flexibility, and seamless scalability. With NVIDIA DGX Cloud Serverless Inference, you can scale down to zero instances during periods of inactivity to optimize resource utilization and reduce costs. There's no extra cost for cold-boot start times, and the system is optimized to minimize them. NVIDIA DGX Cloud Serverless Inference is powered by NVIDIA Cloud Functions (NVCF), which offers robust observability features. It allows you to integrate your preferred monitoring tools, such as Splunk, for comprehensive insights into your AI workloads. NVCF offers flexible deployment options for NIM microservices while allowing you to bring your own containers, models, and Helm charts.
  • 46
    Skyportal

Skyportal is a GPU cloud platform built for AI engineers, offering 50% lower cloud costs and 100% GPU performance. It provides a cost-effective GPU infrastructure for machine learning workloads, eliminating unpredictable cloud bills and hidden fees. Skyportal has seamlessly integrated Kubernetes, Slurm, PyTorch, TensorFlow, CUDA, cuDNN, and NVIDIA drivers, fully optimized for Ubuntu 22.04 LTS and 24.04 LTS, allowing users to focus on innovating and scaling with ease. It offers high-performance NVIDIA H100 and H200 GPUs optimized specifically for ML/AI workloads, with instant scalability and 24/7 expert support from a team that understands ML workflows and optimization. Skyportal's transparent pricing and zero egress fees provide predictable costs for AI infrastructure. Users can share their AI/ML project requirements and goals, deploy models within the infrastructure using familiar tools and frameworks, and scale their infrastructure as needed.
    Starting Price: $2.40 per hour
  • 47
    OpenFleet

Easily monitor your vehicles, make your drivers' commutes more enjoyable, and optimize your fleet use. Our team is here for you. We help you rationalize your vehicle fleet, whether for short- or long-term leasing, corporate or peer-to-peer carsharing, an autonomous vehicle service, and more, offering a full portfolio of mobility solutions: onboard telematics, online supervision platforms, and mobile apps. Managing a mobility service has never been so easy. Limitless mobility for your drivers, an innovative all-inclusive service, for corporate or P2P use. Say goodbye to your key boxes; drivers now use their badge, smartphone, or smartwatch to unlock your fleet. Connect to your fleet through the User or Manager Portal. Your vehicles are now available anywhere, anytime. Mentoring programs, 24/7 customer service, PR, training: a dedicated Mobility Manager will assist you throughout your mobility project.
    Starting Price: $20 per user per month
  • 48
    NVIDIA AI Enterprise
    The software layer of the NVIDIA AI platform, NVIDIA AI Enterprise accelerates the data science pipeline and streamlines development and deployment of production AI including generative AI, computer vision, speech AI and more. With over 50 frameworks, pretrained models and development tools, NVIDIA AI Enterprise is designed to accelerate enterprises to the leading edge of AI, while also simplifying AI to make it accessible to every enterprise. The adoption of artificial intelligence and machine learning has gone mainstream, and is core to nearly every company’s competitive strategy. One of the toughest challenges for enterprises is the struggle with siloed infrastructure across the cloud and on-premises data centers. AI requires their environments to be managed as a common platform, instead of islands of compute.
  • 49
    NVIDIA NIM
    Explore the latest optimized AI models, connect AI agents to data with NVIDIA NeMo, and deploy anywhere with NVIDIA NIM microservices. NVIDIA NIM is a set of easy-to-use inference microservices that facilitate the deployment of foundation models across any cloud or data center, ensuring data security and streamlined AI integration. Additionally, NVIDIA AI provides access to the Deep Learning Institute (DLI), offering technical training to gain in-demand skills, hands-on experience, and expert knowledge in AI, data science, and accelerated computing. AI models generate responses and outputs based on complex algorithms and machine learning techniques, and those responses or outputs may be inaccurate, harmful, biased, or indecent. By testing this model, you assume the risk of any harm caused by any response or output of the model. Please do not upload any confidential information or personal data unless expressly permitted. Your use is logged for security purposes.
  • 50
    Pony.ai

    We are developing safe and reliable autonomous driving technology globally. Having accumulated millions of kilometers in autonomous road testing in complex scenarios, we have built a solid foundation to deliver autonomous driving systems at scale. Pony.ai was the first to launch Robotaxi service in December 2018, allowing passengers to hail self-driving cars via the PonyPilot+ App to start a new, safe and enjoyable journey. The service is currently available in Guangzhou, Beijing, Irvine, CA, and Fremont, CA. We have launched autonomous mobility pilots in multiple cities across the US and China, serving hundreds of riders every day. These pilots have enabled us to build a strong technical and operational foundation to further expand and improve our service. We have come together to tackle the biggest tech challenges in mobility. We are making concrete progress every day toward our vision of autonomous mobility everywhere.