Alternatives to NVIDIA DRIVE

Compare NVIDIA DRIVE alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to NVIDIA DRIVE in 2025. Compare features, ratings, user reviews, pricing, and more from NVIDIA DRIVE competitors and alternatives in order to make an informed decision for your business.

  • 1
    Mobileye
    Mobileye’s portfolio spans a variety of ADAS solutions, a self-driving system for autonomous public transport and goods delivery, and consumer AVs. By developing everything from the silicon through to the self-driving system in-house, numerous efficiencies and synergies are unlocked, allowing Mobileye to reach autonomous vehicles at scale. From the beginning, Mobileye has developed hardware and software in-house, paving the way for highly efficient hardware, software, and algorithmic stacks at a superior cost-performance ratio. Everything Mobileye develops is safe by design, with a distinct strategy for bringing the technology to the mass market.
  • 2
    Applied Intuition Vehicle OS
    Applied Intuition Vehicle OS is a scalable, modular platform that enables automakers, commercial fleets, and defense integrators to develop, deploy, and update comprehensive vehicle software, hardware, and AI applications across all domains, from ADAS and infotainment to autonomy and digital services. The on-board SDK provides embedded real-time OS, drivers, middleware, and reference compute architecture for safety-critical and consumer‑facing functions, while the off-board platform supports cloud-based data logging, remote diagnostics, OTA vehicle updates, and digital twin management. Developers work within a unified Workbench environment featuring integrated build and testing tools, CI pipelines, and automated validation workflows. It bridges vehicle intelligence across ecosystems by combining autonomy stacks, simulation suites including vehicle dynamics and sensor simulation, and a vibrant developer toolchain.
  • 3
    DriveMod
    DriveMod is Cyngn’s full-stack autonomous driving solution. It integrates with off-the-shelf sensing and computing hardware to enable industrial vehicles to perceive the world, make decisions, and take action. DriveMod has been engineered to integrate smoothly into your existing workflows, enabling you to easily program vehicle routes, loops, and missions. In short, if a human driver can do it, so can DriveMod. Safely bring autonomous capabilities to any commercially available vehicle through a retrofit. DriveMod’s flexibility ensures heterogeneous fleets operate smoothly regardless of vehicle type or manufacturer. Cyngn’s AI software combines with leading sensors and computing hardware to create capabilities that far exceed those of human drivers. DriveMod can detect thousands of objects, propose thousands of candidate paths, and then navigate the optimal route, all in fractions of a second.
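    As a rough illustration of the "propose thousands of candidate paths, then pick the best" idea described above, here is a generic Python sketch: sample noisy candidate paths, score them on length and obstacle clearance, and keep the lowest-cost one. It is illustrative only and is not Cyngn's DriveMod code; the obstacle positions, path counts, and cost weights are arbitrary assumptions.
```python
import numpy as np

# Illustrative only: generic candidate-path selection, not Cyngn's DriveMod code.
# Each candidate path is a sequence of 2D waypoints; paths are scored by length
# plus a penalty for passing close to known obstacles, and the best one is kept.

rng = np.random.default_rng(0)

def sample_candidate_paths(start, goal, n_paths=1000, n_points=20):
    """Sample noisy straight-line paths from start to goal (stand-in for a real planner)."""
    t = np.linspace(0.0, 1.0, n_points)[None, :, None]          # (1, n_points, 1)
    base = start + t * (goal - start)                            # (1, n_points, 2)
    noise = rng.normal(scale=0.5, size=(n_paths, n_points, 2))
    noise[:, [0, -1], :] = 0.0                                   # endpoints stay fixed
    return base + noise                                          # (n_paths, n_points, 2)

def score_paths(paths, obstacles, safety_radius=1.0):
    """Lower score is better: path length + heavy penalty near obstacles."""
    seg = np.diff(paths, axis=1)
    length = np.linalg.norm(seg, axis=2).sum(axis=1)             # (n_paths,)
    # distance of every waypoint to every obstacle
    d = np.linalg.norm(paths[:, :, None, :] - obstacles[None, None, :, :], axis=3)
    penalty = np.clip(safety_radius - d, 0.0, None).sum(axis=(1, 2)) * 100.0
    return length + penalty

start = np.array([0.0, 0.0])
goal = np.array([20.0, 0.0])
obstacles = np.array([[10.0, 0.2], [14.0, -0.5]])                # assumed obstacle positions

paths = sample_candidate_paths(start, goal)
best = paths[np.argmin(score_paths(paths, obstacles))]
print("best path waypoints:\n", best.round(2))
```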
  • 4
    Oxbotica Selenium
    Selenium is our flagship product, a full-stack autonomy system and the product of over 500 person-years of effort. It is an on-vehicle suite of software which, given a drive-by-wire interface and very modest compute hardware, brings full autonomy to a land-based vehicle. Selenium can transform any suitable vehicle platform into an autonomous vehicle, both at prototype volume and at scale. It is a collection of interoperable software modules that allow the vehicle to answer three key questions: Where am I? What’s around me? What do I do next? Selenium spans the technological spectrum, from low-level device drivers through calibration, 4-modal localization, mapping, perception, machine learning, and planning, and its remarkable vertical integration even covers user interface and data export systems. It does not need GPS or HD maps (although these can still be utilized, if available).
  • 5
    MORAI
    MORAI offers a digital twin simulation platform that accelerates the development and testing of autonomous vehicles, urban air mobility, and maritime autonomous surface ships. Built with high-definition maps and a powerful physics engine, it bridges the gap between real-world and simulation test environments, providing all key elements for verifying autonomous systems, including autonomous driving, unmanned aerial vehicles, and unmanned ship systems. It provides a variety of sensor models, including cameras, LiDAR, GPS, radar, and Inertial Measurement Units (IMUs). Users can generate complex and diverse test scenarios from real-world data, including log-based scenarios and edge case scenarios. MORAI's cloud simulation allows for safe, cost-effective, and scalable testing, enabling multiple simulations to run concurrently and evaluate different scenarios in parallel.
  • 6
    Helm.ai
    We license AI software throughout the L2-L4 autonomous driving stack: perception, intent modeling, path planning, and vehicle control. Highest-accuracy perception and intent prediction lead to safer autonomous driving systems. Unsupervised learning and mathematical modeling, instead of supervised learning, allow learning from huge datasets. Our technologies are up to several orders of magnitude more capital-efficient, enabling a much lower cost of development. Demonstrations include Helm.ai’s full-scene vision-based semantic segmentation fused with LiDAR SLAM output from Ouster; L2+ autonomous driving across highways 280, 92, and 101 with lane-keeping, ACC, and lane changes; pedestrian segmentation with keypoint prediction; rain and faded-lane-marking corner cases with LiDAR-vision fusion; and full-scene semantic segmentation including Botts’ dots.
  • 7
    NVIDIA DRIVE Map
    NVIDIA DRIVE® Map is a multi-modal mapping platform designed to enable the highest levels of autonomy while improving safety. It combines the accuracy of ground-truth mapping with the freshness and scale of AI-based fleet-sourced mapping. With four localization layers—camera, lidar, radar, and GNSS—DRIVE Map provides the redundancy and versatility required by the most advanced AI drivers. Designed for the highest level of accuracy, the ground-truth map engine creates DRIVE Map using rich sensors—cameras, radars, lidars, and differential GNSS/IMU—on NVIDIA DRIVE Hyperion data collection vehicles. It achieves better than 5 cm accuracy for higher levels of autonomy (L3/L4) in selected environments, such as highways and urban areas. DRIVE Map is designed for near-real-time operation and global scalability. Based on both ground-truth and fleet-sourced data, it represents the collective memory of millions of vehicles.
  • 8
    Apollo Autonomous Vehicle Platform
    Various sensors, such as LiDAR, cameras, and radar, collect environmental data surrounding the vehicle. Using sensor fusion technology, perception algorithms can determine in real time the type, location, velocity, and orientation of objects on the road. This perception system is backed by Baidu’s big data and deep learning technologies, a vast collection of labeled real-world driving data, and a large-scale deep learning platform running on GPU clusters. Simulation provides the ability to virtually drive millions of kilometers daily using an array of real-world traffic and autonomous driving data. Through the simulation service, partners gain access to a large number of autonomous driving scenes to quickly test, validate, and optimize models with comprehensive coverage in a way that is safe and efficient.
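    To make the fusion idea concrete, below is a minimal, generic constant-velocity Kalman filter that fuses noisy position measurements from two hypothetical sensors into a position-and-velocity estimate. It is a textbook sketch, not Baidu Apollo code; the update rate and noise covariances are made-up placeholders.
```python
import numpy as np

# Illustrative only: a generic constant-velocity Kalman filter for fusing noisy
# position measurements (e.g., from different sensors) into a position+velocity
# estimate. This is not Baidu Apollo code; noise values are placeholders.

dt = 0.1                                           # 10 Hz update rate (assumed)
F = np.array([[1, 0, dt, 0],                       # state transition: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                        # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                               # process noise (placeholder)

x = np.zeros(4)                                    # initial state
P = np.eye(4)                                      # initial covariance

def kf_step(x, P, z, R):
    """One predict + update cycle with measurement z and its noise covariance R."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Fuse alternating "lidar" (accurate) and "radar" (noisier) position measurements.
R_lidar = np.eye(2) * 0.05
R_radar = np.eye(2) * 0.5
for k in range(20):
    true_pos = np.array([1.0 * k * dt, 0.5 * k * dt])
    if k % 2 == 0:
        z, R = true_pos + np.random.normal(0, 0.05, 2), R_lidar
    else:
        z, R = true_pos + np.random.normal(0, 0.5, 2), R_radar
    x, P = kf_step(x, P, z, R)

print("estimated [x, y, vx, vy]:", x.round(3))
```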
  • 9
    Aurora Driver
    Created from industry-leading hardware and software, the Aurora Driver is designed to adapt to a variety of vehicle types and use cases, allowing us to deliver the benefits of self-driving across several industries, including long-haul trucking, local goods delivery, and people movement. The Aurora Driver consists of sensors that perceive the world, software that plans a safe path through it, and the computer that powers and integrates them both with the vehicle. The Aurora Driver was designed to operate any vehicle type, from a sedan to a Class 8 truck. The Aurora Computer is the central hub that connects our hardware and autonomy software and enables the Aurora Driver to seamlessly integrate with every vehicle type. Our custom-designed sensor suite, including FirstLight Lidar, long-range imaging radar, and high-resolution cameras, works together to build a 3D representation of the world, giving the Aurora Driver a 360˚ view of what’s happening around the vehicle in real time.
  • 10
    AutonomouStuff
    The world’s premier automated platform provider. Starting with a customizable R&D vehicle platform can accelerate your work on advanced driver assistance systems (ADAS), advanced algorithm development, and automated driving initiatives, or help take your driverless efforts to the next level. You specify your own R&D vehicle platform step by step, from the vehicle to sensors, software, and storage. When you purchase a platform from AutonomouStuff, we’re part of your team: a knowledgeable and experienced project manager stays in touch with you on a regular basis to keep you current on platform updates and to make sure we’re meeting your needs.
  • 11
    Qualcomm Snapdragon Ride
    The Qualcomm® Snapdragon Ride™ Platform is one of the automotive industry’s most advanced, scalable, and fully customizable automated driving platforms. It gives automotive suppliers and automakers the flexibility to deploy the safety, convenience, and autonomous driving features in demand today, with the ability to scale in the future. It delivers reliable, extreme, auto-ready performance at low power with more simplicity and higher automotive safety. And unlike other autonomous driving solutions that require liquid cooling, the Snapdragon Ride Platform is passively or air-cooled. Our comprehensive, customizable platform features multi-ECU aggregation, allowing it to easily scale from active safety to convenience features to full self-driving across a wider range of vehicles. In addition to our high-performance, energy-efficient hardware, the new Snapdragon Ride Autonomous Stack combines with the hardware to provide one of the most robust vehicle perception and driving brains available.
  • 12
    PRODRIVER (embotech)
    PRODRIVER is Embotech’s solution to the problem of motion planning for autonomous or highly automated vehicles. It is an essential component of the autonomous driving software stack, within the so-called ‘decision-making’ layer. As a motion planner, PRODRIVER is responsible for generating drivable trajectories or, directly, actuator commands such as steering, acceleration, and braking, computed from information about the surrounding environment. PRODRIVER does so by continuously making predictions and solving an optimization problem in real time. Its most important inputs are information about the drivable space, the obstacles within it, and a goal (which could be a position or an objective such as making progress along a route). Its outputs can be used directly to control the vehicle or to provide set-points for the vehicle’s low-level controllers to track.
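    The optimization-based formulation described above can be sketched in toy form: minimize a cost over a short 2D trajectory that trades off smoothness against clearance from an obstacle, re-solving as conditions change. The SciPy example below is a generic illustration under assumed weights and obstacle data, not Embotech's PRODRIVER solver or its interfaces.
```python
import numpy as np
from scipy.optimize import minimize

# Toy illustration of optimization-based motion planning: find a short, smooth
# 2D trajectory from start to goal that keeps clear of an obstacle. This is a
# generic formulation, not Embotech's PRODRIVER solver or its interfaces.

N = 15                                   # number of free trajectory points
start = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacle = np.array([5.0, 0.1])          # assumed obstacle position
clearance = 1.5                          # desired clearance radius (assumed)

def cost(flat):
    pts = np.vstack([start, flat.reshape(-1, 2), goal])
    smooth = np.sum(np.diff(pts, axis=0) ** 2)                     # length / smoothness term
    d = np.linalg.norm(pts - obstacle, axis=1)
    avoid = np.sum(np.clip(clearance - d, 0.0, None) ** 2) * 50.0  # soft obstacle penalty
    return smooth + avoid

# warm start: straight line between start and goal
x0 = np.linspace(start, goal, N + 2)[1:-1].ravel()
res = minimize(cost, x0, method="L-BFGS-B")
trajectory = np.vstack([start, res.x.reshape(-1, 2), goal])
print(trajectory.round(2))
```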
  • 13
    Cognata
    Cognata delivers full product lifecycle simulation for ADAS and autonomous vehicle developers: automatically generated 3D environments and realistic AI-driven traffic agents for AV simulation, a ready-to-use autonomous vehicle scenario library with simple authoring to create millions of AV edge cases, closed-loop testing with painless integration, configurable rules and visualization for autonomous simulation, and measured and tracked performance. Digital-twin-grade 3D environments of roads, buildings, and infrastructure are accurate down to the last lane marking, surface material, and traffic light. A global, cost-effective, and efficient architecture is built for the cloud from the beginning, so closed-loop simulation or integration with your CI/CD environment is a few clicks away. Cognata enables engineers to easily combine control, fusion, and vehicle models with Cognata’s environment, scenario, and sensor modeling capabilities.
  • 14
    RTMaps (Intempora)
    RTMaps (Real-time Multisensor applications) is a highly optimized, component-based development and execution middleware. With RTMaps, developers can design complex real-time systems and perception algorithms for autonomous applications such as mobile robots, railway, and defense, as well as ADAS and highly automated driving. RTMaps is a versatile Swiss Army knife for developing and executing your application, offering multiple key benefits:
    ● Asynchronous data acquisition
    ● Optimized performance
    ● Synchronous recording and playback
    ● Comprehensive component libraries: over 600 I/O software components available
    ● Flexible algorithm development: share and collaborate
    ● Multi-platform processing
    ● Cross-platform compatibility and scalability: from PCs and embedded targets to the cloud
    ● Rapid prototyping and testing
    ● Integration with dSPACE tools
    ● Time and resource savings
    ● Reduced development risks, errors, and effort
    ● ISO 26262 ASIL-B certification available on demand
  • 15
    Tronis (TWT GmbH Science & Innovation)
    Tronis is an environment for virtual prototyping and for safeguarding driver assistance systems, e.g., for highly automated or autonomous driving. Based on a modern 3D game engine, real driving situations and traffic scenarios can be efficiently mapped and used for testing, e.g., for camera- and radar-based environment detection. This can significantly accelerate the development of corresponding systems and reduce the need for real prototypes.
  • 16
    Momenta
    Momenta is a leading autonomous driving technology company dedicated to reshaping the future of mobility by offering solutions that enable multiple levels of driving autonomy. It has pioneered a unique, scalable path toward full autonomous driving by combining a data-driven approach with iterating algorithms, referred to as its “flywheel approach”, as well as a “two-leg” product strategy focusing on both Mpilot, its mass-production-ready highly autonomous driving solutions, and MSD (Momenta Self-Driving), its driving solution targeting full autonomy. Mpilot is a purpose-built, mass-production-ready, highly automated driving software solution for private vehicles. The core product includes Mpilot X, which provides a highly autonomous end-to-end driving experience with full driving-scenario coverage and key functions including Mpilot Highway, Mpilot Urban, and Mpilot Parking.
  • 17
    Conigital (Conigital Group)
    Connecting city infrastructures with digital assets, transforming traditional industries to create profit, social impact, and sustainability. We are a deep-tech AI driverless vehicle company developing our full-stack “lift and shift” driverless vehicle platform, ConICAV™, for any vehicle type. We retrofit or custom-build driverless vehicles for industrial and commercial fleets, for a variety of different use cases. Utilizing AI and machine learning techniques, our ConICAV™ integrated solution improves asset management, customer experience, and operational efficiency. Conigital has secured a groundbreaking £500 million Series A+ funding offer from a global private equity firm with £150 billion under management.
  • 18
    Aptiv
    Aptiv is a global technology company that develops safer, greener, and more connected solutions enabling the future of mobility. Aptiv is focused on developing and commercializing autonomous vehicles and systems that enable point-to-point mobility via large fleets of autonomous vehicles in challenging urban driving environments. With talented teams working across the globe, from Boston to Singapore, Aptiv was the first company to deploy a commercial autonomous ride-hailing service, based in Las Vegas. Aptiv has provided over 100,000 public passenger rides, with 98% of passengers rating their Aptiv self-driving experience 5 out of 5 stars. At Aptiv, we believe that our mobility solutions have the power to change the world.
  • 19
    Pony.ai
    We are developing safe and reliable autonomous driving technology globally. Having accumulated millions of kilometers of autonomous road testing in complex scenarios, we have built a solid foundation to deliver autonomous driving systems at scale. Pony.ai was the first to launch a Robotaxi service, in December 2018, allowing passengers to hail self-driving cars via the PonyPilot+ App for a new, safe, and enjoyable journey. The service is currently available in Guangzhou, Beijing, Irvine, CA, and Fremont, CA. We have launched autonomous mobility pilots in multiple cities across the US and China, serving hundreds of riders every day. These pilots have enabled us to build a strong technical and operational foundation to further expand and improve our service. We have come together to tackle the biggest tech challenges in mobility and are making concrete progress every day toward our vision of autonomous mobility everywhere.
  • 20
    Impersonate.ai (EchoTech.ai)
    Impersonate is an autonomous control platform for indoor or outdoor robots in highly dynamic environments. It fully encompasses perception and planning in a single AI stack that generalizes to any robotic setup.
  • 21
    Carver21 (DeepScale)
    AI Building Blocks for Intelligent Cars. Carver21 is purpose-built to scale to your perception needs, whether enabling safety features or delivering autonomous driving functions.
  • 22
    OpenFleet
    Easily monitor your vehicles, make your drivers’ commutes more enjoyable, and optimize your fleet use. Our team is here for you. We help you rationalize your vehicle fleet, whether through short- or long-term leasing, corporate or peer-to-peer carsharing, an autonomous vehicle service, etc., offering a full portfolio of mobility solutions: onboard telematics, online supervision platforms, and mobile apps. Managing a mobility service has never been so easy. Limitless mobility for your drivers, an innovative all-inclusive service, for corporate or P2P use. Say goodbye to your key boxes; drivers now use their badge, smartphone, or smartwatch to unlock your fleet. Connect to your fleet through the User or Manager Portal. Your vehicles are now available anywhere, anytime. Mentoring programs, 24/7 customer service, PR, training: a dedicated Mobility Manager will assist you all along your mobility project.
    Starting Price: $20 per user per month
  • 23
    eMCOS (eSOL)
    ● An application platform for autonomous driving systems that recognizes the external environment through various collected data and drives autonomously based on appropriate judgment
    ● A scalable distributed computing environment backed by many-core processors and various other processor types, supporting the advanced computing power required for intelligent information processing
    ● A scalable platform that allows software assets to run on various hardware resources, with real-time capability to ensure reliability and safety
  • 24
    Luxoft Autonomous (Luxoft, a DXC Technology Company)
    We co-create smart solutions that empower clients to make the transition to sustainable mobility and enable the automotive world to move forward. Driven by the convergence of new technologies (AI, IoT, connected infrastructure, digitization, and electrification), we’re rapidly advancing toward a revolutionized automotive future, an exciting time providing the ultimate freedom with zero accidents, zero emissions, and zero ownership. This is a great prospect for society, the economy, and the environment. We enable automakers to innovate across vital areas of advanced automotive and mobility technology. Combining the agility, energy, and speed of a startup with the reach, positioning, and manpower of an enterprise, we deliver highly complex solutions, at speed, in critical environments. Meet your current and future software needs in the age of autonomous driving. Differentiate with highly personalized and intelligent in-vehicle experiences.
  • 25
    Apex.AI
    Apex.AI analyzes your current automotive or robotic software systems to assess a path to commercialization, transitions your existing software to Apex.OS, moves your environment to a state-of-the-art continuous integration, continuous testing, and continuous delivery framework, provides software architecture reviews and recommendations, and delivers functional safety training to enable your team to build a safe autonomous system. Apex.OS is a fork of ROS 2 that has been made real-time, reliable, and deterministic so that it can be used in safety-critical applications. Apex.OS is developed in sync with future releases of ROS 2, and its APIs stay compatible with ROS 2. It is based on the integration of Eclipse Cyclone DDS™ and Eclipse iceoryx™, both of which are available as open source and proven in automotive and mission-critical distributed systems.
  • 26
    NVIDIA TensorRT
    NVIDIA TensorRT is an ecosystem of APIs for high-performance deep learning inference, encompassing an inference runtime and model optimizations that deliver low latency and high throughput for production applications. Built on the CUDA parallel programming model, TensorRT optimizes neural network models trained on all major frameworks, calibrating them for lower precision with high accuracy, and deploying them across hyperscale data centers, workstations, laptops, and edge devices. It employs techniques such as quantization, layer and tensor fusion, and kernel tuning on all types of NVIDIA GPUs, from edge devices to PCs to data centers. The ecosystem includes TensorRT-LLM, an open source library that accelerates and optimizes inference performance of recent large language models on the NVIDIA AI platform, enabling developers to experiment with new LLMs for high performance and quick customization through a simplified Python API.
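    As a hedged sketch of the typical TensorRT build workflow (parse an ONNX model, enable reduced precision, serialize an engine), the snippet below follows the TensorRT 8.x-style Python API; exact class and flag names vary between TensorRT versions, and the model path is a placeholder.
```python
import tensorrt as trt

# Sketch of the common TensorRT workflow: parse an ONNX model, enable reduced
# precision, and build a serialized engine. Follows the TensorRT 8.x-style
# Python API; details vary by version, and "model.onnx" is a placeholder path.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)            # lower precision for higher throughput
engine_bytes = builder.build_serialized_network(network, config)

with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```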
  • 27
    Journeyware (Xevo by Lear)
    The future of the connected car experience: a thin-client application and cloud framework enabling consumer commerce, in-car media applications, mobile apps, and enterprise services. The Xevo Journeyware platform for cloud, car, and mobile devices enables multimedia applications, AI-driven contextual recommendations, and content delivery to give drivers and passengers an enhanced in-vehicle experience and provide automakers with new monetization opportunities. Components of the Journeyware connected car platform are deployed in more than 25 million vehicles worldwide. The Xevo Context recommendation engine utilizes Journeyware’s automotive AI technology to provide a dynamic, hyper-contextual user experience. This highly relevant content is delivered as suggestions and offers based on drivers’ preferences, past behavior, time of day, vehicle location, current route, and more.
  • 28
    Wejo
    Simple and secure access to the world’s connected car data, all in one place, helping our partners to innovate and solve mobility problems. The way we travel is changing. We’re living in a world that’s hyper-connected, and our cars are becoming increasingly more connected too. Connected car data can offer you valuable insights to help transform your business and shape your future. Connected car data is accurate, authentic, and, most importantly, useful. The data comes from actual vehicles on the road, giving you unrivalled insights into driver needs and behaviours. We partner with leading vehicle manufacturers to harness the power that this data holds and make accessing it simple. For private and public sector businesses, the benefits of connected car data are huge, from locating traffic hotspots to finding out where there are safety risks to gaining insight into driver patterns and trends.
  • 29
    NODAR
    NODAR has re-imagined stereo vision using patented algorithms that enable standard cameras to deliver unprecedented 3D range, precision, and reliability. NODAR’s advanced 3D vision technology is engineered to push the limits of what’s possible in automotive and industrial applications. Whether in passenger vehicles, autonomous trains, or monitoring and surveillance applications, reliable 3D data is critical to safety and performance. NODAR provides industry-leading 3D spatial data for autonomous systems across industries. Autonomy is transforming nearly every aspect of society, boosting output, delivering convenience, and enhancing safety. Many outdoor settings are harsh and demanding; automated equipment must operate 24 hours a day in the face of inclement weather, glare, dust, and high vibration. NODAR technology and products deliver unparalleled data, precision, and reliability for safety-critical applications that depend on 3D data.
  • 30
    Nauto
    Driver and fleet safety platform, the only AI technology impacting driver behavior in real time. Anticipate events before they happen to help avoid incidents, minimize repair and maintenance costs, and reduce fleet claims. Minimize your injury, liability, and processing costs with AI-enabled, real-time detection and incident reporting. Driver Behavior Alerts coach your entire fleet in real time. For drivers that need additional fleet safety training, Nauto Manager-Led Coaching helps you prioritize drivers and guides you through the coaching process. Reduce operational and insurance costs associated with fleet collisions by preventing distracted and drowsy driving. Drive adoption with features designed to help protect driver privacy and provide transparency between managers and fleet drivers.
  • 31
    Deeproute.ai
    DeepRoute.ai aims to provide the safest, smartest, and most reliable Level 4 full-stack self-driving solution. We are committed to innovating the future of transportation through the power of technology. DeepRoute.ai cooperates with OEMs and mobility companies to achieve self-driving commercialization, utilizing advancements in technological research and commercialization to collaborate with OEMs, mobility companies, and other partners and make self-driving more readily available to the public. Through semantic mapping and a real-time positioning system, DeepRoute.ai’s planning and control algorithm can plan the optimal path in complex road environments and provide a safe, comfortable, and effective driving experience.
  • 32
    42dot
    42dot combines cutting-edge technology with tailored connected services to optimize the process of moving people and goods. Our goal is to create a fully autonomous, centralized infrastructure for autonomous and frictionless transportation services. By accelerating the transition to autonomous Transportation-as-a-Service, we hope to bring valuable, life-enhancing products to the market. UMOS is a comprehensive mobility and logistics platform that integrates all forms of ground and air transportation services such as e-hailing, fleet management, demand-responsive transport, smart logistics and more. The platform encompasses multiple layers of functionality built with robust technology that provides enterprise-level scale. UMOS aims to create an ecosystem where we no longer have to worry about getting around. UMOS will take care of our mobility needs and everything related to being in motion.
  • 33
    openpilot (comma.ai)
    openpilot is open source software built to improve upon the existing driver assistance in most new cars on the road today. It provides Tesla Autopilot-like functionality for your Toyota, Honda, and other top brands. While engaged, openpilot includes camera-based driver monitoring that works both day and night to alert the driver when their eyes are not on the road ahead.
  • 34
    NVIDIA Isaac
    NVIDIA Isaac is an AI robot development platform that comprises NVIDIA CUDA-accelerated libraries, application frameworks, and AI models to expedite the creation of AI robots, including autonomous mobile robots, robotic arms, and humanoids. The platform features NVIDIA Isaac ROS, a collection of CUDA-accelerated computing packages and AI models built on the open source ROS 2 framework, designed to streamline the development of advanced AI robotics applications. Isaac Manipulator, built on Isaac ROS, enables the development of AI-powered robotic arms that can seamlessly perceive, understand, and interact with their environments. Isaac Perceptor facilitates the rapid development of advanced AMRs capable of operating in unstructured environments like warehouses or factories. For humanoid robotics, NVIDIA Isaac GR00T serves as a research initiative and development platform for general-purpose robot foundation models and data pipelines.
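    Since Isaac ROS builds on ROS 2, a minimal standard ROS 2 (rclpy) node gives a feel for the underlying programming model. The example below is generic ROS 2 code, not an Isaac ROS package; the node and topic names are arbitrary.
```python
# Minimal ROS 2 (rclpy) publisher node. Isaac ROS builds on ROS 2, so standard
# code like this illustrates the underlying programming model; it is not an
# Isaac ROS package, and the node/topic names are arbitrary.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class StatusPublisher(Node):
    def __init__(self):
        super().__init__("status_publisher")
        self.pub = self.create_publisher(String, "robot/status", 10)
        self.timer = self.create_timer(0.5, self.tick)   # publish at 2 Hz

    def tick(self):
        msg = String()
        msg.data = "system nominal"
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = StatusPublisher()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == "__main__":
    main()
```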
  • 35
    iZND GPS Tracking Solution
    iZND GPS Server is a leading web-based GPS tracking platform successfully used by transportation companies, police and fire departments, utility companies, service organizations, and other businesses with mobile workers around the world. iZND GPS Tracking Server scales from small installations to thousands of vehicles and operators. The platform supports hundreds of different tracking devices and a large number of maps and languages. Big companies and tracking partners can create separate tracking applications for different departments or customers on a single iZND GPS Server installation. The platform offers real-time GPS tracking of multiple vehicles in a web browser. A live view of the entire vehicle fleet gives you control: the best vehicle can be dispatched for a job, and specific vehicles can be followed when required. Alerts and notifications are instantly available in the application.
    Starting Price: $30 per month
  • 36
    Hivemind (Shield AI)
    We’re building Hivemind, our AI pilot, to enable swarms of drones and aircraft to operate autonomously without GPS, communications, or a human pilot. Our mission is to protect service members and civilians with intelligent systems. Hivemind is more than just preset behaviors and waypoints; like a human pilot, it reads and reacts to the battlefield and does not require GPS, waypoints, or prior knowledge to make decisions. It is the first and only fully autonomous AI pilot deployed in combat since 2018. From indoor building clearance with quadcopters to integrated air defense breach with fixed-wing drones and F-16 dogfights, Hivemind™ learns and autonomously executes missions. A new generation of aircraft flown by Hivemind provides persistent aerial dominance across sea, air, and land, on the tactical edge.
  • 37
    vLLM
    vLLM is a high-performance library designed to facilitate efficient inference and serving of large language models (LLMs). Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry. It offers state-of-the-art serving throughput by efficiently managing attention key and value memory through its PagedAttention mechanism. It supports continuous batching of incoming requests and utilizes optimized CUDA kernels, including integration with FlashAttention and FlashInfer, to enhance model execution speed. Additionally, vLLM provides quantization support for GPTQ, AWQ, INT4, INT8, and FP8, as well as speculative decoding capabilities. Users benefit from seamless integration with popular Hugging Face models, support for various decoding algorithms such as parallel sampling and beam search, and compatibility with NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, and more.
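    A minimal offline-inference example using vLLM's Python API is shown below. The model name is only an example and assumes the weights can be fetched from Hugging Face; any supported model identifier can be substituted.
```python
from vllm import LLM, SamplingParams

# Minimal vLLM offline inference example. The model name is only an example
# and assumes the weights are available from Hugging Face; any supported
# model identifier can be substituted.
prompts = [
    "Autonomous vehicles rely on",
    "The key idea behind PagedAttention is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```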
  • 38
    CUDA (NVIDIA)
    CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs. In GPU-accelerated applications, the sequential part of the workload runs on the CPU – which is optimized for single-threaded performance – while the compute intensive portion of the application runs on thousands of GPU cores in parallel. When using CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB and express parallelism through extensions in the form of a few basic keywords. The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, development tools and the CUDA runtime.
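    The paragraph above notes that CUDA parallelism can be expressed from Python among other languages. The sketch below uses Numba's CUDA bindings (a separate open source package, not the CUDA Toolkit itself) to show the kernel-and-thread-grid model with a simple vector addition; it assumes an NVIDIA GPU and the numba package are available.
```python
import numpy as np
from numba import cuda

# Illustrates the CUDA execution model from Python via Numba's CUDA bindings
# (a separate open source package, not part of the CUDA Toolkit itself).
# Each GPU thread adds one element of the two input vectors.

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)                      # global thread index
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

d_a = cuda.to_device(a)                   # explicit host-to-device copies
d_b = cuda.to_device(b)
d_out = cuda.device_array_like(d_a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](d_a, d_b, d_out)

out = d_out.copy_to_host()
assert np.allclose(out, a + b)
print("GPU vector add OK")
```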
  • 39
    NVIDIA Isaac Sim
    NVIDIA Isaac Sim is an open source reference robotics simulation application built on NVIDIA Omniverse, enabling developers to design, simulate, test, and train AI-driven robots in physically realistic virtual environments. It is built atop Universal Scene Description (OpenUSD), offering full extensibility so developers can create custom simulators or seamlessly integrate Isaac Sim's capabilities into existing validation pipelines. The platform supports three essential workflows: large-scale synthetic data generation for training foundation models with photorealistic rendering and automatic ground truth labeling; software-in-the-loop testing, which connects actual robot software with simulated hardware to validate control and perception systems; and robot learning through NVIDIA’s Isaac Lab, which accelerates training of behaviors in simulation before real-world deployment. Isaac Sim delivers GPU-accelerated physics (via NVIDIA PhysX) and RTX-enabled sensor simulation.
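    Because Isaac Sim is built on OpenUSD, a tiny scene authored with the open source pxr Python API (pip install usd-core) shows the scene-description layer that Isaac Sim extends. This is generic USD code, not Isaac Sim-specific; the prim names are arbitrary.
```python
from pxr import Gf, Usd, UsdGeom

# Authors a trivial OpenUSD stage with the open source pxr API (pip install usd-core).
# Isaac Sim builds on this scene-description layer; the snippet is generic USD,
# not Isaac Sim-specific code, and the prim names are arbitrary.
stage = Usd.Stage.CreateNew("warehouse_stub.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

UsdGeom.Xform.Define(stage, "/World")
robot = UsdGeom.Cube.Define(stage, "/World/RobotPlaceholder")
robot.AddTranslateOp().Set(Gf.Vec3d(1.0, 2.0, 0.5))   # place the placeholder cube

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```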
  • 40
    NVIDIA Picasso
    NVIDIA Picasso is a cloud service for building generative AI-powered visual applications. Enterprises, software creators, and service providers can run inference on their models, train NVIDIA Edify foundation models on proprietary data, or start from pre-trained models to generate image, video, and 3D content from text prompts. The Picasso service is fully optimized for GPUs and streamlines training, optimization, and inference on NVIDIA DGX Cloud. Organizations and developers can train NVIDIA’s Edify models on their proprietary data or get started with models pre-trained with our premier partners. Capabilities include an expert denoising network to generate photorealistic 4K images, temporal layers and a novel video denoiser that generate high-fidelity videos with temporal consistency, and a novel optimization framework for generating 3D objects and meshes with high-quality geometry.
  • 41
    VMware Private AI Foundation
    VMware Private AI Foundation is a joint, on‑premises generative AI platform built on VMware Cloud Foundation (VCF) that enables enterprises to run retrieval‑augmented generation workflows, fine‑tune and customize large language models, and perform inference in their own data centers, addressing privacy, choice, cost, performance, and compliance requirements. It integrates the Private AI Package (including vector databases, deep learning VMs, data indexing and retrieval services, and AI agent‑builder tools) with NVIDIA AI Enterprise (comprising NVIDIA microservices like NIM, NVIDIA’s own LLMs, and third‑party/open source models from places like Hugging Face). It supports full GPU virtualization, monitoring, live migration, and efficient resource pooling on NVIDIA‑certified HGX servers with NVLink/NVSwitch acceleration. Deployable via GUI, CLI, and API, it offers unified management through self‑service provisioning, model store governance, and more.
  • 42
    Google Cloud AI Infrastructure
    Options for every business to train deep learning and machine learning models cost-effectively. AI accelerators for every use case, from low-cost inference to high-performance training. Simple to get started with a range of services for development and deployment. Tensor Processing Units (TPUs) are custom-built ASICs to train and execute deep neural networks. Train and run more powerful and accurate models cost-effectively with faster speed and scale. A range of NVIDIA GPUs to help with cost-effective inference or scale-up or scale-out training. Leverage RAPIDS and Spark with GPUs to execute deep learning. Run GPU workloads on Google Cloud, where you have access to industry-leading storage, networking, and data analytics technologies. Access CPU platforms when you start a VM instance on Compute Engine. Compute Engine offers a range of both Intel and AMD processors for your VMs.
  • 43
    NVIDIA Cosmos
    NVIDIA Cosmos is a developer-first platform of state-of-the-art generative World Foundation Models (WFMs), advanced video tokenizers, guardrails, and an accelerated data processing and curation pipeline designed to supercharge physical AI development. It enables developers working on autonomous vehicles, robotics, and video analytics AI agents to generate photorealistic, physics-aware synthetic video data, trained on an immense dataset including 20 million hours of real-world and simulated video, to rapidly simulate future scenarios, train world models, and fine-tune custom behaviors. It includes three core WFM types: Cosmos Predict, capable of generating up to 30 seconds of continuous video from multimodal inputs; Cosmos Transfer, which adapts simulations across environments and lighting for versatile domain augmentation; and Cosmos Reason, a vision-language model that applies structured reasoning to interpret spatial-temporal data for planning and decision-making.
  • 44
    NVIDIA DeepStream SDK
    NVIDIA's DeepStream SDK is a comprehensive streaming analytics toolkit based on GStreamer, designed for AI-based multi-sensor processing, including video, audio, and image understanding. It enables developers to create stream-processing pipelines that incorporate neural networks and complex tasks like tracking, video encoding/decoding, and rendering, facilitating real-time analytics on various data types. DeepStream is integral to NVIDIA Metropolis, a platform for building end-to-end services that transform pixel and sensor data into actionable insights. The SDK offers a powerful and flexible environment suitable for a wide range of industries, supporting multiple programming options such as C/C++, Python, and Graph Composer's intuitive UI. It allows for real-time insights by understanding rich, multi-modal sensor data at the edge and supports managed AI services through deployment in cloud-native containers orchestrated with Kubernetes.
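    As a rough sketch of how a DeepStream pipeline is assembled on top of GStreamer, the snippet below parses and runs a pipeline description from Python. The DeepStream-specific elements (nvstreammux, nvinfer, nvvideoconvert, nvdsosd) and the file paths are assumptions that require a DeepStream installation; this is not a complete reference application.
```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Rough sketch of launching a GStreamer pipeline from Python. The DeepStream
# elements (nvstreammux, nvinfer, nvvideoconvert, nvdsosd) and the config/file
# paths are assumptions that require a DeepStream install; paths are placeholders.
Gst.init(None)

pipeline_desc = (
    "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=detector_config.txt ! "
    "nvvideoconvert ! nvdsosd ! fakesink"
)
pipeline = Gst.parse_launch(pipeline_desc)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error occurs, then shut down cleanly.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```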
  • 45
    NVIDIA Iray
    NVIDIA® Iray® is an intuitive physically based rendering technology that generates photorealistic imagery for interactive and batch rendering workflows. Leveraging AI denoising, CUDA®, NVIDIA OptiX™, and Material Definition Language (MDL), Iray delivers world-class performance and impeccable visuals—in record time—when paired with the newest NVIDIA RTX™-based hardware. The latest version of Iray adds support for RTX, which includes dedicated ray-tracing-acceleration hardware support (RT Cores) and an advanced acceleration structure to enable real-time ray tracing in your graphics applications. In the 2019 release of the Iray SDK, all render modes utilize NVIDIA RTX technology. In combination with AI denoising, this enables you to create photorealistic rendering in seconds instead of minutes. Using Tensor Cores on the newest NVIDIA hardware brings the power of deep learning to both final-frame and interactive photorealistic renderings.
  • 46
    NVIDIA Holoscan
    NVIDIA® Holoscan is a domain-agnostic AI computing platform that delivers the accelerated, full-stack infrastructure required for scalable, software-defined, and real-time processing of streaming data running at the edge or in the cloud. Holoscan supports a camera serial interface and front-end sensors for video capture, ultrasound research, data acquisition, and connection to legacy medical devices. Use the NVIDIA Holoscan SDK’s data transfer latency tool to measure complete, end-to-end latency for video processing applications. Access AI reference pipelines for radar, high-energy light sources, endoscopy, ultrasound, and other streaming video applications. NVIDIA Holoscan includes optimized libraries for network connectivity, data processing, and AI, as well as examples to create and run low-latency data-streaming applications using either C++, Python, or Graph Composer.
  • 47
    NVIDIA DGX Cloud Lepton
    NVIDIA DGX Cloud Lepton is an AI platform that connects developers to a global network of GPU compute across multiple cloud providers through a single platform. It offers a unified experience to discover and utilize GPU resources, along with integrated AI services to streamline the deployment lifecycle across multiple clouds. Developers can start building with instant access to NVIDIA’s accelerated APIs, including serverless endpoints, prebuilt NVIDIA Blueprints, and GPU-backed compute. When it’s time to scale, DGX Cloud Lepton powers seamless customization and deployment across a global network of GPU cloud providers. It enables frictionless deployment across any GPU cloud, allowing AI applications to be deployed across multi-cloud and hybrid environments with minimal operational burden, leveraging integrated services for inference, testing, and training workloads.
  • 48
    NVIDIA Blueprints
    NVIDIA Blueprints are reference workflows for agentic and generative AI use cases. Enterprises can build and operationalize custom AI applications, creating data-driven AI flywheels, using Blueprints along with NVIDIA AI and Omniverse libraries, SDKs, and microservices. Blueprints also include partner microservices, reference code, customization documentation, and a Helm chart for deployment at scale. With NVIDIA Blueprints, developers benefit from a unified experience across the NVIDIA stack, from cloud and data centers to NVIDIA RTX AI PCs and workstations. Use NVIDIA Blueprints to create AI agents that use sophisticated reasoning and iterative planning to solve complex problems. Check out new NVIDIA Blueprints, which equip millions of enterprise developers with reference workflows for building and deploying generative AI applications. Connect AI applications to enterprise data using industry-leading embedding and reranking models for information retrieval at scale.
  • 49
    Tencent Cloud GPU Service
    Cloud GPU Service is an elastic computing service that provides GPU computing power with high-performance parallel computing capabilities. As a powerful tool at the IaaS layer, it delivers high computing power for deep learning training, scientific computing, graphics and image processing, video encoding and decoding, and other highly intensive workloads. Improve your business efficiency and competitiveness with high-performance parallel computing capabilities. Set up your deployment environment quickly with auto-installed GPU drivers, CUDA, and cuDNN and preinstalled driver images. Accelerate distributed training and inference by using TACO Kit, an out-of-the-box computing acceleration engine provided by Tencent Cloud.
    Starting Price: $0.204/hour
  • 50
    NVIDIA Jetson
    NVIDIA's Jetson platform is a leading solution for embedded AI computing, utilized by professional developers to create breakthrough AI products across various industries, as well as by students and enthusiasts for hands-on AI learning and innovative projects. The platform comprises small, power-efficient production modules and developer kits, offering a comprehensive AI software stack for high-performance acceleration. This enables the deployment of generative AI at the edge, supporting applications like NVIDIA Metropolis and the Isaac platform. The Jetson family includes a range of modules tailored to different performance and power efficiency needs, such as the Jetson Nano, Jetson TX2, Jetson Xavier NX, and the Jetson Orin series. Each module is designed to meet specific AI computing requirements, from entry-level projects to advanced robotics and industrial applications.