Alternatives to NVIDIA Cloud Functions

Compare NVIDIA Cloud Functions alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to NVIDIA Cloud Functions in 2026. Compare features, ratings, user reviews, pricing, and more from NVIDIA Cloud Functions competitors and alternatives in order to make an informed decision for your business.

  • 1
    Google Cloud Run
    Cloud Run is a fully managed compute platform that runs your code in a container directly on top of Google's scalable infrastructure. It is intentionally designed to make developers more productive: you focus on writing code in your favorite language (Go, Python, Java, Ruby, Node.js, and more), and Cloud Run takes care of operating your service. It abstracts away all infrastructure management, automatically scaling up and down from zero almost instantaneously depending on traffic, so you can build applications with your favorite dependencies and tools and deploy them in seconds. Cloud Run charges you only for the exact resources you use, making app development and deployment simpler.
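Cloud Run runs any container that serves HTTP on the port supplied in the `PORT` environment variable. A minimal sketch using only the Python standard library (the greeting text and the 8080 fallback port are illustrative, not part of the product docs):

```python
import os
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A minimal WSGI app; Cloud Run routes incoming HTTP requests
    # to whatever server the container starts.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from Cloud Run\n"]

if __name__ == "__main__":
    # Cloud Run injects the port to listen on via the PORT env var.
    port = int(os.environ.get("PORT", 8080))
    make_server("", port, app).serve_forever()
```

Packaged in a container image, this is all Cloud Run needs; scaling and TLS are handled by the platform.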
  • 2
    RunPod
    RunPod offers a cloud-based platform designed for running AI workloads, focusing on providing scalable, on-demand GPU resources to accelerate machine learning (ML) model training and inference. With its diverse selection of powerful GPUs like the NVIDIA A100, RTX 3090, and H100, RunPod supports a wide range of AI applications, from deep learning to data processing. The platform is designed to minimize startup time, providing near-instant access to GPU pods, and ensures scalability with autoscaling capabilities for real-time AI model deployment. RunPod also offers serverless functionality, job queuing, and real-time analytics, making it an ideal solution for businesses needing flexible, cost-effective GPU resources without the hassle of managing infrastructure.
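RunPod's serverless workers are plain Python handlers registered with the RunPod SDK. The payload shape (`job["input"]`) and the `runpod.serverless.start` call below follow the SDK's documented pattern, but verify them against the current RunPod docs — this is a hedged sketch, and the uppercasing is just a stand-in for model inference:

```python
def handler(job):
    # RunPod passes the request payload under job["input"].
    prompt = job["input"].get("prompt", "")
    # Trivial stand-in for running a model on the prompt.
    return {"output": prompt.upper()}

if __name__ == "__main__":
    import runpod  # assumes the runpod SDK is installed
    runpod.serverless.start({"handler": handler})
```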
  • 3
    JFrog Artifactory
    The industry-standard universal binary repository manager. Supports all major package types (over 27 and growing), including Maven, npm, Python, NuGet, Gradle, Go, Helm, Docker, and Kubernetes, and integrates with the leading CI servers and DevOps tools you already use. Additional capabilities include: high availability that scales to infinity with active/active clustering of your DevOps environment and grows with your business; on-prem, cloud, hybrid, or multi-cloud deployment; and a de facto Kubernetes registry managing application packages, operating-system component dependencies, open source libraries, Docker containers, and Helm charts with full visibility of all dependencies. Compatible with a growing list of Kubernetes cluster providers.
  • 4
    AWS Lambda
    Run code without thinking about servers, and pay only for the compute time you consume. AWS Lambda lets you run code without provisioning or managing servers, for virtually any type of application or backend service, all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale it with high availability. You can set up your code to trigger automatically from other AWS services, or call it directly from any web or mobile app. AWS Lambda scales your application automatically by running code in response to each trigger; your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.
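In Python, a Lambda function is just a handler that receives the triggering event and an invocation context. A minimal sketch (the `name` key in the event is an illustrative assumption about the trigger's payload):

```python
import json

def lambda_handler(event, context):
    # Lambda invokes this entry point with the trigger's event payload
    # and a context object describing the invocation.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The statusCode/body shape shown here is what an API Gateway or function URL trigger expects; other triggers can return any JSON-serializable value.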
  • 5
    Red Hat OpenShift
    The Kubernetes platform for big ideas. Empower developers to innovate and ship faster with the leading hybrid cloud, enterprise container platform. Red Hat OpenShift offers automated installation, upgrades, and lifecycle management throughout the container stack—the operating system, Kubernetes and cluster services, and applications—on any cloud. Red Hat OpenShift helps teams build with speed, agility, confidence, and choice. Code in production mode anywhere you choose to build. Get back to doing work that matters. Red Hat OpenShift is focused on security at every level of the container stack and throughout the application lifecycle. It includes long-term, enterprise support from one of the leading Kubernetes contributors and open source software companies. Support the most demanding workloads including AI/ML, Java, data analytics, databases, and more. Automate deployment and life-cycle management with our vast ecosystem of technology partners.
  • 6
    IronFunctions
    IronFunctions is an open source serverless platform, also known as a Functions-as-a-Service (FaaS) platform, that allows developers to write functions in any language and deploy them across various environments, including public, private, and hybrid clouds. It supports AWS Lambda function formats, enabling seamless import and execution of existing Lambda functions. Designed for both developers and operators, IronFunctions simplifies coding by allowing the creation of small, focused functions without the need to manage the underlying infrastructure. Operators benefit from efficient resource utilization, as functions consume resources only during execution, and the platform's scalability is managed by adding more IronFunctions nodes as needed. It is built using Go and leverages container technologies to handle incoming workloads by spinning up new containers, processing the payloads, and returning responses.
  • 7
    NVIDIA DGX Cloud Serverless Inference
    NVIDIA DGX Cloud Serverless Inference is a high-performance, serverless AI inference solution that accelerates AI innovation with auto-scaling, cost-efficient GPU utilization, multi-cloud flexibility, and seamless scalability. With NVIDIA DGX Cloud Serverless Inference, you can scale down to zero instances during periods of inactivity to optimize resource utilization and reduce costs. There's no extra cost for cold-boot start times, and the system is optimized to minimize them. NVIDIA DGX Cloud Serverless Inference is powered by NVIDIA Cloud Functions (NVCF), which offers robust observability features. It allows you to integrate your preferred monitoring tools, such as Splunk, for comprehensive insights into your AI workloads. NVCF offers flexible deployment options for NIM microservices while allowing you to bring your own containers, models, and Helm charts.
  • 8
    JFrog Container Registry
    The world’s most advanced, powerful, hybrid Docker and Helm registry. Power your world of Docker without limits. The JFrog Container Registry is the most comprehensive and advanced registry in the market today, supporting Docker containers and Helm Chart repositories for your Kubernetes deployments. Use it as your single access point to manage and organize your Docker images, while avoiding Docker Hub throttling or retention issues. JFrog provides reliable, consistent, and efficient access to remote Docker container registries with integration to your build ecosystem. Develop and deploy your way. Supports your current and future business model with on-prem / self-hosted, hybrid, and multi-cloud environments on your choice of AWS, Microsoft Azure, and Google Cloud. Built on JFrog Artifactory’s proven track record of power, stability, and resilience to easily manage and deploy your Docker images and provide your DevOps teams with full control over access and permissions.
    Starting Price: $98 per month
  • 9
    IBM Cloud Functions
    Based on Apache OpenWhisk, IBM Cloud Functions is a polyglot functions-as-a-service (FaaS) programming platform for developing lightweight code that scalably executes on demand. IBM Cloud Functions offers access to the Apache OpenWhisk ecosystem, where anyone can contribute code. IBM Cloud Functions enables developers to build apps with action sequences that execute in response to events. IBM Cloud Functions makes cognitive analysis of application data inherent to your workflows. Costs increase only as you construct more OpenWhisk-intensive solutions or need to support larger workloads.
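An Apache OpenWhisk action (the model IBM Cloud Functions builds on) is a function named `main` that takes and returns JSON-serializable dicts. A minimal Python sketch:

```python
def main(params):
    # OpenWhisk passes invocation parameters as a dict and expects
    # a JSON-serializable dict in return.
    name = params.get("name", "stranger")
    return {"greeting": f"Hello, {name}"}
```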
  • 10
    Google Cloud Artifact Registry
    Artifact Registry is Google Cloud’s unified, fully managed package and container registry designed for high-performance artifact storage and dependency management. It centralizes hosting of container images (Docker/OCI), Helm charts, language packages (Java/Maven, Node.js/npm, Python), and OS packages, offering fast, scalable, reliable, and secure handling with built-in vulnerability scanning and IAM-based access control. Integrated seamlessly with Google Cloud CI/CD tools like Cloud Build, Cloud Run, GKE, Compute Engine, and App Engine, it supports regional and virtual repositories with granular security via VPC Service Controls and customer-managed encryption keys. Developers benefit from standardized Docker Registry API support, comprehensive REST/RPC interfaces, and migration paths from Container Registry. Documentation, updated daily, includes quickstarts, repository management, access configuration, observability tools, and deep-dive guides.
  • 11
    KubeArmor (AccuKnox)
    KubeArmor is a cloud-native runtime security enforcement engine designed for Kubernetes workloads, containers, and virtual machines. It leverages eBPF and Linux Security Modules (LSMs) like AppArmor and SELinux to preemptively harden workloads and prevent attacks without modifying pods or containers. KubeArmor enforces real-time policy-based controls on process behavior, file access, networking, and resource usage. It simplifies complex security settings by providing Kubernetes-native policy management and detailed policy violation logging. Installation is straightforward via Helm charts, and it integrates seamlessly with multiple cloud marketplaces. KubeArmor’s proactive inline mitigation approach improves security beyond traditional post-attack responses.
  • 12
    OpenFaaS
    Serverless functions, made simple. OpenFaaS® makes it simple to deploy both functions and existing code to Kubernetes. Avoid lock-in through the use of Docker and run on any public or private cloud. Build both microservices and functions in any language, including legacy code and binaries. Auto-scale on demand, or to zero when idle. Bring your laptop, your own on-prem hardware, or create a cluster in the cloud, and let Kubernetes do the heavy lifting, enabling you to build a scalable, fault-tolerant, event-driven serverless platform for your applications. You can try out OpenFaaS in 60 seconds, or write and deploy your first Python function in around 10-15 minutes. From there you can take the OpenFaaS workshop, a series of tried-and-tested self-paced labs that teach you everything you need to know about functions, and more. An ecosystem for sharing, reusing, and collaborating on functions helps reduce boilerplate code, and you can share code in the templates store.
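In OpenFaaS's python3 template, a function is a module exposing `handle`, which receives the raw request body and returns the response. A minimal sketch (the string reversal is purely illustrative):

```python
def handle(req):
    # OpenFaaS passes the raw request body as a string; whatever
    # this function returns becomes the HTTP response body.
    return req[::-1]
```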
  • 13
    Azure Web App for Containers
    It has never been easier to deploy container-based web apps. Just pull container images from Docker Hub or a private Azure Container Registry, and Web App for Containers will deploy the containerized app with your preferred dependencies to production in seconds. The platform automatically takes care of OS patching, capacity provisioning, and load balancing. Automatically scale vertically and horizontally based on application needs. Granular scaling rules are available to handle peaks in workload automatically while minimizing costs during off-peak times. Deploy data and host services across multiple locations with just a few mouse clicks.
  • 14
    Azure Functions
    Develop more efficiently with Functions, an event-driven serverless compute platform that can also solve complex orchestration problems. Build and debug locally without additional setup, deploy and operate at scale in the cloud, and integrate services using triggers and bindings. End-to-end development experience with integrated tools and built-in DevOps capabilities. Integrated programming model to respond to events and seamlessly connect to other services. Implement a variety of functions and scenarios, such as web apps and APIs with .NET, Node.js, or Java; machine learning workflows with Python; and cloud automation with PowerShell. Get a complete serverless application development experience—from building and debugging locally to deploying and monitoring in the cloud.
  • 15
    Movestax
    Movestax revolutionizes cloud infrastructure with a serverless-first platform for builders. From app deployment to serverless functions, databases, and authentication, Movestax helps you build, scale, and automate without the complexity of traditional cloud providers. Whether you’re just starting out or scaling fast, Movestax offers the services you need to grow. Deploy frontend and backend applications instantly, with integrated CI/CD. Fully managed, scalable PostgreSQL, MySQL, MongoDB, and Redis that just work. Create sophisticated workflows and integrations directly within your cloud infrastructure. Run scalable serverless functions, automating tasks without managing servers. Simplify user management with Movestax’s built-in authentication system. Access pre-built APIs and foster community collaboration to accelerate development. Store and retrieve files and backups with secure, scalable object storage.
  • 16
    Oracle Cloud Functions
    Oracle Cloud Infrastructure (OCI) Functions is a serverless computing service that enables developers to create, run, and scale applications without managing infrastructure. Built on the open source Fn Project, it supports multiple programming languages, including Python, Go, Java, Node.js, and C#, allowing for flexible function development. Developers can deploy code directly, with OCI handling automatic provisioning and scaling of resources. It offers provisioned concurrency to maintain low-latency execution, ensuring functions are ready to accept calls instantly. A catalog of prebuilt functions is available, enabling rapid deployment of common tasks without the need to write code from scratch. Functions are packaged as Docker images, and advanced users can utilize Dockerfiles to customize runtime environments. Integration with Oracle Identity and Access Management provides fine-grained access control, while OCI Vault securely stores sensitive configuration data.
    Starting Price: $0.0000002 per month
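Functions built on the Fn Project receive an invocation context plus the request body as a stream. This hedged sketch shows only the handler logic; the FDK wiring (registering the handler with the Fn Python FDK) is omitted and should be taken from the Fn documentation:

```python
import io
import json

def handler(ctx, data: io.BytesIO = None):
    # Fn-style functions receive an invocation context and the
    # request body as a BytesIO stream.
    try:
        body = json.loads(data.getvalue()) if data else {}
    except ValueError:
        body = {}
    return {"message": f"Hello, {body.get('name', 'world')}"}
```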
  • 17
    Celest
    Write your backend like a Flutter app, and deploy it like magic. Celest is a cloud platform tailored for Flutter developers, enabling them to build, deploy, and manage backends entirely in Dart. By annotating any Dart function as a cloud function, developers can transform it into a serverless function, streamlining backend logic within the Flutter ecosystem. Celest seamlessly integrates with Drift schemas, automatically generating databases to simplify data management. Deployment is efficient, requiring just a single command; this process initializes Celest, migrates the project, warms up the engines, and deploys to the Celest cloud, culminating in a live project URL. The platform supports features like Dart cloud functions, Flutter on the server, server-side widgets (coming soon), hot reload, auto-serialization, and client generation. Celest is designed to enhance the development experience for Flutter developers.
  • 18
    OpenShift Cloud Functions
    Red Hat OpenShift Cloud Functions (OCF) is a Function-as-a-Service (FaaS) offering that can be deployed on OpenShift and is based on Knative, a FaaS project in the Kubernetes community. It enables developers to run code without needing to know anything about the underlying platform specifics. Developers need access to services quickly, and deploying backend services, platforms, or applications can be time-consuming and tedious. Developers should also not be restricted to any particular language or framework; FaaS lets them create business value quickly by scaling a small unit of custom code while relying on other third-party or backend services. Serverless is an architectural model that provides an event-driven way to implement distributed applications that auto-scale on demand.
  • 19
    Helm (The Linux Foundation)
    Helm helps you manage Kubernetes applications. Helm charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish, so start using Helm and stop the copy-and-paste. Charts describe even the most complex apps, provide repeatable application installation, and serve as a single point of authority. Take the pain out of updates with in-place upgrades and custom hooks. Charts are easy to version, share, and host on public or private servers. Use helm rollback to roll back to an older version of a release with ease. Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
  • 20
    Cloudflare Workers
    You write code. We handle the rest. Deploy serverless code instantly across the globe to give it exceptional performance, reliability, and scale. No more configuring auto-scaling, load balancers, or paying for capacity you don’t use. Traffic is automatically routed and load balanced across thousands of servers. Sleep well as your code scales effortlessly. Every deploy is made to a network of data centers running V8 isolates. Your code is powered by Cloudflare’s network which is milliseconds away from virtually every Internet user. Choose from a template in your language to kickstart building an app, creating a function, or writing an API. We have templates, tutorials, and a CLI to get you up and running in no time. Most serverless platforms experience a cold start every time you deploy or your service increases in popularity. Workers can run your code instantly, without cold starts. The first 100,000 requests each day are free and paid plans start at just $5/10 million requests.
    Starting Price: $5 per 10 million requests
  • 21
    Azure Container Registry
    Build, store, secure, scan, replicate, and manage container images and artifacts with a fully managed, geo-replicated instance of OCI distribution. Connect across environments, including Azure Kubernetes Service and Azure Red Hat OpenShift, and across Azure services like App Service, Machine Learning, and Batch. Geo-replication to efficiently manage a single registry across multiple regions. OCI artifact repository for adding helm charts, singularity support, and new OCI artifact-supported formats. Automated container building and patching including base image updates and task scheduling. Integrated security with Azure Active Directory (Azure AD) authentication, role-based access control, Docker content trust, and virtual network integration. Streamline building, testing, pushing, and deploying images to Azure with Azure Container Registry Tasks.
    Starting Price: $0.167 per day
  • 22
    Flux CD

    Flux CD

    Flux CD

    Flux is a set of continuous and progressive delivery solutions for Kubernetes that are open and extensible. The latest version of Flux brings many new features, making it more flexible and versatile. Flux is a CNCF Incubating project. Flux and Flagger deploy apps with canaries, feature flags, and A/B rollouts. Flux can also manage any Kubernetes resource. Infrastructure and workload dependency management are built-in. Flux enables application deployment (CD) and (with the help of Flagger) progressive delivery (PD) through automatic reconciliation. Flux can even push back to Git for you with automated container image updates to Git (image scanning and patching). Flux works with your Git providers (GitHub, GitLab, Bitbucket, can even use s3-compatible buckets as a source), all major container registries, and all CI workflow providers. Kustomize, Helm, RBAC, and policy-driven validation (OPA, Kyverno, admission controllers) so it simply falls into place.
  • 23
    Yandex Cloud Functions
    Run code as a function in a secure, fault-tolerant, and automatically scalable environment without creating or maintaining VMs. As the number of function calls increases, the service automatically creates additional instances of your function. All functions run in parallel. The runtime environment is hosted in three availability zones, ensuring availability even if one zone fails. Configure and prepare instances of functions always ready to process loads. This mode allows you to avoid cold starts and quickly process loads of any size. Give functions access to your VPC to accelerate interactions with private resources, database clusters, virtual machines, Kubernetes nodes, etc. Serverless Functions tracks and logs information about function calls and analyzes execution flow and performance. You can also describe logging mechanisms in your function code. Launch cloud functions in synchronized mode and delayed execution mode.
    Starting Price: $0.012240 per GB
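A Yandex Cloud Function in Python is an exported `handler(event, context)`; for an HTTP-triggered function the return value carries the status code and body. A sketch (the query-parameter echo is illustrative, and the event shape shown applies to HTTP triggers):

```python
def handler(event, context):
    # For HTTP triggers, query parameters arrive under
    # event["queryStringParameters"] (absent for other trigger types).
    params = (event or {}).get("queryStringParameters") or {}
    return {
        "statusCode": 200,
        "body": f"Hello, {params.get('name', 'world')}",
    }
```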
  • 24
    IBM Cloud Code Engine
    IBM Cloud® Code Engine is a fully managed, serverless platform. Bring your container images, batch jobs, or source code and let IBM Cloud Code Engine manage and secure the underlying infrastructure for you. There is no need to size, deploy, or scale container clusters yourself. And no networking skills are required either. IBM Cloud Code Engine will deploy, manage and autoscale it for you. No cluster administration, sizing, or over-provisioning worries. You pay only for what you actually use. Build great apps in the language of your choice, then deploy them in seconds on a serverless platform. No infrastructure management is needed. Cluster sizing, scaling and networking covered. Your apps are automatically secured with TLS and isolated from other workloads. Deploy and more securely integrate web apps, containers, batch jobs, and functions.
    Starting Price: $0.50 per 1 million HTTP requests
  • 25
    Google Cloud Container Security
    Secure your container environment on GCP, GKE, or Anthos. Containerization allows development teams to move fast, deploy software efficiently, and operate at an unprecedented scale. As enterprises create more containerized workloads, security must be integrated at each stage of the build-and-deploy life cycle. Infrastructure security means that your container management platform provides the right security features. Kubernetes includes security features to protect your identities, secrets, and network, and Google Kubernetes Engine uses native GCP functionality—like Cloud IAM, Cloud Audit Logging, and Virtual Private Clouds—and GKE-specific features like application layer secrets encryption and workload identity to bring the best of Google security to your workloads. Securing the software supply chain means that container images are safe to deploy. This is how you make sure your container images are vulnerability free and that the images you build aren't modified.
  • 26
    Beam Cloud
    Beam is a serverless GPU platform designed for developers to deploy AI workloads with minimal configuration and rapid iteration. It enables running custom models with sub-second container starts and zero idle GPU costs, allowing users to bring their code while Beam manages the infrastructure. It supports launching containers in 200ms using a custom runc runtime, facilitating parallelization and concurrency by fanning out workloads to hundreds of containers. Beam offers a first-class developer experience with features like hot-reloading, webhooks, and scheduled jobs, and supports scale-to-zero workloads by default. It provides volume storage options, GPU support, including running on Beam's cloud with GPUs like 4090s and H100s or bringing your own, and Python-native deployment without the need for YAML or config files.
  • 27
    AtomicWP Workload Protection
    AtomicWP Workload Security helps secure workloads in a variety of environments while enhancing security. It meets virtually all cloud workload protection and compliance requirements in a single lightweight agent. AtomicWP secures workloads running in Amazon AWS, Google Cloud Platform (GCP), Microsoft Azure, IBM Cloud, or any hybrid environment, and protects both VM-based and container-based workloads. Key capabilities: comprehensive security in a single lightweight agent, automated cloud compliance, automated intrusion prevention and adaptive security, and reduced cloud security costs.
  • 28
    Cloud Foundry
    Cloud Foundry makes it faster and easier to build, test, deploy and scale applications, providing a choice of clouds, developer frameworks, and application services. It is an open source project and is available through a variety of private cloud distributions and public cloud instances. Cloud Foundry has a container-based architecture that runs apps in any programming language. Deploy apps to CF using your existing tools and with zero modification to the code. Instantiate, deploy, and manage high-availability Kubernetes clusters with CF BOSH on any cloud. By decoupling applications from infrastructure, you can make individual decisions about where to host workloads – on premise, in public clouds, or in managed infrastructures – and move those workloads as necessary in minutes, with no changes to the app.
  • 29
    Alibaba Function Compute
    Alibaba Cloud Function Compute is a fully managed, event-driven compute service. Function Compute allows you to focus on writing and uploading code without having to manage infrastructure such as servers. Function Compute provides compute resources to run code flexibly and reliably. Additionally, Function Compute provides a generous amount of free resources. No fees are incurred for up to 1,000,000 invocations and 400,000 CU-second compute resources per month.
  • 30
    Flatcar Container Linux
    The introduction of container-based infrastructure was a paradigm shift. A Container-optimized Linux distribution is the best foundation for cloud native infrastructure. A minimal OS image only includes the tools needed to run containers. No package manager, no configuration drift. Delivering the OS on an immutable filesystem eliminates a whole category of security vulnerabilities. Automated atomic updates mean you get the latest security updates and open source technologies. Flatcar Container Linux is designed from the ground up for running container workloads. It fully embraces the container paradigm, including only what is required to run containers. Your immutable infrastructure deserves an immutable Linux OS. With Flatcar Container Linux, you manage your infrastructure, not your configuration.
  • 31
    Container Service for Kubernetes (ACK)
    Container Service for Kubernetes (ACK) from Alibaba Cloud is a fully managed service. ACK is integrated with services such as virtualization, storage, network, and security, providing users with high-performance, scalable Kubernetes environments for containerized applications. Alibaba Cloud is a Kubernetes Certified Service Provider (KCSP), and ACK is certified by the Certified Kubernetes Conformance Program, which ensures a consistent Kubernetes experience and workload portability. ACK provides deep, enterprise-class cloud-native capabilities, ensures end-to-end application security with fine-grained access control, allows you to quickly create Kubernetes clusters, and provides container-based management of applications throughout the application lifecycle.
  • 32
    Tencent Cloud Serverless Cloud Function
    By just writing the most important "core code" without concern for peripheral components, you can greatly reduce the complexity of the service architecture. SCF can scale up and down based on the number of requests with no manual configuration required. Regardless of the volume of requests to your application at any given time, SCF can automatically arrange suitable computing resources to meet business needs. If an available zone is down due to a natural disaster or power failure, SCF can automatically utilize the infrastructure of other available zones for code execution, eliminating the risk of service interruptions inherent in single-availability zone operations. Event-triggered workloads can be achieved using SCF that leverages different cloud services to meet the requirements of different business scenarios and further strengthen your service architecture.
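In Python, a Tencent SCF function exposes `main_handler(event, context)`, where `event` describes the trigger. A minimal sketch of the "core code" idea (the `name` key is an illustrative assumption about the event payload):

```python
def main_handler(event, context):
    # SCF invokes this entry point with the trigger event; the
    # return value is the function's response.
    name = (event or {}).get("name", "world")
    return {"message": f"Hello, {name}"}
```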
  • 33
    EdgeWorkers
    Akamai's EdgeWorkers is a serverless computing platform that enables developers to deploy custom JavaScript code at the edge, optimizing user experiences by executing logic closer to end users. This approach reduces latency by eliminating high-latency calls to origin servers, enhancing performance, and improving security by moving exposed client-side logic to the edge. EdgeWorkers supports various use cases, including A/B testing, geolocation-based content delivery, data protection, privacy compliance, dynamic website personalization, traffic management, and device-based personalization. Developers can write JavaScript code and deploy it via API, CLI, or GUI, leveraging Akamai's scalable architecture that automatically manages infrastructure during growth or traffic spikes. The platform integrates with Akamai's EdgeKV, a distributed key-value store, enabling data-driven applications with low-latency data access.
  • 34
    Aqua (Aqua Security)
    Full lifecycle security for container-based and serverless applications, from your CI/CD pipeline to runtime production environments. Aqua runs on-prem or in the cloud, at any scale. Prevent attacks before they happen, and stop them when they do. Aqua Security’s Team Nautilus focuses on uncovering new threats and attacks that target the cloud native stack. By researching emerging cloud threats, we aspire to create methods and tools that enable organizations to stop cloud native attacks. Aqua protects applications from development to production, across VMs, containers, and serverless workloads, up and down the stack. Release and update software at DevOps speed with security automation. Detect vulnerabilities and malware early, fix them fast, and allow only safe artifacts to progress through your CI/CD pipeline. Protect cloud native applications by minimizing their attack surface and detecting vulnerabilities, embedded secrets, and other security issues during the development cycle.
  • 35
    Azure Container Instances
    Develop apps fast without managing virtual machines or having to learn new tools—it's just your application, in a container, running in the cloud. By running your workloads in Azure Container Instances (ACI), you can focus on designing and building your applications instead of managing the infrastructure that runs them. Deploy containers to the cloud with unprecedented simplicity and speed—with a single command. Use ACI to provision additional compute for demanding workloads whenever you need. For example, with the Virtual Kubelet, use ACI to elastically burst from your Azure Kubernetes Service (AKS) cluster when traffic comes in spikes. Gain the security of virtual machines for your container workloads, while preserving the efficiency of lightweight containers. ACI provides hypervisor isolation for each container group to ensure containers run in isolation without sharing a kernel.
  • 36
    Oracle Cloud Infrastructure Compute
    Oracle Cloud Infrastructure provides fast, flexible, and affordable compute capacity to fit any workload need, from performant bare metal servers and VMs to lightweight containers. OCI Compute provides uniquely flexible VM and bare metal instances for optimal price-performance. Select exactly the number of cores and the amount of memory your applications need, delivering high performance for enterprise workloads. Simplify application development with serverless computing; your choice of technologies includes Kubernetes and containers. NVIDIA GPUs are available for machine learning, scientific visualization, and other graphics processing, along with capabilities such as RDMA, high-performance storage, and network traffic isolation. Oracle Cloud Infrastructure consistently delivers better price-performance than other cloud providers. Virtual machine (VM) shapes offer customizable core and memory combinations, so customers can optimize costs by choosing a specific number of cores.
  • 37
    openSUSE MicroOS
    A microservice OS providing transactional (atomic) updates on top of a read-only Btrfs root filesystem. Designed to host container workloads with automated administration and patching. Installing openSUSE MicroOS gives you a quick, small environment for deploying containers, or any other workload that benefits from transactional updates. As a rolling-release distribution, the software is always up to date. MicroOS offers an offline image; the main difference between the offline and self-install/raw images is that the offline image has an installer, while the raw and self-install images allow customization via combustion, or manually after the image is written to disk. There is an option for a real-time kernel. Try MicroOS in VMs running on either Xen or KVM, or use the preconfigured image together with the combustion functionality on a Raspberry Pi or other system-on-chip hardware.
  • 38
    Rowy

    Manage your database in a spreadsheet UI and build powerful, scalable backend cloud functions without leaving your browser. Start like no-code, extend with code.
    Starting Price: $12 per seat per month
  • 39
    KubeVirt

    KubeVirt technology addresses the needs of development teams that have adopted or want to adopt Kubernetes but possess existing Virtual Machine-based workloads that cannot be easily containerized. More specifically, the technology provides a unified development platform where developers can build, modify, and deploy applications residing in both application containers and virtual machines in a common, shared environment. The benefits are broad and significant. Teams with a reliance on existing virtual machine-based workloads are empowered to rapidly containerize applications. With virtualized workloads placed directly in development workflows, teams can decompose them over time while still leveraging the remaining virtualized components as they see fit. Combine existing virtualized workloads with new container workloads on one platform. Support development of new microservice applications in containers that interact with existing virtualized applications.
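    As a sketch of how a VM sits alongside containers in the same cluster, here is a hypothetical KubeVirt VirtualMachine manifest (the name, memory request, and container-disk image are illustrative, not from the source):

    ```yaml
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: legacy-app-vm        # illustrative name
    spec:
      running: true
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 1Gi     # illustrative sizing
          volumes:
            - name: rootdisk
              containerDisk:
                image: quay.io/containerdisks/fedora:latest
    ```

    Once applied with `kubectl apply -f`, the VM is managed with the same tooling and workflows as the cluster's container workloads.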
  • 40
    F5 BIG-IP Container Ingress Services
    Organizations are adopting containerized environments to speed app development. But these apps still need services, such as routing, SSL offload, scale, and security. F5 Container Ingress Services makes it easy to deliver advanced application services to your container deployments, enabling Ingress-controlled HTTP routing, load balancing, and application delivery performance, as well as robust security services. Container Ingress Services easily integrates BIG-IP solutions with native container environments, such as Kubernetes, and PaaS container orchestration and management systems, such as Red Hat OpenShift. Scale apps to meet container workloads and enable security services to protect container data. Container Ingress Services enables self-service app performance and security services within your orchestration by integrating BIG-IP platforms with your container environment.
  • 41
    Apprenda

    Apprenda Cloud Platform empowers enterprise IT to create a Kubernetes-enabled shared service on the infrastructures of their choice and offer it to developers across business units. ACP supports your entire custom application portfolio. Rapidly build, deploy, run, and manage cloud-native, microservices, and container-based .NET and Java applications or modernize traditional workloads. ACP gives your developers self-service access to the tools they need to rapidly build applications, while IT operators can easily orchestrate the environments and workflows. Enterprise IT becomes a true service provider. ACP is a single platform spanning your multiple data centers and clouds. Run ACP on-premise or consume it as a managed service on the public cloud, both with the assurance of complete infrastructure independence. ACP enables policy-driven control over all of your application workloads' infrastructure utilization and DevOps processes.
  • 42
    Oracle Cloud Container Registry
    Oracle Cloud Infrastructure Container Registry is an open standards-based, Oracle-managed Docker registry service for securely storing and sharing container images. Engineers can easily push and pull Docker images with the familiar Docker Command Line Interface (CLI) and API. To support container lifecycles, Registry works with Container Engine for Kubernetes, Identity and Access Management (IAM), Visual Builder Studio, and third-party developer and DevOps tools. Work with Docker images and container repositories using familiar Docker CLI commands and Docker HTTP API V2. Oracle takes care of operating and patching the service, so that developers can focus on building and deploying containerized applications. Built using object storage, Container Registry provides data durability and high service availability with automatic replication across fault domains. Oracle does not charge separately for the service. Users pay only for the associated storage and network resources they consume.
  • 43
    Apache OpenWhisk

    The Apache Software Foundation

    Apache OpenWhisk is an open source, distributed Serverless platform that executes functions (fx) in response to events at any scale. OpenWhisk manages the infrastructure, servers, and scaling using Docker containers so you can focus on building amazing and efficient applications. The OpenWhisk platform supports a programming model in which developers write functional logic (called Actions), in any supported programming language, that can be dynamically scheduled and run in response to associated events (via Triggers) from external sources (Feeds) or from HTTP requests. The project includes a REST API-based Command Line Interface (CLI) along with other tooling to support packaging, catalog services, and many popular container deployment options. Since Apache OpenWhisk builds its components using containers, it easily supports many deployment options both locally and within Cloud infrastructures, including many of today's popular Container frameworks.
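    The Action model described above can be sketched as a minimal Python action. OpenWhisk invokes a function named `main` with the event's parameters as a dict and expects a JSON-serializable dict back (the greeting logic here is a hypothetical example, not from the source):

    ```python
    # Minimal OpenWhisk-style Action: a plain function named `main`
    # receives the trigger's parameters as a dict and returns a
    # JSON-serializable dict. In a real deployment this would be
    # registered with, e.g., `wsk action create hello hello.py`.

    def main(params):
        name = params.get("name", "world")
        return {"greeting": f"Hello, {name}!"}
    ```

    The same function can then be bound to a Trigger so that events from a Feed, or direct HTTP requests, schedule its execution.
    
    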
  • 44
    dstack

    dstack is an orchestration layer designed for modern ML teams, providing a unified control plane for development, training, and inference on GPUs across cloud, Kubernetes, or on-prem environments. By simplifying cluster management and workload scheduling, it eliminates the complexity of Helm charts and Kubernetes operators. The platform supports both cloud-native and on-prem clusters, with quick connections via Kubernetes or SSH fleets. Developers can spin up containerized environments that link directly to their IDEs, streamlining the machine learning workflow from prototyping to deployment. dstack also enables seamless scaling from single-node experiments to distributed training while optimizing GPU usage and costs. With secure, auto-scaling endpoints compatible with OpenAI standards, it empowers teams to deploy models quickly and reliably.
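    The workload-scheduling side of this can be sketched as a hypothetical dstack task configuration (the task name, commands, and GPU size are illustrative assumptions, not from the source):

    ```yaml
    # Hypothetical dstack task definition: dstack reads a YAML
    # configuration like this and schedules the commands on a
    # matching GPU instance in the configured cloud or fleet.
    type: task
    name: train-job          # illustrative name
    commands:
      - pip install -r requirements.txt
      - python train.py
    resources:
      gpu: 24GB              # illustrative GPU memory requirement
    ```

    Running such a task from the CLI lets dstack pick and provision the backing GPU capacity, rather than the team managing Helm charts or operators directly.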
  • 45
    Google Cloud Functions
    Google Cloud Functions has a simple and intuitive developer experience. Just write your code and let Google Cloud handle the operational infrastructure. Develop faster by writing and running small code snippets that respond to events. Connect to Google Cloud or third-party cloud services via triggers to streamline challenging orchestration problems.
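    An event-driven snippet of the kind described can be sketched as a Python background function triggered by a Pub/Sub message (the function name and greeting logic are illustrative; Pub/Sub delivers the payload base64-encoded in the event's `data` field):

    ```python
    import base64

    # Sketch of an event-driven Cloud Function (Python background-
    # function style): triggered by a Pub/Sub message, it decodes the
    # base64 payload and builds a greeting from it.
    def hello_pubsub(event, context):
        payload = base64.b64decode(event["data"]).decode("utf-8")
        return f"Hello, {payload}!"
    ```

    The trigger binding (which topic invokes the function) is configured at deploy time, so the code itself stays a small, testable snippet.
    
    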
  • 46
    NVIDIA GPU-Optimized AMI
    The NVIDIA GPU-Optimized AMI is a virtual machine image for GPU-accelerated Machine Learning, Deep Learning, Data Science, and HPC workloads. Using this AMI, you can spin up a GPU-accelerated EC2 VM instance in minutes with a pre-installed Ubuntu OS, GPU driver, Docker, and NVIDIA container toolkit. This AMI provides easy access to NVIDIA's NGC Catalog, a hub for GPU-optimized software, for pulling and running performance-tuned, tested, and NVIDIA-certified Docker containers. The NGC catalog provides free access to containerized AI, Data Science, and HPC applications, pre-trained models, AI SDKs, and other resources to enable data scientists, developers, and researchers to focus on building and deploying solutions. This GPU-optimized AMI is free, with an option to purchase enterprise support offered through NVIDIA AI Enterprise; see the 'Support Information' section for how to get support for this AMI.
    Starting Price: $3.06 per hour
  • 47
    Kubescape
    A Kubernetes open-source platform providing developers and DevOps an end-to-end security solution, including risk analysis, security compliance, RBAC visualization, and image vulnerability scanning. Kubescape scans K8s clusters, Kubernetes manifest files (YAML files and Helm charts), code repositories, container registries, and images, detecting misconfigurations according to multiple frameworks (such as NSA-CISA and MITRE ATT&CK®), finding software vulnerabilities, and showing RBAC (role-based access control) violations at early stages of the CI/CD pipeline. It calculates risk scores instantly and shows risk trends over time. Kubescape has become one of the fastest-growing Kubernetes security compliance tools among developers due to its easy-to-use CLI interface, flexible output formats, and automated scanning capabilities, saving Kubernetes users and admins precious time, effort, and resources.
  • 48
    VMware Cloud
    Build, run, manage, connect and protect all of your apps on any cloud. The Multi-Cloud solutions from VMware deliver a cloud operating model for all applications. Support your digital business initiatives with the world’s most proven and widely deployed cloud infrastructure. Leverage the same skills you use in the data center, while tapping into the depth and breadth of six global hyperscale public cloud providers and 4,000+ VMware Cloud Provider Partners. With hybrid cloud built on VMware Cloud Foundation, you get consistent infrastructure and operations for new and existing cloud native applications, from data center to cloud to edge. This consistency improves agility and reduces complexity, cost and risk. Build, run and manage modern apps on any cloud, meeting diverse needs with on-premises and public cloud resources. Manage both container-based workloads and traditional VM-based workloads on a single platform.
  • 49
    Werf

    The CLI tool gluing together Git, Docker, Helm, and Kubernetes with any CI system to implement CI/CD and Giterminism. Establish and benefit from efficient, robust, and integrated CI/CD pipelines on top of proven technologies. With Werf, it’s easy to start, apply best practices, and avoid reinventing the wheel. Werf not only builds and deploys but also continuously syncs the current Kubernetes state with changes made in Git. Werf introduces Giterminism: Git is the single source of truth, making the entire delivery pipeline deterministic and idempotent. Werf supports two ways to deploy an application: converge the application from a Git commit directly into Kubernetes, or publish it from a Git commit into the container registry as a bundle and then deploy that bundle into Kubernetes. Werf works out of the box with minimal configuration; you don't even need to be a DevOps/SRE engineer to use it. Many guides are provided to quickly deploy your app into Kubernetes.
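    The build-and-converge workflow can be sketched with a minimal werf.yaml (the project name, image name, and registry URL are illustrative assumptions, not from the source):

    ```yaml
    # Hypothetical werf.yaml: one project, one image built from a
    # Dockerfile in the repository root.
    project: my-app          # illustrative project name
    configVersion: 1
    ---
    image: backend
    dockerfile: Dockerfile
    ```

    With this in place, a command along the lines of `werf converge --repo registry.example.com/my-app` builds the image and syncs the cluster state straight from the current Git commit, which is the "converge" deployment mode described above.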
  • 50
    Amazon Elastic File System (EFS)
    Amazon Elastic File System (Amazon EFS) automatically grows and shrinks as you add and remove files with no need for management or provisioning. Share code and other files in a secure, organized way to increase DevOps agility and respond faster to customer feedback. Persist and share data from your AWS containers and serverless applications with zero management required. Easier to use and scale, Amazon EFS offers the performance and consistency needed for machine learning and big data analytics workloads. Simplify persistent storage for modern content management system workloads. Get your products and services to market faster, more reliably, and securely at a lower cost. Create and configure shared file systems simply and quickly for AWS compute services, with no provisioning, deploying, patching, or maintenance required. Scale workloads on-demand to petabytes of storage and gigabytes per second of throughput out of the box.