Alternatives to Alibaba Auto Scaling
Compare Alibaba Auto Scaling alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Alibaba Auto Scaling in 2026. Compare features, ratings, user reviews, pricing, and more from Alibaba Auto Scaling competitors and alternatives in order to make an informed decision for your business.
1
Google Compute Engine
Google
Compute Engine is Google's infrastructure as a service (IaaS) platform for organizations to create and run cloud-based virtual machines. It offers computing infrastructure in predefined or custom machine sizes to accelerate your cloud transformation. General-purpose (E2, N1, N2, N2D) machines provide a good balance of price and performance. Compute-optimized (C2) machines offer high-end vCPU performance for compute-intensive workloads. Memory-optimized (M2) machines offer the highest memory and are great for in-memory databases. Accelerator-optimized (A2) machines are based on the A100 GPU, for very demanding applications. Integrate Compute Engine with other Google Cloud services such as AI/ML and data analytics. Make reservations to help ensure your applications have the capacity they need as they scale. Save money on running Compute Engine with sustained-use discounts, and achieve greater savings when you use committed-use discounts. -
2
RunPod
RunPod
RunPod offers a cloud-based platform designed for running AI workloads, focusing on providing scalable, on-demand GPU resources to accelerate machine learning (ML) model training and inference. With its diverse selection of powerful GPUs like the NVIDIA A100, RTX 3090, and H100, RunPod supports a wide range of AI applications, from deep learning to data processing. The platform is designed to minimize startup time, providing near-instant access to GPU pods, and ensures scalability with autoscaling capabilities for real-time AI model deployment. RunPod also offers serverless functionality, job queuing, and real-time analytics, making it an ideal solution for businesses needing flexible, cost-effective GPU resources without the hassle of managing infrastructure. -
3
StarTree
StarTree
StarTree, powered by Apache Pinot™, is a fully managed real-time analytics platform built for customer-facing applications that demand instant insights on the freshest data. Unlike traditional data warehouses or OLTP databases—optimized for back-office reporting or transactions—StarTree is engineered for real-time OLAP at true scale, meaning:
- Data volume: query performance sustained at petabyte scale
- Ingest rates: millions of events per second, continuously indexed for freshness
- Concurrency: thousands to millions of simultaneous users served with sub-second latency
With StarTree, businesses deliver always-fresh insights at interactive speed, enabling applications that personalize, monitor, and act in real time. Starting Price: Free -
4
AWS Auto Scaling
Amazon
AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. The service provides a simple, powerful user interface that lets you build scaling plans for resources including Amazon EC2 instances and Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. AWS Auto Scaling makes scaling simple with recommendations that let you optimize performance, costs, or the balance between them. If you’re already using Amazon EC2 Auto Scaling to dynamically scale your Amazon EC2 instances, you can combine it with AWS Auto Scaling to scale additional resources for other AWS services. With AWS Auto Scaling, your applications always have the right resources at the right time. -
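AWS does not publish the full internals of its scaling plans, but the arithmetic it documents for target-tracking policies (scale the fleet so the per-instance metric lands near a target value) can be sketched as follows. The function name and the min/max bounds are illustrative, not part of any AWS API:

```python
import math

def target_tracking_capacity(current_capacity: int,
                             current_metric: float,
                             target_metric: float,
                             min_cap: int = 1,
                             max_cap: int = 100) -> int:
    """Proportional capacity estimate behind a target-tracking policy:
    size the fleet so the average per-instance metric approaches the target,
    then clamp to the configured capacity bounds."""
    desired = math.ceil(current_capacity * current_metric / target_metric)
    return max(min_cap, min(max_cap, desired))

# A fleet of 10 instances averaging 80% CPU, targeting 50% CPU:
print(target_tracking_capacity(10, 80.0, 50.0))  # -> 16
```

Scaling out overshoots slightly (the ceiling) so the metric settles just under the target, which is also how AWS describes the behavior of target tracking.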
5
AWS Fargate
Amazon
AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and no paying for additional servers. Fargate runs each task or pod in its own kernel, giving each task and pod its own isolated compute environment for workload isolation and improved security by design. -
6
Amazon EC2 Auto Scaling
Amazon
Amazon EC2 Auto Scaling helps you maintain application availability and lets you automatically add or remove EC2 instances using scaling policies that you define. Dynamic or predictive scaling policies let you add or remove EC2 instance capacity to service established or real-time demand patterns. The fleet management features of Amazon EC2 Auto Scaling help maintain the health and availability of your fleet. Automation is vital to efficient DevOps, and getting your fleets of Amazon EC2 instances to launch, provision software, and self-heal automatically is a key challenge. Amazon EC2 Auto Scaling provides essential features for each of these instance lifecycle automation steps. Use machine learning to predict and schedule the right number of EC2 instances to anticipate approaching traffic changes. -
7
NVIDIA DGX Cloud Serverless Inference
NVIDIA
NVIDIA DGX Cloud Serverless Inference is a high-performance, serverless AI inference solution that accelerates AI innovation with auto-scaling, cost-efficient GPU utilization, multi-cloud flexibility, and seamless scalability. With NVIDIA DGX Cloud Serverless Inference, you can scale down to zero instances during periods of inactivity to optimize resource utilization and reduce costs. There's no extra cost for cold-boot start times, and the system is optimized to minimize them. NVIDIA DGX Cloud Serverless Inference is powered by NVIDIA Cloud Functions (NVCF), which offers robust observability features. It allows you to integrate your preferred monitoring tools, such as Splunk, for comprehensive insights into your AI workloads. NVCF offers flexible deployment options for NIM microservices while allowing you to bring your own containers, models, and Helm charts.
-
8
Tencent Cloud Load Balancer
Tencent
One CLB cluster consists of four physical servers, offering availability of up to 99.95%. In the extreme case where only one CLB instance is available, it can still support over 30 million concurrent connections. The cluster system quickly removes faulty instances and keeps healthy instances to ensure that backend servers continue to operate properly. The CLB cluster elastically scales the service capabilities of the application system according to the business load, and automatically creates and releases CVM instances through the dynamic scaling group of Auto Scaling. These features, in conjunction with a dynamic monitoring system and a billing system that is accurate to the second, eliminate the need for manual intervention or resource estimation, helping you efficiently allocate computing resources and prevent resource waste. -
9
StormForge
StormForge
StormForge Optimize Live continuously rightsizes Kubernetes workloads to ensure cloud-native applications are both cost effective and performant while removing developer toil. As a vertical rightsizing solution, Optimize Live is autonomous, tunable, and works seamlessly with the Kubernetes horizontal pod autoscaler (HPA) at enterprise scale. Optimize Live addresses both over- and under-provisioned workloads by analyzing usage data with advanced machine learning to recommend optimal resource requests and limits. Recommendations can be deployed automatically on a flexible schedule, accounting for changes in traffic patterns or application resource requirements, ensuring that workloads are always right-sized, and freeing developers from the toil and cognitive load of infrastructure sizing. Organizations see immediate benefits from the reduction of wasted resources — leading to cost savings of 40-60% along with performance and reliability improvements across the entire estate. Starting Price: Free -
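StormForge's machine-learning models are proprietary, but the general shape of percentile-based rightsizing (set the request near a high percentile of observed usage, set the limit at a fixed headroom above it) can be sketched as below. The function, percentile choice, and headroom factor are illustrative assumptions, not StormForge's algorithm:

```python
def recommend_resources(cpu_samples, request_pct=0.90, limit_headroom=1.5):
    """Naive rightsizing heuristic: the request covers `request_pct` of
    observed CPU samples; the limit adds fixed headroom above the request."""
    samples = sorted(cpu_samples)
    idx = min(len(samples) - 1, int(request_pct * len(samples)))
    request = samples[idx]
    return {"request": request, "limit": round(request * limit_headroom, 3)}

# A week of CPU usage samples (in cores) for one container:
usage = [0.12, 0.15, 0.11, 0.40, 0.22, 0.18, 0.35, 0.20, 0.16, 0.25]
print(recommend_resources(usage))
```

A production system would also weight recent samples, detect traffic-pattern changes, and reconcile vertical recommendations with any HPA targets, which is the harder part this sketch omits.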
10
Zipher
Zipher
Zipher is an autonomous optimization platform designed to improve the performance and cost efficiency of Databricks workloads, eliminating manual tuning and resource management by continuously adjusting clusters in real time. It uses proprietary machine learning models and the only Spark-aware scaler that actively learns and profiles workloads to adjust cluster resources, select optimal configurations for every job run, and dynamically tune settings like hardware, Spark configs, and availability zones to maximize efficiency and cut waste. Zipher continuously monitors evolving workloads to adapt configurations, optimize scheduling, and allocate shared compute resources to meet SLAs, while providing detailed cost visibility that breaks down Databricks and cloud provider costs so teams can identify key cost drivers. It integrates seamlessly with major cloud service providers including AWS, Azure, and Google Cloud and works with common orchestration and IaC tools. -
11
Xosphere
Xosphere
Xosphere Instance Orchestrator automatically performs spot optimization by leveraging AWS Spot instances to optimize the cost of your infrastructure while maintaining the same level of reliability as on-demand instances. Spot instances are diversified across instance families, sizes, and availability zones to minimize the impact when Spot instances are reclaimed. Instances utilizing reservations will not be replaced by Spot instances. Xosphere automatically responds to Spot termination notifications and fast-tracks replacement on-demand instances. EBS volumes can be configured to be attached to new replacement instances, enabling stateful applications to work seamlessly. -
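Xosphere's exact allocation strategy is not public, but the diversification idea itself — spreading capacity round-robin across every (instance family, availability zone) pool so a single Spot reclaim event hits only a fraction of the fleet — can be sketched as follows. The family and zone names are placeholder examples:

```python
from itertools import product, cycle, islice

def diversify(count, families=("m5", "c5", "r5"),
              azs=("us-east-1a", "us-east-1b")):
    """Assign `count` Spot instances round-robin across every
    (family, availability-zone) pool to limit reclaim blast radius."""
    pools = list(product(families, azs))
    return list(islice(cycle(pools), count))

# Place 7 instances across 3 families x 2 zones = 6 pools:
print(diversify(7))
```

With 6 pools, losing any single pool to a Spot reclamation removes at most two of the seven instances here, rather than the whole fleet.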
12
Zerops
Zerops
Zerops.io is a cloud platform designed for developers building modern applications, offering automatic vertical and horizontal autoscaling, granular control over resources, and no vendor lock-in. It simplifies infrastructure management with features like automated backups and failover, CI/CD integration, and full observability. Zerops.io scales seamlessly with your project’s needs, ensuring optimal performance and cost-efficiency from development to production, all while supporting microservices and complex architectures. Ideal for developers who want flexibility, scalability, and powerful automation without the complexity. Starting Price: $0 -
13
Nerdio
Adar
Empowering Managed Service Providers & Enterprise IT Professionals to quickly and easily deploy Azure Virtual Desktop and Windows 365, manage all environments from one simple platform, and optimize costs by saving up to 75% on Azure compute and storage. Nerdio Manager for Enterprise extends the native Azure Virtual Desktop and Windows 365 admin capabilities with automatic and fast virtual desktop deployment, simple management in just a few clicks, and cost-optimization features for savings of up to 75% – paired with the unmatched security of Microsoft Azure and expert-level Nerdio support. Nerdio Manager for MSP is a multi-tenant Azure Virtual Desktop and Windows 365 deployment, management, and optimization platform for Managed Service Providers that allows for automatic provisioning in under an hour (or connect to an existing deployment in minutes), management of all customers in a simple admin portal, and cost-optimization with Nerdio’s Advanced Auto-scaling. Starting Price: $100 per month -
14
Enterpristore
Logistica Solutions
Enterpristore for Infor ERP is fully integrated with Amazon Web Services, offering an ecommerce cloud computing solution to small and large businesses that want a flexible, secure, highly scalable, and low-cost solution for online sales and retailing. Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing. Experience the power and reliability of AWS. Deploy in seconds and manage from the intuitive Lightsail setup for smaller requirements. Amazon EC2 Auto Scaling ensures that your application always has the right amount of compute capacity, adding new instances only when necessary and terminating them when no longer needed. -
15
Maxta
Maxta
Maxta Hyperconvergence software gives IT the freedom to choose servers and hypervisors, scale storage independent of compute, and run mixed workloads on the same cluster. Unlike hyperconverged appliances, with Maxta there’s no vendor lock-in, no refresh tax and no upgrade tax. Use existing servers, buy pre-configured servers, or a combination of both. Appliances have a hidden cost. Never repay for software when you refresh hardware. Most storage and even hyperconverged solutions can only manage policies at the LUN, volume, or cluster level. Maxta lets you run multiple applications on the same cluster without sacrificing performance or availability. Appliance-based hyperconverged solutions make you repurchase the software license when you refresh hardware and add storage capacity only by adding additional appliances. You own your Maxta software forever and can add storage capacity by adding additional drives to servers. -
16
Convox
Convox
Convox is a powerful platform-as-a-service (PaaS) that simplifies deploying, scaling, and managing cloud applications by abstracting infrastructure complexity and letting teams focus on shipping code. It runs directly within your cloud account and integrates with major cloud providers such as AWS, Google Cloud, Azure, and DigitalOcean, giving you full control and cost efficiency while avoiding extra hosting fees. Convox supports seamless continuous integration and delivery pipelines, auto-scaling policies, and zero-downtime deployments, with tools for environment configuration, role-based access controls, and secure workflows. It includes a developer-friendly CLI, flexible deployment configuration, and integration with common tools like GitHub, GitLab, Slack, and monitoring services, streamlining workflows and boosting productivity. Convox also offers real-time monitoring, detailed logs, and one-click rollbacks for reliable performance and easier troubleshooting. Starting Price: Free -
17
Microsoft Hyper-V
Microsoft
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a software version of a computer, called a virtual machine. Each virtual machine acts like a complete computer, running an operating system and programs. When you need computing resources, virtual machines give you more flexibility, help save time and money, and are a more efficient way to use hardware than just running one operating system on physical hardware. Each supported guest operating system has a customized set of services and drivers, called integration services, that make it easier to use the operating system in a Hyper-V virtual machine. Hyper-V includes Virtual Machine Connection, a remote connection tool for use with both Windows and Linux. Unlike Remote Desktop, this tool gives you console access, so you can see what's happening in the guest even when the operating system isn't booted yet. -
18
Azure Virtual Machines
Microsoft
Migrate your business- and mission-critical workloads to Azure infrastructure and improve operational efficiency. Run SQL Server, SAP, Oracle® software and high-performance computing applications on Azure Virtual Machines. Choose your favorite Linux distribution or Windows Server. Deploy virtual machines featuring up to 416 vCPUs and 12 TB of memory. Get up to 3.7 million local storage IOPS per VM. Take advantage of up to 30 Gbps Ethernet and cloud’s first deployment of 200 Gbps InfiniBand. Select the underlying processors (AMD, Ampere Arm-based, or Intel) that best meet your requirements. Encrypt sensitive data, protect VMs from malicious threats, secure network traffic, and meet regulatory and compliance requirements. Use Virtual Machine Scale Sets to build scalable applications. Reduce your cloud spend with Azure Spot Virtual Machines and reserved instances. Build your private cloud with Azure Dedicated Host. Run mission-critical applications in Azure to increase resiliency. -
19
Syself
Syself
Managing Kubernetes shouldn't be a headache. With Syself Autopilot, both beginners and experts can deploy and maintain enterprise-grade clusters with ease. Say goodbye to downtime and complexity—our platform ensures automated upgrades, self-healing capabilities, and GitOps compatibility. Whether you're running on bare metal or cloud infrastructure, Syself Autopilot is designed to handle your needs, all while maintaining GDPR-compliant data protection. Syself Autopilot integrates with leading DevOps and infrastructure solutions, allowing you to build and scale applications effortlessly. Our platform supports:
- Argo CD, Flux (GitOps & CI/CD)
- MariaDB, PostgreSQL, MySQL, MongoDB, ClickHouse (databases)
- Grafana, Istio, Redis, NATS (monitoring & service mesh)
Need additional solutions? Our team helps you deploy, configure, and optimize your infrastructure for peak performance. Starting Price: €299/month -
20
Scale Computing Platform
Scale Computing
SC//Platform brings faster time to value in the data center, in the distributed enterprise, and at the edge. Scale Computing Platform brings simplicity, high availability and scalability together, replacing the existing infrastructure and providing high availability for running VMs in a single, easy-to-manage platform. Run your applications in a fully integrated platform. Regardless of your hardware requirements, the same innovative software and simple user interface give you the power to run infrastructure efficiently at the edge. Eliminate mundane management tasks and save the valuable time of IT administrators. The simplicity of SC//Platform directly impacts IT with higher productivity and lower costs. Plan the perfect future by not predicting it. Simply mix and match old and new hardware and applications on the same infrastructure for a future-proof environment that can scale up or down as needed. -
21
Serverless Cloud Function (SCF)
Tencent
By writing only the most important "core code" without concern for peripheral components, you can greatly reduce the complexity of the service architecture. SCF can scale up and down based on the number of requests with no manual configuration required. Regardless of the volume of requests to your application at any given time, SCF can automatically arrange suitable computing resources to meet business needs. If an availability zone goes down due to a natural disaster or power failure, SCF can automatically utilize the infrastructure of other availability zones for code execution, eliminating the risk of service interruptions inherent in single-availability-zone operations. Event-triggered workloads can be built with SCF, which leverages different cloud services to meet the requirements of different business scenarios and further strengthen your service architecture.
-
22
IBM PowerVM
IBM
IBM® PowerVM® is server virtualization without limits. Businesses are turning to PowerVM server virtualization to consolidate multiple workloads onto fewer systems, increasing server utilization and reducing cost. PowerVM provides a secure and scalable server virtualization environment for AIX®, IBM i and Linux applications built upon the advanced RAS features and leading performance of the Power Systems™ platform. Secure your enterprise environments with industry-leading hypervisor technology that ensures the integrity and isolation of critical applications and I/O. Scale out or scale up your virtualized deployments without paying underlying performance penalties. Provide services built for the cloud faster by automating deployment of virtual machines (VMs) and storage. Help eliminate scheduled downtime by deploying live mobility between servers. Optimize utilization of server and storage resources to control cost and boost return on investment. -
23
Google Cloud Load Balancer
Google
Scale your applications on Compute Engine from zero to full throttle with Cloud Load Balancing, with no pre-warming needed. Distribute your load-balanced compute resources in single or multiple regions—close to your users—and to meet your high availability requirements. Cloud Load Balancing can put your resources behind a single anycast IP and scale your resources up or down with intelligent autoscaling. Cloud Load Balancing comes in a variety of flavors and is integrated with Cloud CDN for optimal application and content delivery. With Cloud Load Balancing, a single anycast IP front-ends all your backend instances in regions around the world. It provides cross-region load balancing, including automatic multi-region failover, which gently moves traffic in fractions if backends become unhealthy. In contrast to DNS-based global load balancing solutions, Cloud Load Balancing reacts instantaneously to changes in users, traffic, network, backend health, and other related conditions. Starting Price: $0.025 per hour -
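Google does not document the failover mechanics in detail, but "moves traffic in fractions" describes a gradual weighted drain rather than an all-or-nothing cutover. A minimal sketch of that idea (the function and step size are illustrative, not Google's implementation):

```python
def drain_weights(weights, unhealthy, step=0.25):
    """Shift one `step` of traffic away from an unhealthy backend,
    redistributing it proportionally among the healthy ones."""
    moved = min(weights[unhealthy], step)
    healthy = [i for i in range(len(weights)) if i != unhealthy]
    healthy_total = sum(weights[i] for i in healthy) or 1.0
    out = list(weights)
    out[unhealthy] -= moved
    for i in healthy:
        out[i] += moved * weights[i] / healthy_total
    return [round(w, 3) for w in out]

# Three equal backends; backend 2 starts failing health checks:
print(drain_weights([1/3, 1/3, 1/3], unhealthy=2))
```

Calling this repeatedly as health checks keep failing drains the backend step by step, so a flapping backend never causes a full traffic flip in one shot.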
24
Lucidity
Lucidity
Lucidity is a multi-cloud storage management platform that dynamically resizes block storage across AWS, Azure, and Google Cloud without downtime, enabling enterprises to save up to 70% on storage costs. Lucidity automates the expansion and contraction of storage volumes based on real-time data demands, ensuring optimal disk utilization between 75% and 80%. This autonomous, application-agnostic solution integrates seamlessly with existing applications and environments, requiring no code changes or manual provisioning efforts. Lucidity's AutoScaler is available on the AWS Marketplace, offering enterprises an automated solution to expand and shrink live EBS volumes based on workload without downtime. By streamlining operations, Lucidity enables IT and DevOps teams to reclaim hundreds of hours, allowing them to focus on higher-impact initiatives that drive innovation and efficiency. -
25
BidElastic
BidElastic
It isn’t always straightforward to benefit from the rich features of cloud services. To make it easier for businesses to use the cloud, we developed BidElastic as a resource provisioning tool with two components: BidElastic BidServer cuts computational costs; BidElastic Intelligent Auto Scaler (IAS) streamlines management and monitoring of your cloud provider. The BidServer uses simulation and advanced optimization routines to anticipate market movements and to design a robust infrastructure for cloud providers’ spot instances. To match demand in volatile workloads, you need to scale your cloud infrastructure dynamically. But that’s easier said than done: a traffic spike hits, and new servers come online only 10 minutes later. In the meantime you’ve lost customers who may never come back. To scale your resources properly, you need to be able to predict computational workloads. CloudPredict does exactly that, using machine learning to forecast computational workloads. -
26
NVIDIA virtual GPU
NVIDIA
NVIDIA virtual GPU (vGPU) software enables powerful GPU performance for workloads ranging from graphics-rich virtual workstations to data science and AI, enabling IT to leverage the management and security benefits of virtualization as well as the performance of NVIDIA GPUs required for modern workloads. Installed on a physical GPU in a cloud or enterprise data center server, NVIDIA vGPU software creates virtual GPUs that can be shared across multiple virtual machines, and accessed by any device, anywhere. Deliver performance virtually indistinguishable from a bare metal environment. Leverage common data center management tools such as live migration. Provision GPU resources with fractional or multi-GPU virtual machine (VM) instances. Respond quickly to changing business requirements and the needs of remote teams. -
27
Pepperdata
Pepperdata, Inc.
Pepperdata autonomous cost optimization for data-intensive workloads such as Apache Spark is the only solution that delivers 30-47% greater cost savings continuously and in real time with no application changes or manual tuning. Deployed on more than 20,000 clusters, Pepperdata Capacity Optimizer provides resource optimization and full-stack observability in some of the largest and most complex environments in the world, enabling customers to run Spark on 30% less infrastructure on average. In the last decade, Pepperdata has helped top enterprises such as Citibank, Autodesk, Royal Bank of Canada, members of the Fortune 10, and mid-sized companies save over $250 million. -
28
AWS ParallelCluster
Amazon
AWS ParallelCluster is an open-source cluster management tool that simplifies the deployment and management of High-Performance Computing (HPC) clusters on AWS. It automates the setup of required resources, including compute nodes, a shared filesystem, and a job scheduler, supporting multiple instance types and job submission queues. Users can interact with ParallelCluster through a graphical user interface, command-line interface, or API, enabling flexible cluster configuration and management. The tool integrates with job schedulers like AWS Batch and Slurm, facilitating seamless migration of existing HPC workloads to the cloud with minimal modifications. AWS ParallelCluster is available at no additional charge; users only pay for the AWS resources consumed by their applications. With AWS ParallelCluster, you can use a simple text file to model, provision, and dynamically scale the resources needed for your applications in an automated and secure manner. -
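The "simple text file" ParallelCluster consumes is a YAML cluster configuration. A minimal sketch is below; the subnet ID and key-pair name are placeholders, and the field names follow my recollection of the ParallelCluster 3.x schema, so verify them against the current configuration reference before use:

```yaml
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: t3.micro
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder
  Ssh:
    KeyName: my-keypair                  # placeholder
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c5-nodes
          InstanceType: c5.xlarge
          MinCount: 0        # scale to zero when the queue is idle
          MaxCount: 16       # cap on dynamic scale-out
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0     # placeholder
```

With MinCount set to 0, compute nodes exist only while Slurm has jobs queued, which is how ParallelCluster keeps you paying only for resources your jobs actually consume.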
29
Ori GPU Cloud
Ori
Launch GPU-accelerated instances highly configurable to your AI workload & budget. Reserve thousands of GPUs in a next-gen AI data center for training and inference at scale. The AI world is shifting to GPU clouds for building and launching groundbreaking models without the pain of managing infrastructure and scarcity of resources. AI-centric cloud providers outpace traditional hyperscalers on availability, compute costs and scaling GPU utilization to fit complex AI workloads. Ori houses a large pool of various GPU types tailored for different processing needs. This ensures a higher concentration of more powerful GPUs readily available for allocation compared to general-purpose clouds. Ori is able to offer more competitive pricing year-on-year, across on-demand instances or dedicated servers. When compared to per-hour or per-usage pricing of legacy clouds, our GPU compute costs are unequivocally cheaper to run large-scale AI workloads. Starting Price: $3.24 per month -
30
Zesty
Zesty
Zesty’s cloud infrastructure optimization platform helps companies efficiently allocate resources and reduce cloud spend, with solutions for containers, compute, storage, and databases. Zesty Kompass automatically reduces K8s costs by up to 70% with no compromise on SLA. The platform enables node deployment in 30 seconds, eliminating the need for node headroom and expanding the confident use of Spot Instances. Zesty Commitment Manager automatically optimizes EC2 and RDS discount plans, ensuring maximum coverage and deeper savings with minimal financial risk and no manual effort. Zesty Disk automatically scales PVCs up or down to match real-time application needs, optimizing storage utilization, eliminating the risk of downtime, and reducing costs by up to 70%. Zesty Insights provides a clear overview of potential savings and unused resources, and actionable recommendations that help you focus on the most efficient savings opportunities. -
31
AWS Batch
Amazon
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as AWS Fargate, Amazon EC2 and Spot Instances. There is no additional charge for AWS Batch. You only pay for the AWS resources (e.g. EC2 instances or Fargate jobs) you create to store and run your batch jobs. -
32
IONOS Cloud Cubes
IONOS
IONOS Cloud Cubes are cost-effective virtual server instances designed to provide flexible computing capacity for a wide range of cloud workloads. Each Cube functions as a virtual machine that includes virtual CPU resources, RAM, and a directly attached NVMe storage volume to deliver fast performance for applications and services. It allows users to deploy independent computing environments that can be used for development, testing, staging, or running lightweight production workloads such as web applications. Cloud Cubes are integrated into the IONOS Cloud infrastructure and can operate alongside other services like the Compute Engine within the same virtual data center, enabling businesses to combine resources and scale their environments as needed. Users can configure and manage Cubes visually through the Data Center Designer interface or automate their creation and management through APIs, SDKs, and configuration management tools. Starting Price: $0.008 per hour -
33
Amazon SageMaker Model Training
Amazon
Amazon SageMaker Model Training reduces the time and cost to train and tune machine learning (ML) models at scale without the need to manage infrastructure. You can take advantage of the highest-performing ML compute infrastructure currently available, and SageMaker can automatically scale infrastructure up or down, from one to thousands of GPUs. Since you pay only for what you use, you can manage your training costs more effectively. To train deep learning models faster, SageMaker distributed training libraries can automatically split large models and training datasets across AWS GPU instances, or you can use third-party libraries, such as DeepSpeed, Horovod, or Megatron. Efficiently manage system resources with a wide choice of GPUs and CPUs including P4d.24xl instances, which are the fastest training instances currently available in the cloud. Specify the location of data, indicate the type of SageMaker instances, and get started with a single click.
-
34
Yandex API Gateway
Yandex
Requests to service APIs are processed with minimum delay. Under peak loads, the service is automatically scaled to minimize response latency. You can use Certificate Manager domains when accessing the API. In this case, a certificate linked to the domain is used to provide a TLS connection. Extend specifications with a click in the management console and integrate your applications with Yandex Cloud services. Canary releases in API Gateway allow you to apply changes to the OpenAPI specifications of the API gateway gradually, to a portion of incoming queries. Limit the number of queries to the API gateway per unit of time to defend against DDoS attacks and control consumption of cloud resources. -
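Yandex does not document the limiter's implementation, but the standard technique for enforcing "queries per unit of time" at a gateway is a token bucket, sketched here. The class and its parameters are illustrative, not part of the Yandex Cloud API:

```python
class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, then spend one token if available.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=2.0)  # 2 requests/second, burst of 2
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])  # -> [True, True, False, True]
```

The third request is rejected because the burst budget is spent faster than it refills; by t=1.5 the bucket has refilled and traffic flows again, which is exactly the smoothing behavior a gateway limiter needs against DDoS-style spikes.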
35
Oracle VM
Oracle
Designed for efficiency and optimized for performance, Oracle's server virtualization products support x86 and SPARC architectures and a variety of workloads such as Linux, Windows and Oracle Solaris. In addition to solutions that are hypervisor-based, Oracle also offers virtualization built in to hardware and Oracle operating systems to deliver the most complete and optimized solution for your entire computing environment. -
36
VMware ESXi
Broadcom
Discover a robust, bare-metal hypervisor that installs directly onto your physical server. With direct access to and control of underlying resources, VMware ESXi effectively partitions hardware to consolidate applications and cut costs. It’s the industry leader for efficient architecture, setting the standard for reliability, performance, and support. IT teams are under constant pressure to meet fluctuating market trends and heightened customer demands. At the same time, they must stretch IT resources to accommodate increasingly complex projects. Fortunately, ESXi helps balance the need for both better business outcomes and IT savings. -
37
Oblivus
Oblivus
Our infrastructure is equipped to meet your computing requirements, whether you need a single GPU or thousands, or anywhere from one vCPU to tens of thousands of vCPUs. Our resources are readily available whenever you need them. Switching between GPU and CPU instances is a breeze with our platform: you have the flexibility to deploy, modify, and rescale your instances as needed, without any hassle. Outstanding machine learning performance without breaking the bank; the latest technology at a significantly lower cost. Cutting-edge GPUs are designed to meet the demands of your workloads, giving you access to computational resources tailored to the intricacies of your models. Leverage our infrastructure to perform large-scale inference and access the necessary libraries with our OblivusAI OS. Unleash the full potential of your gaming experience by utilizing our robust infrastructure to play games in the settings of your choice. Starting Price: $0.29 per hour -
38
MapReduce
Baidu AI Cloud
You can perform on-demand deployment and automatic scaling of clusters and focus solely on big data processing, analysis, and reporting. Thanks to many years of accumulated experience in massively distributed computing, our operations team can handle cluster operations for you. The service automatically scales clusters up to increase computing capacity during peak periods and scales them down to reduce cost during off-peak periods. It provides a management console to facilitate cluster management, template customization, task submission, and alarm monitoring. When deployed together with BCC, your servers can focus on business workloads during busy periods while BMR uses idle capacity for big data computing, reducing overall IT expenditure. -
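The MapReduce programming model such clusters run can be sketched in a few lines: map emits key/value pairs from each input split, shuffle groups the pairs by key, and reduce aggregates each group. This word-count sketch is the classic illustration, not Baidu's API:

```python
from collections import defaultdict

def map_phase(doc):
    # map: emit (word, 1) for every word in the input split
    return [(w, 1) for w in doc.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # reduce: aggregate each key's values
    return {k: sum(vs) for k, vs in groups.items()}

docs = ["big data big clusters", "big data"]
pairs = [p for d in docs for p in map_phase(d)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])  # 3
```

Because map and reduce operate on independent splits and keys, the framework can run them in parallel across however many nodes the cluster currently has, which is what makes elastic scaling effective.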
39
Elastic GPU Service
Alibaba
Elastic computing instances with GPU accelerators, suitable for scenarios such as artificial intelligence (specifically deep learning and machine learning), high-performance computing, and professional graphics processing. Elastic GPU Service provides a complete service system combining software and hardware to help you flexibly allocate resources, elastically scale your system, improve computing power, and lower the cost of your AI-related business. It applies to scenarios such as deep learning, video encoding and decoding, video processing, scientific computing, graphical visualization, and cloud gaming. Elastic GPU Service provides GPU-accelerated computing capabilities and ready-to-use, scalable GPU computing resources. GPUs have unique advantages in mathematical and geometric computation, especially floating-point and parallel workloads, where they can deliver up to 100 times the computing power of their CPU counterparts. Starting Price: $69.51 per month -
40
Google Cloud Deployment Manager
Google
Create and manage cloud resources with simple templates. Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using YAML. You can also use Python or Jinja2 templates to parameterize the configuration and enable reuse of common deployment patterns, such as a load-balanced, auto-scaled instance group. Treat your configuration as code and perform repeatable deployments: by creating configuration files that define the resources, the process of creating those resources can be repeated over and over with consistent results. Many tools use an imperative approach, requiring the user to define the steps to take to create and configure resources. A declarative approach lets the user specify what the configuration should be and lets the system figure out the steps to take, so the user can focus on the set of resources that comprise the application or service instead of deploying each resource separately.
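A minimal declarative configuration in this style might look as follows; the resource name, zone, machine type, and image are placeholders to adapt, not a definitive template:

```yaml
# config.yaml -- declarative Deployment Manager configuration (illustrative)
resources:
- name: demo-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-small
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-12
    networkInterfaces:
    - network: global/networks/default
```

Deploying the same file twice yields the same resources, which is the repeatability the declarative approach provides: the file states what should exist, and the service computes the steps.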
-
41
EC2 Spot
Amazon
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and test and development workloads. Because Spot Instances are tightly integrated with AWS services such as Auto Scaling, EMR, ECS, CloudFormation, Data Pipeline, and AWS Batch, you can choose how to launch and maintain your applications running on Spot Instances. Moreover, you can easily combine Spot Instances with On-Demand Instances, Reserved Instances, and Savings Plans to further optimize workload cost against performance. Due to the operating scale of AWS, Spot Instances can offer the scale and cost savings to run hyper-scale workloads. Starting Price: $0.01 per user, one-time payment -
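The savings from mixing purchase options as described above can be estimated with simple arithmetic; the prices and fleet size below are hypothetical, not a published AWS rate card:

```python
def blended_hourly_cost(n_instances, on_demand_price, spot_discount, spot_fraction):
    """Average hourly cost of a fleet mixing On-Demand and Spot capacity."""
    spot_price = on_demand_price * (1 - spot_discount)
    n_spot = n_instances * spot_fraction
    n_od = n_instances - n_spot
    return n_od * on_demand_price + n_spot * spot_price

# Hypothetical example: a 100-instance fleet at $0.10/h On-Demand,
# with 70% of capacity on Spot at a 90% discount.
full = 100 * 0.10
mixed = blended_hourly_cost(100, 0.10, spot_discount=0.90, spot_fraction=0.70)
print(f"${mixed:.2f}/h vs ${full:.2f}/h all On-Demand")  # $3.70/h vs $10.00/h
```

Keeping a fault-tolerant share of the fleet on Spot while the baseline stays On-Demand is the usual way to capture the discount without risking the whole workload on interruptions.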
42
UbiOps
UbiOps
UbiOps is an AI infrastructure platform that helps teams quickly run their AI and ML workloads as reliable, secure microservices, without upending their existing workflows. Integrate UbiOps seamlessly into your data science workbench within minutes, and avoid the time-consuming burden of setting up and managing expensive cloud infrastructure. Whether you are a start-up looking to launch an AI product or a data science team at a large organization, UbiOps will be there for you as a reliable backbone for any AI or ML service. Scale your AI workloads dynamically with usage, without paying for idle time. Accelerate model training and inference with instant on-demand access to powerful GPUs, enhanced with serverless, multi-cloud workload distribution. -
43
Huawei FusionStorage
Huawei Technologies
Huawei FusionStorage fully converged cloud storage features massive scale-out capabilities designed for cloud-based architectures. The on-board storage system software combines the local storage resources of standard x86 servers into fully distributed storage pools, allowing a single system to provide block, file, and object storage services to the upper layer. An enterprise can easily obtain the flexibility and efficiency in data storage required to keep up with the ever-changing dynamics of business. Convergence of multiple storage services: distributed block, file, and object storage services are now fully converged onto one platform with unified hardware and shared resources, simplifying O&M. On-demand resources: automatic data services and on-demand, application-oriented storage resource supplies reduce business time-to-market (TTM) from one week to one hour. -
44
Nutanix Files Storage
Nutanix
Nutanix Files Storage is a simple, flexible, and intelligent scale-out file storage service for the data-driven era. Update non-disruptively with a single click, and manage all storage from a single pane of glass. Scale up or scale out flexibly on the hardware of your choice and enjoy cloud-like consumption. Know your data, who's using it, and how, and then drive automated management and control. An IDC study shows how Nutanix Files Storage reduces operational overhead by 66% over traditional siloed storage, resulting in a 414% ROI and a seven-month payback. Nutanix Files Storage is built to handle billions of files and tens of thousands of user sessions. As your environment grows, just one click elastically scales your cluster up, by adding more compute and/or memory to the file server VMs, or out, by adding more file server VMs, all from a single platform. You can also provide object and block storage using the same resources. -
45
AdroitLogic Integration Platform Server (IPS)
AdroitLogic
Easily deploy any number of ESB instances on the Integration Platform with just a few mouse clicks. Monitor and debug individual instances as well as entire clusters via a single dashboard. ESB instances are spawned in lightweight Docker containers, which provides better resource utilization and responsiveness than virtual machines. The platform detects and re-spawns failed instances within a matter of seconds, using the powerful Kubernetes framework. Adjust computing power of the platform by adding or removing physical or virtual machines, with zero impact on existing components. Easily manage ESB clusters, projects, configurations and user permissions, monitor statistics and debug ESB instances via the IPS dashboard. Plug in project-specific dashboards and seamlessly manage and monitor the platform as well as individual projects via a single, unified dashboard. -
46
Cloud Ops Group
Cloud Ops Group
Increase on-demand access to production, development, and test environments so you can innovate faster, accelerate application delivery, and streamline releases to production. We design and implement cloud infrastructure to serve your business needs of today and tomorrow. We specialize in designing web-scale architectures that are load-balanced, auto-scaled, self-healing, and cost-effective, so you pay for only the resources you need while still responding to spikes in demand. We embrace the infrastructure-as-code philosophy to ensure infrastructure that is self-documenting, versioned, and automated. Gain insight into your applications to identify performance bottlenecks, understand resource requirements, scale automatically if and when needed, and alert the appropriate stakeholders. We work with your developers to build your application's build and deployment pipeline. -
47
Red Hat Virtualization
Red Hat
Red Hat® Virtualization is an enterprise virtualization platform that supports key virtualization workloads, including resource-intensive and critical applications. It is built on Red Hat Enterprise Linux® and KVM and fully supported by Red Hat. Virtualize your resources, processes, and applications with a stable foundation for a cloud-native and containerized future. Automate, manage, and modernize your virtualization workloads. Whether automating daily operations or managing your VMs in Red Hat OpenShift, Red Hat Virtualization uses the Linux® skills your team already knows and will build upon for future business needs. It is built on an ecosystem of platform and partner solutions and integrates with Red Hat Enterprise Linux, Red Hat Ansible Automation Platform, Red Hat OpenStack® Platform, and Red Hat OpenShift to improve overall IT productivity and drive a higher return on investment. -
48
Yandex Data Proc
Yandex
You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines and pipelines for training and developing models, as well as describing other iterative tasks. The Data Proc operator is already built into Apache Airflow. Starting Price: $0.19 per hour -
49
NexaStack
NexaStack
Provision resources according to your requirements and scale with ease. Plan and implement your infrastructure as code with the same workflow across multiple cloud providers. Automated configurations and pipelines provide standardization and decreased configuration drift. A Git-based source code repository is created for each workflow, making your infrastructure auditable. Support for Terraform, Ansible, and Helm empowers teams to build and provision highly efficient infrastructure, and ready-made IaC modules can be connected and configured in your workflows. With NexaStack, enterprises minimize deployment issues, improve safety, reduce configuration inconsistency, and reach production faster, with effortless infrastructure audits and quicker setup and scaling of resources. Starting Price: $20 per month -
50
Civo
Civo
Civo is a cloud-native platform designed to simplify cloud computing for developers and businesses, offering fast, predictable, and scalable infrastructure. It provides managed Kubernetes clusters with industry-leading launch times of around 90 seconds, enabling users to deploy and scale applications efficiently. Civo's offering includes enterprise-class compute instances, managed databases, object storage, load balancers, and cloud GPUs powered by NVIDIA A100 for AI and machine learning workloads. Its billing model is transparent and usage-based, allowing customers to pay only for the resources they consume with no hidden fees. Civo also emphasizes sustainability with carbon-neutral GPU options. The platform is trusted by industry-leading companies and offers a robust developer experience through easy-to-use dashboards, APIs, and educational resources. Starting Price: $250 per month