Alternatives to Oracle Big Data Service

Compare Oracle Big Data Service alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Oracle Big Data Service in 2026. Compare features, ratings, user reviews, pricing, and more from Oracle Big Data Service competitors and alternatives in order to make an informed decision for your business.

  • 1
    Google Cloud Platform
    Google Cloud is a cloud-based service that allows you to create anything from simple websites to complex applications for businesses of all sizes. New customers get $300 in free credits to run, test, and deploy workloads. All customers can use 25+ products for free, up to monthly usage limits. Use Google's core infrastructure, data analytics & machine learning. Secure and fully featured for all enterprises. Tap into big data to find answers faster and build better products. Grow from prototype to production to planet-scale, without having to think about capacity, reliability or performance. From virtual machines with proven price/performance advantages to a fully managed app development platform. Scalable, resilient, high performance object storage and databases for your applications. State-of-the-art software-defined networking products on Google’s private fiber network. Fully managed data warehousing, batch and stream processing, data exploration, Hadoop/Spark, and messaging.
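    For a concrete taste of the fully managed data warehousing piece, here is a minimal sketch (not vendor code) that queries BigQuery from Python with the google-cloud-bigquery client library; it assumes application-default credentials, and the dataset is a Google-hosted public sample:

        # Query BigQuery, Google Cloud's managed data warehouse, from Python.
        # pip install google-cloud-bigquery
        from google.cloud import bigquery

        client = bigquery.Client()  # uses application-default credentials
        query = """
            SELECT name, SUM(number) AS total
            FROM `bigquery-public-data.usa_names.usa_1910_2013`
            GROUP BY name
            ORDER BY total DESC
            LIMIT 5
        """
        for row in client.query(query).result():
            print(row.name, row.total)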
  • 2
    Hadoop

    Apache Software Foundation

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. A wide variety of companies and organizations use Hadoop for both research and production. Users are encouraged to add themselves to the Hadoop PoweredBy wiki page. Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2).
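    To make the "simple programming models" idea concrete, here is a rough sketch (not taken from the Hadoop docs) of a word count written as two small Python scripts for Hadoop Streaming; the jar path and input/output locations in the trailing comment are hypothetical:

        # mapper.py -- reads raw text on stdin, emits one "word<TAB>1" pair per word.
        import sys

        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

        # reducer.py -- Hadoop Streaming delivers lines sorted by key,
        # so a running total per word is enough.
        import sys

        current, count = None, 0
        for line in sys.stdin:
            word, _, value = line.rstrip("\n").partition("\t")
            if word != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = word, 0
            count += int(value)
        if current is not None:
            print(f"{current}\t{count}")

        # Run with something like (paths hypothetical):
        #   hadoop jar hadoop-streaming.jar -input /data/in -output /data/out \
        #     -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py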
  • 3
    Tencent Cloud Elastic MapReduce
    EMR enables you to scale the managed Hadoop clusters manually or automatically according to your business curves or monitoring metrics. EMR's storage-computation separation even allows you to terminate a cluster to maximize resource efficiency. EMR supports hot failover for CBS-based nodes. It features a primary/secondary disaster recovery mechanism where the secondary node starts within seconds when the primary node fails, ensuring the high availability of big data services. The metadata of its components such as Hive supports remote disaster recovery. Computation-storage separation ensures high data persistence for COS data storage. EMR is equipped with a comprehensive monitoring system that helps you quickly identify and locate cluster exceptions to ensure stable cluster operations. VPCs provide a convenient network isolation method that facilitates your network policy planning for managed Hadoop clusters.
  • 4
    E-MapReduce
    EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems, such as Hadoop, Spark, Kafka, Flink, and Storm. Alibaba Cloud Elastic MapReduce (EMR) is a big data processing solution that runs on the Alibaba Cloud platform. EMR is built on Alibaba Cloud ECS instances and is based on open-source Apache Hadoop and Apache Spark. EMR allows you to use the Hadoop and Spark ecosystem components, such as Apache Hive, Apache Kafka, Flink, Druid, and TensorFlow, to analyze and process data. You can use EMR to process data stored in different Alibaba Cloud data storage services, such as Object Storage Service (OSS), Log Service (SLS), and Relational Database Service (RDS). You can quickly create clusters without the need to configure hardware and software. All maintenance operations are completed in its web interface.
  • 5
    Apache Gobblin

    Apache Software Foundation

    A distributed data integration framework that simplifies common aspects of Big Data integration such as data ingestion, replication, organization, and lifecycle management for both streaming and batch data ecosystems. Runs as a standalone application on a single box, and also supports an embedded mode. Runs as a MapReduce application on multiple Hadoop versions, with Azkaban supported for launching MapReduce jobs. Runs as a standalone cluster with primary and worker nodes; this mode supports high availability and can run on bare metal as well. Runs as an elastic cluster on the public cloud; this mode also supports high availability. Gobblin as it exists today is a framework that can be used to build different data integration applications like ingest, replication, etc. Each of these applications is typically configured as a separate job and executed through a scheduler like Azkaban.
  • 6
    Azure HDInsight
    Run popular open-source frameworks—including Apache Hadoop, Spark, Hive, Kafka, and more—using Azure HDInsight, a customizable, enterprise-grade service for open-source analytics. Effortlessly process massive amounts of data and get all the benefits of the broad open-source project ecosystem with the global scale of Azure. Easily migrate your big data workloads and processing to the cloud. Open-source projects and clusters are easy to spin up quickly without the need to install hardware or manage infrastructure. Big data clusters reduce costs through autoscaling and pricing tiers that allow you to pay for only what you use. Enterprise-grade security and industry-leading compliance with more than 30 certifications helps protect your data. Optimized components for open-source technologies such as Hadoop and Spark keep you up to date.
  • 7
    Oracle Big Data SQL Cloud Service
    Oracle Big Data SQL Cloud Service enables organizations to immediately analyze data across Apache Hadoop, NoSQL, and Oracle Database, leveraging their existing SQL skills, security policies, and applications with extreme performance. From simplifying data science efforts to unlocking data lakes, Big Data SQL makes the benefits of Big Data available to the largest possible group of end users. Big Data SQL gives users a single location to catalog and secure data in Hadoop and NoSQL systems and Oracle Database. Seamless metadata integration and queries join data from Oracle Database with data from Hadoop and NoSQL databases. Utilities and conversion routines support automatic mappings from metadata stored in HCatalog (or the Hive Metastore) to Oracle tables. Enhanced access parameters give administrators the flexibility to control column mapping and data access behavior. Multiple cluster support enables one Oracle Database to query multiple Hadoop clusters and/or NoSQL systems.
  • 8
    Amazon Elastic Block Store (EBS)
    Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput- and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows, are widely deployed on Amazon EBS. You can choose from six different volume types to balance optimal price and performance. You can achieve single-digit-millisecond latency for high-performance database workloads such as SAP HANA, or gigabyte-per-second throughput for large, sequential workloads such as Hadoop. You can change volume types, tune performance, or increase volume size without disrupting your critical applications, so you have cost-effective storage when you need it.
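    As an illustrative sketch of that last point, here is an online volume change with boto3 (the AWS SDK for Python); the region and volume ID are placeholders:

        # Modify an attached EBS volume in place: no detach, no downtime.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region
        resp = ec2.modify_volume(
            VolumeId="vol-0123456789abcdef0",  # placeholder volume ID
            Size=500,                          # new size in GiB
            VolumeType="gp3",                  # switch volume type online
        )
        print(resp["VolumeModification"]["ModificationState"])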
  • 9
    SAS Data Loader for Hadoop
    Load your data into or out of Hadoop and data lakes. Prep it so it's ready for reports, visualizations, or advanced analytics, all inside the data lakes. And do it all yourself, quickly and easily. Makes it easy to access, transform, and manage data stored in Hadoop or data lakes with a web-based interface that reduces training requirements. Built from the ground up to manage big data on Hadoop or in data lakes, not repurposed from existing IT-focused tools. Lets you group multiple directives to run simultaneously or one after the other. Schedule and automate directives using the exposed public API. Enables you to share and secure directives. Call them from SAS Data Integration Studio, uniting technical and nontechnical user activities. Includes built-in directives: casing, gender and pattern analysis, field extraction, match-merge, and cluster-survive. Profiling runs in parallel on the Hadoop cluster for better performance.
  • 10
    Apache Spark

    Apache Software Foundation

    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
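    A minimal sketch of those high-level operators, as they might be typed into the PySpark shell (the input path is hypothetical):

        # Count word frequencies with a handful of DataFrame operators.
        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("word-count").getOrCreate()
        lines = spark.read.text("hdfs:///data/sample.txt")  # hypothetical path
        counts = (
            lines.select(F.explode(F.split(F.col("value"), r"\s+")).alias("word"))
                 .groupBy("word")
                 .count()
                 .orderBy(F.desc("count"))
        )
        counts.show(10)
        spark.stop()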
  • 11
    IBM Analytics Engine
    IBM Analytics Engine provides an architecture for Hadoop clusters that decouples the compute and storage tiers. Instead of a permanent cluster formed of dual-purpose nodes, Analytics Engine lets users store data in an object storage layer such as IBM Cloud Object Storage and spin up clusters of compute nodes when needed. Separating compute from storage helps to transform the flexibility, scalability, and maintainability of big data analytics platforms. Build on an ODPi-compliant stack with pioneering data science tools and the broader Apache Hadoop and Apache Spark ecosystem. Define clusters based on your application's requirements. Choose the appropriate software pack, version, and size of the cluster. Use as long as required and delete as soon as an application finishes its jobs. Configure clusters with third-party analytics libraries and packages. Deploy workloads from IBM Cloud services like machine learning.
    Starting Price: $0.014 per hour
  • 12
    IBM Db2 Big SQL
    A hybrid SQL-on-Hadoop engine delivering advanced, security-rich data query across enterprise big data sources, including Hadoop, object storage, and data warehouses. IBM Db2 Big SQL is an enterprise-grade, hybrid ANSI-compliant SQL-on-Hadoop engine, delivering massively parallel processing (MPP) and advanced data query. Db2 Big SQL offers a single database connection or query for disparate sources such as Hadoop HDFS and WebHDFS, RDBMS, NoSQL databases, and object stores. Benefit from low latency, high performance, data security, SQL compatibility, and federation capabilities to run ad hoc and complex queries. Db2 Big SQL is now available in two variations. It can be integrated with Cloudera Data Platform, or accessed as a cloud-native service on the IBM Cloud Pak® for Data platform. Access and analyze data and perform queries on batch and real-time data across sources like Hadoop, object stores, and data warehouses.
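    To illustrate the single-connection idea, a sketch using the ibm_db driver; the host, port, credentials, and table are placeholders, with the table assumed to be a Big SQL table defined over Hadoop data:

        # One SQL connection, even when the table's data lives in HDFS.
        import ibm_db

        conn = ibm_db.connect(
            "DATABASE=BIGSQL;HOSTNAME=bigsql.example.com;PORT=32051;"
            "PROTOCOL=TCPIP;UID=bigsql;PWD=secret;",  # placeholder credentials
            "", "",
        )
        stmt = ibm_db.exec_immediate(conn, "SELECT COUNT(*) FROM sales.clicks")
        print(ibm_db.fetch_tuple(stmt)[0])
        ibm_db.close(conn)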
  • 13
    Apache Sentry

    Apache Software Foundation

    Apache Sentry™ is a system for enforcing fine-grained, role-based authorization to data and metadata stored on a Hadoop cluster. Apache Sentry graduated from the Incubator in March 2016 and is now a top-level Apache project. Apache Sentry is a granular, role-based authorization module for Hadoop. Sentry provides the ability to control and enforce precise levels of privileges on data for authenticated users and applications on a Hadoop cluster. Sentry currently works out of the box with Apache Hive, Hive Metastore/HCatalog, Apache Solr, Impala, and HDFS (limited to Hive table data). Sentry is designed to be a pluggable authorization engine for Hadoop components. It allows you to define authorization rules to validate a user or application's access requests for Hadoop resources. Sentry is highly modular and can support authorization for a wide variety of data models in Hadoop.
  • 14
    WANdisco

    Since 2010 we have seen Hadoop become an essential part of the data management landscape. Over the decade the majority of organizations adopted Hadoop to build out their data lake infrastructure. However, while Hadoop offered a cost-effective way to store petabytes of data across a distributed environment, it introduced many complexities. The systems required specialized IT skills, and the on-premises environments lacked the flexibility to easily scale the systems up and down as usage demands changed. The management complexity and flexibility challenges associated with on-premises Hadoop environments are much more optimally addressed in the cloud. To minimize the risks and costs associated with these data modernization efforts, many companies have chosen to automate their cloud data migration with WANdisco. LiveData Migrator is a fully self-service solution requiring no WANdisco expertise or services.
  • 15
    Azure Disk Storage
    Designed to be used with Azure Virtual Machines and Azure VMware Solution (in preview), Azure Disk Storage offers high-performance, durable block storage for your mission- and business-critical applications. Confidently migrate to Azure infrastructure with four disk storage options for the cloud (Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD) to optimize costs and performance for your workload. Get high performance with sub-millisecond latency for throughput- and transaction-intensive workloads such as SAP HANA, SQL Server, and Oracle. Run clustered or high-availability applications cost-effectively in the cloud using shared disks. Get consistent enterprise-grade durability with a 0% annual failure rate. Meet demand without performance disruption by using Ultra Disk Storage. Secure your data with automatic encryption using Microsoft-managed keys or your own.
  • 16
    Apache Bigtop

    Apache Software Foundation

    Bigtop is an Apache Foundation project for infrastructure engineers and data scientists looking for comprehensive packaging, testing, and configuration of the leading open source big data components. Bigtop supports a wide range of components/projects, including, but not limited to, Hadoop, HBase, and Spark. Bigtop packages Hadoop RPMs and DEBs, so that you can manage and maintain your Hadoop cluster. Bigtop provides an integrated smoke testing framework, alongside a suite of over 50 test files. Bigtop provides Vagrant recipes, raw images, and (work-in-progress) Docker recipes for deploying Hadoop from zero. Bigtop supports many operating systems, including Debian, Ubuntu, CentOS, Fedora, openSUSE, and many others. Bigtop includes tools and a framework for testing at various levels (packaging, platform, runtime, etc.) for both initial deployments as well as upgrade scenarios for the entire data platform, not just the individual components.
  • 17
    Longhorn

    In the past, ITOps and DevOps have found it hard to add replicated storage to Kubernetes clusters. As a result, many non-cloud-hosted Kubernetes clusters don't support persistent storage. External storage arrays are non-portable and can be extremely expensive. Longhorn delivers simplified, easy to deploy and upgrade, 100% open source, cloud-native persistent block storage without the cost overhead of open core or proprietary alternatives. Longhorn's built-in incremental snapshot and backup features keep the volume data safe in or out of the Kubernetes cluster. Scheduled backups of persistent storage volumes in Kubernetes clusters are simplified with Longhorn's intuitive, free management UI. External replication solutions recover from a disk failure by re-replicating the entire data store, which can take days; during that time the cluster performs poorly and has a higher risk of failure.
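    As a brief sketch of how a workload requests Longhorn-backed storage, here is a PersistentVolumeClaim created with the official Kubernetes Python client; it assumes Longhorn's default "longhorn" StorageClass is installed:

        # Claim a replicated 10 GiB Longhorn volume in the default namespace.
        from kubernetes import client, config

        config.load_kube_config()
        pvc = client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name="demo-data"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                storage_class_name="longhorn",  # Longhorn's default StorageClass
                resources=client.V1ResourceRequirements(
                    requests={"storage": "10Gi"}
                ),
            ),
        )
        client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)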
  • 18
    HorizonIQ

    HorizonIQ is a comprehensive IT infrastructure provider offering managed private cloud, bare metal servers, GPU clusters, and hybrid cloud solutions designed for performance, security, and cost efficiency. Our managed private cloud services, powered by Proxmox VE or VMware, deliver dedicated virtualized environments ideal for AI workloads, general computing, and enterprise applications. HorizonIQ's hybrid cloud solutions enable seamless integration between private infrastructure and over 280 public cloud providers, facilitating real-time scalability and cost optimization. Our packages offer all-in-one solutions combining compute, network, storage, and security, tailored for various workloads from web applications to high-performance computing. With a focus on single-tenant environments, HorizonIQ ensures compliance with standards like HIPAA, SOC 2, and PCI DSS, while providing a 100% uptime SLA and proactive management through its Compass portal.
  • 19
    Lentiq

    Lentiq is a collaborative data lake as a service environment that’s built to enable small teams to do big things. Quickly run data science, machine learning and data analysis at scale in the cloud of your choice. With Lentiq, your teams can ingest data in real time and then process, clean and share it. From there, Lentiq makes it possible to build, train and share models internally. Simply put, data teams can collaborate with Lentiq and innovate with no restrictions. Data lakes are storage and processing environments, which provide ML, ETL, schema-on-read querying capabilities and so much more. Are you working on some data science magic? You definitely need a data lake. In the Post-Hadoop era, the big, centralized data lake is a thing of the past. With Lentiq, we use data pools, which are multi-cloud, interconnected mini-data lakes. They work together to give you a stable, secure and fast data science environment.
  • 20
    QuerySurge
    QuerySurge leverages AI to automate the data validation and ETL testing of Big Data, Data Warehouses, Business Intelligence Reports, and Enterprise Apps/ERPs with full DevOps functionality for continuous testing. Use cases include data warehouse and ETL testing, Hadoop and NoSQL testing, DevOps for data and continuous testing, data migration testing, BI report testing, and enterprise app/ERP testing. QuerySurge features multi-project support, AI that automatically creates data validation tests based on data mappings, Smart Query Wizards that create tests visually without writing SQL, automated launch, execution, and comparison of tests so you see results quickly, testing across 200+ platforms (data warehouses, Hadoop and NoSQL lakes, databases, flat files, XML, JSON, and BI reports), a RESTful API with 60+ calls and integration with all mainstream DevOps solutions, and an analytics dashboard and reports for data intelligence.
  • 21
    Apache Knox

    Apache Software Foundation

    The Knox API Gateway is designed as a reverse proxy with consideration for pluggability in the areas of policy enforcement, through providers, and the backend services for which it proxies requests. Policy enforcement ranges from authentication/federation, authorization, audit, and dispatch to host mapping and content rewrite rules. Policy is enforced through a chain of providers that are defined within the topology deployment descriptor for each Apache Hadoop cluster gated by Knox. The cluster definition is also defined within the topology deployment descriptor and provides the Knox Gateway with the layout of the cluster for purposes of routing and translation between user-facing URLs and cluster internals. Each Apache Hadoop cluster that is protected by Knox has its set of REST APIs represented by a single cluster-specific application context path. This allows the Knox Gateway to both protect multiple clusters and present the REST API consumer with a single endpoint, as in the sketch below.
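    A small sketch of what that single endpoint looks like from a REST consumer: WebHDFS reached through a Knox topology named "sandbox" (the gateway host and the demo guest credentials are placeholders):

        # All cluster REST APIs sit behind one gateway context path per topology.
        import requests

        gateway = "https://knox.example.com:8443/gateway/sandbox"  # placeholder
        resp = requests.get(
            f"{gateway}/webhdfs/v1/tmp",
            params={"op": "LISTSTATUS"},       # standard WebHDFS operation
            auth=("guest", "guest-password"),  # placeholder credentials
            verify=False,                      # demo only; use real certs in production
        )
        print(resp.json()["FileStatuses"]["FileStatus"])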
  • 22
    Red Hat Ceph Storage
    Red Hat® Ceph Storage is an open, massively scalable, simplified storage solution for modern data pipelines. Engineered for data analytics, artificial intelligence/machine learning (AI/ML), and emerging workloads, Red Hat Ceph Storage delivers software-defined storage on your choice of industry-standard hardware. Scale to unprecedented levels, up to 1 billion objects and beyond, without compromising on performance. Expand or shrink storage clusters with no downtime. Gain the agility you need to get to market faster. Start faster with dramatically simplified installation. Quickly gain insights from massive amounts of unstructured data with streamlined operation, monitoring, and capacity management. Safeguard your data from outside threats and hardware failures with integrated data protection and data security features, including client-side and object-level encryption. Easily handle backup and recovery with a single point of control and administration.
  • 23
    jethro

    Data-driven decision-making has unleashed a surge of business data and a rise in user demand to analyze it. This trend drives IT departments to migrate off expensive Enterprise Data Warehouses (EDW) toward cost-effective Big Data platforms like Hadoop or AWS. These new platforms come with a Total Cost of Ownership (TCO) that is about 10 times lower. They are not ideal for interactive BI applications, however, as they fail to match the high performance and user concurrency of legacy EDWs. For this exact reason, we developed Jethro. Customers use Jethro for interactive BI on Big Data. Jethro is a transparent middle tier that requires no changes to existing apps or data. It is self-driving, with no maintenance required. Jethro is compatible with BI tools like Tableau, Qlik, and MicroStrategy, and is data source agnostic. Jethro delivers on the demands of business users, allowing thousands of concurrent users to run complicated queries over billions of records.
  • 24
    Yandex Data Proc
    You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions for each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines and pipelines for training and developing models, as well as describing other iterative tasks. The Data Proc operator is already built into Apache Airflow.
    Starting Price: $0.19 per hour
  • 25
    doolytic

    doolytic is leading the way in big data discovery, the convergence of data discovery, advanced analytics, and big data. doolytic is rallying expert BI users to the revolution in self-service exploration of big data, revealing the data scientist in all of us. doolytic is an enterprise software solution for native discovery on big data, based on best-of-breed, scalable, open-source technologies. Lightning performance on billions of records and petabytes of data. Structured, unstructured, and real-time data from any source. Sophisticated advanced query capabilities for expert users, and integration with R for advanced and predictive applications. Search, analyze, and visualize data from any format and any source in real time with the flexibility of Elastic. Leverage the power of Hadoop data lakes with no latency and concurrency issues. doolytic solves common BI problems and enables big data discovery without clumsy and inefficient workarounds.
  • 26
    Apache Mahout

    Apache Software Foundation

    Apache Mahout is a powerful, scalable, and versatile machine learning library designed for distributed data processing. It offers a comprehensive set of algorithms for various tasks, including classification, clustering, recommendation, and pattern mining. Built on top of the Apache Hadoop ecosystem, Mahout leverages MapReduce and Spark to enable data processing on large-scale datasets. Apache Mahout(TM) is a distributed linear algebra framework and mathematically expressive Scala DSL designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed back-end or can be extended to other distributed backends. Matrix computations are a fundamental part of many scientific and engineering applications, including machine learning, computer vision, and data analysis. Apache Mahout is designed to handle large-scale data processing by leveraging the power of Hadoop and Spark.
  • 27
    StorPool Storage
    StorPool is a fully managed primary storage platform for businesses that host many mission-critical workloads in their own datacenters. We provide the easiest way to convert standard servers stacked with NVMe SSDs into high-performance, linearly scalable primary storage systems. Companies building their own public or private clouds use StorPool as a superior alternative to mid-range and high-end SANs and All-Flash Arrays (AFA). StorPool delivers above and beyond what is possible with other primary storage products in terms of reliability, agility, speed, and cost-effectiveness. It's an excellent replacement for legacy storage architectures, such as mid- or high-end primary storage arrays, and for products that merely use a software-defined approach to copy the base-array-plus-expansion-shelf architecture. Exceptional performance, reliability, and increased ROI from your cloud computing offering.
  • 28
    Apache Ranger

    The Apache Software Foundation

    Apache Ranger™ is a framework to enable, monitor, and manage comprehensive data security across the Hadoop platform. The vision with Ranger is to provide comprehensive security across the Apache Hadoop ecosystem. With the advent of Apache YARN, the Hadoop platform can now support a true data lake architecture, and enterprises can potentially run multiple workloads in a multi-tenant environment. Data security within Hadoop needs to evolve to support multiple use cases for data access, while also providing a framework for central administration of security policies and monitoring of user access. Centralized security administration manages all security-related tasks in a central UI or using REST APIs. Fine-grained authorization governs specific actions and operations with a Hadoop component or tool, managed through a central administration tool, standardizing the authorization method across all Hadoop components, with enhanced support for different authorization methods such as role-based access control.
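    As a sketch of the REST API route, here is a read of the policies defined for one service via Ranger's public v2 API; the host, credentials, and service name are placeholders:

        # Read back the centrally administered policies for a Hadoop service.
        import requests

        resp = requests.get(
            "https://ranger.example.com:6182/service/public/v2/api/policy",
            params={"serviceName": "hadoopdev"},  # placeholder service name
            auth=("admin", "password"),           # placeholder credentials
            verify=False,                         # demo only
        )
        for policy in resp.json():
            print(policy["name"], policy.get("resources", {}))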
  • 29
    Oracle Cloud Infrastructure Block Volume
    Oracle Cloud Infrastructure Block Volume provides customers with reliable, high-performance block storage designed to work with a range of virtual machines and bare metal instances. With built-in redundancy, Block Volumes are persistent and durable beyond the lifespan of a virtual machine and can scale to 1 PB per compute instance. All volumes have built-in durability and run on redundant hardware to achieve high reliability. Back up block and boot volumes to Oracle Cloud Infrastructure (OCI) Object Storage to enable frequent recovery points. Easily optimize storage size without provisioning constraints. Extend existing block and boot volumes from 50 GB to 32 TB while they are online, without any impact to applications and workloads. Clone existing volumes or restore from backups to move to larger volumes. Easily clone existing block volumes without initiating the backup and restore process.
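    A minimal sketch of such an online resize using the OCI Python SDK; the volume OCID is a placeholder, and configuration is assumed to be read from ~/.oci/config:

        # Grow a block volume while it stays attached and online.
        import oci

        config = oci.config.from_file()  # reads ~/.oci/config
        blockstorage = oci.core.BlockstorageClient(config)
        blockstorage.update_volume(
            "ocid1.volume.oc1..exampleuniqueID",  # placeholder volume OCID
            oci.core.models.UpdateVolumeDetails(size_in_gbs=200),
        )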
  • 30
    ZetaAnalytics

    Halliburton

    The ZetaAnalytics product requires a compatible database appliance for its Data Warehouse. Landmark has qualified the ZetaAnalytics software using Teradata, EMC Greenplum, and IBM Netezza; see the ZetaAnalytics Release Notes for the most up-to-date qualified versions. Before installing and configuring the ZetaAnalytics software, ensure that the Data Warehouse you use for drilling data is created and running. Scripts to create the various Zeta-specific database components within the Data Warehouse need to be run as part of the installation process; these require database administrator (DBA) rights. The ZetaAnalytics product requires Apache Hadoop for model scoring and real-time streaming. If you do not already have an Apache Hadoop cluster installed in your environment, install it before running the ZetaAnalytics installer, which will prompt you for the name and port number of your Hadoop name server and MapReduce service.
  • 31
    MinIO

    MinIO's high-performance object storage suite is software-defined and enables customers to build cloud-native data infrastructure for machine learning, analytics, and application data workloads. MinIO object storage is fundamentally different. Designed for performance and the S3 API, it is 100% open source. MinIO is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads. MinIO is the world's fastest object storage server. With READ/WRITE speeds of 183 GB/s and 171 GB/s on standard hardware, object storage can operate as the primary storage tier for workloads ranging from Spark, Presto, TensorFlow, and H2O.ai to serving as a replacement for Hadoop HDFS. MinIO leverages the hard-won knowledge of the web scalers to bring a simple scaling model to object storage. At MinIO, scaling starts with a single cluster, which can be federated with other MinIO clusters.
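    Because MinIO speaks the S3 API, a client interaction takes only a few lines; this sketch uses the MinIO Python SDK with a placeholder endpoint and credentials:

        # Create a bucket and upload an object over the S3 API.
        from minio import Minio

        client = Minio(
            "minio.example.com:9000",      # placeholder endpoint
            access_key="YOUR-ACCESS-KEY",  # placeholder credentials
            secret_key="YOUR-SECRET-KEY",
        )
        if not client.bucket_exists("training-data"):
            client.make_bucket("training-data")
        client.fput_object("training-data", "part-0001.parquet",
                           "/tmp/part-0001.parquet")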
  • 32
    simplyblock

    Simplyblock provides a distributed storage solution for IO-intensive and latency-sensitive container workloads in the cloud, offering an alternative to Elastic Block Storage services. The storage solution enables thin provisioning, encryption, compression, storage virtualization, and more. Ultra-high performance at low TCO, available for AWS as a fully containerized deployment. Up to 100x improved cost-to-performance over currently prevailing software-defined storage technologies like Ceph. Start from a single node and grow to 255 nodes in a single cluster. Scales safely with zero downtime, and performance scales linearly. Storage entities (logical volumes) are provisioned and attached at the cluster level; no manual configuration is required. A drop-in replacement for your current Kubernetes storage solution, it offers easy integration via a StorageClass, and supports concurrent writes from multiple containers and nodes via distributed file system support.
  • 33
    MayaScale

    ZettaLane Systems

    Build a powerful NVMe over Fabrics high-performance shared storage solution with MayaScale. Consolidate direct-attached NVMe resources into a shared storage pool and provision flexible NVMe namespaces to clients demanding high performance with low latency. When finished, the clients can return NVMe storage space back to the storage pool. No more over-provisioning or stranded NVMe storage space, as is the case with direct-attached storage. This network-agnostic solution uses RDMA on-premises and standard TCP in the cloud. A true NVMe device is visible to the client using the standard NVMe driver stack, without the need for any proprietary driver extensions. Configure and deploy NVMe over Fabrics SAN infrastructure at rack scale in your data center by pooling heterogeneous NVMe devices over an RDMA-capable interconnect, including RoCE, iWARP, or InfiniBand. Even on the public cloud, experience NVMe over Fabrics using the standard TCP/IP protocol without the need for any specialized RDMA hardware or SR-IOV virtualization.
  • 34
    Oracle Big Data Discovery
    Oracle Big Data Discovery is a stunningly visual, intuitive product that leverages the power of Hadoop to transform raw data into business insight in minutes, without the need to learn complex tools or rely only on highly specialized resources. With Oracle Big Data Discovery, customers can easily find relevant data sets in Hadoop, explore the data and quickly understand its potential, transform and enrich data to make it better, analyze the data to discover new insights, and share results and publish back to Hadoop for use across the enterprise. In your organization, use BDD as the center of your data lab, as a unified environment for navigating and exploring all of your data sources in Hadoop, and to create projects and BDD applications. In BDD, a broader range of people can work with big data compared with traditional analytics tools. You spend less time on data loading and updates, and can focus on actual data analysis of big data.
  • 35
    Constant

    Instantly deploy and hyperscale bare metal, virtual servers, and storage around the world. Our passion is helping developers build and scale applications using the most efficient global cloud infrastructure. Spend less time managing your infrastructure and more time developing. Accelerate your development with flexible, reliable cloud infrastructure deployed in seconds. Build, deploy, and scale with CI/CD on our infrastructure. Deliver compute and storage resources where they are needed most. Scale your platform and deliver optimal performance to players around the globe. Build a global application backend to connect customers. Seamlessly manage dynamic and rapidly growing resource demands. Constant's flagship product, Vultr, is a favorite with the developer community and serves over 1.5 million customers with flexible, scalable, global bare metal, cloud computing, and storage solutions.
  • 36
    Zadara

    Zadara Storage

    Zadara is enterprise storage made easy. Any data type. Any protocol. Any location. Get Zadara — on your premises or with your chosen cloud provider — and you get more than industry-leading enterprise storage. You get a fully-managed, pay-only-for-what-you-use service that eliminates the cost and complexity typically associated with enterprise storage.
    Starting Price: $0.02/GB/month
  • 37
    EspressReport ES

    Quadbase Systems

    EspressReport ES (Enterprise Server) is web- and desktop-based software that allows users to develop stunning and interactive data visualizations and reports. The platform offers full Java EE integration, the ability to draw data from Big Data sources (Hadoop, Spark, and MongoDB), ad-hoc queries and reports, online map support, mobile compatibility, an alert monitor, and many other features.
  • 38
    Apache Atlas

    Apache Software Foundation

    Atlas is a scalable and extensible set of core foundational governance services – enabling enterprises to effectively and efficiently meet their compliance requirements within Hadoop and allows integration with the whole enterprise data ecosystem. Apache Atlas provides open metadata management and governance capabilities for organizations to build a catalog of their data assets, classify and govern these assets and provide collaboration capabilities around these data assets for data scientists, analysts and the data governance team. Pre-defined types for various Hadoop and non-Hadoop metadata. Ability to define new types for the metadata to be managed. Types can have primitive attributes, complex attributes, object references; can inherit from other types. Instances of types, called entities, capture metadata object details and their relationships. REST APIs to work with types and instances allow easier integration.
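    A short sketch of those REST APIs: a basic search against Atlas's v2 API for Hive table entities (the host and credentials are placeholders):

        # Search the metadata catalog for Hive tables matching "sales".
        import requests

        resp = requests.post(
            "http://atlas.example.com:21000/api/atlas/v2/search/basic",
            json={"typeName": "hive_table", "query": "sales", "limit": 5},
            auth=("admin", "admin"),  # placeholder credentials
        )
        for entity in resp.json().get("entities", []):
            print(entity["typeName"], entity["attributes"]["qualifiedName"])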
  • 39
    MLlib

    Apache Software Foundation

    Apache Spark's MLlib is a scalable machine learning library that integrates seamlessly with Spark's APIs, supporting Java, Scala, Python, and R. It offers a comprehensive suite of algorithms and utilities, including classification, regression, clustering, collaborative filtering, and tools for constructing machine learning pipelines. MLlib's high-quality algorithms leverage Spark's iterative computation capabilities, delivering performance up to 100 times faster than traditional MapReduce implementations. It is designed to operate across diverse environments, running on Hadoop, Apache Mesos, Kubernetes, standalone clusters, or in the cloud, and accessing various data sources such as HDFS, HBase, and local files. This flexibility makes MLlib a robust solution for scalable and efficient machine learning tasks within the Apache Spark ecosystem.
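    A compact sketch of an MLlib pipeline in PySpark, chaining a tokenizer, a feature hasher, and logistic regression over a toy DataFrame:

        # Fit a three-stage ML pipeline and apply it back to the training data.
        from pyspark.ml import Pipeline
        from pyspark.ml.classification import LogisticRegression
        from pyspark.ml.feature import HashingTF, Tokenizer
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("mllib-pipeline").getOrCreate()
        train = spark.createDataFrame(
            [("spark is fast", 1.0), ("disk heavy batch job", 0.0)],
            ["text", "label"],
        )
        pipeline = Pipeline(stages=[
            Tokenizer(inputCol="text", outputCol="words"),
            HashingTF(inputCol="words", outputCol="features"),
            LogisticRegression(maxIter=10),
        ])
        model = pipeline.fit(train)
        model.transform(train).select("text", "prediction").show()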
  • 40
    DRBD

    LINBIT

    DRBD® (Distributed Replicated Block Device) is an open source, software‑based, shared‑nothing block storage replication solution for Linux, designed primarily to deliver high-performance, high‑availability (HA) data services by mirroring local block devices between nodes in real time, either synchronously or asynchronously. Implemented deep in the Linux kernel as a virtual block‑device driver, DRBD ensures local read performance with efficient write‑through replication to peer(s). User‑space utilities like drbdadm, drbdsetup, and drbdmeta enable declarative configuration, metadata management, and administration across installations. Originally built for two‑node HA clusters, DRBD 9.x extends support to multi‑node replication and integration into software‑defined storage (SDS) systems such as LINSTOR, making it suitable for cloud‑native environments.
  • 41
    SpectX

    SpectX is a powerful log analyzer for incident investigation and data exploration. It does not ingest or index data but runs queries directly on log files stored in file systems or blob storage. Local log servers, cloud storage, Hadoop clusters, JDBC databases, production servers, Elastic clusters, or anything that speaks HTTP: SpectX turns any text-based log files into structured virtual views. The SpectX query language is inspired by piping in Unix. An extensive library of built-in query functions allows analysts to compose complex queries and get advanced insights. In addition to the browser-based interface, every query can be easily executed via a RESTful API, with advanced options to customize the result set. This makes it easy to integrate SpectX with other applications in need of clean and structured data. SpectX's easy-to-read pattern-matching language can flexibly match any data, with no need to read or write regex.
  • 42
    Alibaba Cloud Data Integration
    Alibaba Cloud Data Integration is a comprehensive data synchronization platform that facilitates both real-time and offline data exchange across various data sources, networks, and locations. It supports data synchronization between more than 400 pairs of disparate data sources, including RDS databases, semi-structured storage, non-structured storage (such as audio, video, and images), NoSQL databases, and big data storage. The platform also enables real-time data reading and writing between data sources such as Oracle, MySQL, and DataHub. Data Integration allows users to schedule offline tasks by setting specific trigger times, including year, month, day, hour, and minute, simplifying the configuration of periodic incremental data extraction. It integrates seamlessly with DataWorks data modeling, providing an operations and maintenance integrated workflow. The platform leverages the computing capability of Hadoop clusters to synchronize HDFS data to MaxCompute.
  • 43
    CloudPe

    Leapswitch Networks

    CloudPe is a global cloud solutions provider offering scalable and secure cloud technologies tailored for businesses of all sizes. As a collaborative venture between Leapswitch Networks and Strad Solutions, CloudPe combines extensive industry expertise to deliver innovative services. Key offerings include virtual machines (high-performance VMs designed for various business needs, including hosting websites, building applications, and data processing), GPU instances (NVIDIA-powered GPUs for AI, machine learning, and high-performance computing, available on demand), Kubernetes-as-a-Service (simplified container orchestration for deploying and managing containerized applications efficiently), S3-compatible storage (highly scalable and cost-effective storage solutions), and load balancers (intelligent load balancing to distribute traffic evenly across resources, ensuring fast and reliable performance). Why choose CloudPe? Reliability, cost efficiency, and instant deployment.
  • 44
    Scaleway

    The cloud that makes sense. From a high-performance cloud ecosystem to hyperscale green datacenters, Scaleway provides the foundation for digital success. A cloud platform designed for developers and growing companies, with all you need to create, deploy, and scale your infrastructure in the cloud: compute, GPU, bare metal, and containers; evolutive and managed storage; network; and IoT. The largest choice of dedicated servers to succeed in the most demanding projects, plus high-end dedicated servers, web hosting, and domain name services. Take advantage of our cutting-edge expertise to host your hardware in our resilient, high-performance, and secure data centers, with private suite and cage options as well as full, half, and quarter racks. Scaleway operates six data centers in Europe and offers cloud solutions to customers in more than 160 countries around the world. Our excellence team: experts by your side 24/7, year-round, helping customers use, tune, and optimize their platforms.
  • 45
    Lunavi

    Lunavi builds cloud storage solutions around your application requirements and IT architecture for an optimized and reliable cloud environment that allows synchronized storage access across your portfolio. Different types of cloud storage are best suited to different types of applications. For common apps, shared drives, and regular read/write operations, simple file storage often does the trick. For high-performance applications and portability, block storage might work better. For very large storage needs and wide compatibility, object storage could be the answer. Whatever your application or platform of choice, Lunavi helps guide your way. Object storage is a highly scalable, cost-effective cloud storage for large sets of unstructured data including images, video, documents, and other media-rich environments. Lunavi offers file and block storage with a range of performance levels to optimize your storage budget while supporting specific cloud workloads.
  • 46
    Apache Mesos

    Apache Software Foundation

    Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments. Native support for launching containers with Docker and AppC images. Support for running cloud-native and legacy applications in the same cluster with pluggable scheduling policies. HTTP APIs for developing new distributed applications, for operating the cluster, and for monitoring. Built-in web UI for viewing cluster state and navigating container sandboxes.
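    A tiny sketch of the monitoring side of those HTTP APIs, reading cluster state from the master (the master address is a placeholder):

        # Ask the Mesos master for cluster state and list registered frameworks.
        import requests

        state = requests.get(
            "http://mesos-master.example.com:5050/master/state"
        ).json()
        print("activated agents:", state["activated_slaves"])
        for framework in state["frameworks"]:
            print(framework["name"], framework["id"])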
  • 47
    Trino

    Trino is a query engine that runs at ludicrous speed. A fast, distributed SQL query engine for big data analytics that helps you explore your data universe. Trino is a highly parallel and distributed query engine that is built from the ground up for efficient, low-latency analytics. The largest organizations in the world use Trino to query exabyte-scale data lakes and massive data warehouses alike. It supports diverse use cases: ad-hoc analytics at interactive speeds, massive multi-hour batch queries, and high-volume apps that perform sub-second queries. Trino is an ANSI SQL-compliant query engine that works with BI tools such as R, Tableau, Power BI, Superset, and many others. You can natively query data in Hadoop, S3, Cassandra, MySQL, and many others, without the need for complex, slow, and error-prone processes for copying the data. Access data from multiple systems within a single query.
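    To illustrate that federation point, here is a sketch of one query joining a Hive-catalog table with a MySQL-catalog table through the Trino Python client; the host, catalogs, and tables are placeholders:

        # One SQL statement, two different backing systems, no data copying.
        from trino.dbapi import connect

        conn = connect(host="trino.example.com", port=8080, user="analyst")
        cur = conn.cursor()
        cur.execute("""
            SELECT o.order_id, c.email
            FROM hive.sales.orders AS o      -- lives in a Hadoop/Hive catalog
            JOIN mysql.crm.customers AS c    -- lives in a MySQL catalog
              ON o.customer_id = c.id
            LIMIT 10
        """)
        for row in cur.fetchall():
            print(row)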
  • 48
    Apache Trafodion

    Apache Software Foundation

    Apache Trafodion is a web-scale SQL-on-Hadoop solution enabling transactional or operational workloads on Apache Hadoop. Trafodion builds on the scalability, elasticity, and flexibility of Hadoop, and extends it to provide guaranteed transactional integrity, enabling new kinds of big data applications to run on Hadoop. Full-function ANSI SQL language support. JDBC/ODBC connectivity for Linux/Windows clients. Distributed ACID transaction protection across multiple statements, tables, and rows. Performance improvements for OLTP workloads with compile-time and run-time optimizations. Support for large data sets using a parallel-aware query optimizer. Reuse existing SQL skills and improve developer productivity. Distributed ACID transactions guarantee data consistency across multiple rows and tables. Interoperability with existing tools and applications. Hadoop and Linux distribution neutral. Easy to add to your existing Hadoop infrastructure.
  • 49
    Nutanix Files Storage
    Nutanix Files Storage is a simple, flexible, and intelligent scale-out file storage service for the data-driven era. Update non-disruptively with a single click, and manage all storage from a single pane of glass. Scale up or scale out flexibly on the hardware of your choice and enjoy cloud-like consumption. Know your data, who's using it, and how, and then drive automated management and control. An IDC study shows how Nutanix Files Storage reduces operational overhead by 66% over traditional siloed storage, resulting in a 414% ROI and a seven-month payback period. Nutanix Files Storage is built to handle billions of files and tens of thousands of user sessions. As your environment grows, just one click will elastically scale your cluster up by adding more compute and/or memory to the file server VMs, or out by adding more file server VMs, all from a single platform. You can also provide object and block storage using the same resources.
  • 50
    Akamai Cloud
    Akamai Cloud (formerly Linode) is the world’s most distributed cloud computing platform, designed to help businesses deploy low-latency, high-performance applications anywhere. It delivers GPU acceleration, managed Kubernetes, object storage, and compute instances optimized for AI, media, and SaaS workloads. With flat, predictable pricing and low egress fees, Akamai Cloud offers a transparent and cost-effective alternative to traditional hyperscalers. Its global infrastructure ensures faster response times, improved reliability, and data sovereignty across key regions. Developers can scale securely using Akamai’s firewall, database, and networking solutions, all managed through an intuitive interface or API. Backed by enterprise-grade support and compliance, Akamai Cloud empowers organizations to innovate confidently at the edge.