Alternatives to Oracle Cloud Infrastructure Data Lakehouse
Compare Oracle Cloud Infrastructure Data Lakehouse alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Oracle Cloud Infrastructure Data Lakehouse in 2025. Compare features, ratings, user reviews, pricing, and more from Oracle Cloud Infrastructure Data Lakehouse competitors and alternatives in order to make an informed decision for your business.
-
1
Teradata VantageCloud
Teradata
Teradata VantageCloud: The complete cloud analytics and data platform for AI. Teradata VantageCloud is an enterprise-grade, cloud-native data and analytics platform that unifies data management, advanced analytics, and AI/ML capabilities in a single environment. Designed for scalability and flexibility, VantageCloud supports multi-cloud and hybrid deployments, enabling organizations to manage structured and semi-structured data across AWS, Azure, Google Cloud, and on-premises systems. It offers full ANSI SQL support, integrates with open-source tools like Python and R, and provides built-in governance for secure, trusted AI. VantageCloud empowers users to run complex queries, build data pipelines, and operationalize machine learning models—all while maintaining interoperability with modern data ecosystems. -
2
AnalyticsCreator
AnalyticsCreator
AnalyticsCreator is a metadata-driven data warehouse automation solution built specifically for teams working within the Microsoft data ecosystem. It helps organizations speed up the delivery of production-ready data products by automating the entire data engineering lifecycle—from ELT pipeline generation and dimensional modeling to historization and semantic model creation for platforms like Microsoft SQL Server, Azure Synapse Analytics, and Microsoft Fabric. By eliminating repetitive manual coding and reducing the need for multiple disconnected tools, AnalyticsCreator helps data teams reduce tool sprawl and enforce consistent modeling standards across projects. The solution includes built-in support for automated documentation, lineage tracking, schema evolution, and CI/CD integration with Azure DevOps and GitHub. Whether you’re working on data marts, data products, or full-scale enterprise data warehouses, AnalyticsCreator allows you to build faster, govern better, and deliver production-ready data products sooner. -
3
Snowflake
Snowflake
Snowflake is a comprehensive AI Data Cloud platform designed to eliminate data silos and simplify data architectures, enabling organizations to get more value from their data. The platform offers interoperable storage that provides near-infinite scale and access to diverse data sources, both inside and outside Snowflake. Its elastic compute engine delivers high performance for any number of users, workloads, and data volumes with seamless scalability. Snowflake’s Cortex AI accelerates enterprise AI by providing secure access to leading large language models (LLMs) and data chat services. The platform’s cloud services automate complex resource management, ensuring reliability and cost efficiency. Trusted by over 11,000 global customers across industries, Snowflake helps businesses collaborate on data, build data applications, and maintain a competitive edge. Starting Price: $2 compute/month -
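To illustrate the standard SQL access described above, here is a minimal sketch using Snowflake's official Python connector (snowflake-connector-python); the account, credentials, warehouse, database, and "orders" table are placeholders, not details from this listing.

```python
# Minimal sketch: querying Snowflake with the official Python connector.
# All connection details and the "orders" table below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",     # placeholder account identifier
    user="analyst",
    password="...",
    warehouse="COMPUTE_WH",   # compute is provisioned per virtual warehouse
    database="SALES",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Standard SQL runs on the elastic compute engine described above.
    cur.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    for region, total in cur:
        print(region, total)
finally:
    conn.close()
```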
4
Amazon Redshift
Amazon
More customers pick Amazon Redshift than any other cloud data warehouse. Redshift powers analytical workloads for Fortune 500 companies, startups, and everything in between. Companies like Lyft have grown with Redshift from startups to multi-billion dollar enterprises. No other data warehouse makes it as easy to gain new insights from all your data. With Redshift you can query petabytes of structured and semi-structured data across your data warehouse, operational database, and your data lake using standard SQL. Redshift lets you easily save the results of your queries back to your S3 data lake using open formats like Apache Parquet to further analyze from other analytics services like Amazon EMR, Amazon Athena, and Amazon SageMaker. Redshift is the world’s fastest cloud data warehouse and gets faster every year. For performance-intensive workloads you can use the new RA3 instances to get up to 3x the performance of any cloud data warehouse. Starting Price: $0.25 per hour -
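As a hedged illustration of the S3 round trip described above, Redshift's UNLOAD statement can write query results back to the data lake as Parquet. This sketch uses the official redshift_connector Python driver; the cluster endpoint, credentials, bucket, and IAM role are placeholders.

```python
# Sketch: run standard SQL on Redshift, then UNLOAD results to S3 as Parquet
# so services like Athena, EMR, or SageMaker can analyze them further.
import redshift_connector

conn = redshift_connector.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    database="dev",
    user="awsuser",
    password="...",
)
cur = conn.cursor()
cur.execute("""
    UNLOAD ('SELECT order_id, status FROM orders WHERE status = ''shipped''')
    TO 's3://example-bucket/exports/orders_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleUnloadRole'
    FORMAT AS PARQUET
""")
conn.commit()
conn.close()
```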
5
CelerData Cloud
CelerData
CelerData is a high-performance SQL engine built to power analytics directly on data lakehouses, eliminating the need for traditional data warehouse ingestion pipelines. It delivers sub-second query performance at scale, supports on-the-fly JOINs without costly denormalization, and simplifies architecture by allowing users to run demanding workloads on open-format tables. Built on the open source engine StarRocks, the platform outperforms legacy query engines like Trino, ClickHouse, and Apache Druid in latency, concurrency, and cost-efficiency. With a cloud-managed service that runs in your own VPC, you retain infrastructure control and data ownership while CelerData handles maintenance and optimization. The platform is positioned to power real-time OLAP, business intelligence, and customer-facing analytics use cases and is trusted by enterprise customers (including names such as Pinterest, Coinbase, and Fanatics) who have achieved significant latency reductions and cost savings. -
6
IBM watsonx.data
IBM
Put your data to work, wherever it resides, with the open, hybrid data lakehouse for AI and analytics. Connect your data from anywhere, in any format, and access through a single point of entry with a shared metadata layer. Optimize workloads for price and performance by pairing the right workloads with the right query engine. Embed natural-language semantic search without the need for SQL, so you can unlock generative AI insights faster. Manage and prepare trusted data to improve the relevance and precision of your AI applications. Use all your data, everywhere. With the speed of a data warehouse, the flexibility of a data lake, and special features to support AI, watsonx.data can help you scale AI and analytics across your business. Choose the right engines for your workloads. Flexibly manage cost, performance, and capability with access to multiple open engines including Presto, Presto C++, Spark, Milvus, and more. -
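Since watsonx.data exposes open engines such as Presto, a generic sketch with the open source presto-python-client is shown below; the host, port, catalog, schema, and table are assumptions, and a real watsonx.data deployment adds TLS and authentication details specific to your environment.

```python
# Generic Presto query via the open source presto-python-client (prestodb).
# Endpoint, catalog, schema, and table names are placeholders.
import prestodb

conn = prestodb.dbapi.connect(
    host="presto.example.com",  # placeholder; real endpoints are deployment-specific
    port=8080,
    user="analyst",
    catalog="iceberg_data",     # assumed catalog registered in the lakehouse
    schema="sales",
)
cur = conn.cursor()
cur.execute("SELECT product, COUNT(*) FROM orders GROUP BY product")
for row in cur.fetchall():
    print(row)
```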
7
Archon Data Store
Platform 3 Solutions
Archon Data Store™ is a powerful and secure open-source based archive lakehouse platform designed to store, manage, and provide insights from massive volumes of data. With its compliance features and minimal footprint, it enables large-scale search, processing, and analysis of structured, unstructured, & semi-structured data across your organization. Archon Data Store combines the best features of data warehouses and data lakes into a single, simplified platform. This unified approach eliminates data silos, streamlining data engineering, analytics, data science, and machine learning workflows. Through metadata centralization, optimized data storage, and distributed computing, Archon Data Store maintains data integrity. Its common approach to data management, security, and governance helps you operate more efficiently and innovate faster. Archon Data Store provides a single platform for archiving and analyzing all your organization's data while delivering operational efficiencies. -
8
BigLake
Google
BigLake is a storage engine that unifies data warehouses and lakes by enabling BigQuery and open-source frameworks like Spark to access data with fine-grained access control. BigLake provides accelerated query performance across multi-cloud storage and open formats such as Apache Iceberg. Store a single copy of data with uniform features across data warehouses & lakes. Fine-grained access control and multi-cloud governance over distributed data. Seamless integration with open-source analytics tools and open data formats. Unlock analytics on distributed data regardless of where and how it’s stored, while choosing the best analytics tools, open source or cloud-native, over a single copy of data. Fine-grained access control across open source engines like Apache Spark, Presto, and Trino, and open formats such as Parquet. Performant queries over data lakes powered by BigQuery. Integrates with Dataplex to provide management at scale, including logical data organization. Starting Price: $5 per TB -
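As a minimal sketch of the BigQuery-over-open-formats access described above, the google-cloud-bigquery client can query a BigLake table like any other table; the project, dataset, and table names are placeholders.

```python
# Sketch: querying a BigLake table through BigQuery. Fine-grained access
# control is enforced by the table, not by the engine reading it.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # assumes application default credentials
sql = """
    SELECT channel, SUM(revenue) AS total
    FROM `my-project.lake.events`  -- placeholder BigLake table over open-format files
    GROUP BY channel
"""
for row in client.query(sql).result():
    print(row.channel, row.total)
```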
9
Databricks Data Intelligence Platform
Databricks
The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker. -
10
e6data
e6data
Limited competition due to deep barriers to entry, specialized know-how, massive capital needs, and long time-to-market. Existing platforms are indistinguishable in price and performance, reducing the incentive to switch. Migrating from one engine’s SQL dialect to another engine’s SQL involves months of effort. Truly format-neutral computing, interoperable with all major open standards. Enterprise data leaders are hit by an unprecedented explosion in computing demand for data intelligence. They are surprised to find that 10% of their heavy, compute-intensive use cases consume 80% of the cost, engineering effort, and stakeholder complaints. Unfortunately, such workloads are also mission-critical and non-discretionary. e6data amplifies ROI on enterprises' existing data platforms and architecture. e6data’s truly format-neutral compute has the unique distinction of being equally efficient and performant across leading data lakehouse table formats. -
11
Lyftrondata
Lyftrondata
Whether you want to build a governed delta lake, data warehouse, or simply want to migrate from your traditional database to a modern cloud data warehouse, do it all with Lyftrondata. Simply create and manage all of your data workloads on one platform by automatically building your pipeline and warehouse. Analyze it instantly with ANSI SQL, BI/ML tools, and share it without worrying about writing any custom code. Boost the productivity of your data professionals and shorten your time to value. Define, categorize, and find all data sets in one place. Share these data sets with other experts with zero coding and drive data-driven insights. This data-sharing ability is perfect for companies that want to store their data once, share it with other experts, and use it multiple times, now and in the future. Define datasets, apply SQL transformations, or simply migrate your SQL data processing logic to any cloud data warehouse. -
12
Cloudera
Cloudera
Manage and secure the data lifecycle from the Edge to AI in any cloud or data center. Operates across all major public clouds and the private cloud with a public cloud experience everywhere. Integrates data management and analytic experiences across the data lifecycle for data anywhere. Delivers security, compliance, migration, and metadata management across all environments. Open source, open integrations, extensible, & open to multiple data stores and compute architectures. Deliver easier, faster, and safer self-service analytics experiences. Provide self-service access to integrated, multi-function analytics on centrally managed and secured business data while deploying a consistent experience anywhere—on premises or in hybrid and multi-cloud. Enjoy consistent data security, governance, lineage, and control, while deploying the powerful, easy-to-use cloud analytics experiences business users require and eliminating their need for shadow IT solutions. -
13
Onehouse
Onehouse
The only fully managed cloud data lakehouse designed to ingest from all your data sources in minutes and support all your query engines at scale, for a fraction of the cost. Ingest from databases and event streams at TB-scale in near real-time, with the simplicity of fully managed pipelines. Query your data with any engine, and support all your use cases including BI, real-time analytics, and AI/ML. Cut your costs by 50% or more compared to cloud data warehouses and ETL tools with simple usage-based pricing. Deploy in minutes without engineering overhead with a fully managed, highly optimized cloud service. Unify your data in a single source of truth and eliminate the need to copy data across data warehouses and lakes. Use the right table format for the job, with omnidirectional interoperability between Apache Hudi, Apache Iceberg, and Delta Lake. Quickly configure managed pipelines for database CDC and streaming ingestion. -
14
DataLakeHouse.io
DataLakeHouse.io
DataLakeHouse.io (DLH.io) Data Sync provides replication and synchronization of data from operational systems (on-premise and cloud-based SaaS) into destinations of your choosing, primarily cloud data warehouses. Built for marketing teams and any data team at any size of organization, DLH.io enables business cases for building single-source-of-truth data repositories, such as dimensional data warehouses and Data Vault 2.0 models, as well as machine learning workloads. Use cases are technical and functional, including ELT, ETL, data warehouse, pipeline, analytics, AI & machine learning, data, marketing, sales, retail, FinTech, restaurant, manufacturing, public sector, and more. DataLakeHouse.io is on a mission to orchestrate data for every organization, particularly those desiring to become data-driven or continuing their data-driven strategy journey. DataLakeHouse.io (aka DLH.io) enables hundreds of companies to manage their cloud data warehousing and analytics solutions. Starting Price: $99 -
15
Dremio
Dremio
Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable. -
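Because Dremio is built around Apache Arrow, it exposes an Arrow Flight endpoint (commonly on port 32010); the sketch below shows a generic Flight SQL round trip with pyarrow, where the host, credentials, and dataset path are placeholders.

```python
# Sketch: querying Dremio over Apache Arrow Flight with pyarrow.
# Host, credentials, and the lake."trips" dataset are placeholders.
from pyarrow import flight

client = flight.FlightClient("grpc+tcp://dremio.example.com:32010")
# Basic auth returns a bearer-token header to attach to subsequent calls.
token = client.authenticate_basic_token(b"analyst", b"password")
options = flight.FlightCallOptions(headers=[token])

descriptor = flight.FlightDescriptor.for_command(
    'SELECT city, COUNT(*) FROM lake."trips" GROUP BY city'
)
info = client.get_flight_info(descriptor, options)
reader = client.do_get(info.endpoints[0].ticket, options)
print(reader.read_all().to_pandas())
```

Arrow Flight returns results as columnar Arrow batches, which is one reason the engine avoids the row-by-row serialization overhead of traditional ODBC/JDBC paths.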
16
FutureAnalytica
FutureAnalytica
Ours is the world’s first & only end-to-end platform for all your AI-powered innovation needs — right from data cleansing & structuring, to creating & deploying advanced data-science models, to infusing advanced analytics algorithms with built-in Recommendation AI, to presenting outcomes in easy-to-interpret visualization dashboards, as well as Explainable AI to backtrack how the outcomes were derived, our no-code AI platform can do it all! Our platform offers a holistic, seamless data science experience. With key features like a robust Data Lakehouse, a unique AI Studio, a comprehensive AI Marketplace, and a world-class data-science support team (on a need basis), FutureAnalytica is geared to reduce your time, efforts & costs across your data-science & AI journey. Initiate discussions with the leadership, followed by a quick technology assessment in 1–3 days. Build ready-to-integrate AI solutions using FA's fully automated data science & AI platform in 10–18 days. -
17
BryteFlow
BryteFlow
BryteFlow builds the most efficient automated environments for analytics ever. It converts Amazon S3 into an awesome analytics platform by leveraging the AWS ecosystem intelligently to deliver data at lightning speeds. It complements AWS Lake Formation and automates the Modern Data Architecture, providing performance and productivity. You can completely automate data ingestion with BryteFlow Ingest’s simple point-and-click interface, while BryteFlow XL Ingest is great for the initial full ingest of very large datasets. No coding is needed! With BryteFlow Blend you can merge data from varied sources such as Oracle, SQL Server, Salesforce, and SAP, and transform it to make it ready for analytics and machine learning. BryteFlow TruData reconciles the data at the destination with the source continually or at a frequency you select. If data is missing or incomplete you get an alert so you can fix the issue easily. -
18
Mozart Data
Mozart Data
Mozart Data is the all-in-one modern data platform that makes it easy to consolidate, organize, and analyze data. Start making data-driven decisions by setting up a modern data stack in an hour - no engineering required. -
19
Openbridge
Openbridge
Uncover insights to supercharge sales growth using code-free, fully-automated data pipelines to data lakes or cloud warehouses. A flexible, standards-based platform to unify sales and marketing data for automating insights and smarter growth. Say goodbye to messy, expensive manual data downloads. Always know what you’ll pay and only pay for what you use. Fuel your tools with quick access to analytics-ready data. As certified developers, we only work with secure, official APIs. Get started quickly with data pipelines from popular sources. Pre-built, pre-transformed, and ready-to-go data pipelines. Unlock data from Amazon Vendor Central, Amazon Seller Central, Instagram Stories, Facebook, Amazon Advertising, Google Ads, and many others. Code-free data ingestion and transformation processes allow teams to realize value from their data quickly and cost-effectively. Data is always securely stored directly in a trusted, customer-owned data destination like Databricks, Amazon Redshift, etc. Starting Price: $149 per month -
20
Talend Data Fabric
Qlik
Talend Data Fabric’s suite of cloud services efficiently handles all your integration and integrity challenges — on-premises or in the cloud, any source, any endpoint. Deliver trusted data at the moment you need it — for every user, every time. Ingest and integrate data, applications, files, events, and APIs from any source or endpoint to any location, on-premises and in the cloud, easier and faster with an intuitive interface and no coding. Embed quality into data management and guarantee ironclad regulatory compliance with a thoroughly collaborative, pervasive, and cohesive approach to data governance. Make the most informed decisions based on high quality, trustworthy data derived from batch and real-time processing and bolstered with market-leading data cleaning and enrichment tools. Get more value from your data by making it available internally and externally. Extensive self-service capabilities make building APIs easy and improve customer engagement. -
21
OpenText Analytics Database
OpenText
OpenText Analytics Database is a high-performance, scalable analytics platform that enables organizations to analyze massive data sets quickly and cost-effectively. It supports real-time analytics and in-database machine learning to deliver actionable business insights. The platform can be deployed flexibly across hybrid, multi-cloud, and on-premises environments to optimize infrastructure and reduce total cost of ownership. Its massively parallel processing (MPP) architecture handles complex queries efficiently, regardless of data size. OpenText Analytics Database also features compatibility with data lakehouse architectures, supporting formats like Parquet and ORC. With built-in machine learning and broad language support, it empowers users from SQL experts to Python developers to derive predictive insights.
-
22
iomete
iomete
Modern lakehouse built on top of Apache Iceberg and Apache Spark. Includes: serverless lakehouse, serverless Spark jobs, SQL editor, advanced data catalog, and built-in BI (or connect a third-party BI tool, e.g. Tableau or Looker). iomete has an extreme value proposition, with compute prices equal to AWS on-demand pricing. No mark-ups. AWS users get our platform basically for free. Starting Price: Free -
23
Cloudera Data Warehouse
Cloudera
Cloudera Data Warehouse is a cloud-native, self-service analytics solution that lets IT rapidly deliver query capabilities to BI analysts, enabling users to go from zero to query in minutes. It supports all data types (structured, semi-structured, unstructured, real-time, and batch) and scales cost-effectively from gigabytes to petabytes. It is fully integrated with streaming, data engineering, and AI services, and enforces a unified security, governance, and metadata framework across private, public, or hybrid cloud deployments. Each virtual warehouse (data warehouse or mart) is isolated and automatically configured and optimized, ensuring that workloads do not interfere with each other. Cloudera leverages open source engines such as Hive, Impala, Kudu, and Druid, along with tools like Hue and more, to handle diverse analytics, from dashboards and operational analytics to research and discovery over vast event or time-series data. -
24
Amazon Security Lake
Amazon
Amazon Security Lake automatically centralizes security data from AWS environments, SaaS providers, on-premises, and cloud sources into a purpose-built data lake stored in your account. With Security Lake, you can get a more complete understanding of your security data across your entire organization. You can also improve the protection of your workloads, applications, and data. Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), an open standard. With OCSF support, the service normalizes and combines security data from AWS and a broad range of enterprise security data sources. Use your preferred analytics tools to analyze your security data while retaining complete control and ownership over that data. Centralize data visibility from cloud and on-premises sources across your accounts and AWS Regions. Streamline your data management at scale by normalizing your security data to an open standard. Starting Price: $0.75 per GB per month -
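As a hedged sketch of the "preferred analytics tools" point above, OCSF-normalized Security Lake data can be queried with Athena through boto3; the database and table names below are hypothetical stand-ins for the Glue database and per-source tables Security Lake creates in your account.

```python
# Sketch: querying OCSF-normalized Security Lake data via Athena with boto3.
# Database, table, and bucket names are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")
resp = athena.start_query_execution(
    QueryString="""
        SELECT time, severity, activity_name
        FROM security_lake_db.cloudtrail_events  -- hypothetical table name
        WHERE severity = 'High'
        LIMIT 50
    """,
    QueryExecutionContext={"Database": "security_lake_db"},
    ResultConfiguration={"OutputLocation": "s3://example-results-bucket/athena/"},
)
print("query id:", resp["QueryExecutionId"])
```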
25
Qlik Compose
Qlik
Qlik Compose for Data Warehouses provides a modern approach by automating and optimizing data warehouse creation and operation. Qlik Compose automates designing the warehouse, generating ETL code, and quickly applying updates, all whilst leveraging best practices and proven design patterns. Qlik Compose for Data Warehouses dramatically reduces the time, cost and risk of BI projects, whether on-premises or in the cloud. Qlik Compose for Data Lakes automates your data pipelines to create analytics-ready data sets. By automating data ingestion, schema creation, and continual updates, organizations realize faster time-to-value from their existing data lake investments. -
26
Sesame Software
Sesame Software
Sesame Software specializes in secure, efficient data integration and replication across diverse cloud, hybrid, and on-premise sources. Our patented scalability ensures comprehensive access to critical business data, facilitating a holistic view in the BI tools of your choice. This unified perspective empowers your own robust reporting and analytics, enabling your organization to regain control of your data with confidence. At Sesame Software, we understand what’s at stake when you need to move a massive amount of data between environments quickly—while keeping it protected, maintaining centralized access, and ensuring compliance with regulations. Over the past 23+ years, we’ve helped hundreds of organizations like Procter & Gamble, Bank of America, and the U.S. government connect, move, store, and protect their data. -
27
Delta Lake
Delta Lake
Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and data engineers have to go through a tedious process to ensure data integrity, due to the lack of transactions. Delta Lake brings ACID transactions to your data lakes. It provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data enabling developers to access and revert to earlier versions of data for audits, rollbacks or to reproduce experiments. -
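A minimal sketch of the ACID writes and snapshot (time travel) behavior described above, using PySpark and the delta-spark package; the /tmp/events path and table contents are placeholders.

```python
# Minimal sketch of Delta Lake ACID writes and time travel with PySpark.
# Requires the delta-spark package; the path and data are placeholders.
import pyspark
from delta import configure_spark_with_delta_pip

builder = (
    pyspark.sql.SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Each write is an atomic commit recorded in the transaction log.
spark.range(0, 5).write.format("delta").mode("overwrite").save("/tmp/events")
spark.range(5, 10).write.format("delta").mode("overwrite").save("/tmp/events")

# Time travel: read the snapshot as of the first commit (version 0).
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")
v0.show()
```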
28
AnalyticDB
Alibaba Cloud
AnalyticDB for MySQL is a high-performance data warehousing service that is secure, stable, and easy to use. It allows you to easily create online statistical reports, multidimensional analysis solutions, and real-time data warehouses. AnalyticDB for MySQL uses a distributed computing architecture that enables it to use the elastic scaling capability of the cloud to compute tens of billions of data records in real time. AnalyticDB for MySQL stores data based on relational models and can use SQL to flexibly compute and analyze data. AnalyticDB for MySQL also allows you to easily manage databases, scale in or out nodes, and scale up or down instances. It provides various visualization and ETL tools to make enterprise data processing easier. Provides instant multidimensional analysis and can explore large amounts of data in milliseconds. Starting Price: $0.248 per hour -
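Because AnalyticDB for MySQL speaks the MySQL protocol, any standard MySQL client can run the multidimensional SQL described above; in this sketch the endpoint, credentials, and "orders" table are placeholders.

```python
# Sketch: a multidimensional aggregation over AnalyticDB for MySQL using
# pymysql. Endpoint, credentials, and table are placeholders.
import pymysql

conn = pymysql.connect(
    host="adb-cluster.example.aliyuncs.com",  # placeholder cluster endpoint
    port=3306,
    user="analyst",
    password="...",
    database="reports",
)
with conn.cursor() as cur:
    cur.execute(
        "SELECT region, product, SUM(amount) "
        "FROM orders GROUP BY region, product"
    )
    for row in cur.fetchall():
        print(row)
conn.close()
```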
29
Oracle Autonomous Data Warehouse
Oracle
Oracle Autonomous Data Warehouse is a cloud data warehouse service that eliminates all the complexities of operating a data warehouse, securing data, and developing data-driven applications. It automates provisioning, configuring, securing, tuning, scaling, and backing up of the data warehouse. It includes tools for self-service data loading, data transformations, business models, automatic insights, and built-in converged database capabilities that enable simpler queries across multiple data types and machine learning analysis. It’s available in both the Oracle public cloud and customers' data centers with Oracle Cloud@Customer. Detailed analysis by industry expert DSC illustrates why Oracle Autonomous Data Warehouse is a better pick for the majority of global organizations. Learn about applications and tools that are compatible with Autonomous Data Warehouse.
-
30
Panoply
SQream
Panoply brings together a managed data warehouse with included, pre-built ELT data connectors, making it the easiest way to store, sync, and access all your business data. Our cloud data warehouse (built on Redshift or BigQuery), along with built-in data integrations to all major CRMs, databases, file systems, ad networks, web analytics tools, and more, will have you accessing usable data in less time, with a lower total cost of ownership. One platform with one easy price is all you need to get your business data up and running today. Panoply gives you unlimited access to data sources with prebuilt Snap Connectors and a Flex Connector that can bring in data from nearly any REST API. Panoply can be set up in minutes, requires zero ongoing maintenance, and provides online support including access to experienced data architects. Starting Price: $299 per month -
31
GeoSpock
GeoSpock
GeoSpock enables data fusion for the connected world with GeoSpock DB – the space-time analytics database. GeoSpock DB is a unique, cloud-native database optimised for real-world query use cases, able to fuse multiple sources of Internet of Things (IoT) data together to unlock its full value, whilst simultaneously reducing complexity and cost. GeoSpock DB enables efficient storage, data fusion, and rapid programmatic access to data, and allows you to run ANSI SQL queries and connect to analytics tools via JDBC/ODBC connectors. Users are able to perform analysis and share insights using familiar toolsets, with support for common BI tools (such as Tableau™, Amazon QuickSight™, and Microsoft Power BI™), and Data Science and Machine Learning environments (including Python Notebooks and Apache Spark). The database can also be integrated with internal applications and web services – with compatibility for open-source and visualisation libraries such as Kepler and Cesium.js. -
32
An industry data model from IBM acts as a blueprint with common elements based on best practices, government regulations and the complex data and analytic needs of the industry. A model can help you manage data warehouses and data lakes to gather deeper insights for better decisions. The models include warehouse design models, business terminology and business intelligence templates in a predesigned framework for an industry-specific organization to accelerate your analytics journey. Analyze and design functional requirements faster using industry-specific information infrastructures. Create and rationalize data warehouses using a consistent architecture to model changing requirements. Reduce risk and deliver better data to apps across the organization to accelerate transformation. Create enterprise-wide KPIs and address compliance, reporting and analysis requirements. Use industry data model vocabularies and templates for regulatory reporting to govern your data.
-
33
Databend
Databend
Databend is a modern, cloud-native data warehouse built to deliver high-performance, cost-efficient analytics for large-scale data processing. It is designed with an elastic architecture that scales dynamically to meet the demands of different workloads, ensuring efficient resource utilization and lower operational costs. Written in Rust, Databend offers exceptional performance through features like vectorized query execution and columnar storage, which optimize data retrieval and processing speeds. Its cloud-first design enables seamless integration with cloud platforms, and it emphasizes reliability, data consistency, and fault tolerance. Databend is an open source solution, making it a flexible and accessible choice for data teams looking to handle big data analytics in the cloud. Starting Price: Free -
34
dashDB Local
IBM
As the newest addition to the IBM dashDB family, dashDB Local rounds out IBM's hybrid data warehouse strategy, providing organizations the most flexible architecture needed to lower the cost model of analytics in the dynamic world of big data and the cloud. How is this possible? Through a common analytics engine, with different deployment options across private and public clouds, analytics workloads can be moved and optimized with ease. dashDB Local is now an option when you prefer deployment on a hosted private cloud or on-premises private cloud through a software-defined infrastructure. From an IT standpoint, dashDB Local simplifies deployment and management through container technology, with elastic scaling and easy maintenance. From a user standpoint, dashDB Local provides the speed needed to quickly cycle through the process of data acquisition, applies the right analytics to meet a specific use case, and operationalizes the insights. -
35
VeloDB
VeloDB
Powered by Apache Doris, VeloDB is a modern data warehouse for lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within seconds. Storage engine with real-time upsert, append, and pre-aggregation. Unparalleled performance in both real-time data serving and interactive ad-hoc queries. Not just structured but also semi-structured data. Not just real-time analytics but also batch processing. Not just run queries against internal data but also work as a federated query engine to access external data lakes and databases. Distributed design to support linear scalability. Whether on-premise deployment or cloud service, separation or integration of storage and compute, resource usage can be flexibly and efficiently adjusted according to workload requirements. Built on and fully compatible with open source Apache Doris. Support MySQL protocol, functions, and SQL for easy integration with other data tools. -
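Since VeloDB is built on MySQL-protocol-compatible Apache Doris, a standard MySQL client can exercise the pre-aggregation feature mentioned above; this sketch assumes Doris conventions (the FE MySQL port defaults to 9030 in open source Doris), and the host, credentials, and schema are placeholders.

```python
# Sketch: creating and querying a Doris aggregate-model table over the
# MySQL protocol. Host, credentials, and schema are placeholders.
import pymysql

conn = pymysql.connect(host="velodb.example.com", port=9030,
                       user="admin", password="...", database="demo")
with conn.cursor() as cur:
    # An AGGREGATE KEY table pre-aggregates the SUM metric on ingest.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS page_views (
            dt DATE,
            page VARCHAR(128),
            views BIGINT SUM DEFAULT "0"
        )
        AGGREGATE KEY (dt, page)
        DISTRIBUTED BY HASH(page) BUCKETS 8
        PROPERTIES ("replication_num" = "1")
    """)
    cur.execute("SELECT dt, page, views FROM page_views ORDER BY views DESC LIMIT 10")
    print(cur.fetchall())
conn.close()
```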
36
Narrative
Narrative
Create new streams of revenue using the data you already collect with your own branded data shop. Narrative is focused on the fundamental principles that make buying and selling data easier, safer, and more strategic. Ensure that the data you access meets your standards, whatever they may be. Know exactly who you’re working with and how the data was collected. Easily access new supply and demand for a more agile and accessible data strategy. Own your data strategy entirely with end-to-end control of inputs and outputs. Our platform simplifies and automates the most time- and labor-intensive aspects of data acquisition, so you can access new data sources in days, not months. With filters, budget controls, and automatic deduplication, you’ll only ever pay for the data you need, and nothing that you don’t. Starting Price: $0 -
37
Utilihive
Greenbird Integration Technology
Utilihive is a cloud-native big data integration platform, purpose-built for the digital, data-driven utility, offered as a managed service (SaaS). Utilihive is a leading enterprise integration platform as a service (iPaaS) purpose-built for energy and utility usage scenarios. Utilihive provides both the technical infrastructure platform (connectivity, integration, data ingestion, data lake, API management) and pre-configured integration content or accelerators (connectors, data flows, orchestrations, utility data model, energy data services, monitoring and reporting dashboards) to speed up the delivery of innovative data-driven services and simplify operations. Utilities play a vital role towards achieving the Sustainable Development Goals and now have the opportunity to build universal platforms to facilitate the data economy in a new world including renewable energy. Seamless access to data is crucial to accelerate the digital transformation. -
38
Tweakstreet
Twineworks
Automate your data science. Create data automation workflows. Design on your desktop — run anywhere. A tool for modern data integration. Tweakstreet is a tool you run on your own computers; it is not a service, so you are always in complete control of your data. Design using a desktop app and run anywhere: your desktop, data center, or cloud servers. Connect to anything: Tweakstreet has connectors for many common data sources such as file formats, databases, and online services, and new connectors are added regularly with each release. File formats: out-of-the-box support for common data exchange formats such as CSV, XML, and JSON files. SQL databases: work with popular SQL databases like Postgres, MariaDB, SQL Server, Oracle, MySQL, or DB2; in addition, Tweakstreet offers generic support for any database that has JDBC drivers. Web APIs: Tweakstreet supports HTTP interfaces such as REST-style APIs, and first-class support for OAuth 2.0 authentication enables access to popular APIs. -
39
Qubole
Qubole
Qubole is a simple, open, and secure data lake platform for machine learning, streaming, and ad-hoc analytics. Our platform provides end-to-end services that reduce the time and effort required to run data pipelines, streaming analytics, and machine learning workloads on any cloud. No other platform offers the openness and data workload flexibility of Qubole while lowering cloud data lake costs by over 50 percent. Qubole delivers faster access to petabytes of secure, reliable, and trusted datasets of structured and unstructured data for analytics and machine learning. Users conduct ETL, analytics, and AI/ML workloads efficiently in an end-to-end fashion across best-of-breed open source engines, multiple formats, libraries, and languages adapted to data volume, variety, SLAs, and organizational policies. -
40
Data Lakes on AWS
Amazon
Many Amazon Web Services (AWS) customers require a data storage and analytics solution that offers more agility and flexibility than traditional data management systems. A data lake is a new and increasingly popular way to store and analyze data because it allows companies to manage multiple data types from a wide variety of sources, and store this data, structured and unstructured, in a centralized repository. The AWS Cloud provides many of the building blocks required to help customers implement a secure, flexible, and cost-effective data lake. These include AWS managed services that help ingest, store, find, process, and analyze both structured and unstructured data. To support our customers as they build data lakes, AWS offers the data lake solution, which is an automated reference implementation that deploys a highly available, cost-effective data lake architecture on the AWS Cloud along with a user-friendly console for searching and requesting datasets. -
41
Harbr
Harbr
Create data products from any source in seconds, without moving the data. Make them available to anyone, while maintaining complete control. Deliver powerful experiences to unlock value. Enhance your data mesh by seamlessly sharing, discovering, and governing data across domains. Foster collaboration and accelerate innovation with unified access to high-quality data products. Provide governed access to AI models for any user. Control how data interacts with AI to safeguard intellectual property. Automate AI workflows to rapidly integrate and iterate new capabilities. Access and build data products from Snowflake without moving any data. Experience the ease of getting more from your data. Make it easy for anyone to analyze data and remove the need for centralized provisioning of infrastructure and tools. Data products are magically integrated with tools to ensure governance and accelerate outcomes. -
42
Alibaba Cloud Data Lake Formation
Alibaba Cloud
A data lake is a centralized repository used for big data and AI computing. It allows you to store structured and unstructured data at any scale. Data Lake Formation (DLF) is a key component of the cloud-native data lake framework. DLF provides an easy way to build a cloud-native data lake. It seamlessly integrates with a variety of compute engines and allows you to manage the metadata in data lakes in a centralized manner and control enterprise-class permissions. Systematically collects structured, semi-structured, and unstructured data and supports massive data storage. Uses an architecture that separates computing from storage. You can plan resources on demand at low cost. This improves data processing efficiency to meet the rapidly changing business requirements. DLF can automatically discover and collect metadata from multiple engines and manage the metadata in a centralized manner to solve the data silo issues. -
43
Kylo
Teradata
Kylo is an open source enterprise-ready data lake management software platform for self-service data ingest and data preparation with integrated metadata management, governance, security and best practices inspired by Think Big's 150+ big data implementation projects. Self-service data ingest with data cleansing, validation, and automatic profiling. Wrangle data with visual SQL and interactive transforms through a simple user interface. Search and explore data and metadata, view lineage, and profile statistics. Monitor health of feeds and services in the data lake. Track SLAs and troubleshoot performance. Design batch or streaming pipeline templates in Apache NiFi and register with Kylo to enable user self-service. Organizations can expend significant engineering effort moving data into Hadoop yet struggle to maintain governance and data quality. Kylo dramatically simplifies data ingest by shifting ingest to data owners through a simple guided UI. -
44
Infor Data Lake
Infor
Solving today’s enterprise and industry challenges requires big data. The ability to capture data from across your enterprise—whether generated by disparate applications, people, or IoT infrastructure—offers tremendous potential. Infor’s Data Lake tools deliver schema-on-read intelligence along with a fast, flexible data consumption framework to enable new ways of making key decisions. With leveraged access to your entire Infor ecosystem, you can start capturing and delivering big data to power your next generation analytics and machine learning strategies. Infinitely scalable, the Infor Data Lake provides a unified repository for capturing all of your enterprise data. Grow with your insights and investments, ingest more content for better informed decisions, improve your analytics profiles, and provide rich data sets to build more powerful machine learning processes. -
45
IBM Storage Scale
IBM
IBM Storage Scale is software-defined file and object storage that enables organizations to build a global data platform for artificial intelligence (AI), high-performance computing (HPC), advanced analytics, and other demanding workloads. Unlike traditional applications that work with structured data, today’s performance-intensive AI and analytics workloads operate on unstructured data, such as documents, audio, images, videos, and other objects. IBM Storage Scale software provides global data abstraction services that seamlessly connect multiple data sources across multiple locations, including non-IBM storage environments. It’s based on a massively parallel file system and can be deployed on multiple hardware platforms including x86, IBM Power, IBM zSystem mainframes, ARM-based POSIX client, virtual machines, and Kubernetes. Starting Price: $19.10 per terabyte
-
46
Cribl Lake
Cribl
Storage that doesn’t lock data in. Get up and running fast with a managed data lake. Easily store, access, and retrieve data, without being a data expert. Cribl Lake keeps you from drowning in data. Easily store, manage, enforce policy on, and access data when you need it. Dive into the future with open formats and unified retention, security, and access control policies. Let Cribl handle the heavy lifting so data can be usable and valuable to the teams and tools that need it. Minutes, not months, to get up and running with Cribl Lake. Zero configuration with automated provisioning and out-of-the-box integrations. Streamline workflows with Stream and Edge for powerful data ingestion and routing. Cribl Search unifies queries no matter where data is stored, so you can get value from data without delays. Take an easy path to collect and store data for long-term retention. Comply with legal and business requirements for data retention by defining specific retention periods. -
47
Cortex Data Lake
Cortex
Collect, transform and integrate your enterprise’s security data to enable Palo Alto Networks solutions. Radically simplify security operations by collecting, transforming and integrating your enterprise’s security data. Facilitate AI and machine learning with access to rich data at cloud native scale. Significantly improve detection accuracy with trillions of multi-source artifacts. Cortex XDR™ is the industry’s only prevention, detection, and response platform that runs on fully integrated endpoint, network and cloud data. Prisma™ Access protects your applications, remote networks and mobile users in a consistent manner, wherever they are. A cloud-delivered architecture connects all users to all applications, whether they’re at headquarters, branch offices or on the road. The combination of Cortex™ Data Lake and Panorama™ management delivers an economical, cloud-based logging solution for Palo Alto Networks Next-Generation Firewalls. Zero hardware, cloud scale, available anywhere. -
48
Zaloni Arena
Zaloni
End-to-end DataOps built on an agile platform that improves and safeguards your data assets. Arena is the premier augmented data management platform. Our active data catalog enables self-service data enrichment and consumption to quickly control complex data environments. Customizable workflows that increase the accuracy and reliability of every data set. Use machine-learning to identify and align master data assets for better data decisioning. Complete lineage with detailed visualizations alongside masking and tokenization for superior security. We make data management easy. Arena catalogs your data, wherever it is and our extensible connections enable analytics to happen across your preferred tools. Conquer data sprawl challenges: Our software drives business and analytics success while providing the controls and extensibility needed across today’s decentralized, multi-cloud data complexity. -
49
Apache Kylin
Apache Software Foundation
Apache Kylin™ is an open source, distributed Analytical Data Warehouse for Big Data; it was designed to provide OLAP (Online Analytical Processing) capability in the big data era. By renovating the multi-dimensional cube and precalculation technology on Hadoop and Spark, Kylin is able to achieve near-constant query speed regardless of the ever-growing data volume. Reducing query latency from minutes to sub-second, Kylin brings online analytics back to big data. Kylin can analyze more than 10 billion rows in less than a second. No more waiting on reports for critical decisions. Kylin connects data on Hadoop to BI tools like Tableau, Power BI/Excel, MSTR, Qlik Sense, Hue, and Superset, making BI on Hadoop faster than ever. As an Analytical Data Warehouse, Kylin offers ANSI SQL on Hadoop/Spark and supports most ANSI SQL query functions. Kylin can support thousands of interactive queries at the same time, thanks to the low resource consumption of each query. -
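As a hedged sketch of the ANSI SQL access described above, Kylin also exposes a REST query endpoint (served on port 7070 by default); the host is a placeholder, while learn_kylin, kylin_sales, and the ADMIN/KYLIN credentials are Kylin's well-known sample project and default login.

```python
# Sketch: issuing ANSI SQL against Apache Kylin's REST query API.
# Host is a placeholder; project/table/credentials are Kylin sample defaults.
import requests

resp = requests.post(
    "http://kylin.example.com:7070/kylin/api/query",
    json={
        "sql": "SELECT part_dt, SUM(price) FROM kylin_sales GROUP BY part_dt",
        "project": "learn_kylin",
    },
    auth=("ADMIN", "KYLIN"),
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)
```

Because the cube is precomputed, a GROUP BY like this reads aggregated cuboids rather than scanning raw rows, which is where the sub-second latency comes from.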
50
IBM® Db2® Warehouse provides a client-managed, preconfigured data warehouse that runs in private clouds, virtual private clouds and other container-supported infrastructures. It is designed to be the ideal hybrid cloud solution when you must maintain control of your data but want cloud-like flexibility. With built-in machine learning, automated scaling, built-in analytics, and SMP and MPP processing, Db2 Warehouse enables you to bring AI to your business faster and easier. Deploy a pre-configured data warehouse in minutes on your supported infrastructure of choice with elastic scaling for easier updates and upgrades. Apply in-database analytics where the data resides, allowing enterprise AI to operate faster and more efficiently. Write your application once and move that workload to the right location, whether public cloud, private cloud or on-premises — with minimal or no changes required.