Alternatives to DeltaStream

Compare DeltaStream alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to DeltaStream in 2025. Compare features, ratings, user reviews, pricing, and more from DeltaStream competitors and alternatives in order to make an informed decision for your business.

  • 1
    StarTree

    StarTree Cloud is a fully-managed real-time analytics platform designed for OLAP at massive speed and scale for user-facing applications. Powered by Apache Pinot, StarTree Cloud provides enterprise-grade reliability and advanced capabilities such as tiered storage, scalable upserts, plus additional indexes and connectors. It integrates seamlessly with transactional databases and event streaming platforms, ingesting data at millions of events per second and indexing it for lightning-fast query responses. StarTree Cloud is available on your favorite public cloud or for private SaaS deployment. • Gain critical real-time insights to run your business • Seamlessly integrate data streaming and batch data • High performance in throughput and low-latency at petabyte scale • Fully-managed cloud service • Tiered storage to optimize cloud performance & spend • Fully-secure & enterprise-ready
  • 2
    Striim

    Data integration for your hybrid cloud. Modern, reliable data integration across your private and public cloud, all in real time with change data capture and data streams. Built by the executive & technical team from GoldenGate Software, Striim brings decades of experience in mission-critical enterprise workloads. Striim scales out as a distributed platform in your environment or in the cloud, and scalability is fully configurable by your team. Striim is fully secure, with HIPAA and GDPR compliance, and built from the ground up for modern enterprise workloads in the cloud or on-premises. Drag and drop to create data flows between your sources and targets, then process, enrich, and analyze your streaming data with real-time SQL queries.
  • 3
    Oracle Cloud Infrastructure Streaming
    Streaming service is a real-time, serverless, Apache Kafka-compatible event streaming platform for developers and data scientists. Streaming is tightly integrated with Oracle Cloud Infrastructure (OCI), Database, GoldenGate, and Integration Cloud. The service also provides out-of-the-box integrations for hundreds of third-party products across categories such as DevOps, databases, big data, and SaaS applications. Data engineers can easily set up and operate big data pipelines. Oracle handles all infrastructure and platform management for event streaming, including provisioning, scaling, and security patching. With the help of consumer groups, Streaming can provide state management for thousands of consumers. This helps developers easily build applications at scale.
  • 4
    Informatica Data Engineering Streaming
    AI-powered Informatica Data Engineering Streaming enables data engineers to ingest, process, and analyze real-time streaming data for actionable insights. An advanced serverless deployment option with an integrated metering dashboard cuts admin overhead. Rapidly build intelligent data pipelines with CLAIRE®-powered automation, including automatic change data capture (CDC). Ingest data from thousands of databases, millions of files, and streaming events. Efficiently ingest databases, files, and streaming data for real-time data replication and streaming analytics. Find and inventory all data assets throughout your organization. Intelligently discover and prepare trusted data for advanced analytics and AI/ML projects.
  • 5
    Google Cloud Dataflow
    Unified stream and batch data processing that's serverless, fast, and cost-effective. Fully managed data processing service. Automated provisioning and management of processing resources. Horizontal autoscaling of worker resources to maximize resource utilization. OSS community-driven innovation with the Apache Beam SDK. Reliable and consistent exactly-once processing. Streaming data analytics with speed: Dataflow enables fast, simplified streaming data pipeline development with lower data latency. Dataflow's serverless approach removes operational overhead from data engineering workloads, allowing teams to focus on programming instead of managing server clusters. Dataflow automates provisioning and management of processing resources to minimize latency and maximize utilization.
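    Dataflow runs pipelines written with the Apache Beam SDK, so a streaming job is ordinary Beam code. Below is a minimal, hedged sketch in the Beam Python SDK; the project, bucket, and Pub/Sub topic names are placeholders, not real resources.
    ```python
    # Illustrative Beam pipeline: count events per minute from Pub/Sub.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms import window

    options = PipelineOptions(
        runner="DataflowRunner",             # use "DirectRunner" for local testing
        project="my-gcp-project",            # assumption: your GCP project
        region="us-central1",
        temp_location="gs://my-bucket/tmp",  # assumption: your staging bucket
        streaming=True,
    )

    with beam.Pipeline(options=options) as p:
        (p
         | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-gcp-project/topics/events")
         | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
         | "Window" >> beam.WindowInto(window.FixedWindows(60))  # 1-minute windows
         | "Count" >> beam.combiners.Count.PerElement()
         | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}".encode("utf-8"))
         | "Write" >> beam.io.WriteToPubSub(topic="projects/my-gcp-project/topics/counts"))
    ```
    The same code runs unchanged on the local DirectRunner or on Dataflow, which is the portability the Beam model promises.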
  • 6
    WarpStream

    WarpStream is an Apache Kafka-compatible data streaming platform built directly on top of object storage: no inter-AZ networking costs, no disks to manage, and infinite scalability, all within your VPC. WarpStream is deployed as a stateless, auto-scaling agent binary in your VPC with no local disks to manage. Agents stream data directly to and from object storage with no buffering on local disks and no data tiering. Create new “virtual clusters” in our control plane instantly. Support different environments, teams, or projects without managing any dedicated infrastructure. WarpStream is protocol-compatible with Apache Kafka, so you can keep using all your favorite tools and software. No need to rewrite your application or use a proprietary SDK; just change the URL in your favorite Kafka client library and start streaming, as sketched below. Never again have to choose between reliability and your budget.
    Starting Price: $2,987 per month
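    Because WarpStream speaks the Kafka protocol, “change the URL” is literally the migration step. A hedged sketch with the confluent-kafka Python client; the agent address and topic name are assumptions:
    ```python
    from confluent_kafka import Producer

    # The only WarpStream-specific detail is the bootstrap address of your agent
    # deployment (assumption: "warpstream-agent.internal:9092").
    producer = Producer({"bootstrap.servers": "warpstream-agent.internal:9092"})

    producer.produce("clickstream", key="user-42", value=b'{"page": "/pricing"}')
    producer.flush()  # block until delivery is acknowledged
    ```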
  • 7
    Confluent

    Infinite retention for Apache Kafka® with Confluent. Be infrastructure-enabled, not infrastructure-restricted. Legacy technologies force you to choose between being real-time or being highly scalable; event streaming enables you to innovate and win by being both. Ever wonder how your rideshare app analyzes massive amounts of data from multiple sources to calculate real-time ETAs? Ever wonder how your credit card company analyzes millions of credit card transactions across the globe and sends fraud notifications in real time? The answer is event streaming. Move to microservices. Enable your hybrid strategy through a persistent bridge to cloud. Break down silos to demonstrate compliance. Gain real-time, persistent event transport. The list is endless.
  • 8
    Cloudera DataFlow
    Cloudera DataFlow for the Public Cloud (CDF-PC) is a cloud-native universal data distribution service powered by Apache NiFi that lets developers connect to any data source anywhere with any structure, process it, and deliver to any destination. CDF-PC offers a flow-based low-code development paradigm that aligns best with how developers design, develop, and test data distribution pipelines. With over 400 connectors and processors across the ecosystem of hybrid cloud services—including data lakes, lakehouses, cloud warehouses, and on-premises sources—CDF-PC provides indiscriminate data distribution. These data distribution flows can then be version-controlled into a catalog where operators can self-serve deployments to different runtimes.
  • 9
    Amazon Kinesis
    Easily collect, process, and analyze video and data streams in real time. Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin. Amazon Kinesis enables you to ingest, buffer, and process streaming data in real-time, so you can derive insights in seconds or minutes instead of hours or days.
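    As a concrete example of the ingest-and-react loop described above, here is a hedged sketch using boto3; the stream name and region are assumptions:
    ```python
    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Producer side: the partition key determines which shard a record lands on.
    kinesis.put_record(
        StreamName="clickstream",   # assumption: an existing Kinesis data stream
        Data=json.dumps({"page": "/pricing"}).encode("utf-8"),
        PartitionKey="user-42",
    )

    # Consumer side: read from the start of the first shard.
    shard_id = kinesis.describe_stream(StreamName="clickstream")[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName="clickstream",
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]
    print(kinesis.get_records(ShardIterator=iterator)["Records"])
    ```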
  • 10
    Apache Flink (Apache Software Foundation)

    Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, performing computations at in-memory speed and at any scale. Any kind of data is produced as a stream of events: credit card transactions, sensor measurements, machine logs, and user interactions on a website or mobile application are all generated as streams. Apache Flink excels at processing both unbounded and bounded data sets. Precise control of time and state enables Flink’s runtime to run any kind of application on unbounded streams. Bounded streams are internally processed by algorithms and data structures that are specifically designed for fixed-size data sets, yielding excellent performance. Flink is designed to work well with common cluster resource managers such as Hadoop YARN and Kubernetes, and can also be set up to run as a standalone cluster.
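    For a feel of this model, here is a hedged PyFlink sketch; it uses Flink's built-in datagen connector so it runs with nothing but a PyFlink installation:
    ```python
    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

    # An unbounded source: datagen fabricates rows continuously.
    t_env.execute_sql("""
        CREATE TABLE clicks (
            user_id INT,
            url STRING
        ) WITH (
            'connector' = 'datagen',
            'rows-per-second' = '5',
            'fields.user_id.min' = '1',
            'fields.user_id.max' = '10'
        )
    """)

    # A continuous aggregation over the unbounded stream: results update
    # as new events arrive rather than when the input "finishes".
    t_env.execute_sql(
        "SELECT user_id, COUNT(*) AS clicks FROM clicks GROUP BY user_id"
    ).print()
    ```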
  • 11
    Rockset

    Real-Time Analytics on Raw Data. Live ingest from S3, Kafka, DynamoDB & more. Explore raw data as SQL tables. Build amazing data-driven applications & live dashboards in minutes. Rockset is a serverless search and analytics engine that powers real-time apps and live dashboards. Operate directly on raw data, including JSON, XML, CSV, Parquet, XLSX or PDF. Plug data from real-time streams, data lakes, databases, and data warehouses into Rockset. Ingest real-time data without building pipelines. Rockset continuously syncs new data as it lands in your data sources without the need for a fixed schema. Use familiar SQL, including joins, filters, and aggregations. It’s blazing fast, as Rockset automatically indexes all fields in your data. Serve fast queries that power the apps, microservices, live dashboards, and data science notebooks you build. Scale without worrying about servers, shards, or pagers.
  • 12
    Amazon MSK
    Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open-source platform for building real-time streaming data pipelines and applications. With Amazon MSK, you can use native Apache Kafka APIs to populate data lakes, stream changes to and from databases, and power machine learning and analytics applications. Apache Kafka clusters are challenging to set up, scale, and manage in production. When you run Apache Kafka on your own, you need to provision servers, configure Apache Kafka manually, replace servers when they fail, orchestrate server patches and upgrades, architect the cluster for high availability, ensure data is durably stored and secured, set up monitoring and alarms, and carefully plan scaling events to support load changes.
    Starting Price: $0.0543 per hour
  • 13
    Apache Kafka (The Apache Software Foundation)

    Apache Kafka® is an open-source, distributed streaming platform. Scale production clusters up to a thousand brokers, trillions of messages per day, petabytes of data, hundreds of thousands of partitions. Elastically expand and contract storage and processing. Stretch clusters efficiently over availability zones or connect separate clusters across geographic regions. Process streams of events with joins, aggregations, filters, transformations, and more, using event-time and exactly-once processing. Kafka’s out-of-the-box Connect interface integrates with hundreds of event sources and event sinks including Postgres, JMS, Elasticsearch, AWS S3, and more. Read, write, and process streams of events in a vast array of programming languages.
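    As an example of reading streams of events from a client library, here is a hedged consumer sketch with the confluent-kafka Python client; the broker address and topic are assumptions:
    ```python
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "demo-group",          # consumer group: partitions are balanced across members
        "auto.offset.reset": "earliest",   # start from the beginning if no committed offset
    })
    consumer.subscribe(["clickstream"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue
            if msg.error():
                print(f"consumer error: {msg.error()}")
                continue
            print(f"{msg.key()}: {msg.value().decode('utf-8')}")
    finally:
        consumer.close()  # commit final offsets and leave the group cleanly
    ```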
  • 14
    Nussknacker

    Nussknacker is a low-code visual tool for domain experts to define and run real-time decisioning algorithms instead of implementing them in code. It is used wherever real-time action on data is needed: real-time marketing, fraud detection, the Internet of Things, Customer 360, and machine learning inference. An essential part of Nussknacker is a visual design tool for decision algorithms. It allows not-so-technical users, such as analysts or business people, to define decision logic in an imperative, easy-to-follow, and understandable way. Once authored, scenarios are deployed for execution with a click of a button, and can be changed and redeployed anytime there’s a need. Nussknacker supports two processing modes, streaming and request-response. In streaming mode it uses Kafka as its primary interface, and it supports both stateful and stateless processing.
  • 15
    Lenses (Lenses.io)

    Enable everyone to discover and observe streaming data. Sharing, documenting, and cataloging your data can increase productivity by up to 95%. Then build apps on that data for production use cases. Apply a data-centric security model to cover all the gaps of open source technology, and address data privacy. Provide secure and low-code data pipeline capabilities. Eliminate blind spots and offer unparalleled observability into data and apps. Unify your data mesh and data technologies and be confident with open source in production. Lenses is the highest-rated product for real-time stream analytics according to independent third-party reviews. With feedback from our community and thousands of engineering hours invested, we've built features that ensure you can focus on what drives value from your real-time data. Deploy and run SQL-based real-time applications over any Kafka Connect or Kubernetes infrastructure, including AWS EKS.
    Starting Price: $49 per month
  • 16
    Astra Streaming
    Responsive applications keep users engaged and developers inspired. Rise to meet these ever-increasing expectations with the DataStax Astra Streaming service platform. DataStax Astra Streaming is a cloud-native messaging and event streaming platform powered by Apache Pulsar, the next-generation event streaming platform that provides a unified solution for streaming, queuing, pub/sub, and stream processing. Astra Streaming lets you build streaming applications on top of an elastically scalable, multi-cloud messaging and event streaming platform. It is also a natural complement to Astra DB: existing Astra DB users can easily build real-time data pipelines into and out of their Astra DB instances. With Astra Streaming, avoid vendor lock-in and deploy on any of the major public clouds (AWS, GCP, Azure) compatible with open-source Apache Pulsar.
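    Since Astra Streaming is Pulsar under the hood, the open-source Pulsar Python client works directly. A hedged producer sketch; the service URL, token, and topic are placeholders:
    ```python
    import pulsar

    client = pulsar.Client(
        "pulsar+ssl://<your-astra-endpoint>:6651",  # assumption: your cluster endpoint
        authentication=pulsar.AuthenticationToken("<ASTRA_TOKEN>"),
    )

    producer = client.create_producer("persistent://my-tenant/default/orders")
    producer.send(b'{"order_id": 1, "total": 42.0}')  # send() blocks until acked
    client.close()
    ```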
  • 17
    IBM Event Streams
    IBM Event Streams is a fully managed event streaming platform built on Apache Kafka, designed to help enterprises process and respond to real-time data streams. With capabilities for machine learning integration, high availability, and secure cloud deployment, it enables organizations to create intelligent applications that react to events as they happen. The platform supports multi-cloud environments, disaster recovery, and geo-replication, making it ideal for mission-critical workloads. IBM Event Streams simplifies building and scaling real-time, event-driven solutions, ensuring data is processed quickly and efficiently.
  • 18
    Axual

    Axual is Kafka-as-a-Service for DevOps teams. Empower your team to unlock insights and drive decisions with our intuitive Kafka platform. Axual offers the ultimate solution for enterprises looking to seamlessly integrate data streaming into their core IT infrastructure. The all-in-one Axual Platform is designed to eliminate the need for extensive technical knowledge or skills, providing a ready-made solution that delivers all the benefits of event streaming without the hassle. It simplifies and enhances the deployment, management, and utilization of real-time data streaming with Apache Kafka. By providing an array of features that cater to the diverse needs of modern enterprises, the Axual Platform enables organizations to harness the full potential of data streaming while minimizing complexity and operational overhead.
  • 19
    Materialize

    Materialize is a reactive database that delivers incremental view updates, helping developers easily build with streaming data using standard SQL. Materialize can connect to many different external sources of data without pre-processing: connect directly to streaming sources like Kafka, Postgres databases, and CDC, or to historical sources of data like files or S3. Materialize allows you to query, join, and transform data sources in standard SQL, and presents the results as incrementally updated materialized views. Queries are maintained and continually updated as new data streams in. With incrementally updated views, developers can easily build data visualizations or real-time applications. Building with streaming data can be as simple as writing a few lines of SQL, as the sketch below shows.
    Starting Price: $0.98 per hour
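    Because Materialize speaks the PostgreSQL wire protocol, any Postgres driver can create and query views. A hedged sketch with psycopg2; the connection string and the "orders" source are assumptions:
    ```python
    import psycopg2

    conn = psycopg2.connect("postgresql://materialize@localhost:6875/materialize")
    conn.autocommit = True
    cur = conn.cursor()

    # Define the view once; Materialize keeps it incrementally updated
    # as new data streams into the "orders" source.
    cur.execute("""
        CREATE MATERIALIZED VIEW order_totals AS
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    """)

    # Reads are cheap: results are precomputed, not recalculated per query.
    cur.execute("SELECT * FROM order_totals")
    print(cur.fetchall())
    ```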
  • 20
    Azure Event Hubs
    Event Hubs is a fully managed, real-time data ingestion service that’s simple, trusted, and scalable. Stream millions of events per second from any source to build dynamic data pipelines and immediately respond to business challenges. Keep processing data during emergencies using the geo-disaster recovery and geo-replication features. Integrate seamlessly with other Azure services to unlock valuable insights. Allow existing Apache Kafka clients and applications to talk to Event Hubs without any code changes—you get a managed Kafka experience without having to manage your own clusters. Experience real-time data ingestion and microbatching on the same stream. Focus on drawing insights from your data instead of managing infrastructure. Build real-time big data pipelines and respond to business challenges right away (a minimal producer sketch follows this entry).
    Starting Price: $0.03 per hour
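    A hedged producer sketch with the azure-eventhub Python SDK; the connection string and hub name are placeholders:
    ```python
    from azure.eventhub import EventData, EventHubProducerClient

    producer = EventHubProducerClient.from_connection_string(
        conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;...",  # assumption
        eventhub_name="telemetry",                                         # assumption
    )

    with producer:
        batch = producer.create_batch()  # batches respect the hub's size limit
        batch.add(EventData('{"device": "sensor-1", "temp": 21.5}'))
        batch.add(EventData('{"device": "sensor-2", "temp": 19.8}'))
        producer.send_batch(batch)       # one network call for the whole batch
    ```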
  • 21
    Spark Streaming (Apache Software Foundation)

    Spark Streaming brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs. It supports Java, Scala and Python. Spark Streaming recovers both lost work and operator state (e.g. sliding windows) out of the box, without any extra code on your part. By running on Spark, Spark Streaming lets you reuse the same code for batch processing, join streams against historical data, or run ad-hoc queries on stream state. Build powerful interactive applications, not just analytics. Spark Streaming is developed as part of Apache Spark. It thus gets tested and updated with each Spark release. You can run Spark Streaming on Spark's standalone cluster mode or other supported cluster resource managers. It also includes a local run mode for development. In production, Spark Streaming uses ZooKeeper and HDFS for high availability.
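    The canonical example of the language-integrated API is the streaming word count; a hedged PySpark sketch (the socket source host and port are assumptions):
    ```python
    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext(appName="NetworkWordCount")
    ssc = StreamingContext(sc, batchDuration=1)   # 1-second micro-batches

    lines = ssc.socketTextStream("localhost", 9999)
    counts = (lines.flatMap(lambda line: line.split(" "))
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))  # same API shape as batch RDDs
    counts.pprint()

    ssc.start()              # begin consuming the stream
    ssc.awaitTermination()   # run until interrupted
    ```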
  • 22
    Arroyo

    Scale from zero to millions of events per second. Arroyo ships as a single, compact binary. Run locally on macOS or Linux for development, and deploy to production with Docker or Kubernetes. Arroyo is a new kind of stream processing engine, built from the ground up to make real-time easier than batch. Arroyo was designed from the start so that anyone with SQL experience can build reliable, efficient, and correct streaming pipelines. Data scientists and engineers can build end-to-end real-time applications, models, and dashboards without a separate team of streaming experts. Transform, filter, aggregate, and join data streams by writing SQL, with sub-second results. Your streaming pipelines shouldn't page someone just because Kubernetes decided to reschedule your pods. Arroyo is built to run in modern, elastic cloud environments, from simple container runtimes like Fargate to large, distributed deployments on Kubernetes.
  • 23
    Leo

    Turn your data into a realtime stream, making it immediately available and ready to use. Leo reduces the complexity of event sourcing by making it easy to create, visualize, monitor, and maintain your data flows. Once you unlock your data, you are no longer limited by the constraints of your legacy systems. Dramatically reduced dev time keeps your developers and stakeholders happy. Adopt microservice architectures to continuously innovate and improve agility. In reality, success with microservices is all about data. An organization must invest in a reliable and repeatable data backbone to make microservices a reality. Implement full-fledged search in your custom app. With data flowing, adding and maintaining a search database will not be a burden.
    Starting Price: $251 per month
  • 24
    Red Hat OpenShift Streams
    Red Hat® OpenShift® Streams for Apache Kafka is a managed cloud service that provides a streamlined developer experience for building, deploying, and scaling new cloud-native applications or modernizing existing systems. Red Hat OpenShift Streams for Apache Kafka makes it easy to create, discover, and connect to real-time data streams no matter where they are deployed. Streams are a key component for delivering event-driven and data analytics applications. The combination of seamless operations across distributed microservices, large data transfer volumes, and managed operations allows teams to focus on team strengths, speed up time to value, and lower operational costs. OpenShift Streams for Apache Kafka includes a Kafka ecosystem and is part of a family of cloud services—and the Red Hat OpenShift product family—which helps you build a wide range of data-driven solutions.
  • 25
    Redpanda (Redpanda Data)

    Breakthrough data streaming capabilities that let you deliver customer experiences never before possible. Compatible with the Kafka API and ecosystem. • Predictable low latencies with zero data loss • Up to 10x faster than Kafka • Enterprise-grade support and hotfixes • Automated backups to S3/GCS • 100% freedom from routine Kafka operations • Support for AWS and GCP. Redpanda was designed from the ground up to be easily installed to get streaming up and running quickly. After you see its power, put Redpanda to the test in production and use the more advanced Redpanda features. We manage provisioning, monitoring, and upgrades, without any access to your cloud credentials; sensitive data never leaves your environment. Provisioned, operated, and maintained for you, with configurable instance types. Expand the cluster as your needs grow.
  • 26
    SAS Event Stream Processing
    Streaming data from operations, transactions, sensors and IoT devices is valuable – when it's well-understood. Event stream processing from SAS includes streaming data quality and analytics – and a vast array of SAS and open source machine learning and high-frequency analytics for connecting, deciphering, cleansing and understanding streaming data – in one solution. No matter how fast your data moves, how much data you have, or how many data sources you’re pulling from, it’s all under your control via a single, intuitive interface. You can define patterns and address scenarios from all aspects of your business, giving you the power to stay agile and tackle issues as they arise.
  • 27
    SQLstream (Guavus, a Thales company)

    SQLstream ranks #1 for IoT stream processing & analytics (ABI Research). Used by Verizon, Walmart, Cisco, & Amazon, our technology powers applications across data centers, the cloud, & the edge. Thanks to sub-ms latency, SQLstream enables live dashboards, time-critical alerts, & real-time action. Smart cities can optimize traffic light timing or reroute ambulances & fire trucks. Security systems can shut down hackers & fraudsters right away. AI / ML models, trained by streaming sensor data, can predict equipment failures. With lightning performance, up to 13M rows / sec / CPU core, companies have drastically reduced their footprint & cost. Our efficient, in-memory processing permits operations at the edge that are otherwise impossible. Acquire, prepare, analyze, & act on data in any format from any source. Create pipelines in minutes not months with StreamLab, our interactive, low-code GUI dev environment. Export SQL scripts & deploy with the flexibility of Kubernetes.
  • 28
    Samza (Apache Software Foundation)

    Samza allows you to build stateful applications that process data in real time from multiple sources, including Apache Kafka. Battle-tested at scale, it supports flexible deployment options, running on YARN, Kubernetes, or as a standalone library. Samza provides extremely low latencies and high throughput to analyze your data instantly, and scales to several terabytes of state with features like incremental checkpoints and host-affinity. The same code can process both batch and streaming data. Samza integrates with several sources including Kafka, HDFS, AWS Kinesis, Azure Event Hubs, key-value stores, and Elasticsearch.
  • 29
    Hitachi Streaming Data Platform
    The Hitachi Streaming Data Platform (SDP) is a real-time data processing system designed to analyze large volumes of time-sequenced data as it is generated. By leveraging in-memory and incremental computational processing, SDP enables swift analysis without the delays associated with traditional stored data processing. Users can define summary analysis scenarios using Continuous Query Language (CQL), similar to SQL, allowing for flexible and programmable data analysis without the need for custom applications. The platform's architecture comprises components such as development servers, data-transfer servers, data-analysis servers, and dashboard servers, facilitating scalable and efficient data processing workflows. SDP's modular design supports various data input and output formats, including text files and HTTP packets, and integrates with visualization tools like RTView for real-time monitoring.
  • 30
    TIBCO Streaming
    TIBCO Streaming is a real-time analytics platform designed to process and analyze high-velocity data streams, enabling organizations to make immediate, data-driven decisions. It offers a low-code development environment through StreamBase Studio, allowing users to build complex event processing applications with minimal coding. It supports over 150 connectors, including APIs, Apache Kafka, MQTT, RabbitMQ, and databases like MySQL and JDBC, facilitating seamless integration with various data sources. TIBCO Streaming incorporates dynamic learning operators, enabling adaptive machine learning models that provide contextual insights and automate decision-making processes. It also features real-time business intelligence capabilities, allowing users to visualize live data alongside historical information for comprehensive analysis. It is cloud-ready, supporting deployments on AWS, Azure, GCP, and on-premises environments.
  • 31
    Macrometa

    We deliver a geo-distributed real-time database, stream processing, and compute runtime for event-driven applications across up to 175 worldwide edge data centers. App & API builders love our platform because we solve the hardest problems of sharing mutable state across hundreds of global locations, with strong consistency & low latency. Macrometa enables you to surgically extend your existing infrastructure to bring part or all of your application closer to your end users. This allows you to improve performance and user experience, and comply with global data governance laws. Macrometa is a serverless, streaming NoSQL database with integrated pub/sub, stream data processing, and a compute engine. Create stateful data infrastructure, stateful functions & containers for long-running workloads, and process data streams in real time. You do the code, we do all the ops and orchestration.
  • 32
    Cogility Cogynt (Cogility Software)

    Deliver Continuous Intelligence solutions easier, faster, and cost-effectively - with less engineering effort. The Cogility Cogynt platform delivers cloud-scalable event stream processing software powered by advanced, Expert AI-based analytics. A complete, integrated toolset enables organizations to quickly, easily, and more efficiently deliver continuous intelligence solutions. The end-to-end platform streamlines deployment, constructing model logic, customizing data source intake, processing data streams, examining, visualizing and sharing intelligence findings, auditing and improving results, and integrating with other applications. Cogynt’s Authoring Tool provides a convenient, zero-code design environment for creating, updating, and deploying data models. Cogynt’s Data Management Tool makes it easy to publish your model to immediately apply to stream data processing while abstracting Flink job coding.
  • 33
    Amazon Managed Service for Apache Flink
    Thousands of customers use Amazon Managed Service for Apache Flink to run stream processing applications. With Amazon Managed Service for Apache Flink, you can transform and analyze streaming data in real-time using Apache Flink and integrate applications with other AWS services. There are no servers and clusters to manage, and there is no computing and storage infrastructure to set up. You pay only for the resources you use. Build and run Apache Flink applications, without setting up infrastructure and managing resources and clusters. Process gigabytes of data per second with subsecond latencies and respond to events in real-time. Deploy highly available and durable applications with Multi-AZ deployments and APIs for application lifecycle management. Develop applications that transform and deliver data to Amazon Simple Storage Service (Amazon S3), Amazon OpenSearch Service, and more.
    Starting Price: $0.11 per hour
  • 34
    SelectDB

    SelectDB is a modern data warehouse based on Apache Doris that supports rapid query analysis on large-scale real-time data. A representative migration path is moving from ClickHouse to Apache Doris, replacing a separated data lake and data warehouse with a unified lakehouse. In one production deployment, an OLAP system serving nearly 1 billion query requests per day across multiple scenarios suffered from storage redundancy, resource contention, complicated governance, and difficult query tuning under its original lake-warehouse separation architecture; introducing an Apache Doris lakehouse, combined with Doris's materialized-view query rewriting and automated services, delivered high-performance queries and flexible data governance. Write real-time data within seconds, and synchronize streaming data from databases and data streams. The storage engine supports real-time updates, real-time appends, and real-time pre-aggregation.
    Starting Price: $0.22 per hour
  • 35
    Apache Doris (The Apache Software Foundation)

    Apache Doris is a modern data warehouse for real-time analytics. It delivers lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within a second. Storage engine with real-time upsert, append, and pre-aggregation. Optimized for high-concurrency and high-throughput queries with a columnar storage engine, MPP architecture, cost-based query optimizer, and vectorized execution engine. Federated querying of data lakes such as Hive, Iceberg, and Hudi, and databases such as MySQL and PostgreSQL. Compound data types such as Array, Map, and JSON. Variant data type to support auto data type inference of JSON data. NGram bloomfilter and inverted index for text searches. Distributed design for linear scalability. Workload isolation and tiered storage for efficient resource management. Supports shared-nothing clusters as well as separation of storage and compute.
  • 36
    HarperDB

    HarperDB is a distributed systems platform that combines database, caching, application, and streaming functions into a single technology. With it, you can start delivering global-scale back-end services with less effort, higher performance, and lower cost than ever before. Deploy user-programmed applications and pre-built add-ons on top of the data they depend on for a high throughput, ultra-low latency back end. Lightning-fast distributed database delivers orders of magnitude more throughput per second than popular NoSQL alternatives while providing limitless horizontal scale. Native real-time pub/sub communication and data processing via MQTT, WebSocket, and HTTP interfaces. HarperDB delivers powerful data-in-motion capabilities without layering in additional services like Kafka. Focus on features that move your business forward, not fighting complex infrastructure. You can't change the speed of light, but you can put less light between your users and their data.
  • 37
    VeloDB

    Powered by Apache Doris, VeloDB is a modern data warehouse for lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within seconds. Storage engine with real-time upsert, append, and pre-aggregation. Unparalleled performance in both real-time data serving and interactive ad-hoc queries. Not just structured but also semi-structured data. Not just real-time analytics but also batch processing. Not just querying internal data, but also working as a federated query engine to access external data lakes and databases. Distributed design to support linear scalability. Whether deployed on-premises or as a cloud service, with storage and compute separated or integrated, resource usage can be flexibly and efficiently adjusted according to workload requirements. Built on and fully compatible with open source Apache Doris. Supports the MySQL protocol, functions, and SQL for easy integration with other data tools, as sketched below.
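    Since the MySQL protocol is supported, any MySQL driver can query it. A hedged sketch with pymysql; the host, port, credentials, and events table are assumptions:
    ```python
    import pymysql

    conn = pymysql.connect(
        host="velodb-fe.internal", port=9030,   # assumption: frontend query endpoint
        user="admin", password="secret", database="demo",
    )
    with conn.cursor() as cur:
        # An interactive ad-hoc query over recently ingested rows.
        cur.execute("""
            SELECT user_id, COUNT(*) AS events
            FROM events_stream
            WHERE event_time > DATE_SUB(NOW(), INTERVAL 5 MINUTE)
            GROUP BY user_id
            ORDER BY events DESC
            LIMIT 10
        """)
        for row in cur.fetchall():
            print(row)
    conn.close()
    ```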
  • 38
    IBM Streams
    IBM Streams evaluates a broad range of streaming data — unstructured text, video, audio, geospatial, and sensor — helping organizations spot opportunities and risks as they happen and make decisions in real time. Make sense of your data, turning fast-moving volumes and varieties into insight with IBM® Streams. Combine Streams with other IBM Cloud Pak® for Data capabilities, built on an open, extensible architecture. Help enable data scientists to collaboratively build models to apply to stream flows, and analyze massive amounts of data in real time. Acting upon your data and deriving true value is easier than ever.
  • 39
    Baidu AI Cloud Stream Computing
    Baidu Stream Computing (BSC) provides real-time streaming data processing with low latency, high throughput, and high accuracy. It is fully compatible with Spark SQL and can express complex business data-processing logic through SQL statements, making it easy to use. It provides users with full lifecycle management for streaming computing jobs, and integrates deeply with multiple Baidu AI Cloud storage products as the upstream and downstream of stream computing, including Baidu Kafka, RDS, BOS, IoT Hub, Baidu ElasticSearch, TSDB, SCS, and others. It also provides comprehensive job monitoring indicators; users can view a job's monitoring indicators and set alarm rules to protect the job.
  • 40
    Azure Stream Analytics
    Discover Azure Stream Analytics, the easy-to-use, real-time analytics service that is designed for mission-critical workloads. Build an end-to-end serverless streaming pipeline with just a few clicks. Go from zero to production in minutes using SQL—easily extensible with custom code and built-in machine learning capabilities for more advanced scenarios. Run your most demanding workloads with the confidence of a financially backed SLA.
  • 41
    IBM StreamSets
    IBM® StreamSets enables users to create and manage smart streaming data pipelines through an intuitive graphical interface, facilitating seamless data integration across hybrid and multicloud environments. This is why leading global companies rely on IBM StreamSets to support millions of data pipelines for modern analytics, intelligent applications and hybrid integration. Decrease data staleness and enable real-time data at scale—handling millions of records of data, across thousands of pipelines within seconds. Insulate data pipelines from change and unexpected shifts with drag-and-drop, prebuilt processors designed to automatically identify and adapt to data drift. Create streaming pipelines to ingest structured, semistructured or unstructured data and deliver it to a wide range of destinations.
    Starting Price: $1000 per month
  • 42
    Aiven for Apache Kafka
    Apache Kafka as a fully managed service, with zero vendor lock-in and a full set of capabilities to build your streaming pipeline. Set up fully managed Kafka in less than 10 minutes — directly from our web console or programmatically via our API, CLI, Terraform provider or Kubernetes operator. Easily connect it to your existing tech stack with over 30 connectors, and feel confident in your setup with logs and metrics available out of the box via the service integrations. A fully managed distributed data streaming platform, deployable in the cloud of your choice. Ideal for event-driven applications, near-real-time data transfer and pipelines, stream analytics, and any other case where you need to move a lot of data between applications — and quickly. With Aiven’s hosted and managed-for-you Apache Kafka, you can set up clusters, deploy new nodes, migrate clouds, and upgrade existing versions — in a single mouse click — and monitor them through a simple dashboard.
    Starting Price: $200 per month
  • 43
    Decodable

    No more low level code and stitching together complex systems. Build and deploy pipelines in minutes with SQL. A data engineering service that makes it easy for developers and data engineers to build and deploy real-time data pipelines for data-driven applications. Pre-built connectors for messaging systems, storage systems, and database engines make it easy to connect and discover available data. For each connection you make, you get a stream to or from the system. With Decodable you can build your pipelines with SQL. Pipelines use streams to send data to, or receive data from, your connections. You can also use streams to connect pipelines together to handle the most complex processing tasks. Observe your pipelines to ensure data keeps flowing. Create curated streams for other teams. Define retention policies on streams to avoid data loss during external system failures. Real-time health and performance metrics let you know everything’s working.
    Starting Price: $0.20 per task per hour
  • 44
    Amazon Data Firehose
    Easily capture, transform, and load streaming data. Create a delivery stream, select your destination, and start streaming real-time data with just a few clicks. Automatically provision and scale compute, memory, and network resources without ongoing administration. Transform raw streaming data into formats like Apache Parquet, and dynamically partition streaming data without building your own processing pipelines. Amazon Data Firehose provides the easiest way to acquire, transform, and deliver data streams within seconds to data lakes, data warehouses, and analytics services. To use Amazon Data Firehose, you set up a stream with a source, destination, and required transformations. Amazon Data Firehose continuously processes the stream, automatically scales based on the amount of data available, and delivers it within seconds. Select the source for your data stream, or write data using the Firehose Direct PUT API, as sketched below.
    Starting Price: $0.075 per month
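    A hedged Direct PUT sketch with boto3; the delivery stream name (and its pre-configured S3 destination) is an assumption:
    ```python
    import json
    import boto3

    firehose = boto3.client("firehose", region_name="us-east-1")

    # Firehose buffers, optionally transforms, and delivers downstream;
    # the producer only needs a single put_record call.
    firehose.put_record(
        DeliveryStreamName="clickstream-to-s3",   # assumption: existing delivery stream
        Record={"Data": json.dumps({"page": "/pricing"}).encode("utf-8") + b"\n"},
    )
    ```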
  • 45
    Kinetica

    A scalable cloud database for real-time analysis on large and streaming datasets. Kinetica is designed to harness modern vectorized processors to be orders of magnitude faster and more efficient for real-time spatial and temporal workloads. Track and gain intelligence from billions of moving objects in real-time. Vectorization unlocks new levels of performance for analytics on spatial and time series data at scale. Ingest and query at the same time to act on real-time events. Kinetica's lockless architecture and distributed ingestion ensures data is available to query as soon as it lands. Vectorized processing enables you to do more with less. More power allows for simpler data structures, which lead to lower storage costs, more flexibility and less time engineering your data. Vectorized processing opens the door to amazingly fast analytics and detailed visualization of moving objects at scale.
  • 46
    PubSub+ Platform
    Solace PubSub+ Platform helps enterprises design, deploy, and manage event-driven systems across hybrid cloud, multi-cloud, and IoT environments so they can be more event-driven and operate in real time. The PubSub+ Platform includes the powerful PubSub+ Event Brokers, event management capabilities with PubSub+ Event Portal, as well as monitoring and integration capabilities, all available via a single cloud console. PubSub+ allows easy creation of an event mesh, an interconnected network of event brokers, allowing for seamless and dynamic data movement across highly distributed network environments. PubSub+ Event Brokers can be deployed as fully managed cloud services, self-managed software in private cloud or on-premises environments, or as turnkey hardware appliances for unparalleled performance and low TCO. PubSub+ Event Portal is a complimentary toolset for design and governance of event-driven systems including both Solace and Kafka-based event broker environments.
  • 47
    Flowcore

    The Flowcore platform provides you with event streaming and event sourcing in a single, easy-to-use service. Data flow and replayable storage, designed for developers at data-driven startups and enterprises that aim to stay at the forefront of innovation and growth. All your data operations are efficiently persisted, ensuring no valuable data is ever lost. Immediate transformations and reclassifications of your data, loading it seamlessly to any required destination. Break free from rigid data structures. Flowcore's scalable architecture adapts to your growth, handling increasing volumes of data with ease. By simplifying and streamlining backend data processes, your engineering teams can focus on what they do best, creating innovative products. Integrate AI technologies more effectively, enriching your products with smart, data-driven solutions. Flowcore is built with developers in mind, but its benefits extend beyond the dev team.
    Starting Price: $10/month
  • 48
    Aiven

    Aiven manages your open source data infrastructure in the cloud - so you don't have to. Developers can do what they do best: create applications. We do what we do best: manage cloud data infrastructure. All solutions are open source. You can also freely move data between clouds or create multi-cloud environments. Know exactly how much you’ll be paying and why. We bundle networking, storage and basic support costs together. We are committed to keeping your Aiven software online. If there’s ever an issue, we’ll be there to fix it. Deploy a service on the Aiven platform in 10 minutes. Sign up - no credit card info needed. Select your open source service, and the cloud and region to deploy to. Choose your plan - you have $300 in free credits. Click "Create service" and go on to configure your data sources. Stay in control of your data using powerful open-source services.
    Starting Price: $200.00 per month
  • 49
    Apache Storm (Apache Software Foundation)

    Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Apache Storm is simple, can be used with any programming language, and is a lot of fun to use! Apache Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Apache Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. Apache Storm integrates with the queueing and database technologies you already use. An Apache Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed.
  • 50
    Yandex Data Streams
    Simplifies data exchange between components in microservice architectures. When used as a transport for microservices, it simplifies integration, increases reliability, and improves scaling. Read and write data in near real-time. Set data throughput and storage times to meet your needs. Enjoy granular configuration of the resources for processing data streams, from small streams of 100 KB/s to streams of 100 MB/s. Deliver a single stream to multiple targets with different retention policies using Yandex Data Transfer. Data is automatically replicated across multiple geographically distributed availability zones. Once created, you can manage data streams centrally in the management console or using the API. Yandex Data Streams can continuously collect data from sources like website browsing histories, application and system logs, and social media feeds.
    Starting Price: $0.086400 per GB