Alternatives to Onum

Compare Onum alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Onum in 2026. Compare features, ratings, user reviews, pricing, and more from Onum competitors and alternatives to make an informed decision for your business.

  • 1
    dbt

    dbt Labs

    dbt helps data teams transform raw data into trusted, analysis-ready datasets faster. With dbt, data analysts and data engineers can collaborate on version-controlled SQL models, enforce testing and documentation standards, lean on detailed metadata to troubleshoot and optimize pipelines, and deploy transformations reliably at scale. Built on modern software engineering best practices, dbt brings transparency and governance to every step of the data transformation workflow. Thousands of companies, from startups to Fortune 500 enterprises, rely on dbt to improve data quality and trust as well as drive efficiencies and reduce costs as they deliver AI-ready data across their organization. Whether you’re scaling data operations or just getting started, dbt empowers your team to move from raw data to actionable analytics with confidence.
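    A minimal sketch of the version-controlled SQL models described above; the table and column names are invented for illustration:

```sql
-- models/orders_cleaned.sql: a minimal dbt model. Table and column names
-- are invented for illustration. dbt compiles ref() to the correct
-- schema-qualified relation and records the dependency in its DAG.
select
    order_id,
    customer_id,
    lower(status) as status,
    order_total
from {{ ref('raw_orders') }}
where order_total is not null
```

    A companion schema.yml can then declare tests such as not_null and unique on order_id, which dbt enforces on every run.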
  • 2
    DataBuck

    FirstEigen

    DataBuck is an AI-powered data validation platform that automates risk detection across dynamic, high-volume, and evolving data environments. DataBuck empowers your teams to enhance trust in analytics and reports by ensuring they are built on accurate and reliable data, reduce maintenance costs by minimizing manual intervention, and scale operations 10x faster than traditional tools, enabling seamless adaptability in ever-changing data ecosystems. By proactively addressing system risks and improving data accuracy, DataBuck ensures your decision-making is driven by dependable insights. Recognized in Gartner's 2024 Market Guide for Data Observability, DataBuck goes beyond traditional observability practices with its AI/ML innovations to deliver autonomous data trustability, empowering you to lead with confidence in today's data-driven world.
  • 3
    DataBahn

    DataBahn.ai is redefining how enterprises manage the explosion of security and operational data in the AI era. Our AI-powered data pipeline and fabric platform helps organizations securely collect, enrich, orchestrate, and optimize enterprise data—including security, application, observability, and IoT/OT telemetry—for analytics, automation, and AI. With native support for over 400 integrations and built-in enrichment capabilities, DataBahn streamlines fragmented data workflows and reduces SIEM and infrastructure costs from day one. The platform requires no specialist training, enabling security and IT teams to extract insights in real time and adapt quickly to new demands. We've helped Fortune 500 and Global 2000 companies reduce data processing costs by over 50% and automate more than 80% of their data engineering workloads.
  • 4
    Tenzir

    Tenzir is a data pipeline engine specifically designed for security teams, facilitating the collection, transformation, enrichment, and routing of security data throughout its lifecycle. It enables users to seamlessly gather data from various sources, parse unstructured data into structured formats, and transform it as needed. It optimizes data volume, reduces costs, and supports mapping to standardized schemas like OCSF, ASIM, and ECS. Tenzir ensures compliance through data anonymization features and enriches data by adding context from threats, assets, and vulnerabilities. It supports real-time detection and stores data efficiently in Parquet format within object storage systems. Users can rapidly search and materialize necessary data and reactivate at-rest data back into motion. Tenzir is built for flexibility, allowing deployment as code and integration into existing workflows, ultimately aiming to reduce SIEM costs and provide full control.
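    The parse-then-map workflow described above can be sketched in plain Python (Tenzir itself uses its own pipeline language); the OCSF-like field names below are simplified assumptions, not the official schema:

```python
# Plain-Python sketch of the general technique: parse an unstructured
# log line into a structured record, then map it onto a simplified
# OCSF-like schema. Field names are illustrative assumptions.
import re

RAW = "2024-05-01T12:00:00Z sshd[991]: Failed password for root from 10.0.0.5"

def parse(line: str) -> dict:
    """Turn one raw syslog-style line into a structured record."""
    m = re.match(r"(\S+) (\w+)\[(\d+)\]: (.+) from (\S+)", line)
    return {
        "time": m.group(1),
        "app": m.group(2),
        "pid": int(m.group(3)),
        "message": m.group(4),
        "src_ip": m.group(5),
    }

def to_ocsf_like(rec: dict) -> dict:
    """Map the parsed record onto a simplified, OCSF-like shape."""
    return {
        "class_name": "Authentication",          # simplified, illustrative
        "time": rec["time"],
        "src_endpoint": {"ip": rec["src_ip"]},
        "metadata": {"product": rec["app"]},
        "status": "Failure" if "Failed" in rec["message"] else "Success",
    }

event = to_ocsf_like(parse(RAW))
```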
  • 5
    Fivetran

    Fivetran is a leading data integration platform that centralizes an organization’s data from various sources to enable modern data infrastructure and drive innovation. It offers over 700 fully managed connectors to move data automatically, reliably, and securely from SaaS applications, databases, ERPs, and files to data warehouses and lakes. The platform supports real-time data syncs and scalable pipelines that fit evolving business needs. Trusted by global enterprises like Dropbox, JetBlue, and Pfizer, Fivetran helps accelerate analytics, AI workflows, and cloud migrations. It features robust security certifications including SOC 1 & 2, GDPR, HIPAA, and ISO 27001. Fivetran provides an easy-to-use, customizable platform that reduces engineering time and enables faster insights.
  • 6
    Cribl Stream
    Cribl Stream allows you to implement an observability pipeline that helps you parse, restructure, and enrich data in flight - before you pay to analyze it. Get the right data, where you want it, in the formats you need. Route data to the best tool for the job - or all the tools for the job - by translating and formatting data into any tooling schema you require. Let different departments choose different analytics environments without having to deploy new agents or forwarders. As much as 50% of log and metric data goes unused - null fields, duplicate data, and fields that offer zero analytical value. With Cribl Stream, you can trim wasted data streams and analyze only what you need. Cribl Stream is the best way to get multiple data formats into the tools you trust for your security and IT efforts. Use the Cribl Stream universal receiver to collect from any machine data source - and even to schedule batch collection from REST APIs, Kinesis Firehose, Raw HTTP, and Microsoft Office 365 APIs.
    Starting Price: Free (1TB / Day)
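    The trimming idea described above (dropping null fields and duplicates before analysis) can be sketched in plain Python; this is a conceptual illustration, not Cribl's actual processing language:

```python
# Conceptual sketch of in-flight reduction: drop fields with no
# analytical value, then drop exact-duplicate events, before data
# reaches a paid analytics destination.
import json

def trim_event(event: dict) -> dict:
    """Remove fields that carry no analytical value (None or empty)."""
    return {k: v for k, v in event.items() if v not in (None, "", [], {})}

def dedupe(events: list) -> list:
    """Drop events that are exact duplicates of one already seen."""
    seen, out = set(), []
    for e in events:
        key = json.dumps(e, sort_keys=True)   # canonical form for comparison
        if key not in seen:
            seen.add(key)
            out.append(e)
    return out

raw = [
    {"host": "web-1", "msg": "login ok", "trace": None, "tags": []},
    {"host": "web-1", "msg": "login ok", "trace": None, "tags": []},
    {"host": "web-2", "msg": "disk full", "trace": "t-9", "tags": ["io"]},
]
slim = dedupe([trim_event(e) for e in raw])
```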
  • 7
    OpenSnowcat

    OpenSnowcat is an open source fork of Snowplow under the Apache 2.0 License that delivers a full event data pipeline for collection, enrichment, routing, and loading, remaining fully compatible with Snowplow and Segment SDKs. It provides an end-to-end solution to collect behavioral data from web and mobile sources, enrich it with customizable processes, route events through modern integrations, and load enriched data into destinations such as Snowflake, Redshift, S3, Amplitude, Kinesis, and more, with support for JSON and TSV output formats. OpenSnowcat emphasizes being free and open source forever with a trusted license, aiming for security, stability, and backward compatibility so existing Snowplow implementations can continue without disruption. Its architecture is designed for high performance, minimal latency, and dynamic scalability, integrating with cloud services to simplify management and cost efficiency at scale.
    Starting Price: Free
  • 8
    Observo AI

    Observo AI is an AI-native data pipeline platform designed to address the challenges of managing vast amounts of telemetry data in security and DevOps operations. By leveraging machine learning and agentic AI, Observo AI automates data optimization, enabling enterprises to process AI-generated data more efficiently, securely, and cost-effectively. It reduces data processing costs by over 50% and accelerates incident response times by more than 40%. Observo AI's features include intelligent data deduplication and compression, real-time anomaly detection, and dynamic data routing to appropriate storage or analysis tools. It also enriches data streams with contextual information to enhance threat detection accuracy while minimizing false positives. Observo AI offers a searchable cloud data lake for efficient data storage and retrieval.
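    As an illustration of the real-time anomaly detection mentioned above (not Observo AI's implementation), a pipeline stage can flag outliers with a rolling z-score; the window size and threshold here are arbitrary choices:

```python
# Rolling z-score anomaly detection over a telemetry stream.
# Window size and threshold are arbitrary illustrative choices.
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 20, threshold: float = 3.0):
    history = deque(maxlen=window)
    def check(value: float) -> bool:
        """Return True when value deviates sharply from recent history."""
        anomalous = False
        if len(history) >= 5:   # need a few samples before judging
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > threshold
        history.append(value)
        return anomalous
    return check

check = make_detector()
normal = [check(v) for v in [100, 101, 99, 100, 102, 98, 100, 101]]
spike = check(500)   # far outside recent history, so flagged
```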
  • 9
    GlassFlow

    GlassFlow is a serverless, event-driven data pipeline platform designed for Python developers. It enables users to build real-time data pipelines without the need for complex infrastructure like Kafka or Flink. By writing Python functions, developers can define data transformations, and GlassFlow manages the underlying infrastructure, offering auto-scaling, low latency, and optimal data retention. The platform supports integration with various data sources and destinations, including Google Pub/Sub, AWS Kinesis, and OpenAI, through its Python SDK and managed connectors. GlassFlow provides a low-code interface for quick pipeline setup, allowing users to create and deploy pipelines within minutes. It also offers features such as serverless function execution, real-time API connections, and alerting and reprocessing capabilities. The platform is designed to simplify the creation and management of event-driven data pipelines, making it accessible for Python developers.
    Starting Price: $350 per month
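    As noted above, GlassFlow pipelines are defined by writing Python functions. The handler below is a hypothetical sketch of that shape, not the official GlassFlow SDK signature:

```python
# Hypothetical handler shape (not the official GlassFlow SDK API):
# you write the per-event transform; the platform runs and scales it.
def handler(data: dict) -> dict:
    """Enrich and normalize one incoming event."""
    event = data.get("event", "unknown").lower()
    return {
        "user_id": str(data["user_id"]),   # normalize id type
        "event": event,
        "is_purchase": event == "purchase",
    }

out = handler({"user_id": 42, "event": "Purchase"})
```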
  • 10
    Talend Pipeline Designer
    Talend Pipeline Designer is a web-based self-service application that takes raw data and makes it analytics-ready. Compose reusable pipelines to extract, improve, and transform data from almost any source, then pass it to your choice of data warehouse destinations, where it can serve as the basis for the dashboards that power your business insights. Build and deploy data pipelines in less time. Design and preview, in batch or streaming, directly in your web browser with an easy, visual UI. Scale with native support for the latest hybrid and multi-cloud technologies, and improve productivity with real-time development and debugging. Live preview lets you instantly and visually diagnose issues with your data. Make better decisions faster with dataset documentation, quality proofing, and promotion. Transform data and improve data quality with built-in functions applied across batch or streaming pipelines, turning data health into an effortless, automated discipline.
  • 11
    IBM StreamSets
    IBM® StreamSets enables users to create and manage smart streaming data pipelines through an intuitive graphical interface, facilitating seamless data integration across hybrid and multicloud environments. This is why leading global companies rely on IBM StreamSets to support millions of data pipelines for modern analytics, intelligent applications and hybrid integration. Decrease data staleness and enable real-time data at scale—handling millions of records of data, across thousands of pipelines within seconds. Insulate data pipelines from change and unexpected shifts with drag-and-drop, prebuilt processors designed to automatically identify and adapt to data drift. Create streaming pipelines to ingest structured, semistructured or unstructured data and deliver it to a wide range of destinations.
    Starting Price: $1000 per month
  • 12
    FLIP

    Kanerika

    Flip, Kanerika's AI-powered Data Operations Platform, simplifies the complexity of data transformation with its low-code/no-code approach. Designed to help organizations build data pipelines seamlessly, Flip offers flexible deployment options, a user-friendly interface, and a cost-effective pay-per-use pricing model. Empowering businesses to modernize their IT strategies, Flip accelerates data processing and automation, unlocking actionable insights faster. Whether you aim to streamline workflows, enhance decision-making, or stay competitive, Flip ensures your data works harder for you in today’s dynamic landscape.
    Starting Price: $1614/month
  • 13
    CData Sync

    CData Software

    CData Sync is a universal data pipeline that delivers automated continuous replication between hundreds of SaaS applications and cloud data sources and any major database or data warehouse, on-premises or in the cloud. Replicate data from hundreds of cloud data sources to popular database destinations, such as SQL Server, Redshift, S3, Snowflake, BigQuery, and more. Configuring replication is easy: log in, select the data tables to replicate, and select a replication interval. Done. CData Sync extracts data iteratively, causing minimal impact on operational systems by only querying and updating data that has been added or changed since the last update. CData Sync offers the utmost flexibility across full and partial replication scenarios and ensures that critical data is stored safely in your database of choice. Download a 30-day free trial of the Sync application or request more information at www.cdata.com/sync.
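    The incremental extraction described above (querying only rows added or changed since the last update) is commonly implemented with a high-watermark column; a minimal sketch of that technique (not CData Sync's internals), using an in-memory SQLite table for illustration:

```python
# High-watermark incremental extraction: fetch only rows whose
# updated_at is newer than the last successful sync.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 9.5, "2024-01-01"), (2, 20.0, "2024-02-01"), (3, 7.0, "2024-03-01")],
)

def replicate_since(watermark: str) -> list:
    """Fetch only rows changed after the last successful sync."""
    return conn.execute(
        "SELECT id, total, updated_at FROM orders WHERE updated_at > ?",
        (watermark,),
    ).fetchall()

changed = replicate_since("2024-01-15")   # rows 2 and 3 only
```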
  • 14
    Datastreamer

    Integrate unstructured external data into your organization in minutes. Datastreamer is a turnkey data platform to source, unify, and enrich unstructured external data with 95% less work than building pipelines in-house. Customers use Datastreamer to feed specialized AI models and accelerate insights in Threat Intelligence, KYC/AML, Financial Analysis and more. Feed your analytics products or specialized AI models with billions of data pieces from social media, blogs, news, forums, dark web data, and more. Our platform unifies source data into a common schema so you can use content from multiple sources simultaneously. Leverage our pre-integrated data partners or connect data from any data supplier. Tap into our powerful AI models to enhance data with components like sentiment analysis and PII redaction. Scale data pipelines with less costs by plugging into our managed infrastructure that is optimized to handle massive volumes of text data.
  • 15
    Openbridge

    Uncover insights to supercharge sales growth using code-free, fully-automated data pipelines to data lakes or cloud warehouses. A flexible, standards-based platform to unify sales and marketing data for automating insights and smarter growth. Say goodbye to messy, expensive manual data downloads. Always know what you’ll pay and only pay for what you use. Fuel your tools with quick access to analytics-ready data. As certified developers, we only work with secure, official APIs. Get started quickly with data pipelines from popular sources. Pre-built, pre-transformed, and ready-to-go data pipelines. Unlock data from Amazon Vendor Central, Amazon Seller Central, Instagram Stories, Facebook, Amazon Advertising, Google Ads, and many others. Code-free data ingestion and transformation processes allow teams to realize value from their data quickly and cost-effectively. Data is always securely stored directly in a trusted, customer-owned data destination like Databricks, Amazon Redshift, etc.
    Starting Price: $149 per month
  • 16
    Datavolo

    Capture all your unstructured data for all your LLM needs. Datavolo replaces single-use, point-to-point code with fast, flexible, reusable pipelines, freeing you to focus on what matters most, doing incredible work. Datavolo is the dataflow infrastructure that gives you a competitive edge. Get fast, unencumbered access to all of your data, including the unstructured files that LLMs rely on, and power up your generative AI. Get pipelines that grow with you, in minutes, not days, without custom coding. Instantly configure from any source to any destination at any time. Trust your data because lineage is built into every pipeline. Make single-use pipelines and expensive configurations a thing of the past. Harness your unstructured data and unleash AI innovation with Datavolo, powered by Apache NiFi and built specifically for unstructured data. Our founders have spent a lifetime helping organizations make the most of their data.
    Starting Price: $36,000 per year
  • 17
    Key Ward

    Extract, transform, manage, and process CAD, FE, CFD, and test data effortlessly. Create automatic data pipelines for machine learning, ROM, and 3D deep learning. Remove data science barriers without coding. Key Ward's platform is the first end-to-end no-code engineering solution that redefines how engineers interact with their data, both experimental and CAx. By leveraging engineering data intelligence, our software enables engineers to easily handle their multi-source data, extract direct value with our built-in advanced analytics tools, and custom-build their machine and deep learning models, all under one platform, all with a few clicks. Automatically centralize, update, extract, sort, clean, and prepare your multi-source data for analysis, machine learning, and/or deep learning. Use our advanced analytics tools on your experimental and simulation data to correlate, find dependencies, and identify patterns.
    Starting Price: €9,000 per year
  • 18
    Axoflow

    Axoflow, the Security Data Layer, is the foundation for your SIEM and analytics tools, enabling the use of AI, up to 70% faster investigations, and a more than 50% reduction in SIEM spend by feeding them with actionable data. The Axoflow Platform is built up of the following parts: a pipeline acting as the transportation layer for your security data and as an automated 'translator' between data schemas; AI, so if you prefer to run your detection content locally, whether it's an AI or ML model, a threat intel lookup, or another type of enrichment, we've got you covered; storage solutions that facilitate cost-effective storage of security data and act as local storage for running your decentralized detection; and orchestration to weave all of the parts together in an easy-to-use GUI that lets you monitor, manage, control, and search your data.
  • 19
    DPR

    Qvikly

    Data Prep Runner (DPR) by QVIKPREP simplifies data prepping and streamlines data processing. Improve your business processes, easily compare data, and enhance data profiling. Save time prepping data for operational reporting, data analysis, and moving data between systems. Reduce risk on data integration project timelines and catch issues early through data profiling. Increase productivity for operations teams by automating data processing. Manage data prep easily and build a robust data pipeline. DPR provides checks based on past data for better accuracy. Drive transactions into your systems and use data for data-driven test automation. DPR gets data where it needs to go, ensuring data integration projects deliver on time. Uncover and tackle data issues early, instead of during test cycles. Validate your data with rules and repair data in the data pipeline. DPR makes comparing data between sources efficient with color-coded reports.
    Starting Price: $50 per user per year
  • 20
    Alooma

    Google

    Alooma enables data teams to have visibility and control. It brings data from your various data silos together into BigQuery, all in real time. Set up and flow data in minutes, or customize, enrich, and transform data on the stream before it even hits the data warehouse. Never lose an event: Alooma's built-in safety nets ensure easy error handling without pausing your pipeline. From any number of data sources, low to high volume, Alooma's infrastructure scales to your needs.
  • 21
    SnowcatCloud

    SnowcatCloud is a cloud-hosted customer data infrastructure platform built on an open source Snowplow fork (OpenSnowcat). It enables organizations to collect, process, route, and integrate behavioral and event-level data at scale across web, mobile, server, and IoT sources, so teams can build a real-time, first-party customer 360 view while retaining full ownership and control of their data. It supports multiple deployment models, including cloud-hosted, fully managed service, "bring your own cloud," and self-hosted open source options, to suit different privacy, cost, and infrastructure needs, all with enterprise-grade security (SOC 2 Type II) and real-time data delivery capabilities. It enriches event pipelines with identity resolution techniques like browser fingerprinting and probabilistic/deterministic matching to improve customer profiles, helps create a customer knowledge graph for deeper insights, and integrates with analytics tools and data warehouses.
    Starting Price: Free
  • 22
    Chalk

    Powerful data engineering workflows, without the infrastructure headaches. Complex streaming, scheduling, and data backfill pipelines are all defined in simple, composable Python. Make ETL a thing of the past, fetch all of your data in real time, no matter how complex. Incorporate deep learning and LLMs into decisions alongside structured business data. Make better predictions with fresher data, don't pay vendors to pre-fetch data you don't use, and query data just in time for online predictions. Experiment in Jupyter, then deploy to production. Prevent train-serve skew and create new data workflows in milliseconds. Instantly monitor all of your data workflows in real time, track usage, and track data quality effortlessly. Know everything you computed, and replay any data. Integrate with the tools you already use and deploy to your own infrastructure. Decide and enforce withdrawal limits with custom hold times.
    Starting Price: Free
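    The "simple, composable Python" workflow described above can be sketched generically; the @feature decorator and registry below are hypothetical stand-ins, not Chalk's actual API:

```python
# Hypothetical stand-ins (not Chalk's real API) showing the underlying
# idea: declare features as pure Python functions and compute them just
# in time for an online prediction.
from datetime import datetime, timezone

FEATURES = {}

def feature(fn):
    """Register a feature-computing function under its name."""
    FEATURES[fn.__name__] = fn
    return fn

# A fixed "now" keeps the example deterministic.
NOW = datetime(2024, 6, 1, tzinfo=timezone.utc)

@feature
def days_since_signup(user: dict) -> int:
    return (NOW - datetime.fromisoformat(user["signup_at"])).days

@feature
def is_high_value(user: dict) -> bool:
    return user["lifetime_spend"] > 1000

user = {"signup_at": "2024-05-01T00:00:00+00:00", "lifetime_spend": 1500.0}
row = {name: fn(user) for name, fn in FEATURES.items()}
```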
  • 23
    VirtualMetric

    VirtualMetric is a powerful telemetry pipeline solution designed to enhance data collection, processing, and security monitoring across enterprise environments. Its core offering, DataStream, automatically collects and transforms security logs from a wide range of systems such as Windows, Linux, macOS, and Unix, enriching data for further analysis. By reducing data volume and filtering out non-meaningful logs, VirtualMetric helps businesses lower SIEM ingestion costs, increase operational efficiency, and improve threat detection accuracy. The platform's scalable architecture, with features like zero data loss and long-term compliance storage, ensures that businesses can maintain high security standards while optimizing performance.
    Starting Price: Free
  • 24
    Crux

    Find out why the heavy hitters are using the Crux external data automation platform to scale external data integration, transformation, and observability without increasing headcount. Our cloud-native data integration technology accelerates the ingestion, preparation, observability and ongoing delivery of any external dataset. The result is that we can ensure you get quality data in the right place, in the right format when you need it. Leverage automatic schema detection, delivery schedule inference, and lifecycle management to build pipelines from any external data source quickly. Enhance discoverability throughout your organization through a private catalog of linked and matched data products. Enrich, validate, and transform any dataset to quickly combine it with other data sources and accelerate analytics.
  • 25
    Y42

    Datos-Intelligence GmbH

    Y42 is the first fully managed Modern DataOps Cloud. It is purpose-built to help companies easily design production-ready data pipelines on top of their Google BigQuery or Snowflake cloud data warehouse. Y42 provides native integration of best-of-breed open-source data tools, comprehensive data governance, and better collaboration for data teams. With Y42, organizations enjoy increased accessibility to data and can make data-driven decisions quickly and efficiently.
  • 26
    BettrData

    Our automated data operations platform will allow businesses to reduce or reallocate the number of full-time employees needed to support their data operations. This is traditionally a very manual and expensive process, and our product packages it all together to simplify the process and significantly reduce costs. With so much problematic data in business, most companies cannot give appropriate attention to the quality of their data because they are too busy processing it. By using our product, you automatically become a proactive business when it comes to data quality. With clear visibility of all incoming data and a built-in alerting system, our platform ensures that your data quality standards are met. We are a first-of-its-kind solution that has taken many costly manual processes and put them into a single platform. The BettrData.io platform is ready to use after a simple installation and several straightforward configurations.
  • 27
    Adele

    Adastra

    Adele is an intuitive platform designed to simplify the migration of data pipelines from any legacy system to a target platform. It empowers users with full control over the functional migration process, while its intelligent mapping capabilities offer valuable insights. By reverse-engineering data pipelines, Adele creates data lineage mappings and extracts metadata, enhancing visibility and understanding of data flows.
  • 28
    Google Cloud Data Fusion
    Open core, delivering hybrid and multi-cloud integration. Data Fusion is built using open source project CDAP, and this open core ensures data pipeline portability for users. CDAP’s broad integration with on-premises and public cloud platforms gives Cloud Data Fusion users the ability to break down silos and deliver insights that were previously inaccessible. Integrated with Google’s industry-leading big data tools. Data Fusion’s integration with Google Cloud simplifies data security and ensures data is immediately available for analysis. Whether you’re curating a data lake with Cloud Storage and Dataproc, moving data into BigQuery for data warehousing, or transforming data to land it in a relational store like Cloud Spanner, Cloud Data Fusion’s integration makes development and iteration fast and easy.
  • 29
    DoubleCloud

    Save time & costs by streamlining data pipelines with zero-maintenance open source solutions. From ingestion to visualization, all are integrated, fully managed, and highly reliable, so your engineers will love working with data. You choose whether to use any of DoubleCloud’s managed open source services or leverage the full power of the platform, including data storage, orchestration, ELT, and real-time visualization. We provide leading open source services like ClickHouse, Kafka, and Airflow, with deployment on Amazon Web Services or Google Cloud. Our no-code ELT tool allows real-time data syncing between systems, fast, serverless, and seamlessly integrated with your existing infrastructure. With our managed open-source data visualization you can simply visualize your data in real time by building charts and dashboards. We’ve designed our platform to make the day-to-day life of engineers more convenient.
    Starting Price: $0.024 per 1 GB per month
  • 30
    Tarsal

    Tarsal's infinite scalability means that as your organization grows, Tarsal grows with you. Tarsal makes it easy to switch where you're sending data - today's SIEM data is tomorrow's data lake data, all with one click. Keep your SIEM and gradually migrate analytics over to a data lake; you don't have to rip anything out to use Tarsal. Some analytics just won't run on your SIEM; use Tarsal to have query-ready data on a data lake. Your SIEM is one of the biggest line items in your budget - use Tarsal to send some of that data to your data lake instead. Tarsal is the first highly scalable ETL data pipeline built for security teams. Easily export terabytes of data in just a few clicks, with instant normalization, and route that data to your desired destination.
  • 31
    AWS Data Pipeline
    AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR. AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.
    Starting Price: $1 per month
  • 32
    Datameer

    Datameer revolutionizes data transformation with a low-code approach, trusted by top global enterprises. Craft, transform, and publish data seamlessly with no-code and SQL options, simplifying complex data engineering tasks. Empower your data teams to make informed decisions confidently while saving costs and ensuring responsible self-service analytics. Speed up your analytics workflow by transforming datasets to answer ad hoc questions and support operational dashboards. Empower everyone on your team with SQL or drag-and-drop transformations in an intuitive and collaborative workspace. And best of all, everything happens in Snowflake. Datameer is designed and optimized for Snowflake to reduce data movement and increase platform adoption. Some of the problems Datameer solves: analytics that is not accessible, drowning in backlog, and long development cycles.
  • 33
    BigBI

    BigBI enables data specialists to build their own powerful big data pipelines interactively and efficiently, without any coding. BigBI unleashes the power of Apache Spark, enabling scalable processing of real big data (up to 100X faster), integration of traditional data (SQL, batch files) with modern data sources including semi-structured (JSON, NoSQL databases, Elastic, Hadoop) and unstructured data (text, audio, video), as well as integration of streaming data, cloud data, AI/ML, and graphs.
  • 34
    Etleap

    Etleap was built from the ground up on AWS to support Redshift and Snowflake data warehouses and S3/Glue data lakes. Their solution simplifies and automates ETL by offering fully managed ETL-as-a-service. Etleap's data wrangler and modeling tools let users control how data is transformed for analysis, without writing any code. Etleap monitors and maintains data pipelines for availability and completeness, eliminating the need for constant maintenance, and centralizes data from 50+ disparate sources and silos into your data warehouse or data lake.
  • 35
    Orchestra

    Orchestra is a Unified Control Plane for Data and AI Operations, designed to help data teams build, deploy, and monitor workflows with ease. It offers a declarative framework that combines code and GUI, allowing users to implement workflows 10x faster and reduce maintenance time by 50%. With real-time metadata aggregation, Orchestra provides full-stack data observability, enabling proactive alerting and rapid recovery from pipeline failures. It integrates seamlessly with tools like dbt Core, dbt Cloud, Coalesce, Airbyte, Fivetran, Snowflake, BigQuery, Databricks, and more, ensuring compatibility with existing data stacks. Orchestra's modular architecture supports AWS, Azure, and GCP, making it a versatile solution for enterprises and scale-ups aiming to streamline their data operations and build trust in their AI initiatives.
  • 36
    datuum.ai
    AI-powered data integration tool that helps streamline the process of customer data onboarding. It allows for easy and fast automated data integration from various sources without coding, reducing preparation time to just a few minutes. With Datuum, organizations can efficiently extract, ingest, transform, migrate, and establish a single source of truth for their data, while integrating it into their existing data storage. Datuum is a no-code product and can reduce up to 80% of the time spent on data-related tasks, freeing up time for organizations to focus on generating insights and improving the customer experience. With over 40 years of experience in data management and operations, we at Datuum have incorporated our expertise into the core of our product, addressing the key challenges faced by data engineers and managers and ensuring that the platform is user-friendly, even for non-technical specialists.
  • 37
    K2View

    K2View

    K2View

    At K2View, we believe that every enterprise should be able to leverage its data to become as disruptive and agile as the best companies in its industry. We make this possible through our patented Data Product Platform, which creates and manages a complete and compliant dataset for every business entity – on demand, and in real time. The dataset is always in sync with its underlying sources, adapts to changes in the source structures, and is instantly accessible to any authorized data consumer. Data Product Platform fuels many operational use cases, including customer 360, data masking and tokenization, test data management, data migration, legacy application modernization, data pipelining and more – to deliver business outcomes in less than half the time, and at half the cost, of any other alternative. The platform inherently supports modern data architectures – data mesh, data fabric, and data hub – and deploys in cloud, on-premise, or hybrid environments.
  • 38
    Kestra

    Kestra

    Kestra

    Kestra is an open-source, event-driven orchestrator that simplifies data operations and improves collaboration between engineers and business users. By bringing Infrastructure as Code best practices to data pipelines, Kestra allows you to build reliable workflows and manage them with confidence. Thanks to the declarative YAML interface for defining orchestration logic, everyone who benefits from analytics can participate in the data pipeline creation process. The UI automatically adjusts the YAML definition any time you make changes to a workflow from the UI or via an API call. Therefore, the orchestration logic is defined declaratively in code, even if some workflow components are modified in other ways.
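    The declarative YAML interface described above can be illustrated with a minimal flow sketch. The task `type` identifier and the flow shown here are illustrative and may vary across Kestra versions; consult the Kestra plugin documentation for the exact identifiers in your release.

```yaml
# Minimal Kestra flow sketch: one task that writes a log line.
# The task type identifier is an assumption based on recent releases.
id: hello_world
namespace: demo
tasks:
  - id: log_message
    type: io.kestra.plugin.core.log.Log
    message: Hello from Kestra
```

    Because the whole flow is declared in YAML like this, edits made through the UI or the API surface as changes to the same definition, keeping the orchestration logic in code.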
  • 39
    Lightbend

    Lightbend

    Lightbend

    Lightbend provides technology that enables developers to easily build data-centric applications that bring the most demanding, globally distributed applications and streaming data pipelines to life. Companies worldwide turn to Lightbend to solve the challenges of real-time, distributed data in support of their most business-critical initiatives. Akka Platform provides the building blocks that make it easy for businesses to build, deploy, and run large-scale applications that support digitally transformative initiatives. Accelerate time-to-value and reduce infrastructure and cloud costs with reactive microservices that take full advantage of the distributed nature of the cloud and are resilient to failure, highly efficient, and able to operate at any scale. The platform natively supports encryption, data shredding, TLS enforcement, and continued GDPR compliance, and provides a framework for quickly building, deploying, and managing streaming data pipelines.
  • 40
    Gravity Data
    Gravity's mission is to make streaming data easy from over 100 sources while only paying for what you use. Gravity removes the reliance on engineering teams to deliver streaming pipelines, with a simple interface to get streaming up and running in minutes from databases, event data, and APIs. Everyone on the data team can now build with simple point-and-click tools, so you can focus on building apps, services, and customer experiences. Full execution traces and detailed error messages enable quick diagnosis and resolution. We have implemented new, feature-rich ways for you to quickly get started, from bulk set-up, default schemas, and data selection to different job modes and statuses. Spend less time wrangling with infrastructure and more time analysing data while allowing our intelligent engine to keep your pipelines running. Gravity integrates with your systems for notifications and orchestration.
  • 41
    Lumada IIoT
    Embed sensors for IoT use cases and enrich sensor data with control system and environment data. Integrate this in real time with enterprise data and deploy predictive algorithms to discover new insights and harvest your data for meaningful use. Use analytics to predict maintenance problems, understand asset utilization, reduce defects and optimize processes. Harness the power of connected devices to deliver remote monitoring and diagnostics services. Employ IoT Analytics to predict safety hazards and comply with regulations to reduce worksite accidents. Lumada Data Integration: Rapidly build and deploy data pipelines at scale. Integrate data from lakes, warehouses and devices, and orchestrate data flows across all environments. By building ecosystems with customers and business partners in various business areas, we can accelerate digital innovation to create new value for a new society.
  • 42
    BDB Platform

    BDB Platform

    Big Data BizViz

    BDB is a modern data analytics and BI platform which can skillfully dive deep into your data to provide actionable insights. It is deployable on the cloud as well as on-premise. Our exclusive microservices-based architecture has the elements of Data Preparation, Predictive, Pipeline, and Dashboard designer to provide customized solutions and scalable analytics to different industries. BDB's strong NLP-based search enables the user to unleash the power of data on desktop, tablet, and mobile devices as well. BDB has various ingrained data connectors, and it can connect to multiple commonly used data sources, applications, third-party APIs, IoT, social media, etc. in real time. It lets you connect to RDBMS, big data, FTP/SFTP servers, flat files, web services, etc. and manage structured, semi-structured, as well as unstructured data. Start your journey to advanced analytics today.
  • 43
    Nauta

    Nauta

    Nauta

    Nauta is an AI-powered global logistics orchestration platform that intelligently connects disparate data sources used in container movement for the global trade industry. It consolidates and automates workflows by harmonizing shipment information, operational data, and logistics systems, enabling real-time synchronization and actionable insights. By leveraging artificial intelligence, it enhances operational visibility, streamlines decision-making, and supports margin improvement across import/export workflows. Nauta’s integration across complex logistics networks simplifies tracking and coordination throughout the container lifecycle, from vendor to destination, making cross-system consolidation seamless and dynamic.
  • 44
    DMSFACTORY DocumentsPipeliner
    DocumentsPipeliner is a server-based middleware solution for automated processing of incoming documents. It monitors mailboxes (e.g., Microsoft Exchange), file folders, or other input channels, extracts email attachments, normalizes formats (e.g., PDF/A), and enriches documents with metadata from third-party systems as needed. It then forwards the data to target systems such as M-Files, ABBYY FlexiCapture, or other DMS and workflow solutions based on rules. With DocumentsPipeliner, companies can create a central “digital mailroom” that reduces routine work in document receipt, ensures compliance, and lays the foundation for consistent, scalable business processes.
    Starting Price: €2,580 per server
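    The attachment-extraction step of such a "digital mailroom" can be sketched generically with Python's standard library. This is not DMSFACTORY's actual API, just an illustration of pulling attachments out of an inbound message; the sample message and filenames are invented for the example.

```python
import email
from email.message import EmailMessage

# Build a sample inbound email with a PDF attachment (stands in for a
# message pulled from a monitored mailbox).
msg = EmailMessage()
msg["Subject"] = "Invoice 2024-001"
msg["From"] = "supplier@example.com"
msg.set_content("Please find the invoice attached.")
msg.add_attachment(b"%PDF-1.4 sample bytes", maintype="application",
                   subtype="pdf", filename="invoice.pdf")

def extract_attachments(message):
    """Return (filename, payload_bytes) pairs for each attachment."""
    return [(part.get_filename(), part.get_content())
            for part in message.iter_attachments()]

attachments = extract_attachments(msg)
print(attachments[0][0])  # invoice.pdf
```

    A real pipeline would then normalize each payload (e.g., to PDF/A), enrich it with metadata, and route it to the target DMS by rule.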
  • 45
    Arcion

    Arcion

    Arcion Labs

    Deploy production-ready change data capture pipelines for high-volume, real-time data replication, without a single line of code. Supercharged change data capture. Enjoy automatic schema conversion, end-to-end replication, flexible deployment, and more with Arcion's distributed Change Data Capture (CDC). Leverage Arcion's zero-data-loss architecture for guaranteed end-to-end data consistency, built-in checkpointing, and more without any custom code. Leave scalability and performance concerns behind with a highly distributed, highly parallel architecture supporting 10x faster data replication. Reduce DevOps overhead with Arcion Cloud, the only fully managed CDC offering. Enjoy autoscaling, built-in high availability, a monitoring console, and more. Simplify and standardize your data pipeline architecture, and migrate workloads from on-prem to cloud with zero downtime.
    Starting Price: $2,894.76 per month
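    To make the CDC concept concrete, here is a minimal polling-based sketch in Python using an in-memory SQLite table. This illustrates only the general idea of capturing rows changed since a checkpoint; Arcion itself is log-based, distributed, and requires no such custom code, and the table and column names here are invented.

```python
import sqlite3

# Source table with a version counter that increases on every change.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, version INTEGER)")
src.execute("INSERT INTO orders VALUES (1, 9.99, 1), (2, 5.00, 2)")

checkpoint = 0

def capture_changes(conn, since):
    """Return rows changed after the checkpoint, plus the new checkpoint."""
    rows = conn.execute(
        "SELECT id, total, version FROM orders WHERE version > ? ORDER BY version",
        (since,)).fetchall()
    new_checkpoint = rows[-1][2] if rows else since
    return rows, new_checkpoint

changes, checkpoint = capture_changes(src, checkpoint)
print(len(changes), checkpoint)  # 2 2
```

    Checkpointing is what lets a replicator resume after a failure without losing or re-applying changes, which is the consistency guarantee the description above refers to.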
  • 46
    RudderStack

    RudderStack

    RudderStack

    RudderStack is the smart customer data pipeline. Easily build pipelines connecting your whole customer data stack, then make them smarter by pulling analysis from your data warehouse to trigger enrichment and activation in customer tools for identity stitching and other advanced use cases. Start building smarter customer data pipelines today.
    Starting Price: $750/month
  • 47
    Trifacta

    Trifacta

    Trifacta

    The fastest way to prep data and build data pipelines in the cloud. Trifacta provides visual and intelligent guidance to accelerate data preparation so you can get to insights faster. Poor data quality can sink any analytics project; Trifacta helps you understand your data so you can quickly and accurately clean it up, with all the power and none of the code. Manual, repetitive data preparation processes don't scale. Trifacta helps you build, deploy, and manage self-service data pipelines in minutes, not months.
  • 48
    Azure Event Hubs
    Event Hubs is a fully managed, real-time data ingestion service that’s simple, trusted, and scalable. Stream millions of events per second from any source to build dynamic data pipelines and immediately respond to business challenges. Keep processing data during emergencies using the geo-disaster recovery and geo-replication features. Integrate seamlessly with other Azure services to unlock valuable insights. Allow existing Apache Kafka clients and applications to talk to Event Hubs without any code changes—you get a managed Kafka experience without having to manage your own clusters. Experience real-time data ingestion and microbatching on the same stream. Focus on drawing insights from your data instead of managing infrastructure. Build real-time big data pipelines and respond to business challenges right away.
    Starting Price: $0.03 per hour
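    The "existing Kafka clients work without code changes" claim comes down to client configuration: Event Hubs exposes a Kafka-compatible endpoint on port 9093, authenticated over SASL_SSL with the literal username `$ConnectionString`. The sketch below builds such a configuration dict; the namespace name is a placeholder, and the property keys follow librdkafka/confluent-kafka naming (other clients use equivalent settings under different key names).

```python
# Sketch: Kafka client settings for pointing an existing producer/consumer
# at Azure Event Hubs. "my-namespace" is a hypothetical namespace name.
namespace = "my-namespace"
connection_string = "Endpoint=sb://..."  # keep the real connection string secret

kafka_config = {
    "bootstrap.servers": f"{namespace}.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    # Event Hubs' Kafka endpoint authenticates with the literal user
    # "$ConnectionString" and the connection string as the password.
    "sasl.username": "$ConnectionString",
    "sasl.password": connection_string,
}
print(kafka_config["bootstrap.servers"])
```

    An unmodified Kafka application only needs these connection settings swapped in; topics map to event hubs and consumer groups work as usual.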
  • 49
    Ask On Data

    Ask On Data

    Helical Insight

    Ask On Data is a chat-based, AI-powered, open source data engineering/ETL tool. With agentic capabilities and a pioneering next-gen data stack, Ask On Data can help create data pipelines via a very simple chat interface. It can be used for tasks like data migration, data loading, data transformation, data wrangling, data cleaning, and data analysis, all through chat. Data scientists can use it to get clean data, data analysts and BI engineers to create calculated tables, and data engineers to increase their efficiency and accomplish much more.
  • 50
    Azure Data Factory
    Integrate data silos with Azure Data Factory, a service built for all data integration needs and skill levels. Easily construct ETL and ELT processes code-free within the intuitive visual environment, or write your own code. Visually integrate data sources using more than 90+ natively built and maintenance-free connectors at no added cost. Focus on your data—the serverless integration service does the rest. Data Factory provides a data integration and transformation layer that works across your digital transformation initiatives. Data Factory can help independent software vendors (ISVs) enrich their SaaS apps with integrated hybrid data as to deliver data-driven user experiences. Pre-built connectors and integration at scale enable you to focus on your users while Data Factory takes care of the rest.