Alternatives to SigView

Compare SigView alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to SigView in 2026. Compare features, ratings, user reviews, pricing, and more from SigView competitors and alternatives in order to make an informed decision for your business.

  • 1
    StarTree
    StarTree, powered by Apache Pinot™, is a fully managed real-time analytics platform built for customer-facing applications that demand instant insights on the freshest data. Unlike traditional data warehouses or OLTP databases—optimized for back-office reporting or transactions—StarTree is engineered for real-time OLAP at true scale, meaning:
    - Data volume: query performance sustained at petabyte scale
    - Ingest rates: millions of events per second, continuously indexed for freshness
    - Concurrency: thousands to millions of simultaneous users served with sub-second latency
    With StarTree, businesses deliver always-fresh insights at interactive speed, enabling applications that personalize, monitor, and act in real time.
    Starting Price: Free
  • 2
    Striim
    Data integration for your hybrid cloud: modern, reliable data integration across your private and public clouds, all in real time with change data capture and data streams. Built by the executive and technical team from GoldenGate Software, Striim brings decades of experience in mission-critical enterprise workloads. Striim scales out as a distributed platform in your environment or in the cloud, with scalability fully configurable by your team. Striim is fully secure, with HIPAA and GDPR compliance, and built from the ground up for modern enterprise workloads in the cloud or on-premises. Drag and drop to create data flows between your sources and targets, then process, enrich, and analyze your streaming data with real-time SQL queries.
  • 3
    Amazon EMR
    Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin up and spin down clusters and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand. If you have existing on-premises deployments of open-source tools such as Apache Spark and Apache Hive, you can also run EMR clusters on AWS Outposts. Analyze data using open-source ML frameworks such as Apache Spark MLlib, TensorFlow, and Apache MXNet. Connect to Amazon SageMaker Studio for large-scale model training, analysis, and reporting.
  • 4
    Kyligence
    Let Kyligence Zen take care of collecting, organizing, and analyzing your metrics so you can focus more on taking action. Kyligence Zen is the go-to low-code metrics platform to define, collect, and analyze your business metrics. It empowers users to quickly connect their data sources, define their business metrics, uncover hidden insights in minutes, and share them across their organization. Kyligence Enterprise provides diverse solutions based on on-premises, public cloud, and private cloud deployments, helping enterprises of any size simplify multidimensional analysis of massive amounts of data according to their business needs. Kyligence Enterprise, based on Apache Kylin, provides sub-second standard SQL query responses on PB-scale datasets, simplifying multidimensional data analysis on data lakes and enabling business users to quickly discover business value in massive amounts of data and drive better business decisions.
  • 5
    Bizintel360
    AI-powered self-service advanced analytics platform. Connect data sources and derive visualizations without any programming. A cloud-native advanced analytics platform that provides high-quality data supply and intelligent real-time analysis across the enterprise without any code. Connect data sources of different formats. Enables identification of root-cause problems. Reduces cycle time from source to target. Analytics without programming knowledge. Real-time data refresh on the go. Connect data sources of any format, stream data in real time or at a defined frequency to a data lake, and visualize it in advanced, interactive, search-engine-based dashboards. Descriptive, predictive, and prescriptive analytics in a single platform with the power of a search engine and advanced visualization. No traditional technology is required to see data in various visualization formats. Roll up, slice, and dice data with various mathematical computations right inside Bizintel360 visualization.
  • 6
    Fund-Studio
    For hedge fund managers, CTAs, asset managers, and family offices, FundStudio's real-time operations and reporting platform and outsourced managed services provide complete knowledge of fund or managed-account conditions -- and the tools to manage them. Capture orders and trades via real-time FIX, uploads, a programmatic API, or on-screen manual data entry. Allocate orders to accounts, strategies, and more. Apply compliance rules such as position limits or mandated asset-class restrictions, or complex programmatic rules such as leverage bias. Run real-time intra-day P&L. Define multiple views of aggregate or sliced-and-diced data in real-time "pivot tables". Manage all cash movements, including trade settlements, margin, coupons, and principal. Run cash projections and more. Meet investor DDQ requirements by shadowing your fund administrator to produce NAV at the master-fund level as well as for associated feeder funds or investor subscriptions.
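Order capture over FIX, as mentioned above, boils down to handling tag=value pairs delimited by the SOH character. A minimal, illustrative sketch of parsing such a message in Python follows; this is not FundStudio's API, just the standard FIX 4.x wire format with well-known tags (35=MsgType, 55=Symbol, 54=Side, 38=OrderQty, 44=Price):

```python
# Minimal sketch of parsing a FIX tag=value message (illustrative only,
# not FundStudio's API). Fields are delimited by SOH (\x01); tags are
# standard FIX 4.x: 35=MsgType ("D" = NewOrderSingle), 55=Symbol,
# 54=Side (1 = Buy), 38=OrderQty, 44=Price.
SOH = "\x01"

def parse_fix(message: str) -> dict:
    """Split a raw FIX message into a {tag: value} dict."""
    fields = {}
    for pair in message.strip(SOH).split(SOH):
        tag, _, value = pair.partition("=")
        fields[tag] = value
    return fields

raw = SOH.join(["8=FIX.4.2", "35=D", "55=IBM", "54=1", "38=100", "44=145.50"]) + SOH
order = parse_fix(raw)
print(order["55"], order["38"])  # prints: IBM 100
```

A real implementation would also validate the BodyLength (9) and CheckSum (10) fields before accepting the message.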
  • 7
    GeoSpock
    GeoSpock enables data fusion for the connected world with GeoSpock DB, the space-time analytics database. GeoSpock DB is a unique, cloud-native database optimised for querying real-world use cases, able to fuse multiple sources of Internet of Things (IoT) data together to unlock its full value, whilst simultaneously reducing complexity and cost. GeoSpock DB enables efficient storage, data fusion, and rapid programmatic access to data, and allows you to run ANSI SQL queries and connect to analytics tools via JDBC/ODBC connectors. Users are able to perform analysis and share insights using familiar toolsets, with support for common BI tools (such as Tableau™, Amazon QuickSight™, and Microsoft Power BI™) and data science and machine learning environments (including Python notebooks and Apache Spark). The database can also be integrated with internal applications and web services, with compatibility for open-source visualisation libraries such as Kepler and Cesium.js.
  • 8
    E-MapReduce
    EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems such as Hadoop, Spark, Kafka, Flink, and Storm. Alibaba Cloud Elastic MapReduce (EMR) is a big data processing solution that runs on the Alibaba Cloud platform. EMR is built on Alibaba Cloud ECS instances and is based on open-source Apache Hadoop and Apache Spark. EMR allows you to use Hadoop and Spark ecosystem components, such as Apache Hive, Apache Kafka, Flink, Druid, and TensorFlow, to analyze and process data. You can use EMR to process data stored on different Alibaba Cloud data storage services, such as Object Storage Service (OSS), Log Service (SLS), and Relational Database Service (RDS). You can quickly create clusters without the need to configure hardware and software, and all maintenance operations are completed through its web interface.
  • 9
    Apache Spark
    Apache Software Foundation
    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
  • 10
    Delta Lake
    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and data engineers have to go through a tedious process to ensure data integrity, due to the lack of transactions. Delta Lake brings ACID transactions to your data lakes. It provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions of data for audits, rollbacks, or to reproduce experiments.
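The snapshot-and-rollback behavior described above comes from an append-only transaction log in which every committed version stays replayable. The idea can be sketched in miniature in plain Python; this is a conceptual illustration only, not Delta Lake's actual format or API (the real log is a sequence of JSON actions over Parquet files):

```python
# Conceptual sketch of a versioned, append-only transaction log
# (illustrative only -- not Delta Lake's real on-disk format or API).
class TinyLog:
    def __init__(self):
        self.commits = []  # each commit is a list of (op, key, value) actions

    def commit(self, actions):
        """Atomically append one commit; returns the new version number."""
        self.commits.append(list(actions))
        return len(self.commits) - 1

    def snapshot(self, version=None):
        """Replay the log up to `version` to reconstruct table state."""
        if version is None:
            version = len(self.commits) - 1
        state = {}
        for commit in self.commits[: version + 1]:
            for op, key, value in commit:
                if op == "put":
                    state[key] = value
                elif op == "delete":
                    state.pop(key, None)
        return state

log = TinyLog()
v0 = log.commit([("put", "a", 1), ("put", "b", 2)])
v1 = log.commit([("delete", "a", None), ("put", "c", 3)])
print(log.snapshot(v0))  # the earlier version is still readable: {'a': 1, 'b': 2}
print(log.snapshot(v1))  # {'b': 2, 'c': 3}
```

Because old commits are never rewritten, "time travel" to version `v0` is just replaying a shorter prefix of the log.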
  • 11
    doolytic
    doolytic is leading the way in big data discovery, the convergence of data discovery, advanced analytics, and big data. doolytic is rallying expert BI users to the revolution in self-service exploration of big data, revealing the data scientist in all of us. doolytic is an enterprise software solution for native discovery on big data, based on best-of-breed, scalable, open-source technologies. Lightning performance on billions of records and petabytes of data. Structured, unstructured, and real-time data from any source. Sophisticated advanced query capabilities for expert users, and integration with R for advanced and predictive applications. Search, analyze, and visualize data from any format and any source in real time with the flexibility of Elastic. Leverage the power of Hadoop data lakes with no latency or concurrency issues. doolytic solves common BI problems and enables big data discovery without clumsy and inefficient workarounds.
  • 12
    CelerData Cloud
    CelerData is a high-performance SQL engine built to power analytics directly on data lakehouses, eliminating the need for traditional data‐warehouse ingestion pipelines. It delivers sub-second query performance at scale, supports on-the‐fly JOINs without costly denormalization, and simplifies architecture by allowing users to run demanding workloads on open format tables. Built on the open source engine StarRocks, the platform outperforms legacy query engines like Trino, ClickHouse, and Apache Druid in latency, concurrency, and cost-efficiency. With a cloud-managed service that runs in your own VPC, you retain infrastructure control and data ownership while CelerData handles maintenance and optimization. The platform is positioned to power real-time OLAP, business intelligence, and customer-facing analytics use cases and is trusted by enterprise customers (including names such as Pinterest, Coinbase, and Fanatics) who have achieved significant latency reductions and cost savings.
  • 13
    SelectDB
    SelectDB is a modern data warehouse based on Apache Doris that supports rapid query analysis on large-scale real-time data. Migrating from ClickHouse to Apache Doris enables a unified lakehouse architecture. One large-scale OLAP deployment serves nearly 1 billion query requests every day, providing data services for multiple scenarios. Because the original lake-warehouse separation architecture suffered from storage redundancy, resource contention, complicated governance, and difficult query tuning, Apache Doris was introduced as the lakehouse layer, combining Doris's materialized-view rewriting and automated services to achieve high-performance queries and flexible data governance. Write real-time data in seconds, and synchronize streaming data from databases and data streams. The storage engine supports real-time updates, real-time appends, and real-time pre-aggregation.
    Starting Price: $0.22 per hour
  • 14
    IBM Db2 Event Store
    IBM Db2 Event Store is a cloud-native database system designed to handle massive amounts of structured data stored in Apache Parquet format. Because it is optimized for event-driven data processing and analysis, this high-speed data store can capture, analyze, and store more than 250 billion events per day. The data store is flexible and scalable to adapt quickly to your changing business needs. With the Db2 Event Store service, you can create these data stores in your Cloud Pak for Data cluster so that you can govern the data and use it for more in-depth analysis. Rapidly ingest large amounts of streaming data (up to one million inserts per second per node) and use it for real-time analytics with integrated machine learning capabilities. For example, analyze incoming data from different medical devices in real time to provide better health outcomes for patients while saving on the cost of moving the data to storage.
  • 15
    Apache Storm
    Apache Software Foundation
    Apache Storm is a free and open source distributed realtime computation system. Apache Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Apache Storm is simple, can be used with any programming language, and is a lot of fun to use! Apache Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Apache Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate. Apache Storm integrates with the queueing and database technologies you already use. An Apache Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.
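A Storm topology, as described above, is a graph of spouts (stream sources) and bolts (processing stages) through which tuples flow. The shape of such a pipeline can be sketched conceptually in plain Python; this is an illustration of the dataflow only, not Storm's API, and real topologies run distributed with streams repartitioned between stages:

```python
# Conceptual sketch of a spout -> bolt -> bolt word-count pipeline
# (illustrative only; real Storm topologies are built with Storm's
# own APIs and run distributed across a cluster).
from collections import Counter

def sentence_spout():
    """Spout: emits a stream of tuples (here, sentences)."""
    yield from ["the quick brown fox", "the lazy dog"]

def split_bolt(sentences):
    """Bolt: splits each incoming sentence tuple into word tuples."""
    for sentence in sentences:
        yield from sentence.split()

def count_bolt(words):
    """Bolt: maintains running counts of the words seen so far."""
    counts = Counter()
    for word in words:
        counts[word] += 1
    return counts

# Wire the stages together: spout feeds bolt feeds bolt.
counts = count_bolt(split_bolt(sentence_spout()))
print(counts["the"])  # prints: 2
```

In Storm the same wiring is declared once as a topology, and the framework handles parallelism, fault tolerance, and the guarantee that every tuple is processed.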
  • 16
    DataSift
    Extract insights from a universe of human-created data. With data from social networks, blogs, news, and more. Integrate social, blog and news data in a single place. Real-time and historic data from billions of data points. Normalized and enriched data in real-time for accurate analysis. With DataSift, you can deliver Human Data into business intelligence (BI) tools and business processes in real-time. You can also innovate with our powerful API to build your own apps. Human Data is the fastest growing type of data that covers the entire spectrum of human-generated information regardless of format or channel through which it is delivered. It includes text, image, audio or video shared with other people on social networks, blogs, news content and inside the business. The DataSift Human Data platform unifies all the data - real-time and historical - in one place, unlocks its meaning and delivers it for use anywhere in the business.
  • 17
    Keen
    Keen.io
    Keen is the fully managed event streaming platform. Built upon trusted Apache Kafka, we make it easier than ever for you to collect massive volumes of event data with our real-time data pipeline. Use Keen's powerful REST API and SDKs to collect event data from anything connected to the internet. Our platform allows you to store your data securely, decreasing your operational and delivery risk. With storage infrastructure powered by Apache Cassandra, data is secured in transit via HTTPS and TLS, then stored with multi-layer AES encryption. Once data is securely stored, use Access Keys to present data in arbitrary ways without having to re-architect your security or data model. Or take advantage of role-based access control (RBAC), allowing for completely customizable permission tiers, down to specific data points or queries.
    Starting Price: $149 per month
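Recording an event through the REST API mentioned above is a single authenticated JSON POST. A standard-library sketch follows; the project ID and write key are placeholders, the `purchases` collection is invented for illustration, and the endpoint shape is taken from Keen's public documentation (verify against the current docs before use). The request is built but deliberately not sent, so the sketch stays network-free:

```python
# Sketch of recording an event via Keen's REST API with only the
# standard library. PROJECT_ID / WRITE_KEY are placeholders and the
# endpoint shape should be checked against Keen's current docs.
import json
import urllib.request

PROJECT_ID = "PROJECT_ID"  # placeholder
WRITE_KEY = "WRITE_KEY"    # placeholder

event = {"item": "golden gate bridge poster", "price": 21.99}
req = urllib.request.Request(
    url=f"https://api.keen.io/3.0/projects/{PROJECT_ID}/events/purchases",
    data=json.dumps(event).encode("utf-8"),
    headers={"Authorization": WRITE_KEY, "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted here.
print(req.get_method(), req.full_url)
```

In production you would use one of Keen's SDKs instead, which batch events and handle retries for you.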
  • 18
    Apache Druid
    Apache Druid is an open source distributed data store. Druid’s core design combines ideas from data warehouses, timeseries databases, and search systems to create a high performance real-time analytics database for a broad range of use cases. Druid merges key characteristics of each of the 3 systems into its ingestion layer, storage format, querying layer, and core architecture. Druid stores and compresses each column individually, and only needs to read the ones needed for a particular query, which supports fast scans, rankings, and groupBys. Druid creates inverted indexes for string values for fast search and filter. Out-of-the-box connectors for Apache Kafka, HDFS, AWS S3, stream processors, and more. Druid intelligently partitions data based on time and time-based queries are significantly faster than traditional databases. Scale up or down by just adding or removing servers, and Druid automatically rebalances. Fault-tolerant architecture routes around server failures.
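The columnar layout and inverted indexes described above can be illustrated in miniature with plain Python. This is a conceptual sketch only, not Druid's compressed, segment-based on-disk format: the index maps each string value to its row ids, so a filter touches only matching rows, and only the columns a query needs are ever read.

```python
# Conceptual sketch of a column store with an inverted index on a
# string column (illustrative only; not Druid's actual format).
from collections import defaultdict

rows = [
    {"country": "US", "clicks": 3},
    {"country": "DE", "clicks": 5},
    {"country": "US", "clicks": 2},
]

# Store each column separately (columnar layout).
columns = {name: [r[name] for r in rows] for name in rows[0]}

# Build an inverted index over the string column: value -> row ids.
inverted = defaultdict(list)
for row_id, value in enumerate(columns["country"]):
    inverted[value].append(row_id)

# Filtered aggregation: read only the 'clicks' column at matching rows.
us_clicks = sum(columns["clicks"][i] for i in inverted["US"])
print(us_clicks)  # prints: 5
```

Druid additionally compresses each column and partitions segments by time, which is why time-bounded queries prune most of the data before any scan happens.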
  • 19
    Google Cloud Dataproc
    Dataproc makes open source data and analytics processing fast, easy, and more secure in the cloud. Build custom OSS clusters on custom machines faster. Whether you need extra memory for Presto or GPUs for Apache Spark machine learning, Dataproc can help accelerate your data and analytics processing by spinning up a purpose-built cluster in 90 seconds. Easy and affordable cluster management. With autoscaling, idle cluster deletion, per-second pricing, and more, Dataproc can help reduce the total cost of ownership of OSS so you can focus your time and resources elsewhere. Security built in by default. Encryption by default helps ensure no piece of data is unprotected. With JobsAPI and Component Gateway, you can define permissions for Cloud IAM clusters, without having to set up networking or gateway nodes.
  • 20
    Azure HDInsight
    Run popular open-source frameworks—including Apache Hadoop, Spark, Hive, Kafka, and more—using Azure HDInsight, a customizable, enterprise-grade service for open-source analytics. Effortlessly process massive amounts of data and get all the benefits of the broad open-source project ecosystem with the global scale of Azure. Easily migrate your big data workloads and processing to the cloud. Open-source projects and clusters are easy to spin up quickly without the need to install hardware or manage infrastructure. Big data clusters reduce costs through autoscaling and pricing tiers that allow you to pay for only what you use. Enterprise-grade security and industry-leading compliance with more than 30 certifications helps protect your data. Optimized components for open-source technologies such as Hadoop and Spark keep you up to date.
  • 21
    33Across Lexicon
    Boost the impact and efficiency of your campaign with the first exchange built for attention. Real-time viewability detection delivers reliable viewability without a PMP. Impact formats remain in-view: scalable, rich media-like formats with an average time-in-view of 25 seconds. Buying based on time-in-view: bid across specific time-based increments on a CPM basis. Direct partnerships with 1,500 publishers, including S2S and Header Bidding integrations. Connections to more than 100 platforms across the programmatic ecosystem, including all leading DSPs and DMPs. Proactive partnerships with leading industry organizations dedicated to quality supply.
  • 22
    Oracle Cloud Infrastructure Data Flow
    Oracle Cloud Infrastructure (OCI) Data Flow is a fully managed Apache Spark service to perform processing tasks on extremely large data sets without infrastructure to deploy or manage. This enables rapid application delivery because developers can focus on app development, not infrastructure management. OCI Data Flow handles infrastructure provisioning, network setup, and teardown when Spark jobs are complete. Storage and security are also managed, which means less work is required for creating and managing Spark applications for big data analysis. With OCI Data Flow, there are no clusters to install, patch, or upgrade, which saves time and operational costs for projects. OCI Data Flow runs each Spark job in private dedicated resources, eliminating the need for upfront capacity planning. With OCI Data Flow, IT only needs to pay for the infrastructure resources that Spark jobs use while they are running.
    Starting Price: $0.0085 per GB per hour
  • 23
    Azure Databricks
    Unlock insights from all your data and build artificial intelligence (AI) solutions with Azure Databricks, set up your Apache Spark™ environment in minutes, autoscale, and collaborate on shared projects in an interactive workspace. Azure Databricks supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn. Azure Databricks provides the latest versions of Apache Spark and allows you to seamlessly integrate with open source libraries. Spin up clusters and build quickly in a fully managed Apache Spark environment with the global scale and availability of Azure. Clusters are set up, configured, and fine-tuned to ensure reliability and performance without the need for monitoring. Take advantage of autoscaling and auto-termination to improve total cost of ownership (TCO).
  • 24
    AVEVA PI System
    The PI System unlocks operational insights and new possibilities. The PI System enables digital transformation through trusted, high-quality operations data. Collect, enhance, and deliver data in real time in any location. Empower engineers and operators. Accelerate the work of analysts and data scientists. Support new business opportunities. Collect real-time data from hundreds of assets—including legacy, proprietary, remote, mobile, and IIoT devices. The PI System connects you to your data, no matter the location or format. Store decades worth of data with sub-second resolution. Get immediate access to high-fidelity historical, real-time, and predictive data to keep critical operations running and business insights coming. Make data more meaningful by adding intuitive labels and metadata. Define data hierarchies that reflect your operating and reporting environments. With context, you don’t just see a data point, you see the big picture.
  • 25
    OpenText Analytics Database (Vertica)
    OpenText Analytics Database is a high-performance, scalable analytics platform that enables organizations to analyze massive data sets quickly and cost-effectively. It supports real-time analytics and in-database machine learning to deliver actionable business insights. The platform can be deployed flexibly across hybrid, multi-cloud, and on-premises environments to optimize infrastructure and reduce total cost of ownership. Its massively parallel processing (MPP) architecture handles complex queries efficiently, regardless of data size. OpenText Analytics Database also features compatibility with data lakehouse architectures, supporting formats like Parquet and ORC. With built-in machine learning and broad language support, it empowers users from SQL experts to Python developers to derive predictive insights.
  • 26
    Catalyst
    Catalyst is a business-performance software solution that sits on top of a data lake that includes your ERP, Big Data sources, and any other data you might have. What if you could instantly unlock game-changing insights from deep inside your data? Sound too good to be true? Catalyst lets you slice, dice, and drill down into that data in an instant. Reports that used to take weeks? One push of a button. Analyze Big Data alongside your own and you can build stunningly accurate budgets with direct input from sales. Create financial and operational plans from a single source of truth. Wonder what's standing between you and maximum profitability? Drill down to the transaction level for root-cause analysis with a couple of clicks. With Catalyst, every number ties out, every time. When you cut tasks that used to take days down to seconds, you can focus on what matters: analyzing and evolving the business.
  • 27
    Papermap
    Papermap is an AI-native data analytics platform designed to help teams collect, process, analyze, and visualize business data without the complexity of traditional BI tools. It enables users to connect data sources such as databases, spreadsheets, and third-party platforms, and automatically builds data pipelines and dashboards in seconds, allowing teams to start analyzing information immediately. It emphasizes real-time data processing, giving users access to up-to-date insights as data arrives, and supports advanced analytics that scale from simple dashboards to more complex data modeling. Through its conversational AI interface, users can ask questions in plain English and receive instant answers, charts, and insights, eliminating the need for SQL queries or technical expertise. Papermap also includes an AI command center that generates visualizations, uncovers trends, and helps identify anomalies, correlations, and opportunities directly from raw data.
    Starting Price: $19 per month
  • 28
    DoubleCloud
    Save time & costs by streamlining data pipelines with zero-maintenance open source solutions. From ingestion to visualization, all are integrated, fully managed, and highly reliable, so your engineers will love working with data. You choose whether to use any of DoubleCloud’s managed open source services or leverage the full power of the platform, including data storage, orchestration, ELT, and real-time visualization. We provide leading open source services like ClickHouse, Kafka, and Airflow, with deployment on Amazon Web Services or Google Cloud. Our no-code ELT tool allows real-time data syncing between systems, fast, serverless, and seamlessly integrated with your existing infrastructure. With our managed open-source data visualization you can simply visualize your data in real time by building charts and dashboards. We’ve designed our platform to make the day-to-day life of engineers more convenient.
    Starting Price: $0.024 per 1 GB per month
  • 29
    Inzata Analytics
    Inzata Analytics is an AI-powered, end-to-end data analytics software solution. Inzata takes your raw, unrefined data and transforms it into actionable insights, all on one platform. Build your entire data warehouse in less than one day using Inzata Analytics. Inzata's library of over 700 data connectors ensures a seamless and speedy data integration process. Our patented aggregation engine promises prepped, blended, and organized data models in seconds. Create automated data pipeline workflows for real-time data analysis updates in Inzata's newest tool, InFlow. Finally, display your business data confidently on 100% customizable interactive dashboards. Realize the power of real-time analytics to supercharge your business agility and responsiveness with Inzata.
  • 30
    BIRD Analytics
    Lightning Insights
    BIRD Analytics is a blazingly fast, high-performance, full-stack data management and analytics platform that generates insights using agile BI and AI/ML models. It covers all aspects, from data ingestion, transformation, wrangling, modeling, and storage to analyzing data in real time, even on petabyte-scale data. BIRD provides self-service capabilities with Google-style search and powerful chatbot integration. We've compiled our resources to provide the answers you seek. From industry use cases to blog articles, learn more about how BIRD addresses Big Data pain points. Now that you've discovered the value of BIRD, schedule a demo to see the platform in action and uncover how it can transform your distinct data. Utilize AI/ML technologies for greater agility and responsiveness in decision-making, cost reduction, and improving customer experiences.
  • 31
    Exasol
    With an in-memory, columnar database and MPP architecture, you can query billions of rows in seconds. Queries are distributed across all nodes in a cluster, providing linear scalability for more users and advanced analytics. MPP, in-memory, and columnar storage add up to the fastest database built for data analytics. With SaaS, cloud, on-premises, and hybrid deployment options, you can analyze data wherever it lives. Automatic query tuning reduces maintenance and overhead. Seamless integrations and performance efficiency get you more power at a fraction of normal infrastructure costs. Smart, in-memory query processing allowed one social networking company to boost performance, processing 10B data sets a year. A single data repository and speed engine accelerates critical analytics, delivering improved patient outcomes and bottom-line results.
  • 32
    Riak KV
    At Riak, we are distributed systems experts who work with application teams to overcome distributed-system challenges. Riak® is a distributed NoSQL database that delivers unmatched resiliency beyond typical "high availability" offerings: innovative technology to ensure data accuracy and never lose a write, massive scale on commodity hardware, and a common code foundation with true multi-model support. Riak® provides all this while staying focused on ease of operations. Choose Riak® KV's flexible key-value data model for web-scale profile and session management, real-time big data, catalog, content management, customer 360, digital messaging, and more use cases. Choose Riak® TS for IoT and time series use cases. When seconds of latency can cost thousands of dollars and an outage millions, the call for scalable, highly available databases that are easy to operationalize is resoundingly clear. Riak performs as promised and keeps the lights on.
  • 33
    Apache Kylin
    Apache Software Foundation
    Apache Kylin™ is an open source, distributed analytical data warehouse for big data, designed to provide OLAP (online analytical processing) capability in the big data era. By renovating multi-dimensional cube and precalculation technology on Hadoop and Spark, Kylin achieves near-constant query speed regardless of ever-growing data volume. Reducing query latency from minutes to sub-second, Kylin brings online analytics back to big data. Kylin can analyze more than 10 billion rows in less than a second, so there is no more waiting on reports for critical decisions. Kylin connects data on Hadoop to BI tools like Tableau, Power BI/Excel, MSTR, Qlik Sense, Hue, and Superset, making BI on Hadoop faster than ever. As an analytical data warehouse, Kylin offers ANSI SQL on Hadoop/Spark and supports most ANSI SQL query functions. Kylin can support thousands of interactive queries at the same time, thanks to the low resource consumption of each query.
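The queries Kylin accelerates are ordinary ANSI SQL group-bys over dimensional data, answered from precomputed cube aggregates rather than raw scans. A tiny self-contained illustration of that query shape follows, with SQLite standing in for the Hadoop/Spark backend and a made-up `sales` schema:

```python
# Illustrative star-schema aggregation of the kind Kylin precomputes
# as cubes (schema and data are invented; SQLite stands in for the
# Hadoop/Spark backend that Kylin actually queries).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('EMEA', 'widget', 100.0),
        ('EMEA', 'gadget', 50.0),
        ('APAC', 'widget', 75.0);
""")

# A cube over (region, product) lets any group-by on a subset of those
# dimensions be answered from precomputed aggregates instead of scans.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # prints: [('APAC', 75.0), ('EMEA', 150.0)]
```

Kylin's value is that the same SQL stays sub-second when `sales` holds billions of rows, because the aggregate was materialized at cube-build time.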
  • 34
    IBM Db2 Big SQL
    A hybrid SQL-on-Hadoop engine delivering advanced, security-rich data query across enterprise big data sources, including Hadoop, object storage, and data warehouses. IBM Db2 Big SQL is an enterprise-grade, hybrid ANSI-compliant SQL-on-Hadoop engine, delivering massively parallel processing (MPP) and advanced data query. Db2 Big SQL offers a single database connection or query for disparate sources such as Hadoop HDFS and WebHDFS, RDBMS and NoSQL databases, and object stores. Benefit from low latency, high performance, data security, SQL compatibility, and federation capabilities to do ad hoc and complex queries. Db2 Big SQL is now available in two variations. It can be integrated with Cloudera Data Platform, or accessed as a cloud-native service on the IBM Cloud Pak® for Data platform. Access and analyze data and perform queries on batch and real-time data across sources, like Hadoop, object stores, and data warehouses.
  • 35
    Indexima Data Hub
    Reshape your perception of time in data analytics. Access your business data instantly and work directly in your dashboard without going back and forth with the IT team. Meet Indexima DataHub, a new space-time where operational and functional users gain instant access to their data. Combining its unique indexing engine with machine learning, Indexima gives businesses access to all their data to simplify and speed up analytics. Robust and scalable, the solution lets organizations query all their data directly at the source, across volumes of tens of billions of rows, in just a few milliseconds. The Indexima platform lets users implement instant analytics on all their data in just one click. With Indexima’s ROI and TCO calculator, find out in 30 seconds the return on your data platform: infrastructure costs, project deployment time, and data engineering costs, all while boosting analytical performance.
    Starting Price: $3,290 per month
  • 36
    Proclivity

    Proclivity

    Proclivity Media

    Buy and sell your proprietary data signals in a secure marketplace to drive billions of digital transactions in pharma and healthcare without data leakage and revenue loss. Buy and sell digital media in real-time in the largest programmatic healthcare advertising exchange to reach verified healthcare professionals (HCP) and patients (DTC). Predict and manage campaign performance, yield, and ROI based on real-time market trading dynamics driven by the engagement of verified physicians and patients across various medical conditions. Nowhere is data leakage (the extraction and use of data for revenue generation without permission) more consequential than in healthcare in terms of revenue loss and consumer privacy.
  • 37
    Enterprise Enabler

    Enterprise Enabler

    Stone Bond Technologies

    Enterprise Enabler unifies information across silos and scattered data for visibility across multiple sources in a single environment. Whether your data lives in the cloud, spread across siloed databases, on instruments, in big data stores, or within various spreadsheets and documents, Enterprise Enabler can integrate it all so you can make informed business decisions in real-time, by creating logical views of data from the original source locations. This means you can reuse, configure, test, deploy, and monitor all your data in a single integrated environment. Analyze your business data in one place as events occur to maximize the use of assets, minimize costs, and improve and refine your business processes. Our implementation time to value is 50-90% faster. We get your sources connected and running so you can start making business decisions based on real-time data.
  • 38
    eGain Analytics

    eGain Analytics

    eGain Corporation

    eGain Analytics makes it easy to measure, analyze, and refine your contact center operations, knowledge, and web customer experience. Create informative reports, charts, and dashboards effortlessly. Slice and dice the data in any number of ways and use it to manage the business effectively. Measure performance by agent, queue, call type, category, and more. The highly flexible report builder wizard allows data to be grouped, sorted, and sliced by any number of business hierarchies for a 360-degree view. Contact center managers gain visibility into a host of data, with contact volumes, abandon rates, response times, and service levels being just the tip of the iceberg. Knowledge managers gain visibility into article views, feedback, search behavior, contact deflection, and a whole lot more.
  • 39
    SHREWD Platform

    SHREWD Platform

    Transforming Systems

    Harness your whole system’s data with ease, with our SHREWD Platform tools and open APIs. SHREWD Platform provides the integration and data collection tools the SHREWD modules operate from. The tools aggregate data, storing it in our secure, UK-based data lake. This data is then accessed by the SHREWD modules or an API, which transform it into meaningful information with targeted functions. SHREWD Platform can ingest data in almost any format, from spreadsheets to digital systems via APIs. The system’s open API also allows third-party connections to use the information held in the data lake, if required. SHREWD Platform provides an operational data layer that is a single source of truth in real-time, allowing the SHREWD modules to provide intelligent insights, and managers and key decision-makers to take the right action at the right time.
  • 40
    SingleStore

    SingleStore

    SingleStore

    SingleStore (formerly MemSQL) is a distributed, highly scalable SQL database that can run anywhere. We deliver maximum performance for transactional and analytical workloads with familiar relational models. SingleStore is a scalable SQL database that ingests data continuously to perform operational analytics for the front lines of your business. Ingest millions of events per second with ACID transactions while simultaneously analyzing billions of rows of data in relational SQL, JSON, geospatial, and full-text search formats. SingleStore delivers ultimate data ingestion performance at scale and supports built-in batch loading and real-time data pipelines. SingleStore lets you achieve ultra-fast query response across both live and historical data using familiar ANSI SQL. Perform ad hoc analysis with business intelligence tools, run machine learning algorithms for real-time scoring, and perform geoanalytic queries in real time.
    Starting Price: $0.69 per hour
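The "ingest with ACID transactions while analyzing the same rows" pattern SingleStore describes can be sketched in a toy form. In the snippet below (illustrative only: SQLite stands in for SingleStore, and the `events` schema is invented), each insert commits transactionally and an analytical aggregate immediately sees the fresh rows:

```python
import sqlite3

# SQLite stands in for SingleStore to show the HTAP shape: the same
# table serves transactional writes and analytical reads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts INTEGER, user_id TEXT, amount REAL)")

# "Ingest": the inserts commit as one ACID transaction.
with conn:
    conn.executemany(
        "INSERT INTO events VALUES (?, ?, ?)",
        [(1, "a", 3.0), (2, "b", 4.0), (3, "a", 5.0)],
    )

# "Analyze": an aggregate query sees the rows the moment they commit.
row = conn.execute(
    "SELECT COUNT(*), SUM(amount) FROM events WHERE user_id = 'a'"
).fetchone()
print(row)  # → (2, 8.0)
```

A real deployment would of course differ in scale and SQL surface; the point is only that no separate ETL hop sits between the write path and the analytical query.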
  • 41
    Hydrolix

    Hydrolix

    Hydrolix

    Hydrolix is a streaming data lake that combines decoupled storage, indexed search, and stream processing to deliver real-time query performance at terabyte scale for a radically lower cost. CFOs love the 4x reduction in data retention costs. Product teams love having 4x more data to work with. Spin up resources when you need them and scale to zero when you don’t. Fine-tune resource consumption and performance by workload to control costs. Imagine what you can build when you don’t have to sacrifice data because of budget. Ingest, enrich, and transform log data from multiple sources, including Kafka, Kinesis, and HTTP. Return just the data you need, no matter how big your data is. Reduce latency and costs, and eliminate timeouts and brute-force queries. Storage is decoupled from ingest and query, allowing each to independently scale to meet performance and budget targets. Hydrolix’s high-density compression (HDX) typically reduces 1 TB of stored data to 55 GB.
    Starting Price: $2,237 per month
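As a back-of-envelope check, the quoted 1 TB → 55 GB figure works out to roughly an 18x reduction (assuming the decimal convention, 1 TB = 1000 GB; the binary convention gives ~18.6x):

```python
# Compression ratio implied by the vendor's 1 TB -> 55 GB claim.
raw_gb = 1000          # 1 TB in GB, decimal convention assumed
compressed_gb = 55
ratio = raw_gb / compressed_gb
print(f"{ratio:.1f}x")  # → 18.2x
```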
  • 42
    Kyvos Semantic Layer

    Kyvos Semantic Layer

    Kyvos Insights

    Kyvos is a semantic layer for AI and BI. It gives organizations a single, consistent, business-friendly view of their entire data estate. By standardizing how data, metrics, KPIs, and business logic are defined across tools, Kyvos eliminates metric drift and ensures that LLMs and AI agents work with governed business semantics rather than raw tables. Kyvos also delivers lightning-fast analytics at massive scale and high concurrency, including granular multidimensional analysis on the cloud, without the sluggish query times and escalating cloud costs that typically come with it. By serving analytics through its semantic layer, Kyvos grounds AI in governed business context and delivers sub-second response at scale while reducing cloud costs.
  • 43
    Phocas Software

    Phocas Software

    Phocas Software

    Phocas offers a cloud-based BI and financial planning and analysis (FP&A) platform tailored for mid-market businesses that make, move, and sell. Our intuitive platform makes data analysis and financial planning accessible and straightforward, empowering cross-functional teams to make confident, data-driven decisions. Seamlessly integrating with ERP systems like Epicor, Sage, and Infor, Phocas consolidates data into one centralized platform, unlocking ERP potential and improving workflows across the business. Phocas’ key features are intuitive dashboards, ad hoc reporting, dynamic financial statements, flexible budgeting, accurate forecasting, and automated rebate management. With real-time insights and secure access, Phocas empowers cross-functional teams to explore data and make informed decisions confidently. Whether you're preparing month-end reports, analyzing trends, managing cash flow, or optimizing rebates, the Phocas platform provides the clarity you need to stay ahead.
  • 44
    Talend Data Fabric
    Talend Data Fabric’s suite of cloud services efficiently handles all your integration and integrity challenges — on-premises or in the cloud, any source, any endpoint. Deliver trusted data at the moment you need it — for every user, every time. Ingest and integrate data, applications, files, events, and APIs from any source or endpoint to any location, on-premises and in the cloud, easier and faster with an intuitive interface and no coding. Embed quality into data management and guarantee ironclad regulatory compliance with a thoroughly collaborative, pervasive, and cohesive approach to data governance. Make the most informed decisions based on high-quality, trustworthy data derived from batch and real-time processing and bolstered with market-leading data cleaning and enrichment tools. Get more value from your data by making it available internally and externally. Extensive self-service capabilities make building APIs easy, improving customer engagement.
  • 45
    MightySignal

    MightySignal

    MightySignal

    You want all the data, and you want it now. And since MightySignal has now joined forces with AppMonsta to bring you custom mobile data feeds, there's nothing standing in your way! Are you looking for app ranking stats? App publisher data? SDK intelligence? You choose the feed type and set your desired date range, and we'll capture that data snapshot for you to programmatically slice and dice your way. Import the data directly into your business intelligence tools, ETL platform, CRM, or any internal system you desire. Subscribe to multiple data feeds to compare and contrast data sets from the Google Play Store and Apple's App Store. Or just build your own custom data feed package from the start. Each feed is a monthly snapshot of either the Google Play or Apple App Store's public details and basic publisher data (publisher contact info not included). Query global parameters or drill down to get specific app details.
    Starting Price: Free
  • 46
    OctoData

    OctoData

    SoyHuCe

    OctoData is deployed at lower cost in cloud hosting and includes personalized support, from defining your needs through to using the solution. Built on innovative open-source technologies, OctoData adapts to open up future possibilities. Its Supervisor offers a management interface that lets you quickly capture, store, and exploit a growing quantity and variety of data. With OctoData, prototype and industrialize your massive data collection solutions in the same environment, including in real time. By exploiting your data, you can obtain precise reports, explore new possibilities, increase your productivity, and improve profitability.
  • 47
    DataMax

    DataMax

    Digiterre

    DataMax is an enterprise-ready platform that takes the most complex elements of real-time data management and makes them simple to develop, deploy, and operate at scale, giving you the ability to deliver business change at greater velocity. DataMax is a unique architecture, process, and combination of specific technologies that rapidly moves an organisation from disparate data and reporting sources to a single view of its data, providing the insight needed to run the business more effectively. The system combines technologies in a unique way that delivers enterprise-strength data management. The approach has been proven to scale and is cloud-deployable. It covers time series and non-time-series data to produce analytics platforms that provide a step change in the quality of data, analysis, and reporting available to market analytics teams and, in turn, to traders.
  • 48
    Fornova

    Fornova

    Fornova

    Business intelligence solutions for hotels and travel organizations. Increase your market share by having a better understanding of how your property or chain performs against your comp set. Maintain competitive market visibility. Monitor and manage your digital distribution across all contracted and uncontracted OTA and metasearch sites. Optimize marketing spend on direct programmatic channels for better digital marketing ROI. Combine the power of Google PPS and FornovaDI. Revenue and operational analytics in a unified dashboard. Analyze customer and channel segmentation. Take immediate actions to reduce occupancy risks. BI solutions market leader with the most comprehensive global data set in travel & hospitality. We are the preferred partner to over 20,000 hotels worldwide & the biggest global OTAs. We empower the industry to optimize distribution, generate demand and maximize revenue.
  • 49
    jethro

    jethro

    jethro

    Data-driven decision-making has unleashed a surge of business data and a rise in user demand to analyze it. This trend drives IT departments to migrate off expensive enterprise data warehouses (EDWs) toward cost-effective big data platforms like Hadoop or AWS, whose total cost of ownership (TCO) is about 10 times lower. These platforms are not ideal for interactive BI applications, however, as they fail to match the high performance and user concurrency of legacy EDWs. For this exact reason, we developed Jethro. Customers use Jethro for interactive BI on big data. Jethro is a transparent middle tier that requires no changes to existing apps or data. It is self-driving, with no maintenance required. Jethro is compatible with BI tools like Tableau, Qlik, and MicroStrategy, and is data source agnostic. Jethro delivers on the demands of business users, allowing thousands of concurrent users to run complex queries over billions of records.
  • 50
    Google Cloud Logging
    Real-time log management and analysis at scale. Securely store, search, analyze, and alert on all of your log data and events. Ingest custom log data from any source into an exabyte-scale, fully managed service for your application and infrastructure logs, and analyze it in real time. Cloud Logging is supported across Google Cloud services and integrated with Cloud Monitoring, Error Reporting, and Cloud Trace, so you can quickly troubleshoot issues across your infrastructure and applications. With sub-second ingestion latency, terabyte-per-second ingestion rates, and exabytes of logs stored each month, you can securely store all of your logs from any source in one place with no management overhead. Combine the power of Cloud Logging with BigQuery for advanced analysis, and use log-based metrics to build real-time Cloud Monitoring dashboards.
    Starting Price: $0.50 per GiB
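A log-based metric is, conceptually, a counter derived from log entries that match a filter. The sketch below (illustrative only: the JSON lines are invented stand-ins for Cloud Logging's structured log entries, and this runs locally rather than against the service) shows that filter-then-count shape:

```python
import json
from collections import Counter

# Invented stand-ins for structured log entries; a real deployment
# would read these from the logging backend, not a list of strings.
log_lines = [
    '{"severity": "ERROR", "message": "db timeout"}',
    '{"severity": "INFO", "message": "request ok"}',
    '{"severity": "ERROR", "message": "db timeout"}',
]

# Count entries per severity: the same shape as a counter metric
# defined over a logs filter such as severity >= ERROR.
by_severity = Counter(json.loads(line)["severity"] for line in log_lines)
print(by_severity["ERROR"])  # → 2
```

In the managed service, the counting happens at ingest time, so the resulting metric can feed Cloud Monitoring dashboards and alerts without re-querying stored logs.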