Alternatives to Apache Hive

Compare Apache Hive alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Apache Hive in 2026. Compare features, ratings, user reviews, pricing, and more from Apache Hive competitors and alternatives in order to make an informed decision for your business.

  • 1
    Google Cloud BigQuery
    BigQuery is a serverless, multicloud data warehouse that simplifies the process of working with all types of data so you can focus on getting valuable business insights quickly. At the core of Google’s data cloud, BigQuery allows you to simplify data integration, cost effectively and securely scale analytics, share rich data experiences with built-in business intelligence, and train and deploy ML models with a simple SQL interface, helping to make your organization’s operations more data-driven. Gemini in BigQuery offers AI-driven tools for assistance and collaboration, such as code suggestions, visual data preparation, and smart recommendations designed to boost efficiency and reduce costs. BigQuery delivers an integrated platform featuring SQL, a notebook, and a natural language-based canvas interface, catering to data professionals with varying coding expertise. This unified workspace streamlines the entire analytics process.
  • 2
    HiveMQ

    HiveMQ is the Industrial AI Platform helping enterprises move from connected devices to intelligent operations. Built on the MQTT standard and a distributed edge-to-cloud architecture, HiveMQ connects and governs industrial data in real time, enabling organizations to act with intelligence. With proven reliability, scalability, and interoperability, HiveMQ provides the foundation industrial companies need to operationalize AI, powering the next generation of intelligent industry. Global leaders including Audi, BMW, Eli Lilly, Liberty Global, Mercedes-Benz, and Siemens trust HiveMQ to run their most mission-critical operations.
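    HiveMQ's foundation is the MQTT standard, whose subscriptions use topic filters with two wildcards: `+` matches exactly one topic level and `#` matches all remaining levels. As a minimal sketch of those matching rules (an illustration of the MQTT spec, not HiveMQ's implementation; the topic names are invented):

```python
def topic_matches(filter_: str, topic: str) -> bool:
    """Match an MQTT topic against a subscription filter.

    Per the MQTT spec: '+' matches exactly one topic level, and
    '#' matches all remaining levels and must be the last level.
    """
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":                       # multi-level wildcard: match the rest
            return True
        if i >= len(t_levels):             # filter is longer than the topic
            return False
        if f != "+" and f != t_levels[i]:  # '+' matches any single level
            return False
    return len(f_levels) == len(t_levels)

print(topic_matches("factory/+/temperature", "factory/line1/temperature"))  # True
print(topic_matches("factory/#", "factory/line1/pressure"))                 # True
print(topic_matches("factory/+/temperature", "factory/line1/pressure"))     # False
```

This level-by-level matching is what lets a broker route millions of device messages to only the interested subscribers.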
  • 3
    StarTree

    StarTree, powered by Apache Pinot™, is a fully managed real-time analytics platform built for customer-facing applications that demand instant insights on the freshest data. Unlike traditional data warehouses or OLTP databases, which are optimized for back-office reporting or transactions, StarTree is engineered for real-time OLAP at true scale, meaning:
    - Data Volume: query performance sustained at petabyte scale
    - Ingest Rates: millions of events per second, continuously indexed for freshness
    - Concurrency: thousands to millions of simultaneous users served with sub-second latency
    With StarTree, businesses deliver always-fresh insights at interactive speed, enabling applications that personalize, monitor, and act in real time.
  • 4
    Hive Moderation
    Hive’s complete solution to protect your platform. Mobilizing the world's largest distributed workforce of humans labeling data, we are raising the bar for automated content moderation. We offer both best-in-class models as well as manual moderation, allowing us to provide solutions at scale and outperform contract workforces of business process outsourcers (BPOs). In addition to our best-in-class models, our distributed workforce can meet a variety of manual moderation needs. Whether you want to manually moderate user content or annotate training data at scale, our distributed system and consensus policy provide a level of precision that our competitors cannot.
  • 5
    Delta Lake

    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and without transactions, data engineers have to go through a tedious process to ensure data integrity. Delta Lake brings ACID transactions to your data lakes and provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions for audits, rollbacks, or to reproduce experiments.
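    The transaction-log idea behind this can be sketched in a few lines: every write appends a commit to an ordered log, and a snapshot is just the table state obtained by replaying the log up to some version. This toy, in-memory model (file names and the `ToyDeltaLog` class are invented for illustration; Delta's real log is a directory of JSON/Parquet files with a full commit protocol) shows both snapshot isolation and time travel:

```python
class ToyDeltaLog:
    """Toy model of a table whose state is an ordered log of commits."""

    def __init__(self):
        self.log = []  # ordered list of (action, files) commits

    def commit(self, action, files):
        self.log.append((action, list(files)))
        return len(self.log) - 1          # version number of this commit

    def snapshot(self, version=None):
        """Replay the log up to `version` to reconstruct the table's files."""
        if version is None:
            version = len(self.log) - 1
        files = set()
        for action, fs in self.log[: version + 1]:
            if action == "add":
                files |= set(fs)
            elif action == "remove":
                files -= set(fs)
        return files

t = ToyDeltaLog()
v0 = t.commit("add", ["part-0.parquet"])
v1 = t.commit("add", ["part-1.parquet"])
v2 = t.commit("remove", ["part-0.parquet"])   # e.g. a delete or compaction

print(sorted(t.snapshot()))    # current state: ['part-1.parquet']
print(sorted(t.snapshot(v0)))  # time travel:   ['part-0.parquet']
```

Because readers replay a fixed prefix of the log, a concurrent writer appending a new commit never changes what an in-flight reader sees.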
  • 6
    Hadoop

    Apache Software Foundation

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failures. A wide variety of companies and organizations use Hadoop for both research and production. Users are encouraged to add themselves to the Hadoop PoweredBy wiki page. Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2).
  • 7
    OpenText Analytics Database (Vertica)
    OpenText Analytics Database is a high-performance, scalable analytics platform that enables organizations to analyze massive data sets quickly and cost-effectively. It supports real-time analytics and in-database machine learning to deliver actionable business insights. The platform can be deployed flexibly across hybrid, multi-cloud, and on-premises environments to optimize infrastructure and reduce total cost of ownership. Its massively parallel processing (MPP) architecture handles complex queries efficiently, regardless of data size. OpenText Analytics Database also features compatibility with data lakehouse architectures, supporting formats like Parquet and ORC. With built-in machine learning and broad language support, it empowers users from SQL experts to Python developers to derive predictive insights.
  • 8
    Trino

    Trino

    Trino

    Trino is a query engine that runs at ludicrous speed. It is a fast, distributed SQL query engine for big data analytics that helps you explore your data universe. Trino is highly parallel and distributed, built from the ground up for efficient, low-latency analytics. The largest organizations in the world use Trino to query exabyte-scale data lakes and massive data warehouses alike. It supports diverse use cases: ad-hoc analytics at interactive speeds, massive multi-hour batch queries, and high-volume apps that perform sub-second queries. Trino is an ANSI SQL-compliant query engine that works with BI tools such as R, Tableau, Power BI, Superset, and many others. You can natively query data in Hadoop, S3, Cassandra, MySQL, and many others, without the need for complex, slow, and error-prone processes for copying the data, and you can access data from multiple systems within a single query.
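    Trino's signature trick is joining data that lives in different systems inside one SQL statement. As a stand-in that runs anywhere, this sketch uses Python's built-in sqlite3 with a second database ATTACHed under its own schema name, an analogy for querying, say, MySQL and S3 together through Trino's catalogs (this is not Trino's client API, and the table names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("ATTACH DATABASE ':memory:' AS crm")   # a second, separate database

# One "catalog" holds orders, the other holds customers.
con.execute("CREATE TABLE main.orders (id INTEGER, customer_id INTEGER, total REAL)")
con.execute("CREATE TABLE crm.customers (id INTEGER, name TEXT)")
con.executemany("INSERT INTO main.orders VALUES (?, ?, ?)",
                [(1, 10, 99.5), (2, 20, 15.0)])
con.executemany("INSERT INTO crm.customers VALUES (?, ?)",
                [(10, "Ada"), (20, "Grace")])

# A single query joins across the two databases, no data copying beforehand.
rows = con.execute(
    "SELECT c.name, o.total "
    "FROM main.orders o JOIN crm.customers c ON o.customer_id = c.id "
    "ORDER BY c.name"
).fetchall()
print(rows)  # [('Ada', 99.5), ('Grace', 15.0)]
```

In Trino the schema prefixes would be catalog names (`mysql.shop.orders`, `hive.lake.customers`), but the shape of the federated join is the same.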
  • 9
    Apache Drill

    The Apache Software Foundation

    Schema-free SQL Query Engine for Hadoop, NoSQL and Cloud Storage
  • 10
    Apache HBase

    The Apache Software Foundation

    Use Apache HBase™ when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables, billions of rows by millions of columns, atop clusters of commodity hardware. HBase offers automatic failover support between RegionServers, an easy-to-use Java API for client access, and a Thrift gateway and a RESTful web service that support XML, Protobuf, and binary data encoding options. It also supports exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
  • 11
    Apache Hudi

    Apache Software Foundation

    Hudi is a rich platform to build streaming data lakes with incremental data pipelines on a self-managing database layer, while being optimized for lake engines and regular batch processing. Hudi maintains a timeline of all actions performed on the table at different instants of time, which helps provide instantaneous views of the table while also efficiently supporting retrieval of data in the order of arrival. A Hudi instant consists of an action type, an instant time, and a state. Hudi provides efficient upserts by mapping a given hoodie key consistently to a file id via an indexing mechanism. This mapping between record key and file group/file id never changes once the first version of a record has been written to a file. In short, the mapped file group contains all versions of a group of records.
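    The key-to-file-group mapping can be pictured with a toy index: the first write of a key assigns it a file group, and every later upsert of that key lands in the same group, so all versions of a record stay together. (The `ToyHudiIndex` class and the keys below are invented for illustration; this is the idea, not Hudi's code.)

```python
class ToyHudiIndex:
    """Toy model of Hudi's index: record key -> file group, fixed at first write."""

    def __init__(self):
        self.key_to_file_group = {}
        self.file_groups = {}        # file_group -> {key: latest record}
        self.next_group = 0

    def upsert(self, key, record):
        group = self.key_to_file_group.get(key)
        if group is None:                       # first write: assign a group
            group = f"fg-{self.next_group}"
            self.next_group += 1
            self.key_to_file_group[key] = group
        # later versions of the record always land in the same group
        self.file_groups.setdefault(group, {})[key] = record
        return group

idx = ToyHudiIndex()
g1 = idx.upsert("order-1", {"amount": 10})
g2 = idx.upsert("order-1", {"amount": 25})   # update goes to the same group
print(g1 == g2)                              # True
print(idx.file_groups[g1]["order-1"])        # {'amount': 25}
```

Because the mapping is stable, an upsert only needs to rewrite one file group rather than search the whole table for older versions of the record.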
  • 12
    Apache Iceberg

    Apache Software Foundation

    Iceberg is a high-performance format for huge analytic tables. Iceberg brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time. Iceberg supports flexible SQL commands to merge new data, update existing rows, and perform targeted deletes. Iceberg can eagerly rewrite data files for read performance, or it can use delete deltas for faster updates. Iceberg handles the tedious and error-prone task of producing partition values for rows in a table and skips unnecessary partitions and files automatically. No extra filters are needed for fast queries, and the table layout can be updated as data or queries change.
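    Iceberg's "hidden partitioning" means the table derives partition values from row data using a declared transform (for example, the day of a timestamp) and then skips files whose partition values cannot match a filter, with no partition column in the query. A toy model of that planning step (file names and the `plan_scan` helper are invented; this is the idea, not Iceberg's API):

```python
from datetime import datetime, date

def day_transform(ts: datetime) -> date:
    """Iceberg-style day() transform: derive the partition value from the row."""
    return ts.date()

# Data files with the partition value each carries, computed at write time.
files = {
    "f1.parquet": day_transform(datetime(2024, 5, 1, 9)),
    "f2.parquet": day_transform(datetime(2024, 5, 2, 14)),
    "f3.parquet": day_transform(datetime(2024, 5, 2, 23)),
}

def plan_scan(predicate_day: date):
    """Keep only files whose partition value can satisfy the filter."""
    return sorted(f for f, d in files.items() if d == predicate_day)

# A query filtering on the timestamp scans two of the three files.
print(plan_scan(date(2024, 5, 2)))  # ['f2.parquet', 'f3.parquet']
```

Because the transform is part of the table metadata, the layout can later change (say, to hourly partitions) without rewriting queries.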
  • 13
    Apache Kylin

    Apache Software Foundation

    Apache Kylin™ is an open source, distributed Analytical Data Warehouse for Big Data; it was designed to provide OLAP (Online Analytical Processing) capability in the big data era. By renovating multi-dimensional cube and precalculation technology on Hadoop and Spark, Kylin is able to achieve near-constant query speed regardless of the ever-growing data volume. Reducing query latency from minutes to sub-second, Kylin brings online analytics back to big data. Kylin can analyze more than 10 billion rows in less than a second. No more waiting on reports for critical decisions. Kylin connects data on Hadoop to BI tools like Tableau, PowerBI/Excel, MSTR, QlikSense, Hue and SuperSet, making BI on Hadoop faster than ever. As an Analytical Data Warehouse, Kylin offers ANSI SQL on Hadoop/Spark and supports most ANSI SQL query functions. Kylin can support thousands of interactive queries at the same time, thanks to the low resource consumption of each query.
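    The precalculation idea is what makes the near-constant query speed possible: aggregates are computed once at build time for every combination of dimensions (the "cube"), so a query becomes a lookup instead of a scan. A tiny stdlib sketch of that trade (the rows and dimensions are invented; Kylin builds real cubes on Hadoop/Spark, not in memory):

```python
from itertools import combinations
from collections import defaultdict

rows = [
    {"region": "EU", "product": "a", "sales": 10},
    {"region": "EU", "product": "b", "sales": 5},
    {"region": "US", "product": "a", "sales": 7},
]
dimensions = ("region", "product")

# Build time: aggregate every subset of dimensions ("cuboids") once.
cube = defaultdict(float)
for row in rows:
    for r in range(len(dimensions) + 1):
        for dims in combinations(dimensions, r):
            key = (dims, tuple(row[d] for d in dims))
            cube[key] += row["sales"]

# Query time: constant-time lookups, no scan of the raw rows.
print(cube[(("region",), ("EU",))])                # 15.0
print(cube[(("region", "product"), ("EU", "a"))])  # 10.0
print(cube[((), ())])                              # grand total: 22.0
```

The cost is paid once and stored, which is why query latency stays flat as the raw data grows.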
  • 14
    Apache Sentry

    Apache Software Foundation

    Apache Sentry™ is a system for enforcing fine-grained, role-based authorization to data and metadata stored on a Hadoop cluster. Apache Sentry successfully graduated from the Incubator in March 2016 and is now a Top-Level Apache project. Sentry provides the ability to control and enforce precise levels of privileges on data for authenticated users and applications on a Hadoop cluster. It currently works out of the box with Apache Hive, Hive Metastore/HCatalog, Apache Solr, Impala, and HDFS (limited to Hive table data). Sentry is designed to be a pluggable authorization engine for Hadoop components. It allows you to define authorization rules to validate a user or application's access requests for Hadoop resources. Sentry is highly modular and can support authorization for a wide variety of data models in Hadoop.
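    The role-based model works in two hops: privileges are granted to roles, users belong to roles, and an access request is allowed only if some role of the user grants that privilege on the object. A minimal sketch of such a check (the roles, users, and table names are invented; this is the RBAC idea, not Sentry's policy engine):

```python
# privilege grants: role -> set of (action, object) pairs
role_privileges = {
    "analyst": {("SELECT", "sales_db.orders")},
    "etl":     {("SELECT", "sales_db.orders"), ("INSERT", "sales_db.orders")},
}
# membership: user -> set of roles
user_roles = {"alice": {"analyst"}, "bob": {"etl"}}

def authorized(user: str, action: str, obj: str) -> bool:
    """Allow the request if any of the user's roles grants the privilege."""
    return any((action, obj) in role_privileges.get(role, set())
               for role in user_roles.get(user, set()))

print(authorized("alice", "SELECT", "sales_db.orders"))  # True
print(authorized("alice", "INSERT", "sales_db.orders"))  # False
print(authorized("bob", "INSERT", "sales_db.orders"))    # True
```

Keeping grants on roles rather than on individual users is what makes the rules manageable and auditable at cluster scale.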
  • 15
    Apache Impala
    Impala provides low latency and high concurrency for BI/analytic queries on the Hadoop ecosystem, including Iceberg, open data formats, and most cloud storage options. Impala also scales linearly, even in multitenant environments. Impala is integrated with native Hadoop security and Kerberos for authentication, and via the Ranger module, you can ensure that the right users and applications are authorized for the right data. Utilize the same file and data formats and metadata, security, and resource management frameworks as your Hadoop deployment, with no redundant infrastructure or data conversion/duplication. For Apache Hive users, Impala utilizes the same metadata and ODBC driver. Like Hive, Impala supports SQL, so you don't have to worry about reinventing the implementation wheel. With Impala, more users, whether using SQL queries or BI applications, can interact with more data through a single repository and metadata stored from source through analysis.
  • 16
    Apache Spark

    Apache Software Foundation

    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
  • 17
    Apache Phoenix

    Apache Software Foundation

    Apache Phoenix enables OLTP and operational analytics in Hadoop for low-latency applications by combining the best of both worlds: the power of standard SQL and JDBC APIs with full ACID transaction capabilities, and the flexibility of late-bound, schema-on-read capabilities from the NoSQL world, leveraging HBase as its backing store. Apache Phoenix is fully integrated with other Hadoop products such as Spark, Hive, Pig, Flume, and MapReduce, and aims to become the trusted data platform for OLTP and operational analytics on Hadoop through well-defined, industry-standard APIs. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. Direct use of the HBase API, along with coprocessors and custom filters, results in performance on the order of milliseconds for small queries, or seconds for tens of millions of rows.
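    The "compile SQL into scans" step is worth seeing in miniature: a predicate on the leading part of the row key becomes a bounded scan (start key, exclusive stop key) instead of a full-table sweep. A simplified model of the idea (the `compile_key_range` helper and the row keys are invented; this is not Phoenix's planner):

```python
def compile_key_range(prefix: str):
    """Turn `WHERE key LIKE 'prefix%'` into an HBase-style scan range."""
    start = prefix
    # stop key: the prefix with its last character incremented
    # (an exclusive upper bound, so every key starting with the prefix is kept)
    stop = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return start, stop

# Row keys in an HBase table are stored sorted, so a range scan is cheap.
table = ["org1|row1", "org1|row2", "org2|row1", "org3|row1"]

start, stop = compile_key_range("org1")
scanned = [k for k in table if start <= k < stop]
print((start, stop))  # ('org1', 'org2')
print(scanned)        # ['org1|row1', 'org1|row2']
```

Pushing the predicate into the key range is why small Phoenix queries can answer in milliseconds: HBase only touches the rows in the range.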
  • 18
    E-MapReduce
    EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems such as Hadoop, Spark, Kafka, Flink, and Storm. Alibaba Cloud Elastic MapReduce (EMR) is a big data processing solution that runs on the Alibaba Cloud platform. EMR is built on Alibaba Cloud ECS instances and is based on open-source Apache Hadoop and Apache Spark. EMR allows you to use Hadoop and Spark ecosystem components, such as Apache Hive, Apache Kafka, Flink, Druid, and TensorFlow, to analyze and process data. You can use EMR to process data stored on different Alibaba Cloud data storage services, such as Object Storage Service (OSS), Log Service (SLS), and Relational Database Service (RDS). You can quickly create clusters without the need to configure hardware and software, and all maintenance operations are completed through its web interface.
  • 19
    Amazon EMR
    Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run petabyte-scale analysis at less than half the cost of traditional on-premises solutions and more than 3x faster than standard Apache Spark. For short-running jobs, you can spin up and spin down clusters and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand. If you have existing on-premises deployments of open-source tools such as Apache Spark and Apache Hive, you can also run EMR clusters on AWS Outposts. Analyze data using open-source ML frameworks such as Apache Spark MLlib, TensorFlow, and Apache MXNet. Connect to Amazon SageMaker Studio for large-scale model training, analysis, and reporting.
  • 20
    Apache Derby
    Apache Derby, an Apache DB subproject, is an open source relational database implemented entirely in Java and available under the Apache License, Version 2.0. Derby has a small footprint - about 3.5 megabytes for the base engine and embedded JDBC driver. Derby provides an embedded JDBC driver that lets you embed Derby in any Java-based solution. Derby also supports the more familiar client/server mode with the Derby Network Client JDBC driver and Derby Network Server.
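    "Embedded" here means the database engine runs inside your application process against a local file, with no server to install or start. Derby does this through its embedded JDBC driver in Java; as a runnable stand-in, Python's built-in sqlite3 works the same way (this is an analogy to the embedded-database pattern, not Derby's API, and the file and table names are invented):

```python
import os
import sqlite3
import tempfile

# The "database" is just a file; opening it loads the engine in-process.
path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO users (name) VALUES ('derby-fan')")
con.commit()
con.close()

# A later connection (or a later process run) reopens the same file:
# the data persisted without any database server running.
con2 = sqlite3.connect(path)
result = con2.execute("SELECT name FROM users").fetchall()
print(result)  # [('derby-fan',)]
```

Derby additionally offers the client/server mode mentioned above (Derby Network Server), so the same database can be shared by multiple processes when needed.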
  • 21
    Apache Doris

    The Apache Software Foundation

    Apache Doris is a modern data warehouse for real-time analytics. It delivers lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within a second. Storage engine with real-time upsert, append and pre-aggregation. Optimize for high-concurrency and high-throughput queries with columnar storage engine, MPP architecture, cost based query optimizer, vectorized execution engine. Federated querying of data lakes such as Hive, Iceberg and Hudi, and databases such as MySQL and PostgreSQL. Compound data types such as Array, Map and JSON. Variant data type to support auto data type inference of JSON data. NGram bloomfilter and inverted index for text searches. Distributed design for linear scalability. Workload isolation and tiered storage for efficient resource management. Supports shared-nothing clusters as well as separation of storage and compute.
  • 22
    VeloDB

    Powered by Apache Doris, VeloDB is a modern data warehouse for lightning-fast analytics on real-time data at scale. Push-based micro-batch and pull-based streaming data ingestion within seconds. Storage engine with real-time upsert, append, and pre-aggregation. Unparalleled performance in both real-time data serving and interactive ad-hoc queries. Not just structured but also semi-structured data. Not just real-time analytics but also batch processing. Not just queries against internal data, but also a federated query engine to access external data lakes and databases. Distributed design to support linear scalability. Whether deployed on-premises or as a cloud service, with storage and compute separated or integrated, resource usage can be flexibly and efficiently adjusted to workload requirements. Built on and fully compatible with open source Apache Doris. Supports the MySQL protocol, functions, and SQL for easy integration with other data tools.
  • 23
    Oracle Big Data SQL Cloud Service
    Oracle Big Data SQL Cloud Service enables organizations to immediately analyze data across Apache Hadoop, NoSQL, and Oracle Database, leveraging their existing SQL skills, security policies, and applications with extreme performance. From simplifying data science efforts to unlocking data lakes, Big Data SQL makes the benefits of Big Data available to the largest group of end users possible. Big Data SQL gives users a single location to catalog and secure data in Hadoop, NoSQL systems, and Oracle Database. Seamless metadata integration and queries that join data from Oracle Database with data from Hadoop and NoSQL databases. Utilities and conversion routines support automatic mappings from metadata stored in HCatalog (or the Hive Metastore) to Oracle tables. Enhanced access parameters give administrators the flexibility to control column mapping and data access behavior. Multiple cluster support enables one Oracle Database to query multiple Hadoop clusters and/or NoSQL systems.
  • 24
    PySpark

    PySpark is an interface for Apache Spark in Python. It not only allows you to write Spark applications using Python APIs, but also provides the PySpark shell for interactively analyzing your data in a distributed environment. PySpark supports most of Spark's features, such as Spark SQL, DataFrame, Streaming, MLlib (Machine Learning), and Spark Core. Spark SQL is a Spark module for structured data processing; it provides a programming abstraction called DataFrame and can also act as a distributed SQL query engine. Running on top of Spark, Spark's streaming feature enables powerful interactive and analytical applications across both streaming and historical data, while inheriting Spark's ease of use and fault tolerance characteristics.
  • 25
    R2 SQL

    Cloudflare

    R2 SQL is Cloudflare’s serverless, distributed analytics query engine (currently in open beta) that enables you to run SQL queries over Apache Iceberg tables stored in R2 Data Catalog without needing to manage your own compute clusters. It is built to efficiently query large volumes of data by leveraging metadata pruning, partition-level statistics, file and row-group filtering, and Cloudflare’s globally distributed compute infrastructure to parallelize execution. The system works by integrating with R2 object storage and an Iceberg catalog layer, so you can ingest data via Cloudflare Pipelines into Iceberg tables and then query that data with minimal overhead. Queries can be issued via the Wrangler CLI or HTTP API (with an API token granting permissions across R2 SQL, Data Catalog, and storage). During the open beta period, using R2 SQL itself is not billed; only storage and standard R2 operations incur charges.
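    The metadata pruning mentioned above rests on a simple mechanism: each data file carries min/max statistics per column, so a filter like `WHERE ts >= 100` can discard whole files from the plan before any data is read. A simplified model of that technique (the file names, column, and `prune` helper are invented; this is the general stats-based pruning idea, not Cloudflare's engine):

```python
# Per-file column statistics, as recorded in Iceberg-style table metadata.
files = [
    {"name": "a.parquet", "ts_min": 0,   "ts_max": 99},
    {"name": "b.parquet", "ts_min": 100, "ts_max": 199},
    {"name": "c.parquet", "ts_min": 150, "ts_max": 260},
]

def prune(files, lower_bound):
    """Keep only files whose max value could satisfy `ts >= lower_bound`."""
    return [f["name"] for f in files if f["ts_max"] >= lower_bound]

# The planner drops a.parquet without opening it.
print(prune(files, 100))  # ['b.parquet', 'c.parquet']
```

The same reasoning extends to partition-level statistics and row groups inside a file, which is how a query can touch a tiny fraction of a large table.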
  • 26
    Hive

    Hive Technology

    Increase productivity among team members with Hive. Hive is a powerful project management and collaboration platform that offers a plethora of features in one robust solution. The platform comes with transparent project management tools, team communication, easy file storage and sharing, time tracking, and app integrations.
    Starting Price: $16 per user per month
  • 27
    Azure HDInsight
    Run popular open-source frameworks—including Apache Hadoop, Spark, Hive, Kafka, and more—using Azure HDInsight, a customizable, enterprise-grade service for open-source analytics. Effortlessly process massive amounts of data and get all the benefits of the broad open-source project ecosystem with the global scale of Azure. Easily migrate your big data workloads and processing to the cloud. Open-source projects and clusters are easy to spin up quickly without the need to install hardware or manage infrastructure. Big data clusters reduce costs through autoscaling and pricing tiers that allow you to pay for only what you use. Enterprise-grade security and industry-leading compliance with more than 30 certifications helps protect your data. Optimized components for open-source technologies such as Hadoop and Spark keep you up to date.
  • 28
    Cloud BI

    Perfsys

    Cloud BI provides cloud-based business intelligence applications for marketing, sales, finance, and operations, built entirely on Amazon Web Services: no servers needed, no prepayments. Collect: AWS Lambda workers, AWS scheduled events, and token management. Transform: DynamoDB serves as a highly reliable NoSQL store for raw data and triggers transformations, with serverless ETL logic in AWS Lambda fired by DynamoDB Streams. Store: AWS S3 with CSV files as lightweight, inexpensive object storage that integrates well with big data HDFS distributed storage. Explore: AWS Athena, an open source big data ecosystem solution based on Hadoop Hive, reads CSV files in S3 as a native data source with SQL-like queries. Present: AWS QuickSight for BI dashboards, using Athena + S3 as a data source, with web and mobile QuickSight clients that allow drill-downs, filters, and much more.
  • 29
    Hive Engine

    Hive Engine is a platform that empowers communities, project owners, fundraisers, developers, and businesses to quickly and easily build on the Hive blockchain like never before. Up until now, you’ve only been tapping into a very small fraction of Hive's true potential. It’s like you’ve been using the Death Star to charge your phone. Hive Engine unlocks unlimited possibilities by adding a layer of functionality that seamlessly integrates with the blockchain. The Hive Engine platform makes smart contracts on the Hive blockchain a reality. The first smart contract being introduced is the ability to quickly and easily create custom tokens. We wanted to get Tokens out to you guys ASAP, but don’t worry, there’s lots more to come. This includes more robust token management and an internal market to trade tokens against Hive, just like the existing HIVE/SBD market. All easy to deploy whether you are a developer or not.
  • 30
    Tabular

    Tabular is an open table store from the creators of Apache Iceberg. Connect multiple computing engines and frameworks. Decrease query time and storage costs by up to 50%. Centralize enforcement of data access (RBAC) policies. Connect any query engine or framework, including Athena, BigQuery, Redshift, Snowflake, Databricks, Trino, Spark, and Python. Smart compaction, clustering, and other automated data services reduce storage costs and query times by up to 50%. Unify data access at the database or table level. RBAC controls are simple to manage, consistently enforced, and easy to audit. Centralize your security down to the table. Tabular is easy to use, with high-powered ingestion, performance, and RBAC under the hood. Tabular gives you the flexibility to work with multiple “best of breed” compute engines based on their strengths. Assign privileges at the data warehouse, database, table, or column level.
    Starting Price: $100 per month
  • 31
    Hive

    Hive has a thriving ecosystem of over 126 apps, communities & projects and is home to some of the most-used Web3 apps in the world, such as Splinterlands, PeakD and HiveBlog. Wallets are incredibly important to securely store your cryptocurrencies and to interact with Web3 apps. Hive has multiple community-owned and open-source wallets available for Windows, macOS, Linux, iOS, Android & Web. The development of Hive and its ecosystem is made possible by contributors. To incentivize crucial work, such as core development, a DAO-like structure, the Decentralized Hive Fund (DHF), is used to fund important work.
  • 32
    Dremio

    Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable.
  • 33
    HiveOtter

    HiveOtter is an innovative platform designed to transform your satisfied customers into effective brand advocates. It achieves this by streamlining the creation and management of referral marketing programs, with a focus on automating discount coupon distribution through personalized referral links. The process is straightforward yet powerful. HiveOtter enables you to establish a customized referral program for your business in a matter of minutes. Once implemented, your customers can easily share unique referral links within their networks. When a new customer makes a purchase using this link, HiveOtter automatically generates and sends a discount coupon to the referrer as a reward. This automation is a key strength of HiveOtter. It eliminates the need for manual tracking and reward distribution, thereby saving valuable time and ensuring that every referral is properly acknowledged.
    Starting Price: $14/month
  • 34
    HiveDrive

    SilentWave

    HiveDrive is a Common Data Environment (CDE) designed for decentralized collaboration in Engineering, Architecture, Mechanical, and Graphic Design. It integrates Web3 and Distributed Ledger Technology (DLT) for secure data sharing and concurrent teamwork. Features:
    - Design Platform Plug-ins: connects with leading design software.
    - IFC Viewer & IDS: supports BIM workflows.
    - Project Management Suite: streamlines tasks and approvals.
    - Chat on Files: enables direct communication within projects.
    - Smart Sync & Deduplication: saves only file changes, optimizing storage and performance.
    - HiveDrive HUB: a private cloud infrastructure for professionals working across multiple locations and time zones.
    - Decentralized Data Ownership: ensures security and control over project files without reliance on centralized servers.
    - Cross-Platform Compatibility: supports all file formats, enabling seamless collaboration across different software.
    Try HiveDrive today at hivedrive.eu! 🚀
    Starting Price: €9/month/user
  • 35
    Liketu

    Liketu, pronounced "like to", is a photo sharing website built on the HIVE blockchain. It allows creators to share photos and receive HIVE rewards from upvotes. Add paywalls to premium images for an extra level of monetization. Curators can also earn by upvoting content using their acquired Hive Power.
  • 36
    LeoDex

    LeoDex is an exchange interface that connects to the Hive-Engine project on the Hive blockchain. With LeoDex, you can trade and manage your Hive-based tokens. We’ve added (and continue to add) a wide variety of features based on what our community asks for. You can think of this page as a documentation guide for LeoDex, we’ll explore each page of the Dex and talk about the core features that are available.
  • 37
    Cedara Hive
    Hive is the world’s first platform providing businesses with an end-to-end sustainability solution specifically built for the marketing industry. Hive’s proprietary mapping engine seamlessly integrates with any data source through APIs, automatically mapping data sets to globally recognized emission factors and industry standards, empowering organizations to compute precise carbon emissions.
    Hive's mapping engine measures all media delivery across your business and harmonizes the data sets needed to work with a brand and agency's methodology. Hive streamlines the process and also ensures accuracy in assessing and mitigating carbon footprints. Accessing Hive's suite provides you with comprehensive carbon emission tracking. Clients can effortlessly monitor emissions from all business operations, including media delivery by channel, enabling informed decision-making. Stay ahead with Hive's intuitive platform.
  • 38
    Apache Xalan

    The Apache Software Foundation

    The Apache Xalan Project develops and maintains libraries and programs that transform XML documents using XSLT standard stylesheets. Our subprojects use the Java and C++ programming languages to implement the XSLT libraries. Xalan-Java 2.7.2 was released in April 2014. You can download the current Xalan-Java 2.7.2 release for your development. The current work in progress can be found in the Subversion repository. The current release fixes a security issue that was registered against version 2.7.1. The old Xalan-J 2.7.1 distributions are still available on the Apache Archives. This is a mature project. There has been some discussion about supporting XPath-2. We could use your support in this major rework of the library. You can follow the efforts and post your own contributions on the Java users and developers mail lists.
  • 39
    Hive Keychain

    Hive Keychain lets you handle all your Hive-related operations from your mobile device while keeping your keys safe, protected by a combination of pin-code encryption and biometrics (fingerprint). From the app, you can import your accounts via your private keys or a QR code (from the Hive Keychain browser extension). You then get access to your account's main information, such as VP, balances, Hive Engine tokens, delegations, and transaction history. You can broadcast your transfers, delegations, and power up/down, as well as Hive Engine operations.
  • 40
    MLlib

    Apache Software Foundation

    Apache Spark's MLlib is a scalable machine learning library that integrates seamlessly with Spark's APIs, supporting Java, Scala, Python, and R. It offers a comprehensive suite of algorithms and utilities, including classification, regression, clustering, collaborative filtering, and tools for constructing machine learning pipelines. MLlib's high-quality algorithms leverage Spark's iterative computation capabilities, delivering performance up to 100 times faster than traditional MapReduce implementations. It is designed to operate across diverse environments, running on Hadoop, Apache Mesos, Kubernetes, standalone clusters, or in the cloud, and accessing various data sources such as HDFS, HBase, and local files. This flexibility makes MLlib a robust solution for scalable and efficient machine learning tasks within the Apache Spark ecosystem.
  • 41
    Apache Mahout

    Apache Software Foundation

    Apache Mahout is a powerful, scalable, and versatile machine learning library designed for distributed data processing. It offers a comprehensive set of algorithms for tasks including classification, clustering, recommendation, and pattern mining. Built on top of the Apache Hadoop ecosystem, Mahout leverages MapReduce and Spark to process large-scale datasets. Apache Mahout(TM) is also a distributed linear algebra framework with a mathematically expressive Scala DSL, designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms; Apache Spark is the recommended out-of-the-box distributed back-end, and the framework can be extended to other distributed back-ends. Matrix computations of this kind are fundamental to many scientific and engineering applications, including machine learning, computer vision, and data analysis.
  • 42
    Tribaldex

    Tribaldex is a platform that empowers communities, project owners, fundraisers, developers, and businesses to quickly and easily build on the Hive blockchain like never before. Build a token economy with a unique smart contract on one of the cheapest and most scalable blockchains. With Tribaldex, individuals and organizations kick-start their initiatives by leveraging the low-cost, scalable features of the Hive blockchain. You can start your own tribe today and create custom tokens to power it. Enjoy processing speeds that beat time constraints while trading on Tribaldex, a flexible, mobile-friendly interface with a clear presentation, and a platform adaptable enough to meet the needs of individual users. Be a part of an amazing community already being built on the Hive blockchain.
  • 43
    HiveSocial

    Enterprise Hive

    Enterprise Hive’s engagement platform for higher education transforms institutions into an engaged campus where all internal and external constituents are connected. HiveSocial for Higher Education is a safe, secure enterprise engagement solution that enables students, faculty, staff, administration, alumni, corporations, and communities to communicate, collaborate, and share knowledge in an environment familiar to social media users. As the two-way communication hub for colleges and universities, HiveSocial for Higher Education is a technologically advanced social business software solution that includes a full suite of collaboration tools accessible on any mobile device. These tools include activity streams, blogs, forums, communities, mail, online chat, document storage, wikis, video, photo and audio sharing, and more.
    Starting Price: $3000 per month
  • 44
    Apache Trafodion

    Apache Software Foundation

    Apache Trafodion is a web-scale SQL-on-Hadoop solution enabling transactional or operational workloads on Apache Hadoop. Trafodion builds on the scalability, elasticity, and flexibility of Hadoop and extends it with guaranteed transactional integrity, enabling new kinds of big data applications to run on Hadoop. It offers full-featured ANSI SQL language support, JDBC/ODBC connectivity for Linux and Windows clients, and distributed ACID transaction protection across multiple statements, tables, and rows, guaranteeing data consistency. Compile-time and run-time optimizations improve performance for OLTP workloads, and a parallel-aware query optimizer supports large data sets. You can reuse existing SQL skills to improve developer productivity, and Trafodion interoperates with existing tools and applications, is Hadoop and Linux distribution neutral, and is easy to add to your existing Hadoop infrastructure.
  • 45
    Deeplearning4j

    Eclipse Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Apache Spark, DL4J brings AI to business environments for use on distributed GPUs and CPUs, and it takes advantage of the latest distributed computing frameworks to accelerate training. On multi-GPU setups, its performance is comparable to Caffe. The libraries are completely open source (Apache 2.0) and maintained by the developer community and the Konduit team. Deeplearning4j is written in Java and is compatible with any JVM language, such as Scala, Clojure, or Kotlin; the underlying computations are written in C, C++, and CUDA, and Keras serves as the Python API. There are many parameters to adjust when training a deep-learning network, and we've done our best to explain them so that Deeplearning4j can serve as a DIY tool for Java, Scala, Clojure, and Kotlin programmers.
  • 46
    Hive Marketing Cloud

    Hive Marketing Cloud is a customer intelligence and engagement platform. Hive is a privately owned business, founded in 2010, which specialises in the travel, insurance, and retail industries. Hive helps brands engage and convert audiences at scale by deploying highly personalised, sophisticated multi-channel marketing from a single platform, surfacing all of their data for improved, relevant customer experiences. Hive gives you the tools to discover data insights, reveal customer lifetime value, create recency, frequency, and value (RFM)-based segmentation, orchestrate and automate customer journeys, and measure engagement and results, reporting on impact beyond clicks and opens.
    Starting Price: £1,750/month
  • 47
    HerdDB

    Diennea

    HerdDB is a distributed SQL database implemented in Java and designed to be embeddable in any Java Virtual Machine. It is optimized for fast writes and for primary-key read/update access patterns, and it is built to manage hundreds of tables. It is simple to add and remove hosts and to reconfigure tablespaces to easily distribute the load across multiple systems. HerdDB leverages Apache ZooKeeper and Apache BookKeeper to build a fully replicated, shared-nothing architecture without any single point of failure. At the low level, HerdDB is very similar to a key-value NoSQL database; on top of that, an SQL abstraction layer and JDBC driver support let every user leverage existing know-how and port existing applications to HerdDB. At Diennea we developed EmailSuccess, a powerful MTA (Mail Transfer Agent) designed to deliver millions of email messages per hour to inboxes all around the world.
  • 48
    Fetch Hive

    Fetch Hive is a versatile generative AI collaboration platform packed with features that enhance user experience and productivity. Custom RAG chat agents: users can create chat agents with retrieval-augmented generation, which improves response quality and relevance. Centralized data storage: a system for easily accessing and managing all the data needed for AI model training and deployment. Real-time data integration: by incorporating real-time data from Google Search, Fetch Hive enhances workflows with up-to-date information, boosting decision-making and productivity. Generative AI prompt management: the platform helps in building and managing AI prompts, enabling users to refine and achieve desired outputs efficiently. Fetch Hive is a comprehensive solution for those looking to develop and manage generative AI projects effectively, optimizing interactions with advanced features and streamlined workflows.
    Starting Price: $49/month
  • 49
    Hive.co

    Sell more with an email marketing CRM that works, and gain a team that cares about your growth and will be there to help craft and execute your email marketing strategy. Hive for Shopify: power your ecommerce email strategy with Hive's Shopify integration. Hive vs. Mailchimp: get the simplicity of Mailchimp with the functionality of an advanced CRM. Hive vs. Klaviyo: get the advanced capabilities of Klaviyo without the headaches. Gain real visibility into your list and optimize your email marketing to build customer journeys that drive revenue. It’s about more than just sending email: Hive gives you the visibility you need to understand your list and the state of your email marketing. From knowing where subscribers are in their customer journey to segments that help you action your list, Hive helps you send smarter email that sells more. Automation comes without the guesswork, with simple setup for abandoned cart, browse abandonment, and win-back email journeys.
    Starting Price: $79 per month
  • 50
    DeviceHive

    DeviceHive is an open source IoT data platform with a wide range of integration options. Its various deployment options make it an ideal fit for every company, whether a mature enterprise or a small start-up. With the Docker Compose and Kubernetes deployment options, you can go with a private, public, or hybrid cloud and scale from a single virtual machine to an enterprise-grade cluster. Don’t have time to deploy? Familiarize yourself with DeviceHive with zero deployment by signing up for our free public playground. From prototyping to enterprise solutions: one small step with DeviceHive, one giant leap for your business. DeviceHive lets you think about business development instead of technical formalities, employing the best software design practices and introducing a container-based, service-oriented architecture managed and orchestrated by Kubernetes.