Alternatives to Hazelcast Jet
Compare Hazelcast Jet alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Hazelcast Jet in 2026. Compare features, ratings, user reviews, pricing, and more from Hazelcast Jet competitors and alternatives in order to make an informed decision for your business.
-
1
Google Compute Engine
Google
Compute Engine is Google's infrastructure as a service (IaaS) platform for organizations to create and run cloud-based virtual machines. Computing infrastructure in predefined or custom machine sizes to accelerate your cloud transformation. General purpose (E2, N1, N2, N2D) machines provide a good balance of price and performance. Compute optimized (C2) machines offer high-end vCPU performance for compute-intensive workloads. Memory optimized (M2) machines offer the highest memory and are great for in-memory databases. Accelerator optimized (A2) machines are based on the A100 GPU, for very demanding applications. Integrate Compute with other Google Cloud services such as AI/ML and data analytics. Make reservations to help ensure your applications have the capacity they need as they scale. Save money just for running Compute with sustained-use discounts, and achieve greater savings when you use committed-use discounts. -
2
Dragonfly
DragonflyDB
Dragonfly is a drop-in Redis replacement that cuts costs and boosts performance. Designed to fully utilize the power of modern cloud hardware and deliver on the data demands of modern applications, Dragonfly frees developers from the limits of traditional in-memory data stores. The power of modern cloud hardware can never be realized with legacy software. Dragonfly is optimized for modern cloud computing, delivering 25x more throughput and 12x lower snapshotting latency when compared to legacy in-memory data stores like Redis, making it easy to deliver the real-time experience your customers expect. Scaling Redis workloads is expensive due to their inefficient, single-threaded model. Dragonfly is far more compute and memory efficient, resulting in up to 80% lower infrastructure costs. Dragonfly scales vertically first, only requiring clustering at an extremely high scale. This results in a far simpler operational model and a more reliable system. -
3
RaimaDB
Raima
RaimaDB is an embedded time series database for IoT and Edge devices that can run in-memory. It is an extremely powerful, lightweight and secure RDBMS, field tested by over 20,000 developers worldwide with more than 25,000,000 deployments. RaimaDB is a high-performance, cross-platform embedded database designed for mission-critical applications, particularly in the Internet of Things (IoT) and edge computing markets. It offers a small footprint, making it suitable for resource-constrained environments, and supports both in-memory and persistent storage configurations. RaimaDB provides developers with multiple data modeling options, including traditional relational models and direct relationships through network model sets. It ensures data integrity with ACID-compliant transactions and supports various indexing methods such as B+Tree, Hash Table, R-Tree, and AVL-Tree. -
4
Amazon ElastiCache
Amazon
Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open source-compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for real-time use cases like caching, session stores, gaming, geospatial services, real-time analytics, and queuing. Amazon ElastiCache offers fully managed Redis and Memcached for your most demanding applications that require sub-millisecond response times. By utilizing an end-to-end optimized stack running on customer-dedicated nodes, Amazon ElastiCache provides secure, blazing-fast performance. -
5
Amazon DynamoDB
Amazon
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second. Many of the world's fastest-growing businesses such as Lyft, Airbnb, and Redfin as well as enterprises such as Samsung, Toyota, and Capital One depend on the scale and performance of DynamoDB to support their mission-critical workloads. Focus on driving innovation with no operational overhead. Build out your game platform with player data, session history, and leaderboards for millions of concurrent users. Use design patterns for deploying shopping carts, workflow engines, inventory tracking, and customer profiles. DynamoDB supports high-traffic, extreme-scale events. -
6
Hazelcast
Hazelcast
In-Memory Computing Platform. The digital world is different. Microseconds matter. That's why the world's largest organizations rely on us to power their most time-sensitive applications at scale. New data-enabled applications can deliver transformative business power – if they meet today’s requirement of immediacy. Hazelcast solutions complement virtually any database to deliver results that are significantly faster than a traditional system of record. Hazelcast’s distributed architecture provides redundancy for continuous cluster up-time and always available data to serve the most demanding applications. Capacity grows elastically with demand, without compromising performance or availability. The fastest in-memory data grid, combined with third-generation high-speed event processing, delivered through the cloud. -
7
Red Hat Data Grid
Red Hat
Red Hat® Data Grid is an in-memory, distributed, NoSQL datastore solution. Your applications can access, process, and analyze data at in-memory speed to deliver a superior user experience. High performance, elastic scalability, always available. Quickly access your data through fast, low-latency data processing using memory (RAM) and distributed parallel execution. Achieve linear scalability with data partitioning and distribution across cluster nodes. Gain high availability through data replication across cluster nodes. Attain fault tolerance and recover from disaster through cross-datacenter geo-replication and clustering. Gain development flexibility and greater productivity with a highly versatile, functionally rich NoSQL data store. Obtain comprehensive data security with encryption and role-based access. Data Grid 7.3.10 provides a security enhancement to address a CVE. You must upgrade any Data Grid 7.3 deployments to version 7.3.10 as soon as possible. -
8
Exasol
Exasol
With an in-memory, columnar database and MPP architecture, you can query billions of rows in seconds. Queries are distributed across all nodes in a cluster, providing linear scalability for more users and advanced analytics. MPP, in-memory, and columnar storage add up to the fastest database built for data analytics. With SaaS, cloud, on premises and hybrid deployment options you can analyze data wherever it lives. Automatic query tuning reduces maintenance and overhead. Seamless integrations and performance efficiency get you more power at a fraction of normal infrastructure costs. Smart, in-memory query processing allowed one social networking company to boost performance, processing 10B data sets a year. A single data repository and speed engine to accelerate critical analytics, delivering improved patient outcomes and bottom-line results. -
9
Infinispan
Infinispan
Infinispan is an open-source in-memory data grid that offers flexible deployment options and robust capabilities for storing, managing, and processing data. Infinispan provides a key/value data store that can hold all types of data, from Java objects to plain text. Infinispan distributes your data across elastically scalable clusters to guarantee high availability and fault tolerance, whether you use Infinispan as a volatile cache or a persistent data store. Infinispan turbocharges applications by storing data closer to processing logic, which reduces latency and increases throughput. Available as a Java library, you simply add Infinispan to your application dependencies and then you’re ready to store data in the same memory space as the executing code. -
10
Intel® Server D50DNP Family
Intel
Breakthrough performance and innovation for HPC and AI workloads. If you want to accelerate your HPC workloads, the Intel® Server D50DNP Family is the right platform for you. Powered by 4th Gen Intel® Xeon® Scalable processors or the Intel® Xeon® CPU Max Series, the Intel® Server D50DNP Family delivers exceptional compute performance, enhanced AI and in-memory analytics acceleration built into the processor, and increased I/O throughput versus previous generation servers. Delivers breakthrough memory bandwidth (1TB/sec) with on-chip, High Bandwidth Memory (HBM2e) for memory-intensive workloads. You can deploy and adapt the Intel® Server D50DNP Family to meet your ever-changing needs. Compute, management, and accelerator modules enable you to easily scale cluster resources to fit workload demands. Advanced, next-generation AI and in-memory analytics accelerators are built into the processor to speed up HPC workloads.
-
11
kdb+
KX Systems
A high-performance cross-platform historical time-series columnar database featuring:
- An in-memory compute engine
- A real-time streaming processor
- An expressive query and programming language called q
kdb+ powers the kdb Insights portfolio and KDB.AI, together delivering time-oriented data insights and generative AI capabilities to the world’s leading enterprise organizations. Independently benchmarked as the fastest in-memory, columnar analytics database available, kdb+ delivers unmatched value to businesses operating in the toughest data environments. kdb+ improves decision-making processes to help navigate rapidly changing data landscapes. -
12
Apache Geode
Apache
Build high-speed, data-intensive applications that elastically meet performance requirements at any scale. Take advantage of Apache Geode's unique technology that blends advanced techniques for data replication, partitioning and distributed processing. Apache Geode provides a database-like consistency model, reliable transaction processing and a shared-nothing architecture to maintain very low latency performance with high concurrency processing. Data can easily be partitioned (sharded) or replicated between nodes allowing performance to scale as needed. Durability is ensured through redundant in-memory copies and disk-based persistence. Super fast write-ahead-logging (WAL) persistence with a shared-nothing architecture that is optimized for fast parallel recovery of nodes or an entire cluster. -
13
Apache Ignite
Apache Ignite
Use Ignite as a traditional SQL database by leveraging JDBC drivers, ODBC drivers, or the native SQL APIs that are available for Java, C#, C++, Python, and other programming languages. Seamlessly join, group, aggregate, and order your distributed in-memory and on-disk data. Accelerate your existing applications by 100x using Ignite as an in-memory cache or in-memory data grid that is deployed over one or more external databases. Think of a cache that you can query with SQL, transact, and compute on. Build modern applications that support transactional and analytical workloads by using Ignite as a database that scales beyond the available memory capacity. Ignite allocates memory for your hot data and goes to disk whenever applications query cold records. Execute kilobyte-size custom code over petabytes of data. Turn your Ignite database into a distributed supercomputer for low-latency calculations, complex analytics, and machine learning. -
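The idea of an in-memory store you can join, group, aggregate, and query with SQL can be sketched with Python's standard-library sqlite3 module and its ":memory:" database. This is a stand-in for illustration only, not Ignite's own API (Ignite is reached through JDBC/ODBC drivers or its native clients):

```python
import sqlite3

# Stand-in for an SQL-queryable in-memory store: Python's stdlib
# sqlite3 ":memory:" database. NOT Apache Ignite's API, just the idea.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, name TEXT, pop INTEGER)")
conn.executemany(
    "INSERT INTO city (id, name, pop) VALUES (?, ?, ?)",
    [(1, "Oslo", 700_000), (2, "Lima", 10_000_000), (3, "Pune", 7_000_000)],
)
# Filter, order, and aggregate over in-memory data, as the text describes.
rows = conn.execute(
    "SELECT name, pop FROM city WHERE pop > 1000000 ORDER BY pop DESC"
).fetchall()
print(rows)
```

With a real Ignite cluster, the same SQL would be issued through a JDBC/ODBC connection or a thin client rather than sqlite3, and could span data partitioned across many nodes.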
14
GridGain
GridGain Systems
The enterprise-grade platform built on Apache Ignite that provides in-memory speed and massive scalability for data-intensive applications and real-time data access across datastores and applications. Upgrade from Ignite to GridGain with no code changes and deploy your clusters securely at global scale with zero downtime. Perform rolling upgrades of your production clusters with no impact on application availability. Replicate across globally distributed data centers to load balance workloads and prevent downtime from regional outages. Secure your data at rest and in motion, and ensure compliance with security and privacy standards. Easily integrate with your organization's authentication and authorization system. Enable full data and user activity auditing. Create automated schedules for full and incremental backups. Restore your cluster to the last stable state with snapshots and point-in-time recovery. -
15
XAP
GigaSpaces
GigaSpaces XAP, an event-driven, distributed development platform, delivers extreme processing for mission-critical applications. XAP provides high availability, resilience and boundless scale under any load. XAP Skyline, an in-memory distributed technology for mission-critical applications running in cloud-native environments, unites data and business logic within the Kubernetes cluster. With XAP Skyline, developers can ensure that data-driven applications achieve the highest levels of performance and serve hundreds of thousands of concurrent users while delivering sub-second response times. XAP Skyline delivers the low latency, scalability and resilience that these applications require. This developer platform is used in financial services, retail, and other industries where speed and scalability are critical. -
16
Amazon MemoryDB
Amazon
Valkey- and Redis OSS-compatible, durable, in-memory database service for ultra-fast performance. Scale to hundreds of millions of requests per second and over one hundred terabytes of storage per cluster. Stores data durably using a multi-AZ transaction log for 99.99% availability and near-instantaneous recovery without data loss. Secure your data with encryption at rest and in transit, private VPC endpoints, and multiple authentication mechanisms, including IAM authentication. Quickly build applications with Valkey and Redis OSS data structures and a rich open source API, and easily integrate with other AWS services. Deliver real-time personalized experiences with the highest relevancy and fastest semantic search experience among popular vector databases on AWS. Simplify application development and improve time-to-market with built-in access to flexible data structures that are available in Valkey and Redis OSS. Starting Price: $0.2163 per hour -
17
GridDB
GridDB
GridDB uses multicast communication to form a cluster, so the network must be set up to allow multicast. First, check the host name and IP address: execute the “hostname -i” command to see which IP address the host name resolves to. If the command returns the machine's actual IP address, no additional network settings are needed and you can skip to the next section. GridDB is a database that manages a group of data (known as a row) made up of a key and multiple values. Besides a pure in-memory composition that keeps all the data in memory, it can also adopt a hybrid composition combining the use of a disk (including SSD) and memory. -
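The host check described above can be scripted; this is a rough Python equivalent of the “hostname -i” step (treating a loopback result as needing extra configuration is an assumption drawn from the setup note, not GridDB tooling):

```python
import socket

# Rough equivalent of `hostname -i`: resolve this host's name to an IP.
hostname = socket.gethostname()
try:
    ip = socket.gethostbyname(hostname)
except socket.gaierror:
    # Host name not resolvable locally; fall back so the check still runs.
    ip = "127.0.0.1"

print(hostname, ip)
if ip.startswith("127."):
    # Assumption: a loopback address means multicast clustering will need
    # additional network configuration (e.g. /etc/hosts adjustments).
    print("Loopback address detected; adjust network settings before clustering.")
```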
18
Corosync Cluster Engine
Corosync
The Corosync Cluster Engine is a group communication system with additional features for implementing high availability within applications. The project provides four C application programming interfaces: a closed process group communication model with extended virtual synchrony guarantees for creating replicated state machines; a simple availability manager that restarts the application process when it has failed; a configuration and statistics in-memory database that provides the ability to set, retrieve, and receive change notifications of information; and a quorum system that notifies applications when quorum is achieved or lost. Our project is used as a high-availability framework by projects such as Pacemaker and Asterisk. We are always looking for developers or users interested in clustering or participating in our project. -
19
Terracotta
Software AG
Terracotta DB is a comprehensive, distributed in-memory data management solution which caters to caching and operational storage use cases, and enables transactional and analytical processing. Ultra-Fast RAM + Big Data = Business Power. With BigMemory, you get: Real-time access to terabytes of in-memory data. High throughput with low, predictable latency. Support for Java®, Microsoft® .NET/C#, C++ applications. 99.999 percent uptime. Linear scalability. Data consistency guarantees across multiple servers. Optimized data storage across RAM and SSD. SQL support for querying in-memory data. Reduced infrastructure costs through maximum hardware utilization. High-performance, persistent storage for durability and ultra-fast restart. Advanced monitoring, management and control. Ultra-fast in-memory data stores that automatically move data where it’s needed. Support for data replication across multiple data centers for disaster recovery. Manage fast-moving data in real time. -
20
Starcounter
Starcounter
Our ACID in-memory technology and application server enable you to build lightning-fast enterprise software. Without custom tooling or new syntax. Starcounter applications let you achieve 50 to 1000 times better performance without adding complexity. Applications are written in regular C#, LINQ, and SQL. Even the ACID transactions are written in regular C# code. Full Visual Studio support including IntelliSense, debugger, and performance profiler. All the things you like, minus the headache. Write regular C# syntax with MVVM pattern to leverage ACID in-memory technology and thin client UI for extreme performance. Starcounter technology adds business value from day one. We leverage technology that’s already developed and in production, processing millions of business transactions for high-demand customers. Starcounter combines ACID in-memory database and application server into a single platform unmatched in performance, simplicity, and price. Starting Price: Free -
21
Memstate
Memstate
Build high quality, mission critical applications with real-time performance at a fraction of the time and cost. Moving data back and forth between disk and RAM is not just extremely inefficient, it requires multiple layers of complex software that can be eliminated entirely. Use Memstate to structure and manage your data in-memory, and obtain transparent persistence, concurrency control and transactions with strong ACID guarantees. Make your applications 100x faster, and your developers 10x more productive. Memstate has many possible use cases but is designed primarily to handle complex OLTP workloads in a typical enterprise application. In-memory operations are orders of magnitude faster than disk operations. A single Memstate engine can execute millions of read transactions and tens of thousands of write transactions per second, all at submillisecond latency. Starting Price: €200 per GB RAM per server -
22
Apache Flink
Apache Software Foundation
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments, and to perform computations at in-memory speed and at any scale. Any kind of data is produced as a stream of events. Credit card transactions, sensor measurements, machine logs, or user interactions on a website or mobile application, all of these data are generated as a stream. Apache Flink excels at processing unbounded and bounded data sets. Precise control of time and state enables Flink’s runtime to run any kind of application on unbounded streams. Bounded streams are internally processed by algorithms and data structures that are specifically designed for fixed-size data sets, yielding excellent performance. Flink is designed to work well with all common cluster resource managers, such as Hadoop YARN and Kubernetes. -
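The bounded/unbounded distinction can be illustrated with a toy tumbling-window count in plain Python (not Flink's API): the same stateful computation runs over a finite list of events or an endless generator, the latter consumed incrementally.

```python
from itertools import count, islice

# Toy illustration of stateful stream processing, NOT Flink code:
# a tumbling-window count over (timestamp, value) events.
def tumbling_window_counts(events, window_size):
    window = []
    for ts, value in events:
        window.append(value)
        if len(window) == window_size:
            yield sum(window)  # emit a result when the window closes
            window = []
    if window:
        # A bounded stream ends, so flush the final partial window.
        yield sum(window)

# A "bounded stream": a finite list of events.
bounded = [(t, 1) for t in range(10)]
print(list(tumbling_window_counts(bounded, 4)))  # windows of 4, 4, 2 events

# An "unbounded stream": a generator that never ends; islice taps
# only the first few results, since the stream itself has no end.
unbounded = ((t, 1) for t in count())
print(list(islice(tumbling_window_counts(unbounded, 4), 3)))
```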
23
Oracle TimesTen
Oracle
Oracle TimesTen In-Memory Database (TimesTen) delivers real time application performance (low response time and high throughput) by changing the assumptions around where data resides at runtime. By managing data in memory, and optimizing data structures and access algorithms accordingly, database operations execute with maximum efficiency achieving dramatic gains in responsiveness and throughput. With the introduction of TimesTen Scaleout, a shared nothing scale-out architecture based on the existing in-memory technology, TimesTen allows databases to transparently scale across dozens of hosts, reach hundreds of terabytes in size and support hundreds of millions of transactions per second without the need for manual database sharding or workload partitioning. -
24
Oracle Real Application Clusters (RAC)
Oracle
Oracle Real Application Clusters (RAC) is a unique, scale-everything, highly available database architecture that transparently scales both reads and writes for all workloads, including OLTP, analytics, AI vectors, SaaS, JSON, batch, text, graph, IoT, and in-memory. It effortlessly scales complex applications such as SAP, Oracle Fusion Applications, and Salesforce workloads. Oracle RAC delivers the lowest latency and highest throughput for all data needs through its unique fused cache across servers, ensuring ultrafast local data access. Parallelized workloads across all CPUs guarantee maximum throughput, and the integration of Oracle’s storage design enables seamless online storage expansion. Unlike other databases that depend on public cloud infrastructures, sharding, or read replicas for scalability, Oracle RAC guarantees the lowest latency and highest throughput out of the box.
-
25
MemCachier
MemCachier
MemCachier manages and scales clusters of memcache servers so you can focus on your app. Our custom memcache implementation offers better reliability and usability than memcached, with the same low latency. Tell us how much memory you need and get started for free instantly. Add capacity later as you need it without changing any code. MemCachier is the fastest, most reliable implementation of memcache - an in-memory, distributed cache system. Built specifically for customers in the cloud, MemCachier is designed from the ground up to be easier to use, more reliable, more powerful, and lower cost than other implementations such as memcached. By using MemCachier, you can take advantage of the same low latency that memcached provides without sacrificing developer time and resources. Start with a free 25MB, and upgrade with ease when you're ready. Starting Price: $14 per month -
26
Graph Engine
Microsoft
Graph Engine (GE) is a distributed in-memory data processing engine, underpinned by a strongly-typed RAM store and a general distributed computation engine. The distributed RAM store provides a globally addressable high-performance key-value store over a cluster of machines. Through the RAM store, GE enables fast random data access over a large distributed data set. The capability of fast data exploration and distributed parallel computing makes GE a natural large graph processing platform. GE supports both low-latency online query processing and high-throughput offline analytics on billion-node large graphs. Schema does matter when we need to process data efficiently. Strongly-typed data modeling is crucial for compact data storage, fast data access, and clear data semantics. GE is good at managing billions of run-time objects of varied sizes. Every byte counts as the number of objects grows large. GE provides fast memory allocation and reallocation with high memory ratios. -
27
MemOptimizer
PointStone
The Problem: Almost 100% of software programs contain "memory leaks". Over time these leaks cause less and less memory to be available on your PC. Whenever a Windows based program is running, it's consuming memory resources - unfortunately many Windows programs do not "clean up" after themselves and often leave valuable memory "locked", preventing other programs from taking advantage of it and slowing your computer's performance! In addition, memory is often locked in pages, so if your program needed 100 bytes of memory, it's actually locking up 2,048 bytes (a page of memory)! Until now, the only way to free up this "locked" memory was to reboot your computer. Not anymore, with MemOptimizer™! MemOptimizer frees memory from the in-memory cache that accumulates with every file or application read from hard disk. Starting Price: $14.99 one-time payment -
28
Memurai
Memurai
Redis for Windows alternative, an in-memory datastore ready for the most demanding production workloads. Free for development and testing. Fully Redis-compatible. The core of Memurai is based on the Redis source code, ported to run natively on Windows. Memurai reliably supports all the features that make Redis the most popular NoSQL data store, including LRU eviction, persistence, replication, transactions, LUA scripting, high-availability, pub/sub, cluster, modules, and streams. A lot of attention has been put into ensuring full compatibility, including with the myriad of libraries and tools already available for Redis. You can even replicate data between Memurai and Redis, or use both within the same cluster! Seamless integration with Windows infrastructure and workflows. Whether it's used for development or production, Memurai seamlessly integrates with Windows best practices, tools and workflows. Engineering teams with existing investments in Windows infrastructure will feel right at home. -
29
Google Cloud Memorystore
Google
Reduce latency with scalable, secure, and highly available in-memory service for Redis and Memcached. Memorystore automates complex tasks for open source Redis and Memcached like enabling high availability, failover, patching, and monitoring so you can spend more time coding. Start with the lowest tier and smallest size and then grow your instance with minimal impact. Memorystore for Memcached can support clusters as large as 5 TB supporting millions of QPS at very low latency. Memorystore for Redis instances are replicated across two zones and provide a 99.9% availability SLA. Instances are monitored constantly and with automatic failover—applications experience minimal disruption. Choose from the two most popular open source caching engines to build your applications. Memorystore supports both Redis and Memcached and is fully protocol compatible. Choose the right engine that fits your cost and availability requirements. -
30
OrigoDB
Origo
OrigoDB enables you to build high quality, mission critical systems with real-time performance at a fraction of the time and cost. This is not marketing gibberish! Please read on for a no nonsense description of our features. Get in touch if you have questions or download and try it out today! In-memory operations are orders of magnitude faster than disk operations. A single OrigoDB engine can execute millions of read transactions per second and thousands of write transactions per second with synchronous command journaling to a local SSD. This is the #1 reason we built OrigoDB. A single object oriented domain model is far simpler than the full stack including a relational model, object/relational mapping, data access code, views and stored procedures. That's a lot of waste that can be eliminated! The OrigoDB engine is 100% ACID out of the box. Commands execute one at a time, transitioning the in-memory model from one consistent state to the next. Starting Price: €200 per GB RAM per server -
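The engine described above follows the memory-image pattern: journal each command, apply it to the in-memory model one at a time, and replay the journal to recover after a restart. A minimal Python sketch of that pattern (an illustration only, not OrigoDB's actual C#/.NET API):

```python
import json
import os
import tempfile

# A minimal memory-image engine: the full model lives in RAM, every
# write is a command that is journaled before being applied, and
# replaying the journal rebuilds the state after a restart.

class KeyValueModel:
    """The in-memory model; here just a dict of keys to values."""
    def __init__(self):
        self.data = {}

    def apply(self, command):
        if command["op"] == "set":
            self.data[command["key"]] = command["value"]
        elif command["op"] == "delete":
            self.data.pop(command["key"], None)

class Engine:
    def __init__(self, journal_path):
        self.journal_path = journal_path
        self.model = KeyValueModel()
        # Recovery: replay the journal to rebuild the in-memory state.
        if os.path.exists(journal_path):
            with open(journal_path) as f:
                for line in f:
                    self.model.apply(json.loads(line))

    def execute(self, command):
        # Journal first, then mutate the in-memory model; fsync gives
        # the "synchronous command journaling" durability described above.
        with open(self.journal_path, "a") as f:
            f.write(json.dumps(command) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.model.apply(command)

path = os.path.join(tempfile.mkdtemp(), "journal.log")
engine = Engine(path)
engine.execute({"op": "set", "key": "answer", "value": 42})
engine.execute({"op": "set", "key": "greeting", "value": "hi"})
engine.execute({"op": "delete", "key": "greeting"})

# Simulate a restart: a fresh engine replays the journal.
recovered = Engine(path)
print(recovered.model.data)
```

Because commands execute one at a time against a single in-memory model, each command is an atomic transition between consistent states, which is the source of the ACID guarantees the text describes.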
31
Magic xpa
Magic Software
The Magic xpa solution enables rapid creation of cross-platform business applications for desktop, web and mobile, so you can take advantage of new business opportunities quickly and on-demand. Rapid app development and delivery is made possible with Magic xpa’s low-code platform, visual designer interface and component-based architecture. Powered by an In-Memory Data Grid (IMDG), Magic xpa is the perfect solution for you to develop high-performance, self-healing, scalable apps. Magic xpa’s unique metadata-driven approach keeps your apps up to date with technological advances, eliminating the need for constant redevelopment. -
32
Tungsten Insight
Tungsten Automation
Compare, group and focus analysis on how your organization is executing business processes. Understand the behavior that drives how customers and users engage with your processes and how those patterns of execution impact your bottom line. Anyone in your organization can quickly analyze and optimize business operations. With a single-platform approach, there’s no need to wait for IT to build a data warehouse, data mart or proprietary data model. Access visualizations and analyses of business information through an intuitive graphical interface that enables fast, fact-based decisions before process issues affect your customers. Exclusive MapAggregate technology combines the speed of in-memory processing with the scalability of a distributed in-memory model. Scale beyond the resource limits of a single server by using the memory and CPU available on any physical or virtual server. -
33
Curiosity
Curiosity
Curiosity is an enterprise-grade search and knowledge platform that connects information across your tools instantly. Designed for speed, security and scalability, Curiosity gives teams one place to search, discover and act on their data. With in-memory speed, results appear as you type; whether you’re searching internal systems, cloud apps or local files. Curiosity integrates seamlessly with tools like Google Drive, Confluence, Slack, SharePoint, Outlook and ServiceNow, unifying company knowledge without moving data. Setup is fast and flexible. Deploy it in minutes, connect your sources and empower your organization to find anything instantly. Built for enterprise needs, Curiosity supports secure on-device or self-hosted setups, ensuring complete data privacy and control. Fast setup. In-memory speed. Flexible for enterprise. Starting Price: €3.99/month -
34
Panda Adaptive Defense 360
WatchGuard
Unified Endpoint Protection (EPP) and Endpoint Detection and Response (EDR) capabilities, with our unique Zero-Trust Application Service and Threat Hunting Service in one single solution, to effectively detect and classify 100% of processes running on all the endpoints within your organization. Cloud-delivered endpoint prevention, detection, containment and response technologies against advanced threat, zero-day malware, ransomware, phishing, in-memory exploits and malware-less attacks. It also provides IDS, firewall, device control, email protection, URL & content filtering capabilities. It automates the prevention, detection, containment and response to any advanced threat, zero day malware, ransomware, phishing, in-memory exploits, and fileless and malwareless attacks, inside and outside the corporate network. -
35
FUJITSU Server PRIMEQUEST
Fujitsu
Combining the power of Intel® Xeon® Processor Scalable Family, the standard specifications of Microsoft Windows and Linux operating systems, and the wealth of market solutions with innovative RAS features for the highest availability and business continuity, FUJITSU Server PRIMEQUEST systems provide new levels of operational efficiency for business and mission-critical computing with truly open standards and deliver the highest performance. FUJITSU Server PRIMEQUEST systems combine the efficiency of an x86 architecture with reliability levels rivaling those of a UNIX/mainframe architecture. This makes them ideal for processing Big Data, in-memory solutions such as SAP HANA® and Business Intelligence applications while preserving all the RAS qualities for maximum uptime. Octo-socket rack server that offers superior performance, reliability, and optimized economics for business-critical workloads. -
36
Cassandana
Cassandana
Cassandana is an open-source MQTT message broker written entirely in Java. The project began life as a fork of Moquette and later underwent cleanup, optimization, and the addition of extra features. It is now ready to work as an enterprise message broker. Supports an in-memory caching mechanism to reduce I/O operations. Starting Price: Free -
37
Superlinked
Superlinked
Combine semantic relevance and user feedback to reliably retrieve the optimal document chunks in your retrieval augmented generation system. Combine semantic relevance and document freshness in your search system, because more recent results tend to be more accurate. Build a real-time personalized ecommerce product feed with user vectors constructed from SKU embeddings the user interacted with. Discover behavioral clusters of your customers using a vector index in your data warehouse. Describe and load your data, use spaces to construct your indices and run queries - all in-memory within a Python notebook. -
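The blend of semantic relevance and document freshness described above can be sketched in plain Python. This is an illustrative sketch, not Superlinked's actual API: the `half_life` and `w_sim` weighting knobs are assumptions chosen for the example.

```python
import math

def cosine(a, b):
    # cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def score(query_vec, doc_vec, doc_age_days, half_life=30.0, w_sim=0.7):
    # blend semantic similarity with an exponential freshness decay;
    # half_life and w_sim are illustrative knobs, not Superlinked defaults
    freshness = 0.5 ** (doc_age_days / half_life)
    return w_sim * cosine(query_vec, doc_vec) + (1 - w_sim) * freshness
```

With identical vectors and a brand-new document the combined score is maximal; as the document ages, the freshness term decays and its rank drops even when similarity is unchanged.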
38
H2
H2
Welcome to H2, the Java SQL database. In embedded mode, an application opens a database from within the same JVM using JDBC. This is the fastest and easiest connection mode. The disadvantage is that a database may only be open in one virtual machine (and class loader) at any time. As in all modes, both persistent and in-memory databases are supported. There is no limit on the number of databases open concurrently, or on the number of open connections. The mixed mode is a combination of the embedded and the server modes. The first application that connects to a database does so in embedded mode, but also starts a server so that other applications (running in different processes or virtual machines) can concurrently access the same data. The local connections are as fast as if the database were used in just the embedded mode, while the remote connections are a bit slower. -
39
Oracle Coherence
Oracle
Oracle Coherence is the industry-leading in-memory data grid solution that enables organizations to predictably scale mission-critical applications by providing fast access to frequently used data. As data volumes and customer expectations increase, driven by the “internet of things”, social, mobile, cloud, and always-connected devices, so does the need to handle more data in real time, offload over-burdened shared data services, and provide availability guarantees. The latest release of Oracle Coherence, 14.1.1, adds a patented scalable messaging implementation, support for polyglot grid-side programming on GraalVM, distributed tracing in the grid, and certification on JDK 11. Coherence stores each piece of data within multiple members (one primary and one or more backup copies), and doesn't consider any mutating operation complete until the backup(s) are successfully created. This ensures that your data grid can tolerate failure at any level, from a single JVM to a whole data center. -
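The primary-plus-backup write semantics described above can be modeled in miniature. The `Grid` class below is a hypothetical toy, not Coherence's API: each put writes a primary copy and one or more backups, and only then reports completion.

```python
class Grid:
    # toy data grid: each put writes one primary and `backups` backup copies,
    # and is considered complete only after every copy succeeds
    def __init__(self, members=3, backups=1):
        self.members = [dict() for _ in range(members)]
        self.backups = backups

    def _primary(self, key):
        return hash(key) % len(self.members)

    def put(self, key, value):
        p = self._primary(key)
        # primary first, then backup copies on the following members
        for i in range(self.backups + 1):
            self.members[(p + i) % len(self.members)][key] = value
        return True  # acknowledged only once all copies are written

    def get(self, key):
        return self.members[self._primary(key)].get(key)
```

Because every acknowledged write exists on two members here, losing any single member never loses acknowledged data, which is the guarantee the entry describes.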
40
Dqlite
Canonical
Dqlite is a fast, embedded, persistent SQL database with Raft consensus that is perfect for fault-tolerant IoT and Edge devices. Dqlite (“distributed SQLite”) extends SQLite across a cluster of machines, with automatic failover and high availability to keep your application running. It uses C-Raft, an optimised Raft implementation in C, to gain high-performance transactional consensus and fault tolerance while preserving SQLite’s outstanding efficiency and tiny footprint. C-Raft is tuned to minimize transaction latency. C-Raft and dqlite are both written in C for maximum cross-platform portability. Published under the LGPLv3 license with a static linking exception for maximum compatibility. Includes a common CLI pattern for database initialization and voting member joins and departures. Minimal, tunable delay for failover with automatic leader election. Disk-backed database with in-memory options and SQLite transactions. -
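Since dqlite extends SQLite and preserves its transactional semantics per node, the in-memory, transactional behavior it builds on can be shown with Python's stdlib `sqlite3` module. This sketch illustrates plain SQLite, not dqlite's Raft replication:

```python
import sqlite3

# plain SQLite, the engine dqlite extends: an in-memory database
# with an explicit transaction that commits atomically
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")
with conn:  # commits on success, rolls back on exception
    conn.execute("INSERT INTO kv VALUES (?, ?)", ("region", "edge-1"))
rows = conn.execute("SELECT value FROM kv WHERE key = ?", ("region",)).fetchall()
```

In dqlite, the same transaction would additionally be replicated through C-Raft before being acknowledged, so a follower can take over after leader failure.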
41
Oracle In-Memory Cost Management Cloud Service
Oracle
Oracle In-Memory Cost Management Cloud Service provides the data analysis tools to derive product costs and perform cost-volume-benefit and what-if simulations for discrete and process industries. The product's extreme performance delivers near real-time insight into changes in your business. Oracle In-Memory Cost Management Cloud Service (IMCMCS) is a new SaaS-on-PaaS subscription offering that takes a bottom-up approach to maximizing profit margins by enabling near real-time insight into all aspects of cost management. Cost accountants, managers, and line-of-business owners in finance, operations, manufacturing, and procurement can use Oracle In-Memory Cost Management Cloud Service to derive product costs, quickly perform cost-volume-benefit (break-even point) and what-if simulations on complex cost data, and visualize the impact of changes to their business. Users have access to several parameters that allow them to further fine-tune the selection of intermediate and finished goods.
-
42
Azure Managed Redis
Microsoft
Azure Managed Redis features the latest Redis innovations, industry-leading availability, and a cost-effective total cost of ownership (TCO) designed for the hyperscale cloud. Azure Managed Redis delivers these capabilities on a trusted cloud platform, empowering businesses to scale and optimize their generative AI applications seamlessly. Azure Managed Redis brings the latest Redis innovations to support high-performance, scalable AI applications. With features like in-memory data storage, vector similarity search, and real-time processing, it enables developers to handle large datasets efficiently, accelerate machine learning, and build faster AI solutions. Its interoperability with Azure OpenAI Service enables AI workloads to be faster, scalable, and ready for mission-critical use cases, making it an ideal choice for building modern, intelligent applications. -
43
Match2Lists
Match2Lists
Match2Lists is the fastest, easiest, and most accurate way to Match, Merge, and De-duplicate your data. With our Match2D&B option, you can enrich your data with Dun & Bradstreet information on demand. In just minutes, you can cleanse your data of duplicates and blend raw data from different sources into powerful information. Our first objective is maximum match results for our customers. Prior to creating Match2Lists, we ran analytics and data visualisation companies and used most of the "fuzzy" matching software on the market. Unsatisfied by their low match results, we spent 10 years developing the most advanced data matching logic. Our second objective is time: enabling our customers to spend less time matching and cleansing data and more time analysing and executing. So we implemented our advanced matching logic on the fastest in-memory cloud computing architecture we could find, capable of matching 200 million records in 30 seconds. Starting Price: $95 per month -
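Fuzzy de-duplication of the kind described can be sketched with the stdlib `difflib`. This is a generic illustration, not Match2Lists' matching logic; the 0.85 similarity threshold is an assumption chosen for the example.

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.85):
    # character-level similarity ratio; case-insensitive
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def dedupe(records, threshold=0.85):
    # keep the first of each cluster of near-duplicate strings
    kept = []
    for r in records:
        if not any(similar(r, k, threshold) for k in kept):
            kept.append(r)
    return kept
```

For example, "Acme Corp" and "ACME Corp." collapse to one record while "Widget Co" survives as a distinct entry. Production-grade matching adds blocking and indexing so the comparison does not scale quadratically, which is where an in-memory architecture like the one described pays off.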
44
ApsaraDB
Alibaba
ApsaraDB for Redis is an automated and scalable tool for developers to manage data storage shared across multiple processes, applications, or servers. As a Redis protocol-compatible tool, ApsaraDB for Redis offers exceptional read-write capabilities and ensures data persistence by using memory and hard disk storage. ApsaraDB for Redis provides high-speed data read-write capabilities by retrieving data from in-memory caches and ensures data persistence by using both memory and hard disk storage modes. ApsaraDB for Redis supports advanced data structures such as leaderboard, counting, session, and tracking, which are not readily achievable through ordinary databases. ApsaraDB for Redis also has an enhanced edition called "Tair". Tair has officially handled the data caching scenarios of Alibaba Group since 2009 and has proven its outstanding performance in scenarios such as the Double 11 Shopping Festival. -
45
Hitachi Streaming Data Platform
Hitachi
The Hitachi Streaming Data Platform (SDP) is a real-time data processing system designed to analyze large volumes of time-sequenced data as it is generated. By leveraging in-memory and incremental computational processing, SDP enables swift analysis without the delays associated with traditional stored data processing. Users can define summary analysis scenarios using Continuous Query Language (CQL), similar to SQL, allowing for flexible and programmable data analysis without the need for custom applications. The platform's architecture comprises components such as development servers, data-transfer servers, data-analysis servers, and dashboard servers, facilitating scalable and efficient data processing workflows. SDP's modular design supports various data input and output formats, including text files and HTTP packets, and integrates with visualization tools like RTView for real-time monitoring. -
46
ZEOS
Vizru
Vizru ZEOS. Build and deploy full-stack AI applications at scale, with zero lines of code. Scale your AI applications using Vizru ZEOS, the industry's first operating system to manage the stability, performance, and deployment of applications across multi-cloud environments. Compliant: let ZEOS manage everything. Deploy and forget as the ZEOS layer completely manages governance and security. Capitalize on a responsive OS that continuously monitors anomalies, performance, and risk vectors across all applications. Achieve regulatory compliance rapidly with the inbuilt feedback and self-learning OS. Hyper-scalable: grow infinitely. Deploy big data applications with an elastic NoSQL database. Execute high-frequency cross-app processes with distributed workflow orchestration. Assure real-time response with in-memory data management. Containerized: go super portable. Port and scale dynamically with Docker + Kubernetes support out of the box. Move apps between clouds effortlessly. -
47
Elastic Cloud Server
Huawei
Elastic Cloud Server (ECS) provides secure, scalable, on-demand computing resources, enabling you to flexibly deploy applications and workloads, with worry-free, comprehensive security protection. Use general computing ECSs, which provide a balance of computing, memory, and network resources. This ECS type is ideal for light- and medium-load applications. Use memory-optimized ECSs, which have a large amount of memory and support ultra-high I/O EVS disks and flexible bandwidths. This ECS type is ideal for applications that process large volumes of data. Use disk-intensive ECSs, which are designed for applications requiring sequential read/write on ultra-large datasets in local storage (such as distributed Hadoop computing) as well as large-scale parallel data processing and log processing. Disk-intensive ECSs are HDD-compatible, feature a default network bandwidth of 10GE, and deliver high PPS and low network latency. Starting Price: $6.13 per month
-
48
CaptchaText
CaptchaText
CaptchaText is 100% FREE, built on a revolutionary zero-database architecture, powered by a proprietary Hybrid In-Memory Indexing (Hybrid IMI) engine algorithm that enables CAPTCHA authentication to be performed using minimal data bits against server memory. CaptchaText's multi-layered security approach includes real-time IP verification, sophisticated bot detection algorithms, and intelligent token management that adapts to your traffic patterns. With support for 23 languages and flexible customization options, CaptchaText seamlessly integrates into any website while providing robust protection against automated threats. Experience the power of advanced security without any cost limitations, powered by a revolutionary Zero Database architecture and proprietary Hybrid In-Memory Indexing engine. This cutting-edge technology enables CaptchaText to provide enterprise-grade protection at unprecedented efficiency, allowing it to offer its complete feature set at no cost. Starting Price: $0 -
49
SwayDB
SwayDB
Embeddable persistent and in-memory key-value storage engine for high performance and resource efficiency. Designed to be efficient at managing bytes on disk and in memory by recognising recurring patterns in serialised bytes, without restricting the core implementation to any specific data model (SQL, NoSQL, etc.) or storage type (disk or RAM). The core provides many configurations that can be manually tuned for custom use cases, but we aim to implement automatic runtime tuning once we are able to collect and analyse runtime machine statistics and read-write patterns. Manage data by creating familiar data structures like Map, Set, Queue, SetMap, and MultiMap that can easily be converted to native Java and Scala collections. Perform conditional updates/data modifications with Java, Scala, or any native JVM code; no query language. -
50
MPI for Python (mpi4py)
MPI for Python
Over recent years, high performance computing has become an affordable resource for many more researchers in the scientific community than ever before. The conjunction of quality open source software and commodity hardware strongly influenced the now widespread popularity of Beowulf-class clusters and clusters of workstations. Among many parallel computational models, message passing has proven to be an effective one. This paradigm is especially suited for (but not limited to) distributed memory architectures and is used in today’s most demanding scientific and engineering applications related to modeling, simulation, design, and signal processing. However, portable message-passing parallel programming used to be a nightmare in the past because of the many incompatible options developers were faced with. Fortunately, this situation definitively changed after the MPI Forum released its standard specification. Starting Price: Free
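The message-passing model that MPI (and mpi4py) standardizes can be illustrated in miniature with stdlib threads and queues: two "ranks" exchange messages through explicit send and receive operations rather than shared state. Real mpi4py code would instead use `MPI.COMM_WORLD` send/recv across processes, typically launched with `mpirun`; the sketch below only mimics the paradigm.

```python
import threading
import queue

# two "ranks" exchanging messages through queues: a miniature,
# shared-memory analogue of MPI point-to-point send/recv
inbox, outbox = queue.Queue(), queue.Queue()

def worker():
    data = inbox.get()      # blocking receive
    outbox.put(sum(data))   # send the result back

t = threading.Thread(target=worker)
t.start()
inbox.put([1, 2, 3])        # "send" work to the worker rank
result = outbox.get()       # "receive" the reduced result
t.join()
```

The key property the paradigm gives you, and which MPI scales from threads to thousands of distributed-memory nodes, is that all coordination happens through explicit messages, so no memory needs to be shared between the communicating parties.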