Alternatives to CONNX

Compare CONNX alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to CONNX in 2026. Compare features, ratings, user reviews, pricing, and more from CONNX competitors and alternatives in order to make an informed decision for your business.

  • 1
    Windocks

    Windocks is a leader in cloud-native database DevOps, recognized by Gartner as a Cool Vendor and as an innovator by Bloor Research in test data management. Novartis, DriveTime, American Family Insurance, and other enterprises rely on Windocks for on-demand database environments for development, testing, and DevOps. Windocks software is easily downloaded for evaluation on standard Linux and Windows servers, for use on-premises or in the cloud, and for data delivery of SQL Server, Oracle, PostgreSQL, and MySQL to Docker containers or conventional database instances. Windocks database orchestration allows for code-free, end-to-end automated delivery, including masking, synthetic data, Git operations and access controls, as well as secrets management. Windocks can be installed on standard Linux or Windows servers in minutes and can run on any public cloud or on-premises infrastructure. One VM can host up to 50 concurrent database environments.
  • 2
    AWS Glue

    Amazon

    AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes instead of months. Data integration is the process of preparing and combining data for analytics, machine learning, and application development. It involves multiple tasks, such as discovering and extracting data from various sources; enriching, cleaning, normalizing, and combining data; and loading and organizing data in databases, data warehouses, and data lakes. These tasks are often handled by different types of users that each use different products. AWS Glue runs in a serverless environment. There is no infrastructure to manage, and AWS Glue provisions, configures, and scales the resources required to run your data integration jobs.
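    The extract-transform-load pattern that AWS Glue automates can be sketched locally. This is an illustrative stand-in, not Glue's API: the function names and sample records are invented, and real Glue jobs run against crawled data catalogs and managed Spark resources.

```python
# Minimal local sketch of the extract-transform-load pattern a data
# integration service automates. All names and records are illustrative.

def extract(source):
    """Discover and pull raw records from a source (a list stands in here)."""
    return list(source)

def transform(records):
    """Clean and normalize: drop incomplete rows, standardize names and types."""
    cleaned = []
    for r in records:
        if r.get("name") and r.get("amount") is not None:
            cleaned.append({"name": r["name"].strip().title(),
                            "amount": round(float(r["amount"]), 2)})
    return cleaned

def load(records, warehouse):
    """Write prepared records into the target store (a dict stands in here)."""
    for r in records:
        warehouse.setdefault(r["name"], 0.0)
        warehouse[r["name"]] += r["amount"]
    return warehouse

raw = [{"name": "  acme ", "amount": "10.5"},
       {"name": None, "amount": "3"},          # incomplete row, dropped
       {"name": "acme", "amount": "4.5"}]
warehouse = load(transform(extract(raw)), {})
```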
  • 3
    Delphix

    Perforce

    Delphix is the industry leader in DataOps and provides an intelligent data platform that accelerates digital transformation for leading companies around the world. The Delphix DataOps Platform supports a broad spectrum of systems, from mainframes to Oracle databases, ERP applications, and Kubernetes containers. Delphix supports a comprehensive range of data operations to enable modern CI/CD workflows and automates data compliance for privacy regulations, including GDPR, CCPA, and the New York Privacy Act. In addition, Delphix helps companies sync data from private to public clouds, accelerating cloud migrations, customer experience transformation, and the adoption of disruptive AI technologies. Automate data for fast, quality software releases, cloud adoption, and legacy modernization. Source data from mainframe to cloud-native apps across SaaS, private, and public clouds.
  • 4
    Actifio

    Google

    Automate self-service provisioning and refresh of enterprise workloads, and integrate with existing toolchains. High-performance data delivery and reuse for data scientists through a rich set of APIs and automation. Recover any data, across any cloud, from any point in time, at the same time, at scale, beyond legacy solutions. Minimize the business impact of ransomware and cyber attacks by recovering quickly with immutable backups. A unified platform to better protect, secure, retain, govern, or recover your data on-premises or in the cloud. Actifio's patented software platform turns data silos into data pipelines. Virtual Data Pipeline (VDP) delivers full-stack data management, on-premises, hybrid, or multi-cloud, with rich application integration, SLA-based orchestration, flexible data movement, and data immutability and security.
  • 5
    Hyper-Q

    Datometry

    Adaptive Data Virtualization™ technology enables enterprises to run their existing applications on modern cloud data warehouses without rewriting or reconfiguring them. Datometry Hyper-Q™ lets enterprises adopt new cloud databases rapidly, control ongoing operating expenses, and build out analytic capabilities for faster digital transformation. Datometry Hyper-Q virtualization software allows any existing application to run on any cloud database, making applications and databases interoperable. Enterprises can now adopt the cloud database of their choice without having to rip, rewrite, and replace applications. Hyper-Q enables runtime application compatibility through transformation and emulation of legacy data warehouse functions, and deploys transparently on Azure, AWS, and GCP clouds. Applications can use existing JDBC, ODBC, and native connectors without changes. It connects to major cloud data warehouses such as Azure Synapse Analytics, Amazon Redshift, and Google BigQuery.
  • 6
    Clonetab

    For ERPs like Oracle E-Business Suite and PeopleSoft, and for databases, Clonetab is the only software that can virtualize and provide true end-to-end, on-demand clones. It also provides an integrated solution for virtualization, cloning, disaster recovery, backups, and Oracle EBS snapshots. The Clonetab engines are deeply aware of ERP applications, not just databases: they are EBS- and PeopleSoft-aware, identify major releases (e.g. R12.1, R12.2) and patchset levels such as AD and TXK, and execute the clone commands accordingly. The platform provides options to retain EBS/PS-specific settings, including profile options, Concurrent/Process Scheduler setups, EBS users with responsibilities, database links, directories, workflow setups, and many more, resulting in a true end-to-end ERP clone.
  • 7
    TIBCO Data Virtualization
    An enterprise data virtualization solution that orchestrates access to multiple and varied data sources and delivers the datasets and IT-curated data services foundation for nearly any solution. As a modern data layer, the TIBCO® Data Virtualization system addresses the evolving needs of companies with maturing architectures. Remove bottlenecks and enable consistency and reuse by providing all data, on demand, in a single logical layer that is governed, secure, and serves a diverse community of users. Immediate access to all data helps you develop actionable insights and act on them in real time. Users are empowered because they can easily search for and select from a self-service directory of virtualized business data and then use their favorite analytics tools to obtain results. They can spend more time analyzing data, less time searching for it.
  • 8
    Accelario

    Take the load off DevOps and eliminate privacy concerns by giving your teams full data autonomy and independence via an easy-to-use self-service portal. Simplify access, eliminate data roadblocks, and speed up provisioning for dev, testing, data analysts, and more. The Accelario Continuous DataOps Platform is a one-stop shop for handling all of your data needs. Eliminate DevOps bottlenecks and give your teams the high-quality, privacy-compliant data they need. The platform's four distinct modules are available as stand-alone solutions or as a holistic, comprehensive DataOps management platform. Existing data provisioning solutions can't keep up with agile demands for continuous, independent access to fresh, privacy-compliant data in autonomous environments. With Accelario, teams can meet agile demands for fast, frequent deliveries by self-provisioning privacy-compliant, high-quality data in their own environments.
    Starting Price: $0 (Free Forever, up to 10 GB)
  • 9
    Oracle VM
    Designed for efficiency and optimized for performance, Oracle's server virtualization products support x86 and SPARC architectures and a variety of workloads such as Linux, Windows and Oracle Solaris. In addition to solutions that are hypervisor-based, Oracle also offers virtualization built in to hardware and Oracle operating systems to deliver the most complete and optimized solution for your entire computing environment.
  • 10
    IBM Cloud Pak for Data
    The biggest challenge to scaling AI-powered decision-making is unused data. IBM Cloud Pak® for Data is a unified platform that delivers a data fabric to connect and access siloed data on-premises or across multiple clouds without moving it. Simplify access to data by automatically discovering and curating it to deliver actionable knowledge assets to your users, while automating policy enforcement to safeguard use. Further accelerate insights with an integrated modern cloud data warehouse. Universally safeguard data usage with privacy and usage policy enforcement across all data. Use a modern, high-performance cloud data warehouse to achieve faster insights. Empower data scientists, developers and analysts with an integrated experience to build, deploy and manage trustworthy AI models on any cloud. Supercharge analytics with Netezza, a high-performance data warehouse.
    Starting Price: $699 per month
  • 11
    Rocket DataEdge

    Rocket Software

    Rocket® DataEdge is a data integration platform that connects, virtualizes, and analyzes data across mainframe, distributed, and cloud environments. It brings together enterprise-wide data intelligence, real-time data movement, and zero-copy access, so organizations can operationalize system-of-record data for analytics, AI/ML, and modernization while maintaining governance and minimizing disruption. DataEdge includes integrated capabilities for sub-second replication, SQL-based federation, and automated metadata and lineage discovery. It supports the most sources and targets in the industry, including legacy sources such as Db2, VSAM, IMS, Adabas, and Datacom, alongside modern cloud platforms. With built-in security, access controls, and operational safeguards, DataEdge reduces integration cost and risk while delivering consistent, governed access to critical data across hybrid architectures.
  • 12
    K2View

    At K2View, we believe that every enterprise should be able to leverage its data to become as disruptive and agile as the best companies in its industry. We make this possible through our patented Data Product Platform, which creates and manages a complete and compliant dataset for every business entity – on demand, and in real time. The dataset is always in sync with its underlying sources, adapts to changes in the source structures, and is instantly accessible to any authorized data consumer. Data Product Platform fuels many operational use cases, including customer 360, data masking and tokenization, test data management, data migration, legacy application modernization, data pipelining and more – to deliver business outcomes in less than half the time, and at half the cost, of any other alternative. The platform inherently supports modern data architectures – data mesh, data fabric, and data hub – and deploys in cloud, on-premise, or hybrid environments.
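    One of the use cases named above, data masking and tokenization, can be illustrated with a small sketch. This is not K2View's actual algorithm; the key, field names, and token format are invented. The useful property shown is determinism: the same input always yields the same token, so masked datasets remain joinable.

```python
# Illustrative sketch of deterministic tokenization for test data.
# The key, field names, and token format are made up for this example.
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # in practice, a managed secret

def tokenize(value: str) -> str:
    """Replace a sensitive value with a stable, non-reversible token."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()
    return "tok_" + digest[:12]

record = {"customer": "Jane Doe", "email": "jane@example.com"}
masked = {k: tokenize(v) for k, v in record.items()}
```

    Because tokenization is keyed and deterministic, the same email address masks to the same token in every table, preserving referential integrity in test datasets.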
  • 13
    TIBCO Platform

    Cloud Software Group

    TIBCO delivers industrial-strength solutions that meet your performance, throughput, reliability, and scalability needs while offering a wide range of technology and deployment options to deliver real-time data where it’s needed most. The TIBCO Platform will bring together an evolving set of your TIBCO solutions wherever they are hosted—in the cloud, on-premises, and at the edge—into a single, unified experience so that you can more easily manage and monitor them. TIBCO helps build solutions that are essential to the success of the world’s largest enterprises.
  • 14
    Enterprise Enabler

    Stone Bond Technologies

    Enterprise Enabler unifies information across silos and scattered data for visibility across multiple sources in a single environment. Whether your data is in the cloud, spread across siloed databases, on instruments, in Big Data stores, or within various spreadsheets and documents, Enterprise Enabler can integrate it all so you can make informed business decisions in real time, by creating logical views of data from the original source locations. This means you can reuse, configure, test, deploy, and monitor all your data in a single integrated environment. Analyze your business data in one place as it occurs to maximize the use of assets, minimize costs, and improve and refine your business processes. Implementation is 50-90% faster, shortening time to value. We get your sources connected and running so you can start making business decisions based on real-time data.
  • 15
    Data Virtuality

    Connect and centralize data. Transform your existing data landscape into a flexible data powerhouse. Data Virtuality is a data integration platform for instant data access, easy data centralization and data governance. Our Logical Data Warehouse solution combines data virtualization and materialization for the highest possible performance. Build your single source of data truth with a virtual layer on top of your existing data environment for high data quality, data governance, and fast time-to-market. Hosted in the cloud or on-premises. Data Virtuality has 3 modules: Pipes, Pipes Professional, and Logical Data Warehouse. Cut down your development time by up to 80%. Access any data in minutes and automate data workflows using SQL. Use Rapid BI Prototyping for significantly faster time-to-market. Ensure data quality for accurate, complete, and consistent data. Use metadata repositories to improve master data management.
  • 16
    Oracle Big Data SQL Cloud Service
    Oracle Big Data SQL Cloud Service enables organizations to immediately analyze data across Apache Hadoop, NoSQL, and Oracle Database, leveraging their existing SQL skills, security policies, and applications with extreme performance. From simplifying data science efforts to unlocking data lakes, Big Data SQL makes the benefits of Big Data available to the largest possible group of end users. Big Data SQL gives users a single location to catalog and secure data in Hadoop systems, NoSQL systems, and Oracle Database. Seamless metadata integration enables queries that join data from Oracle Database with data from Hadoop and NoSQL databases. Utilities and conversion routines support automatic mappings from metadata stored in HCatalog (or the Hive Metastore) to Oracle tables. Enhanced access parameters give administrators the flexibility to control column mapping and data access behavior. Multiple cluster support enables one Oracle Database to query multiple Hadoop clusters and/or NoSQL systems.
  • 17
    Denodo

    Denodo Technologies

    The core technology to enable modern data integration and data management solutions. Quickly connect disparate structured and unstructured sources. Catalog your entire data ecosystem. Data stays in the sources and is accessed on demand, with no need to create another copy. Build data models that suit the needs of the consumer, even across multiple sources. Hide the complexity of your back-end technologies from the end users. The virtual model can be secured and consumed using standard SQL and other formats like REST, SOAP, and OData. Easy access to all types of data. Full data integration and data modeling capabilities. Active Data Catalog and self-service capabilities for data and metadata discovery and data preparation. Full data security and data governance capabilities. Fast, intelligent execution of data queries. Real-time data delivery in any format. Ability to create data marketplaces. Decoupling of business applications from data systems to facilitate data-driven strategies.
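    The central idea described above, consumers querying a virtual model while the data stays in its sources, can be sketched locally. SQLite stands in for the back-end systems here; the table and view names are invented and this is not Denodo syntax.

```python
# Sketch of data virtualization: a consumer-facing "virtual" view is defined
# over source tables without copying the data. SQLite is a local stand-in;
# all table and view names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE crm_customers (id INTEGER, full_name TEXT);
    CREATE TABLE billing_invoices (customer_id INTEGER, total REAL);
    INSERT INTO crm_customers VALUES (1, 'Acme Corp'), (2, 'Globex');
    INSERT INTO billing_invoices VALUES (1, 100.0), (1, 50.0), (2, 75.0);

    -- The virtual model: only a definition, no second copy of the data.
    CREATE VIEW customer_spend AS
        SELECT c.full_name, SUM(b.total) AS spend
        FROM crm_customers c
        JOIN billing_invoices b ON b.customer_id = c.id
        GROUP BY c.full_name;
""")

# Consumers query the view; the join runs against the sources on demand.
rows = conn.execute(
    "SELECT full_name, spend FROM customer_spend ORDER BY full_name"
).fetchall()
```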
  • 18
    Fraxses

    Intenda

    There are many products on the market that can help companies integrate their data, but if your priorities are to create a data-driven enterprise and to be as efficient and cost-effective as possible, then there is only one solution you should consider: Fraxses, the world's foremost distributed data platform. Fraxses provides customers with access to data on demand, delivering powerful insights via a solution that enables a data mesh or data fabric architecture. Think of a data mesh as a structure that can be laid over disparate data sources, connecting them and enabling them to function as a single environment. Unlike other data integration and virtualization platforms, the Fraxses data platform has a decentralized architecture. While Fraxses fully supports traditional data integration processes, the future lies in a new approach, whereby data is served directly to users without the need for a centrally owned data lake or platform.
  • 19
    IBM DataStage
    Accelerate AI innovation with cloud-native data integration on IBM Cloud Pak for data. AI-powered data integration, anywhere. Your AI and analytics are only as good as the data that fuels them. With a modern container-based architecture, IBM® DataStage® for IBM Cloud Pak® for Data delivers that high-quality data. It combines industry-leading data integration with DataOps, governance and analytics on a single data and AI platform. Automation accelerates administrative tasks to help reduce TCO. AI-based design accelerators and out-of-the-box integration with DataOps and data science services speed AI innovation. Parallelism and multicloud integration let you deliver trusted data at scale across hybrid or multicloud environments. Manage the data and analytics lifecycle on the IBM Cloud Pak for Data platform. Services include data science, event messaging, data virtualization and data warehousing. Parallel engine and automated load balancing.
  • 20
    Informatica PowerCenter
    Embrace agility with the market-leading scalable, high-performance enterprise data integration platform. Support the entire data integration lifecycle, from jumpstarting the first project to ensuring successful mission-critical enterprise deployments. PowerCenter, the metadata-driven data integration platform, jumpstarts and accelerates data integration projects in order to deliver data to the business more quickly than manual hand coding. Developers and analysts collaborate, rapidly prototype, iterate, analyze, validate, and deploy projects in days instead of months. PowerCenter serves as the foundation for your data integration investments. Use machine learning to efficiently monitor and manage your PowerCenter deployments across domains and locations.
  • 21
    Red Hat JBoss Data Virtualization
    Red Hat JBoss Data Virtualization is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. Red Hat JBoss Data Virtualization makes data spread across physically diverse systems, such as multiple databases, XML files, and Hadoop systems, appear as a set of tables in a local database. Provides standards-based read/write access to heterogeneous data stores in real-time. Speeds application development and integration by simplifying access to distributed data. Integrate and transform data semantics based on data consumer requirements. Provides centralized access control, and auditing through robust security infrastructure. Turn fragmented data into actionable information at the speed your business needs. Red Hat offers support and maintenance over stated time periods for the major versions of JBoss products.
  • 22
    Cohesity

    Simplify your data protection by eliminating legacy backup silos. Efficiently protect virtual, physical and cloud workloads, and ensure instant recovery. Bring compute to your data and run apps to gain insights. Protect your business from sophisticated ransomware attacks with a multilayered data security architecture. We don't need more single-purpose tools for all those silos. This patchwork leaves us more vulnerable to ransomware. Cohesity increases cyber resiliency and solves mass data fragmentation by consolidating data onto one hyper-scale platform. Modernize your data centers by consolidating backups, archives, file shares, object stores, and data used in analytics and dev/test. Our modern approach to solving these challenges is Cohesity Helios, a single next-gen data management platform that offers multiple services. Next-gen data management makes things easy to manage while keeping pace with your data growth.
  • 23
    TROCCO

    primeNumber Inc

    TROCCO is a fully managed modern data platform that enables users to integrate, transform, orchestrate, and manage their data from a single interface. It supports a wide range of connectors, including advertising platforms like Google Ads and Facebook Ads, cloud services such as AWS Cost Explorer and Google Analytics 4, various databases like MySQL and PostgreSQL, and data warehouses including Amazon Redshift and Google BigQuery. The platform offers features like Managed ETL, which allows for bulk importing of data sources and centralized ETL configuration management, eliminating the need to manually create ETL configurations individually. Additionally, TROCCO provides a data catalog that automatically retrieves metadata from data analysis infrastructure, generating a comprehensive catalog to promote data utilization. Users can also define workflows to create a series of tasks, setting the order and combination to streamline data processing.
  • 24
    Azure Stack Hub
    Part of the Azure Stack portfolio, Azure Stack Hub broadens Azure to let you run apps in an on-premises environment and deliver Azure services in your datacenter. As organizations race to digitally transform, many are finding they can move faster by using public cloud services to build on modern architectures and refresh legacy apps. Many workloads, however, must remain on-premises—for example, due to technological and regulatory obstacles. Microsoft has you covered with broad hybrid cloud options and cloud innovation for all your workloads, wherever they reside. Address latency and connectivity requirements by processing data locally in Azure Stack Hub and then aggregating it in Azure for further analytics, with common app logic across both. You can even deploy Azure Stack Hub disconnected from the internet and from Azure.
    Starting Price: $6 per vCPU per month
  • 25
    SQL Secure

    IDERA, an Idera, Inc. company

    SQL Secure helps database administrators to manage SQL Server security in physical, virtual, and cloud environments - including managed cloud databases. Unlike its competition, it provides configurable data collection, customizable templates to satisfy audits for multiple regulatory guidelines, extensive security checks and audit rules, automated server registration process, and server group tagging.
    Starting Price: $1,036 per instance
  • 26
    Lyftrondata

    Whether you want to build a governed delta lake or a data warehouse, or simply want to migrate from your traditional database to a modern cloud data warehouse, do it all with Lyftrondata. Simply create and manage all of your data workloads on one platform by automatically building your pipeline and warehouse. Analyze it instantly with ANSI SQL and BI/ML tools, and share it without worrying about writing any custom code. Boost the productivity of your data professionals and shorten your time to value. Define, categorize, and find all data sets in one place. Share these data sets with other experts with zero coding and drive data-driven insights. This data sharing ability is perfect for companies that want to store their data once, share it with other experts, and use it multiple times, now and in the future. Define datasets, apply SQL transformations, or simply migrate your SQL data processing logic to any cloud data warehouse.
  • 27
    CData Sync

    CData Software

    CData Sync is a universal data pipeline that delivers automated, continuous replication between hundreds of SaaS applications and cloud data sources and any major database or data warehouse, on-premises or in the cloud. Replicate data from hundreds of cloud data sources to popular database destinations, such as SQL Server, Redshift, S3, Snowflake, BigQuery, and more. Configuring replication is easy: log in, select the data tables to replicate, and select a replication interval. Done. CData Sync extracts data iteratively, causing minimal impact on operational systems by only querying and updating data that has been added or changed since the last update. CData Sync offers the utmost flexibility across full and partial replication scenarios and ensures that critical data is stored safely in your database of choice. Download a 30-day free trial of the Sync application or request more information at www.cdata.com/sync.
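    The incremental-extraction idea described above, copying only rows added or changed since the last run, is commonly implemented with a watermark. The sketch below is an assumption-laden local illustration, not CData Sync's implementation: SQLite stands in for both source and destination, the schema is invented, and a real sync would upsert rather than append.

```python
# Sketch of watermark-based incremental replication: each run copies only
# rows with a change timestamp newer than the last run's high-water mark.
# SQLite stands in for source and destination; the schema is invented.
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, updated_at INTEGER)")
src.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 100), (2, 150), (3, 200)])

dest = sqlite3.connect(":memory:")
dest.execute("CREATE TABLE orders (id INTEGER, updated_at INTEGER)")

def sync(last_watermark: int) -> int:
    """Copy rows changed since the watermark; return the new watermark.
    (A production sync would upsert; this demo only appends new rows.)"""
    changed = src.execute(
        "SELECT id, updated_at FROM orders WHERE updated_at > ?",
        (last_watermark,)).fetchall()
    dest.executemany("INSERT INTO orders VALUES (?, ?)", changed)
    return max((u for _, u in changed), default=last_watermark)

wm = sync(0)                                   # first run copies everything
src.execute("INSERT INTO orders VALUES (4, 250)")
wm = sync(wm)                                  # second run copies only row 4
```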
  • 28
    Adoki

    Adastra

    Adoki streamlines data transfers to and from any platform or system—whether it's a data warehouse, database, cloud service, Hadoop platform, or streaming application—on both one-time and recurring schedules. It adapts to your IT infrastructure's workload, adjusting transfer or replication processes to optimal times when needed. With centralized management and monitoring of data transfers, Adoki allows you to handle your data operations with a smaller, more efficient team.
  • 29
    WANdisco

    Since 2010, we have seen Hadoop become an essential part of the data management landscape. Over the past decade, the majority of organizations adopted Hadoop to build out their data lake infrastructure. However, while Hadoop offered a cost-effective way to store petabytes of data across a distributed environment, it introduced many complexities. The systems required specialized IT skills, and on-premises environments lacked the flexibility to easily scale up and down as usage demands changed. The management complexity and flexibility challenges associated with on-premises Hadoop environments are much better addressed in the cloud. To minimize the risks and costs associated with these data modernization efforts, many companies have chosen to automate their cloud data migration with WANdisco. LiveData Migrator is a fully self-service solution requiring no WANdisco expertise or services.
  • 30
    Presto

    Presto Foundation

    Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes. For data engineers who struggle with managing multiple query languages and interfaces to siloed databases and storage, Presto is the fast and reliable engine that provides one simple ANSI SQL interface for all your data analytics and your open lakehouse. Different engines for different workloads mean you will have to re-platform down the road. With Presto, you get one familiar ANSI SQL language and one engine for your data analytics, so you don't need to graduate to another lakehouse engine. Presto can be used for interactive and batch workloads, small and large amounts of data, and scales from a few to thousands of users. Presto gives you one simple ANSI SQL interface for all of your data in various siloed data systems, helping you join your data ecosystem together.
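    The "one SQL interface over siloed systems" idea can be mimicked locally. The sketch below uses SQLite's ATTACH to join two separate databases in a single statement; it is only an analogy for federation, not Presto syntax, and all table names are invented.

```python
# Local analogy for federated SQL: one query spans two separate databases,
# the way a federated engine joins data across siloed systems.
# SQLite's ATTACH is the stand-in; names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")             # first "system": warehouse
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.execute("INSERT INTO sales VALUES ('EU', 10.0), ('US', 20.0)")

conn.execute("ATTACH DATABASE ':memory:' AS crm")   # second, siloed "system"
conn.execute("CREATE TABLE crm.regions (region TEXT, manager TEXT)")
conn.execute("INSERT INTO crm.regions VALUES ('EU', 'Ana'), ('US', 'Bo')")

# A single SQL statement joining both sources.
rows = conn.execute("""
    SELECT r.manager, SUM(s.amount)
    FROM sales s JOIN crm.regions r ON r.region = s.region
    GROUP BY r.manager ORDER BY r.manager
""").fetchall()
```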
  • 31
    CData Query Federation Drivers
    The Query Federation Drivers provide a universal data access layer that simplifies application development and data access. The drivers make it easy to query data across systems with SQL through a common driver interface. The Query Federation Drivers enable users to embed Logical Data Warehousing capabilities into any application or process. A Logical Data Warehouse is an architectural layer that enables access to multiple data sources on demand, without relocating or transforming data in advance. Essentially, the Query Federation Drivers give users simple, SQL-based access to all of their databases, data warehouses, and cloud applications through a single interface. Developers can pick multiple data processing systems and access all of them with a single SQL-based interface.
  • 32
    IBM InfoSphere Information Server
    Set up cloud environments quickly for ad hoc development, testing, and productivity for your IT and business users. Reduce the risks and costs of maintaining your data lake by implementing comprehensive data governance, including end-to-end data lineage, for business users. Improve cost savings by delivering clean, consistent, and timely information for your data lakes, data warehouses, or big data projects, while consolidating applications and retiring outdated databases. Take advantage of automatic schema propagation to speed up job generation, type-ahead search, and backward compatibility, while designing once and executing anywhere. Create data integration flows and enforce data governance and quality rules with a cognitive design that recognizes and suggests usage patterns. Improve visibility and information governance by enabling complete, authoritative views of information with proof of lineage and quality.
    Starting Price: $16,500 per month
  • 33
    Rubrik

    A logical air gap prevents attackers from discovering your backups while our append-only file system ensures backup data can't be encrypted. You can keep unauthorized users out with globally-enforced multi-factor authentication. From backup frequency and retention to replication and archival, replace hundreds or thousands of backup jobs with just a few policies. Apply the same policies to all your workloads across on-premises and cloud. Archive your data to your public cloud provider’s blob storage service. Quickly access archived data with real-time predictive search. Search across your entire environment, down to the file level, and select the right point in time to recover. Reduce recovery time from days and weeks to hours or less. Rubrik and Microsoft have joined forces to help you build a cyber-resilient business. Reduce the risk of backup data breach, loss, or theft by storing immutable copies of your data in a Rubrik-hosted cloud environment, isolated from your core workloads.
  • 34
    Alibaba Cloud Data Integration
    Alibaba Cloud Data Integration is a comprehensive data synchronization platform that facilitates both real-time and offline data exchange across various data sources, networks, and locations. It supports data synchronization between more than 400 pairs of disparate data sources, including RDS databases, semi-structured storage, non-structured storage (such as audio, video, and images), NoSQL databases, and big data storage. The platform also enables real-time data reading and writing between data sources such as Oracle, MySQL, and DataHub. Data Integration allows users to schedule offline tasks by setting specific trigger times, including year, month, day, hour, and minute, simplifying the configuration of periodic incremental data extraction. It integrates seamlessly with DataWorks data modeling, providing an operations and maintenance integrated workflow. The platform leverages the computing capability of Hadoop clusters to synchronize HDFS data to MaxCompute.
  • 35
    Redgate Deploy

    Redgate Software

    Standardize deployments for SQL Server, Oracle, and 18 other databases; increase the frequency and reliability of database deployments; adopt a flexible toolchain across teams; catch errors and speed up development with continuous integration; and get oversight of every change to your databases. From version control to continuous delivery, Redgate Deploy lets your teams automate database development processes so you can accelerate software delivery and ensure quality code. Building on your continuous delivery process for applications, and incorporating Redgate's industry-leading tools and Flyway migrations framework, Redgate Deploy extends DevOps to your databases. Automate your deployments for faster delivery of database changes through your pipeline. To guarantee quality and consistency, Redgate Deploy provides repeatable processes that you can standardize on at every stage, from version control to live deployment.
    Starting Price: $2,499 per user per year
  • 36
    SAS Data Management

    SAS Data Management

    SAS Institute

    No matter where your data is stored, from cloud, to legacy systems, to data lakes, like Hadoop, SAS Data Management helps you access the data you need. Create data management rules once and reuse them, giving you a standard, repeatable method for improving and integrating data, without additional cost. As an IT expert, it's easy to get entangled in tasks outside your normal duties. SAS Data Management enables your business users to update data, tweak processes and analyze results themselves, freeing you up for other projects. Plus, a built-in business glossary, as well as SAS and third-party metadata management and lineage visualization capabilities, keep everyone on the same page. SAS Data Management technology is truly integrated, which means you’re not forced to work with a solution that’s been cobbled together. All our components, from data quality to data federation technology, are part of the same architecture.
  • 37
    Dremio

    Dremio

    Dremio

    Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable.
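The idea of a semantic layer built from searchable virtual datasets can be sketched in a few lines: a virtual dataset stores a named transformation over base data rather than a copy, and a catalog indexes the metadata for search. This is a toy illustration of the concept, not Dremio's implementation; all names below are hypothetical:

```python
class SemanticLayer:
    """Toy semantic layer: virtual datasets are named queries over base
    data, indexed in a searchable catalog."""

    def __init__(self, base_tables):
        self.base = base_tables          # physical data, never copied
        self.catalog = {}                # name -> (description, query fn)

    def define(self, name, description, query):
        # A virtual dataset stores a transformation, not materialized rows.
        self.catalog[name] = (description, query)

    def search(self, term):
        # Business users find datasets by searching indexed metadata.
        return [n for n, (desc, _) in self.catalog.items()
                if term.lower() in n.lower() or term.lower() in desc.lower()]

    def query(self, name):
        _, fn = self.catalog[name]
        return fn(self.base)

layer = SemanticLayer({"sales": [{"region": "EU", "amt": 10},
                                 {"region": "US", "amt": 20}]})
layer.define("eu_sales", "Sales filtered to the EU region",
             lambda t: [r for r in t["sales"] if r["region"] == "EU"])
print(layer.search("EU"), layer.query("eu_sales"))
```

Because the virtual dataset is just a stored query, redefining it changes what every consumer sees without moving any data.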
  • 38
    SAP HANA
    SAP HANA in-memory database is for transactional and analytical workloads with any data type — on a single data copy. It breaks down the transactional and analytical silos in organizations, for quick decision-making, on premise and in the cloud. Innovate without boundaries on a database management system, where you can develop intelligent and live solutions for quick decision-making on a single data copy. And with advanced analytics, you can support next-generation transactional processing. Build data solutions with cloud-native scalability, speed, and performance. With the SAP HANA Cloud database, you can gain trusted, business-ready information from a single solution, while enabling security, privacy, and anonymization with proven enterprise reliability. An intelligent enterprise runs on insight from data – and more than ever, this insight must be delivered in real time.
  • 39
    DBSync

    DBSync

    DBSync

Start integrating your apps with clicks, not code. Prebuilt templates and an easy-to-use interface will have you up and running within an hour. DBSync Cloud Workflow is a robust integration platform that can run SaaS-based or on the cloud, and is accessible through an API interface as well as on laptops, desktops, mobile phones, and tablets. Connect to apps, CRMs, accounting systems, popular databases, and big data stores like Cassandra, Hive, and more. Any connector can be integrated easily through a custom workflow. Leverage out-of-the-box integration maps and processes for common use cases such as CRM-to-accounting integration, data replication, and more. Use them as is or extend them to fit your needs. Develop, manage, and automate complex business processes into simple workflows. Support for newer archiving technologies like Cassandra, Hive, Amazon Redshift, and more.
    Starting Price: $2,400 per year
  • 40
    Navicat Premium
    Navicat Premium is a database development tool that allows you to simultaneously connect to MySQL, MariaDB, MongoDB, SQL Server, Oracle, PostgreSQL, and SQLite databases from a single application. Compatible with cloud databases like Amazon RDS, Amazon Aurora, Amazon Redshift, Microsoft Azure, Oracle Cloud, Google Cloud and MongoDB Atlas. You can quickly and easily build, manage and maintain your databases. Data Transfer, Data Synchronization and Structure Synchronization help you migrate your data easier and faster for less overhead. Deliver detailed, step-by-step guidelines for transferring data across various DBMS. Compare and synchronize databases with Data and Structure Synchronization. Set up and deploy the comparisons in seconds, and get the detailed script to specify the changes you want to execute.
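Structure synchronization of the kind described boils down to diffing two schemas and emitting a change script. A heavily simplified sketch (real tools such as Navicat also compare types, indexes, constraints, and column order; the `customers` table here is a made-up example):

```python
def structure_diff(table, source_cols, target_cols):
    """Emit ALTER statements that bring `target` in line with `source`.

    Each argument maps column name -> declared type. This toy version
    only detects added and dropped columns.
    """
    stmts = []
    for col, coltype in source_cols.items():
        if col not in target_cols:
            stmts.append(f"ALTER TABLE {table} ADD COLUMN {col} {coltype};")
    for col in target_cols:
        if col not in source_cols:
            stmts.append(f"ALTER TABLE {table} DROP COLUMN {col};")
    return stmts

script = structure_diff(
    "customers",
    {"id": "INTEGER", "email": "TEXT", "joined": "DATE"},
    {"id": "INTEGER", "fax": "TEXT"},
)
print("\n".join(script))
```

Generating the script rather than applying changes directly is what lets you review exactly which statements will run before executing them.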
    Starting Price: $64.99 per month
  • 41
    Informatica Intelligent Cloud Services
    Go beyond table stakes with the industry’s most comprehensive, microservices-based, API-driven, and AI-powered enterprise iPaaS. Powered by the CLAIRE engine, IICS supports any cloud-native pattern, from data, application, and API integration to MDM. Our global distribution and multi-cloud support covers Microsoft Azure, AWS, Google Cloud Platform, Snowflake, and more. IICS offers the industry’s highest enterprise scale and trust, with the industry’s most security certifications. Our enterprise iPaaS includes multiple cloud data management products designed to accelerate productivity and improve speed and scale. Informatica is a Leader again in the Gartner 2020 Magic Quadrant for Enterprise iPaaS. Get real-world insights and reviews for Informatica Intelligent Cloud Services. Try our cloud services—for free. Our customers are our number-one priority—across products, services, and support. That’s why we’ve earned top marks in customer loyalty for 12 years in a row.
  • 42
    Stitch
    Stitch is a cloud-based platform for ETL – extract, transform, and load. More than a thousand companies use Stitch to move billions of records every day from SaaS applications and databases into data warehouses and data lakes.
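The extract-transform-load flow that platforms like Stitch manage can be sketched as three small stages. This is a generic illustration of ETL, not Stitch's API; the record shapes are invented:

```python
def extract(source):
    # Pull raw records from a source (here, a list standing in for an API).
    return list(source)

def transform(records):
    # Normalize field names and types before loading.
    return [{"id": int(r["ID"]), "email": r["Email"].lower()} for r in records]

def load(rows, warehouse):
    # Append into the destination table (a list standing in for a warehouse).
    warehouse.extend(rows)
    return len(rows)

warehouse = []
raw = [{"ID": "1", "Email": "A@Example.com"},
       {"ID": "2", "Email": "B@Example.com"}]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)
```

A managed service adds the hard parts around this core: scheduling, retries, schema drift handling, and scaling to billions of records.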
  • 43
    ZetaAnalytics

    ZetaAnalytics

    Halliburton

The ZetaAnalytics product requires a compatible database appliance for its Data Warehouse. Landmark has qualified the ZetaAnalytics software using Teradata, EMC Greenplum, and IBM Netezza. Please see the ZetaAnalytics Release Notes for the most up-to-date qualified versions. Before installing and configuring ZetaAnalytics software, ensure that the Data Warehouse you use for drilling data is created and running. Scripts to create the various Zeta-specific database components within the Data Warehouse will need to be run as part of the installation process. These require database administrator (DBA) rights. The ZetaAnalytics product requires Apache Hadoop for model scoring and real-time streaming. If you do not already have an Apache Hadoop cluster installed in your environment, please install it before running the ZetaAnalytics installer, which will prompt you for the name and port number of your Hadoop Name Server and Map Reducer.
  • 44
    Azure Database Migration Service
    Easily migrate your data, schema, and objects from multiple sources to the cloud at scale. Azure Database Migration Service is a tool that helps you simplify, guide, and automate your database migration to Azure. Migrate your database and server objects, including user accounts, agent jobs, and SQL Server Integration Services (SSIS) packages all at once. Migrate your data to Azure from the most common database management systems. Whether you’re moving from an on-premises database or another cloud, Database Migration Service supports key migration scenarios such as SQL Server, MySQL, PostgreSQL, and MongoDB. Save time and effort by automating your move to Azure with PowerShell. Database Migration Service works with PowerShell cmdlets to automatically migrate a list of databases. Supports Microsoft SQL Server, MySQL, PostgreSQL, and MongoDB migration to Azure from on-premises and other clouds.
  • 45
    Etleap

    Etleap

    Etleap

Etleap was built from the ground up on AWS to support Redshift and Snowflake data warehouses and S3/Glue data lakes. Their solution simplifies and automates ETL by offering fully managed ETL-as-a-service. Etleap's data wrangler and modeling tools let users control how data is transformed for analysis, without writing any code. Etleap monitors and maintains data pipelines for availability and completeness, eliminating the need for constant maintenance, and centralizes data from 50+ disparate sources and silos into your data warehouse or data lake.
  • 46
    The Autonomous Data Engine
    There is a consistent “buzz” today about how leading companies are harnessing big data for competitive advantage. Your organization is striving to become one of those market-leading companies. However, the reality is that over 80% of big data projects fail to deploy to production because project implementation is a complex, resource-intensive effort that takes months or even years. The technology is complicated, and the people who have the necessary skills are either extremely expensive or impossible to find. Automates the complete data workflow from source to consumption. Automates migration of data and workloads from legacy Data Warehouse systems to big data platforms. Automates orchestration and management of complex data pipelines in production. Alternative approaches such as stitching together multiple point solutions or custom development are expensive, inflexible, time-consuming and require specialized skills to assemble and maintain.
  • 47
    AtScale

    AtScale

    AtScale

    AtScale helps accelerate and simplify business intelligence resulting in faster time-to-insight, better business decisions, and more ROI on your Cloud analytics investment. Eliminate repetitive data engineering tasks like curating, maintaining and delivering data for analysis. Define business definitions in one location to ensure consistent KPI reporting across BI tools. Accelerate time to insight from data while efficiently managing cloud compute costs. Leverage existing data security policies for data analytics no matter where data resides. AtScale’s Insights workbooks and models let you perform Cloud OLAP multidimensional analysis on data sets from multiple providers – with no data prep or data engineering required. We provide built-in easy to use dimensions and measures to help you quickly derive insights that you can use for business decisions.
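Multidimensional (OLAP-style) analysis of the kind described reduces to grouping rows by dimension values and aggregating a measure. A minimal sketch of that core operation (an illustration only, not AtScale's engine; the `region`/`year`/`revenue` fields are hypothetical):

```python
from collections import defaultdict

def aggregate(rows, dimensions, measure):
    """Group rows by the given dimensions and sum a measure --
    the basic cell computation behind an OLAP cube."""
    cube = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dimensions)
        cube[key] += row[measure]
    return dict(cube)

rows = [
    {"region": "EU", "year": 2025, "revenue": 100.0},
    {"region": "EU", "year": 2025, "revenue": 50.0},
    {"region": "US", "year": 2025, "revenue": 75.0},
]
result = aggregate(rows, ["region", "year"], "revenue")
print(result)
```

Defining dimensions and measures once, in one place, is what keeps KPI results consistent no matter which BI tool issues the query.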
  • 48
    Stacksync

    Stacksync

    Stacksync

The first AI-native Enterprise Integration Platform. Real-time sync, workflow automation, event queues, databases, EDI, and monitoring, without stitching together MuleSoft, Fivetran, Kafka, and Zapier. Keep your systems perfectly aligned with Stacksync’s reliable two-way data synchronization. Stop building brittle API scripts. With Stacksync, you can trigger complex automated workflows using simple SQL commands. Transform legacy EDI complexity into simple database interactions. Handle massive traffic spikes without losing a single data point. Interact with your CRM, ERP, and payment tools as if they were just another table in your database. Gain complete visibility into your data pipeline health. The only integration cloud built for real-time.
    Starting Price: $1000+/month
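Two-way synchronization like the description mentions can be reduced to a reconciliation pass where the most recently updated copy of each record wins. The sketch below is a simplified stand-in for real two-way sync (which must also handle deletes, conflicts, and partial failures); the CRM/ERP record shapes are hypothetical:

```python
def two_way_sync(a, b):
    """Reconcile two record stores so both end up identical.

    Last-writer-wins by the `updated` version counter.
    """
    for key in set(a) | set(b):
        ra, rb = a.get(key), b.get(key)
        if ra is None or (rb is not None and rb["updated"] > ra["updated"]):
            a[key] = rb
        elif rb is None or ra["updated"] > rb["updated"]:
            b[key] = ra
    return a, b

crm = {"acct-1": {"name": "Acme", "updated": 2}}
erp = {"acct-1": {"name": "Acme Corp", "updated": 5},
       "acct-2": {"name": "Globex", "updated": 1}}
two_way_sync(crm, erp)
print(crm == erp)  # True
```

Production systems replace the version counter with change feeds or triggers so a sync pass never has to scan every record.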
  • 49
    Qlik Replicate
    Qlik Replicate is a high-performance data replication tool offering optimized data ingestion from a broad array of data sources and platforms and seamless integration with all major big data analytics platforms. Replicate supports bulk replication as well as real-time incremental replication using CDC (change data capture). Our unique zero-footprint architecture eliminates unnecessary overhead on your mission-critical systems and facilitates zero-downtime data migrations and database upgrades. Database replication enables you to move or consolidate data from a production database to a newer version of the database, another type of computing environment, or an alternative database management system, to migrate data from SQL Server to Oracle, for example. Data replication can be used to offload production data from a database, and load it to operational data stores or data warehouses for reporting or analytics.
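The bulk-load-then-CDC pattern described above can be sketched as an initial full copy followed by applying a stream of captured changes keyed by primary key. This is a minimal illustration of change data capture replication, not Replicate's implementation; real CDC reads the changes from the database transaction log:

```python
def bulk_load(source_rows):
    # Initial full copy of the source table, keyed by primary key.
    return {row["id"]: dict(row) for row in source_rows}

def apply_cdc(target, changes):
    """Apply a captured change stream (insert/update/delete) in order."""
    for op, row in changes:
        if op == "delete":
            target.pop(row["id"], None)
        else:  # insert or update both upsert by primary key
            target[row["id"]] = dict(row)
    return target

target = bulk_load([{"id": 1, "qty": 5}, {"id": 2, "qty": 7}])
apply_cdc(target, [
    ("update", {"id": 1, "qty": 6}),
    ("insert", {"id": 3, "qty": 9}),
    ("delete", {"id": 2}),
])
print(sorted(target))  # [1, 3]
```

Because changes are applied incrementally after the bulk load, the source system keeps serving traffic during the migration, which is what enables zero-downtime cutovers.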
  • 50
    Orbit Analytics

    Orbit Analytics

    Orbit Analytics

Empower your business by leveraging a true self-service reporting and analytics platform. Powerful and scalable, Orbit’s operational reporting and business intelligence software enables users to create their own analytics and reports. Orbit Reporting + Analytics offers pre-built integration with enterprise resource planning (ERP) and key cloud business applications that include PeopleSoft, Oracle E-Business Suite, Salesforce, Taleo, and more. With Orbit, you can quickly and efficiently find answers from any data source, determine opportunities, and make smart, data-driven decisions. Orbit comes with more than 200 integrators and connectors that allow you to combine data from multiple data sources, so you can harness the power of collective knowledge to make informed decisions. Orbit Adapters connect with your key business systems and are designed to seamlessly inherit authentication, data security, and business roles, and apply them to reporting.