25 Integrations with Apache Zeppelin
View a list of Apache Zeppelin integrations and software that integrates with Apache Zeppelin below. Compare the best Apache Zeppelin integrations, along with the features, ratings, user reviews, and pricing of each. Here are the current Apache Zeppelin integrations in 2025:
1
Python
Python
The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists. Python is easy to learn and use, whether you're a first-time programmer or experienced with other languages. The community hosts conferences and meetups to collaborate on code, and much more. Python's documentation will help you along the way, and the mailing lists will keep you in touch. The Python Package Index (PyPI) hosts thousands of third-party modules for Python; together with the standard library, they allow for endless possibilities.
Starting Price: Free
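As a quick illustration of the argument styles mentioned above, here is a minimal Python sketch; the function and its names are invented for the example:

```python
# Demonstrates mandatory, optional (defaulted), arbitrary positional,
# and keyword arguments in one function signature.
def report(name, *values, unit="items", **labels):
    total = sum(values)  # *values gathers any extra positional arguments
    tags = ", ".join(f"{k}={v}" for k, v in labels.items())  # **labels gathers keywords
    return f"{name}: {total} {unit} ({tags})"

print(report("inventory", 1, 2, 3, unit="boxes", source="warehouse"))
# inventory: 6 boxes (source=warehouse)
```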
2
Domino Enterprise MLOps Platform
Domino Data Lab
The Domino platform helps data science teams improve the speed, quality, and impact of data science at scale. Domino is open and flexible, empowering professional data scientists to use their preferred tools and infrastructure. Data science models get into production fast and are kept operating at peak performance with integrated workflows. Domino also delivers the security, governance, and compliance that enterprises expect. The Self-Service Infrastructure Portal makes data science teams more productive with easy access to their preferred tools, scalable compute, and diverse data sets. The Integrated Model Factory includes a workbench, model and app deployment, and integrated monitoring to rapidly experiment, deploy the best models in production, ensure optimal performance, and collaborate across the end-to-end data science lifecycle. The System of Record allows teams to easily find, reuse, reproduce, and build on any data science work to amplify innovation.
3
Java
Oracle
The Java™ programming language is a general-purpose, concurrent, strongly typed, class-based object-oriented language. It is normally compiled to the bytecode instruction set and binary format defined in the Java Virtual Machine Specification. In the Java programming language, all source code is first written in plain text files ending with the .java extension. Those source files are then compiled into .class files by the javac compiler. A .class file does not contain code that is native to your processor; it instead contains bytecodes, the machine language of the Java Virtual Machine (Java VM). The java launcher tool then runs your application with an instance of the Java Virtual Machine.
Starting Price: Free
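The compile-and-run pipeline described above can be exercised from a short script. The sketch below drives javac and java from Python (the single language used for examples on this page) and assumes a JDK is on the PATH; the class name and message are invented:

```python
# Write a Java source file, compile it to bytecode, then run it on the JVM.
import pathlib
import subprocess

pathlib.Path("Hello.java").write_text(
    'public class Hello { public static void main(String[] args) '
    '{ System.out.println("Hello from the JVM"); } }'
)
subprocess.run(["javac", "Hello.java"], check=True)  # .java source -> Hello.class bytecode
subprocess.run(["java", "Hello"], check=True)        # the java launcher runs the bytecode
```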
4
Elasticsearch
Elastic
Elastic is a search company. As the creators of the Elastic Stack (Elasticsearch, Kibana, Beats, and Logstash), Elastic builds self-managed and SaaS offerings that make data usable in real time and at scale for search, logging, security, and analytics use cases. Elastic's global community has more than 100,000 members across 45 countries. Since its initial release, Elastic's products have achieved more than 400 million cumulative downloads. Today thousands of organizations, including Cisco, eBay, Dell, Goldman Sachs, Groupon, HP, Microsoft, Netflix, The New York Times, Uber, Verizon, Yelp, and Wikipedia, use the Elastic Stack and Elastic Cloud to power mission-critical systems that drive new revenue opportunities and massive cost savings. Elastic has headquarters in Amsterdam, The Netherlands, and Mountain View, California, and has over 1,000 employees in more than 35 countries around the world.
5
Apache Cassandra
Apache Software Foundation
The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance. Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra's support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages.
6
Apache Hive
Apache Software Foundation
The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive. Apache Hive is an open source project run by volunteers at the Apache Software Foundation. Previously it was a subproject of Apache® Hadoop®, but it has now graduated to become a top-level project of its own. We encourage you to learn about the project and contribute your expertise. Without Hive, SQL-style queries over distributed data must be implemented against the low-level MapReduce Java API. Hive provides the necessary SQL abstraction to integrate SQL-like queries (HiveQL) with the underlying Java, without the need to implement queries in the low-level Java API.
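To show that SQL abstraction in practice, here is a sketch that submits a HiveQL query from Python; it assumes a HiveServer2 instance on localhost and the third-party `pyhive` package, and the table name is invented:

```python
# Run a HiveQL aggregation without writing any MapReduce Java code.
from pyhive import hive  # third-party package: pip install 'pyhive[hive]'

conn = hive.connect(host="localhost", port=10000)  # HiveServer2 defaults
cursor = conn.cursor()
cursor.execute("SELECT word, COUNT(*) AS n FROM docs GROUP BY word")  # hypothetical table
for row in cursor.fetchall():
    print(row)
```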
7
Alluxio
Alluxio
Alluxio is the world's first open source data orchestration technology for analytics and AI in the cloud. It bridges the gap between data-driven applications and storage systems, bringing data from the storage tier closer to data-driven applications and making it easily accessible, enabling applications to connect to numerous storage systems through a common interface. Alluxio's memory-first tiered architecture enables data access at speeds orders of magnitude faster than existing solutions. Imagine, as an IT leader, having the flexibility to choose any services available in the public cloud and on premises. And imagine being able to scale storage for your data lakes with control over data locality and protection for your organization. With these goals in mind, NetApp and Alluxio are joining forces to help customers adapt to new requirements for modernizing data architecture with low-touch operations for analytics, machine learning, and artificial intelligence workflows.
Starting Price: 26¢ Per SW Instance Per Hour
8
Scala
Scala
Scala combines object-oriented and functional programming in one concise, high-level language. Scala's static types help avoid bugs in complex applications, and its JVM and JavaScript runtimes let you build high-performance systems with easy access to huge ecosystems of libraries. The Scala compiler is smart about static types. Most of the time, you need not tell it the types of your variables. Instead, its powerful type inference will figure them out for you. In Scala, case classes are used to represent structural data types. They implicitly equip the class with meaningful toString, equals and hashCode methods, as well as the ability to be deconstructed with pattern matching. In Scala, functions are values, and can be defined as anonymous functions with a concise syntax.
Starting Price: Free
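Since the examples on this page are kept in a single language, here is a rough Python analogue of the case classes and pattern matching described above, using dataclasses and structural pattern matching (Python 3.10+); the Point class is invented:

```python
# Dataclasses give auto-generated repr/equality, much like Scala case classes,
# and can be deconstructed with structural pattern matching.
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

def describe(p: Point) -> str:
    match p:
        case Point(x=0, y=0):
            return "origin"
        case Point(x=x, y=y):
            return f"point at ({x}, {y})"

print(describe(Point(0, 0)))  # origin
print(describe(Point(2, 3)))  # point at (2, 3)
```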
9
R
The R Foundation
R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, …) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed.
Starting Price: Free
10
Markdown
Markdown
Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML). Thus, "Markdown" is two things: (1) a plain text formatting syntax; and (2) a software tool, written in Perl, that converts the plain text formatting to HTML. See the Syntax page for details pertaining to Markdown's formatting syntax. You can try it out, right now, using the online Dingus. The overriding design goal for Markdown's formatting syntax is to make it as readable as possible. The idea is that a Markdown-formatted document should be publishable as-is, as plain text, without looking like it's been marked up with tags or formatting instructions. While Markdown's syntax has been influenced by several existing text-to-HTML filters, the single biggest source of inspiration for Markdown's syntax is the format of plain text email.
Starting Price: Free
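To make the two roles concrete, here is a minimal sketch that converts Markdown text to HTML; it assumes the third-party `markdown` package from PyPI, one of many reimplementations of the original Perl tool:

```python
# Convert a small Markdown document to HTML.
import markdown  # third-party package: pip install markdown

text = "# Heading\n\nSome *emphasized* text and a [link](https://example.com)."
print(markdown.markdown(text))
# roughly: <h1>Heading</h1> <p>Some <em>emphasized</em> text and a <a ...>link</a>.</p>
```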
11
Yandex Data Proc
Yandex
You select the size of the cluster, node capacity, and a set of services, and Yandex Data Proc automatically creates and configures Spark and Hadoop clusters and other components. Collaborate by using Zeppelin notebooks and other web apps via a UI proxy. You get full control of your cluster with root permissions on each VM. Install your own applications and libraries on running clusters without having to restart them. Yandex Data Proc uses instance groups to automatically increase or decrease the computing resources of compute subclusters based on CPU usage indicators. Data Proc allows you to create managed Hive clusters, which can reduce the probability of failures and losses caused by metadata unavailability. Save time on building ETL pipelines, pipelines for training and developing models, and other iterative tasks. The Data Proc operator is already built into Apache Airflow.
Starting Price: $0.19 per hour
12
Warp 10
SenX
Warp 10 is a modular open source platform that collects, stores, and analyzes data from sensors. Shaped for the IoT with a flexible data model, Warp 10 provides a unique and powerful framework to simplify your processes from data collection to analysis and visualization, with support for geolocated data in its core model (called Geo Time Series). Warp 10 is both a time series database and a powerful analytics environment, allowing you to perform statistics, feature extraction for training models, filtering and cleaning of data, detection of patterns and anomalies, synchronization, and even forecasting. The analysis environment can be integrated within a large ecosystem of software components such as Spark, Kafka Streams, Hadoop, Jupyter, Zeppelin, and many more. It can also access data stored in many existing solutions: relational or NoSQL databases, search engines, and S3-type object storage systems.
13
Apache HBase
The Apache Software Foundation
Use Apache HBase™ when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables (billions of rows by millions of columns) atop clusters of commodity hardware. HBase offers automatic failover support between RegionServers, an easy-to-use Java API for client access, and a Thrift gateway and RESTful web service that support XML, Protobuf, and binary data encoding options. It also supports exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
14
PostgreSQL
PostgreSQL Global Development Group
PostgreSQL is a powerful, open-source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. There is a wealth of information describing how to install and use PostgreSQL in the official documentation. The open-source community provides many helpful places to become familiar with PostgreSQL, discover how it works, and find career opportunities. Learn more on how to engage with the community. The PostgreSQL Global Development Group has released an update to all supported versions of PostgreSQL, including 15.1, 14.6, 13.9, 12.13, 11.18, and 10.23. This release fixes 25 bugs reported over the last several months. This is the final release of PostgreSQL 10, which will no longer receive security and bug fixes. If you are running PostgreSQL 10 in a production environment, we suggest that you make plans to upgrade.
15
Hadoop
Apache Software Foundation
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thus delivering a highly available service on top of a cluster of computers, each of which may be prone to failures. A wide variety of companies and organizations use Hadoop for both research and production. Users are encouraged to add themselves to the Hadoop PoweredBy wiki page. Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2).
16
Apache Spark
Apache Software Foundation
Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
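As one example of driving Spark interactively from Python, as the description mentions, here is a minimal PySpark sketch; it assumes the `pyspark` package is installed, and the DataFrame contents are invented:

```python
# Build a small DataFrame and run a parallel aggregation on it.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("zeppelin-demo").getOrCreate()
df = spark.createDataFrame(
    [("alice", 3), ("bob", 5), ("alice", 7)],
    ["name", "value"],
)
df.groupBy("name").sum("value").show()  # high-level operators instead of hand-written loops
spark.stop()
```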
17
Apache Geode
Apache
Build high-speed, data-intensive applications that elastically meet performance requirements at any scale. Take advantage of Apache Geode's unique technology that blends advanced techniques for data replication, partitioning, and distributed processing. Apache Geode provides a database-like consistency model, reliable transaction processing, and a shared-nothing architecture that maintains very low latency performance with high-concurrency processing. Data can easily be partitioned (sharded) or replicated between nodes, allowing performance to scale as needed. Durability is ensured through redundant in-memory copies and disk-based persistence. Super-fast write-ahead logging (WAL) persistence comes with a shared-nothing architecture that is optimized for fast parallel recovery of nodes or an entire cluster.
18
JavaScript
JavaScript
JavaScript is a scripting and programming language for the web that enables developers to build dynamic elements on the web. Over 97% of the websites in the world use client-side JavaScript, making it one of the most important scripting languages on the web. Strings in JavaScript are contained within a pair of either single quotation marks '' or double quotation marks "". Both represent strings, but be sure to choose one style and stick with it: if you start with a single quote, you need to end with a single quote. There are pros and cons to using both; for example, single quotes tend to make it easier to write HTML within JavaScript, as you don't have to escape double quotes. If you're trying to use quotation marks inside a string, you'll need to use the opposite quotation marks inside and outside of the string, whether single or double.
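Python, the language used for examples on this page, shares this quoting rule, so the convention can be illustrated as follows; the strings are invented:

```python
# Use the opposite quote style inside a string, or escape the delimiter.
outer_double = "He said 'hello' to the crowd"  # single quotes inside double quotes
outer_single = 'She replied "hi" right back'   # double quotes inside single quotes
escaped = "She replied \"hi\" right back"      # escaping the delimiter also works
print(outer_double)
print(outer_single)
print(escaped)
```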
19
SQL
SQL
SQL is a domain-specific programming language used for accessing, managing, and manipulating relational databases and relational database management systems.
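For a concrete taste of the language, here is a minimal, self-contained sketch that runs a few SQL statements through Python's built-in sqlite3 module; the table and rows are invented:

```python
# Create a table, insert rows, and query them back with SQL.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
for row in conn.execute("SELECT id, name FROM users ORDER BY name"):
    print(row)  # (1, 'alice') then (2, 'bob')
conn.close()
```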
20
Zepl
Zepl
Sync, search, and manage all the work across your data science team. Zepl's powerful search lets you discover and reuse models and code. Use Zepl's enterprise collaboration platform to query data from Snowflake, Athena, or Redshift and build your models in Python. Use pivoting and dynamic forms for enhanced interactions with your data using heatmap, radar, and Sankey charts. Zepl creates a new container every time you run your notebook, providing you with the same image each time you run your models. Invite team members to join a shared space and work together in real time, or simply leave comments on a notebook. Use fine-grained access controls to share your work: allow others read, edit, and run access to enable collaboration and distribution. All notebooks are auto-saved and versioned. You can name, manage, and roll back all versions through an easy-to-use interface, and export seamlessly to GitHub.
21
QueryPie
QueryPie
QueryPie is a centralized platform to manage scattered data sources and security policies all in one place. Put your company on the fast track to success without changing the existing data environment. Data governance is vital to today's data-driven world. Ensure you're on the right side of data governance standards while giving many users access to growing amounts of critical information. Establish data access policies by including key attributes such as IP address and access time. Privilege types can be created based on SQL commands classified as DML, DCL, and DDL to secure data analysis and editing. Manage details of SQL events at a glance and discover user behavior and potential security concerns by browsing logs based on permissions. All histories can be exported as a file and used for reporting purposes.
22
Timbr.ai
Timbr.ai
The smart semantic layer integrates data with business meaning and relationships, unifies metrics, and accelerates the delivery of data products with 90% shorter SQL queries. Easily model data using business terms to give it common meaning and align business metrics. Define semantic relationships that substitute for JOINs so queries become much simpler. Use hierarchies and classifications to better understand data. Automatically map data to the semantic model. Join multiple data sources with a powerful distributed SQL engine to query data at scale. Consume data as a connected semantic graph. Boost performance and save compute costs with an intelligent cache engine and materialized views, and benefit from advanced query optimizations. Connect to most clouds, data lakes, data warehouses, databases, and any file format. Timbr empowers you to work with your data sources seamlessly. When a query is run, Timbr optimizes the query and pushes it down to the backend.
23
Apache Flink
Apache Software Foundation
Apache Flink is a framework and distributed processing engine for stateful computations over unbounded and bounded data streams. Flink has been designed to run in all common cluster environments and to perform computations at in-memory speed and at any scale. Any kind of data is produced as a stream of events: credit card transactions, sensor measurements, machine logs, and user interactions on a website or mobile application are all generated as streams. Apache Flink excels at processing both unbounded and bounded data sets. Precise control of time and state enables Flink's runtime to run any kind of application on unbounded streams. Bounded streams are internally processed by algorithms and data structures that are specifically designed for fixed-size data sets, yielding excellent performance. Flink is designed to work well with each of the common resource managers.
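As a small illustration of processing a bounded stream, here is a sketch using the PyFlink API; it assumes the `apache-flink` package is installed, and the data and job name are invented:

```python
# Double each element of a bounded stream and print the results.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
# A bounded stream built from a collection; unbounded sources (e.g. Kafka) plug in the same way.
env.from_collection([1, 2, 3, 4]).map(lambda x: x * 2).print()
env.execute("doubling_job")
```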
24
Apache Ignite
Apache Ignite
Use Ignite as a traditional SQL database by leveraging JDBC drivers, ODBC drivers, or the native SQL APIs that are available for Java, C#, C++, Python, and other programming languages. Seamlessly join, group, aggregate, and order your distributed in-memory and on-disk data. Accelerate your existing applications by 100x using Ignite as an in-memory cache or in-memory data grid that is deployed over one or more external databases. Think of a cache that you can query with SQL, transact, and compute on. Build modern applications that support transactional and analytical workloads by using Ignite as a database that scales beyond the available memory capacity. Ignite allocates memory for your hot data and goes to disk whenever applications query cold records. Execute kilobyte-size custom code over petabytes of data. Turn your Ignite database into a distributed supercomputer for low-latency calculations, complex analytics, and machine learning.
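Both access styles mentioned above, key-value and SQL, can be sketched with the third-party `pyignite` thin client, assuming a local Ignite node is listening on the default thin-client port 10800; the cache name and data are invented:

```python
# Key-value and SQL access against a running Ignite node.
from pyignite import Client  # third-party package: pip install pyignite

client = Client()
client.connect("127.0.0.1", 10800)

cache = client.get_or_create_cache("quotes")  # behaves like a distributed dictionary
cache.put(1, "hello")
print(cache.get(1))  # hello

for row in client.sql("SELECT 1"):  # the same cluster also answers SQL queries
    print(row)
```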
25
OctoData
SoyHuCe
OctoData is deployed at low cost in cloud hosting and includes personalized support, from the definition of your needs to the use of the solution. OctoData is based on innovative open-source technologies and is designed to adapt and open up to future possibilities. Its Supervisor offers a management interface that allows you to quickly capture, store, and exploit a growing quantity and variety of data. With OctoData, prototype and industrialize your massive data recovery solutions in the same environment, including in real time. By exploiting your data, you can obtain precise reports, explore new possibilities, increase your productivity, and gain in profitability.