9 Integrations with AWS Lake Formation
View a list of AWS Lake Formation integrations and software that integrates with AWS Lake Formation below. Compare the best AWS Lake Formation integrations as well as features, ratings, user reviews, and pricing of software that integrates with AWS Lake Formation. Here are the current AWS Lake Formation integrations in 2026:
1
Amazon S3
Amazon
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9's) of durability, and stores data for millions of applications for companies all around the world. Scale your storage resources up and down to meet fluctuating demands, without upfront investments or resource procurement cycles.
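Data lakes on S3 typically organize objects under Hive-style partition prefixes so that query engines can prune data by date. A minimal sketch of that key layout, with hypothetical bucket and table names:

```python
# Sketch: composing a Hive-style partitioned key layout for a data lake on S3.
# The bucket and table names are hypothetical placeholders.
from datetime import date

def partitioned_key(table: str, day: date, part: int) -> str:
    """Build a year=/month=/day= partition path under a table prefix."""
    return (
        f"{table}/year={day.year}/month={day.month:02d}/"
        f"day={day.day:02d}/part-{part:05d}.parquet"
    )

key = partitioned_key("sales_events", date(2024, 3, 7), 0)
print(f"s3://example-data-lake/{key}")
# -> s3://example-data-lake/sales_events/year=2024/month=03/day=07/part-00000.parquet
```

Objects laid out this way can be registered with Lake Formation and queried by services such as Athena and Redshift without moving the data.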
2
Amazon Athena
Amazon
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Athena is easy to use. Simply point to your data in Amazon S3, define the schema, and start querying using standard SQL. Most results are delivered within seconds. With Athena, there’s no need for complex ETL jobs to prepare your data for analysis. This makes it easy for anyone with SQL skills to quickly analyze large-scale datasets. Athena is out-of-the-box integrated with AWS Glue Data Catalog, allowing you to create a unified metadata repository across various services, crawl data sources to discover schemas and populate your Catalog with new and modified table and partition definitions, and maintain schema versioning.
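The "point to your data, define the schema, start querying" workflow boils down to two SQL statements. A hedged sketch, with hypothetical database, table, and bucket names:

```python
# Sketch: the two statements Athena needs to query files already sitting in S3.
# The database, table, columns, and bucket here are hypothetical examples.
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS logs.page_views (
    user_id string,
    url     string,
    ts      timestamp
)
STORED AS PARQUET
LOCATION 's3://example-data-lake/page_views/'
"""

query = """
SELECT url, COUNT(*) AS views
FROM logs.page_views
GROUP BY url
ORDER BY views DESC
LIMIT 10
"""
# With boto3, each string would be submitted via
# athena.start_query_execution(); results land in a configured S3 output path.
print(ddl.strip().splitlines()[0])
```

No data is loaded anywhere; the table definition is just metadata in the Glue Data Catalog over the existing S3 objects.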
3
Amazon Redshift
Amazon
More customers pick Amazon Redshift than any other cloud data warehouse. Redshift powers analytical workloads for Fortune 500 companies, startups, and everything in between. Companies like Lyft have grown with Redshift from startups to multi-billion dollar enterprises. No other data warehouse makes it as easy to gain new insights from all your data. With Redshift you can query petabytes of structured and semi-structured data across your data warehouse, operational database, and your data lake using standard SQL. Redshift lets you easily save the results of your queries back to your S3 data lake using open formats like Apache Parquet to further analyze from other analytics services like Amazon EMR, Amazon Athena, and Amazon SageMaker. Redshift is the world’s fastest cloud data warehouse and gets faster every year. For performance-intensive workloads you can use the new RA3 instances to get up to 3x the performance of any cloud data warehouse. Starting Price: $0.25 per hour
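Saving query results back to the S3 data lake in Parquet is done with Redshift's UNLOAD command. A minimal sketch that builds such a statement; the table, bucket, and IAM role names are hypothetical:

```python
# Sketch: building a Redshift UNLOAD statement that writes query results
# back to S3 as Parquet. All identifiers below are hypothetical examples.
def unload_to_parquet(select_sql: str, s3_prefix: str, iam_role: str) -> str:
    """Wrap a SELECT in Redshift's UNLOAD ... FORMAT AS PARQUET syntax."""
    return (
        f"UNLOAD ('{select_sql}')\n"
        f"TO '{s3_prefix}'\n"
        f"IAM_ROLE '{iam_role}'\n"
        f"FORMAT AS PARQUET"
    )

stmt = unload_to_parquet(
    "SELECT order_id, total FROM orders WHERE total > 100",
    "s3://example-data-lake/orders_high_value/",
    "arn:aws:iam::123456789012:role/RedshiftUnloadRole",
)
print(stmt)
```

Once unloaded, the Parquet files can be picked up by Athena, EMR, or SageMaker as the description notes.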
4
AWS App Mesh
Amazon Web Services
AWS App Mesh is a service mesh that provides application-level networking to facilitate communication between your services across various types of computing infrastructure. App Mesh offers comprehensive visibility and high availability for your applications. Modern applications are generally made up of multiple services. Each service can be developed using various types of compute infrastructure, such as Amazon EC2, Amazon ECS, Amazon EKS, and AWS Fargate. As the number of services within an application grows, it becomes difficult to pinpoint the exact location of errors, redirect traffic after errors, and safely implement code changes. Previously, this required creating monitoring and control logic directly in your code and redeploying your services every time there were changes. Starting Price: Free
5
Collate
Collate
Collate is an AI‑driven metadata platform that empowers data teams with automated discovery, observability, quality, and governance through agent‑based workflows. Built on the open source OpenMetadata foundation and a unified metadata graph, it offers 90+ turnkey connectors to ingest metadata from databases, data warehouses, BI tools, and pipelines, delivering in‑depth column‑level lineage, data profiling, and no‑code quality tests. Its AI agents automate data discovery, permission‑aware querying, alerting, and incident‑management workflows at scale, while real‑time dashboards, interactive analyses, and a collaborative business glossary enable both technical and non‑technical users to steward high‑quality data assets. Continuous monitoring and governance automations enforce compliance with standards such as GDPR and CCPA, reducing mean time to resolution for data issues and lowering total cost of ownership. Starting Price: Free
6
Amazon EMR
Amazon
Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run petabyte-scale analysis at less than half of the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin up and spin down clusters and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand. If you have existing on-premises deployments of open-source tools such as Apache Spark and Apache Hive, you can also run EMR clusters on AWS Outposts. Analyze data using open-source ML frameworks such as Apache Spark MLlib, TensorFlow, and Apache MXNet. Connect to Amazon SageMaker Studio for large-scale model training, analysis, and reporting.
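The spin-up/spin-down pattern for short-running jobs corresponds to a transient cluster definition. A sketch of the kind of configuration boto3's `emr.run_job_flow(**cluster)` accepts; the name, instance types, counts, and release label are hypothetical examples:

```python
# Sketch: a minimal transient-cluster definition for a short-running Spark job,
# of the shape accepted by boto3's emr.run_job_flow(**cluster).
# All names, sizes, and the release label are hypothetical examples.
cluster = {
    "Name": "nightly-spark-etl",
    "ReleaseLabel": "emr-6.15.0",
    "Applications": [{"Name": "Spark"}],
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Terminate when the steps finish: pay per second only while running.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",
    "ServiceRole": "EMR_DefaultRole",
}
```

Setting `KeepJobFlowAliveWhenNoSteps` to `False` is what makes the cluster transient; a long-running, auto-scaling cluster would leave it `True` and attach an auto-scaling policy instead.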
7
Amazon SageMaker Feature Store
Amazon
Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, share, and manage features for machine learning (ML) models. Features are inputs to ML models used during training and inference. For example, in an application that recommends a music playlist, features could include song ratings, listening duration, and listener demographics. Features are used repeatedly by multiple teams and feature quality is critical to ensure a highly accurate model. Also, when features used to train models offline in batch are made available for real-time inference, it’s hard to keep the two feature stores synchronized. SageMaker Feature Store provides a secured and unified store for feature use across the ML lifecycle. Store, share, and manage ML model features for training and inference to promote feature reuse across ML applications. Ingest features from any data source including streaming and batch such as application logs, service logs, clickstreams, sensors, etc.
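Ingestion into Feature Store takes records as lists of FeatureName/ValueAsString pairs (the shape accepted by the `sagemaker-featurestore-runtime` `put_record` call in boto3). A sketch echoing the playlist example above; the feature names themselves are hypothetical:

```python
# Sketch: building one Feature Store record in the FeatureName/ValueAsString
# shape that put_record ingests. The feature names are hypothetical examples.
import time

def make_record(listener_id: str, song_rating: float, listen_seconds: int) -> list:
    """Build a single record; an event-time feature is needed for versioning."""
    return [
        {"FeatureName": "listener_id", "ValueAsString": listener_id},
        {"FeatureName": "song_rating", "ValueAsString": str(song_rating)},
        {"FeatureName": "listen_seconds", "ValueAsString": str(listen_seconds)},
        {"FeatureName": "event_time", "ValueAsString": str(time.time())},
    ]

record = make_record("listener-42", 4.5, 180)
```

Because the same record feeds both the online store (for real-time inference) and the offline store (for batch training), the synchronization problem described above is handled by the service rather than by two parallel pipelines.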
8
Velotix
Velotix
Velotix empowers organizations to maximize the value of their data while ensuring security and compliance in a rapidly evolving regulatory landscape. The Velotix Data Security Platform offers automated policy management, dynamic access controls, and comprehensive data discovery, all driven by advanced AI. With seamless integration across multi-cloud environments, Velotix enables secure, self-service data access, optimizing data utilization without compromising on governance. Trusted by leading enterprises across financial services, healthcare, telecommunications, and more, Velotix is reshaping data governance for the ‘need to share’ era.
9
Amazon DataZone
Amazon
Amazon DataZone is a data management service that enables customers to catalog, discover, share, and govern data stored across AWS, on-premises, and third-party sources. It allows administrators and data stewards to manage and control access to data using fine-grained controls, ensuring that users have the appropriate level of privileges and context. The service simplifies data access for engineers, data scientists, product managers, analysts, and business users, facilitating data-driven insights through seamless collaboration. Key features of Amazon DataZone include a business data catalog for searching and requesting access to published data, project collaboration tools for managing and monitoring data assets, a web-based portal providing personalized views for data analytics, and governed data sharing workflows to ensure appropriate data access. Additionally, Amazon DataZone automates data discovery and cataloging using machine learning.