125 Integrations with Vertex AI
View a list of Vertex AI integrations and software that integrates with Vertex AI below. Compare the best Vertex AI integrations as well as features, ratings, user reviews, and pricing of software that integrates with Vertex AI. Here are the current Vertex AI integrations in 2025:
-
1
Google Cloud Platform
Google
Google Cloud is a cloud-based service that allows you to create anything from simple websites to complex applications for businesses of all sizes. New customers get $300 in free credits to run, test, and deploy workloads. All customers can use 25+ products for free, up to monthly usage limits. Use Google's core infrastructure, data analytics & machine learning. Secure and fully featured for all enterprises. Tap into big data to find answers faster and build better products. Grow from prototype to production to planet-scale, without having to think about capacity, reliability or performance. From virtual machines with proven price/performance advantages to a fully managed app development platform. Scalable, resilient, high performance object storage and databases for your applications. State-of-the-art software-defined networking products on Google’s private fiber network. Fully managed data warehousing, batch and stream processing, data exploration, Hadoop/Spark, and messaging.Starting Price: Free ($300 in free credits) -
2
Google Compute Engine
Google
Compute Engine is Google's infrastructure as a service (IaaS) platform for organizations to create and run cloud-based virtual machines. Computing infrastructure in predefined or custom machine sizes to accelerate your cloud transformation. General purpose (E2, N1, N2, N2D) machines provide a good balance of price and performance. Compute optimized (C2) machines offer high-end vCPU performance for compute-intensive workloads. Memory optimized (M2) machines offer the highest memory and are great for in-memory databases. Accelerator optimized (A2) machines are based on the A100 GPU, for very demanding applications. Integrate Compute with other Google Cloud services such as AI/ML and data analytics. Make reservations to help ensure your applications have the capacity they need as they scale. Save money just for running Compute with sustained-use discounts, and achieve greater savings when you use committed-use discounts.Starting Price: Free ($300 in free credits) -
3
Google Cloud Speech-to-Text
Google
Google Cloud’s Speech API processes more than 1 billion voice minutes per month with close to human levels of understanding for many commonly spoken languages. Powered by the best of Google's AI research and technology, Google Cloud's Speech-to-Text API helps you accurately transcribe speech into text in 73 languages and 137 different local variants. Leverage Google’s most advanced deep learning neural network algorithms for automatic speech recognition (ASR) and deploy ASR wherever you need it, whether in the cloud with the API, on-premises with Speech-to-Text On-Prem, or locally on any device with Speech On-Device.Starting Price: Free ($300 in free credits) -
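As a rough illustration of the transcription workflow described above, here is a minimal sketch assuming the google-cloud-speech Python client library, default application credentials, and a hypothetical Cloud Storage URI:

    # Minimal Speech-to-Text sketch (assumes google-cloud-speech and default credentials).
    from google.cloud import speech

    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(uri="gs://your-bucket/audio.flac")  # placeholder URI
    config = speech.RecognitionConfig(language_code="en-US")  # FLAC header supplies encoding and sample rate
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)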
4
Google Cloud BigQuery
Google
BigQuery is a serverless, multicloud data warehouse that simplifies the process of working with all types of data so you can focus on getting valuable business insights quickly. At the core of Google’s data cloud, BigQuery allows you to simplify data integration, cost effectively and securely scale analytics, share rich data experiences with built-in business intelligence, and train and deploy ML models with a simple SQL interface, helping to make your organization’s operations more data-driven. Gemini in BigQuery offers AI-driven tools for assistance and collaboration, such as code suggestions, visual data preparation, and smart recommendations designed to boost efficiency and reduce costs. BigQuery delivers an integrated platform featuring SQL, a notebook, and a natural language-based canvas interface, catering to data professionals with varying coding expertise. This unified workspace streamlines the entire analytics process.Starting Price: Free ($300 in free credits) -
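As a rough illustration of the SQL interface mentioned above, here is a minimal sketch assuming the google-cloud-bigquery Python client and default application credentials, querying a public dataset:

    # Minimal BigQuery sketch (assumes google-cloud-bigquery and default credentials).
    from google.cloud import bigquery

    client = bigquery.Client()
    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(query).result():
        print(row.name, row.total)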
5
Google AI Studio
Google
Google AI Studio is a comprehensive, web-based development environment that democratizes access to Google's cutting-edge AI models, notably the Gemini family, enabling a broad spectrum of users to explore and build innovative applications. This platform facilitates rapid prototyping by providing an intuitive interface for prompt engineering, allowing developers to meticulously craft and refine their interactions with AI. Beyond basic experimentation, AI Studio supports the seamless integration of AI capabilities into diverse projects, from simple chatbots to complex data analysis tools. Users can rigorously test different prompts, observe model behaviors, and iteratively refine their AI-driven solutions within a collaborative and user-friendly environment. This empowers developers to push the boundaries of AI application development, fostering creativity and accelerating the realization of AI-powered solutions.Starting Price: Free -
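For prototyping beyond the browser, a prompt developed in AI Studio can also be called from code; here is a minimal sketch assuming the google-generativeai library, an API key issued in AI Studio, and an example model name:

    # Minimal prompt sketch (assumes google-generativeai and a GOOGLE_API_KEY from AI Studio).
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # example model name
    response = model.generate_content("Give two tips for writing clearer prompts.")
    print(response.text)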
6
Axis LMS
Atrixware
Axis LMS is a powerful, flexible learning management system built for businesses that take training seriously. Whether you’re onboarding employees, managing certifications, or delivering compliance training, Axis LMS makes it easy with automation, customization, and robust reporting. Designed for real-world use, it supports video, SCORM, quizzes, surveys, and much more - all in a responsive, mobile-friendly interface for users. With powerful admin tools and branding options, Axis LMS helps you create engaging learning experiences that scale with your organization across both online and virtual classrooms. Trusted by organizations of all sizes since 1997, Axis LMS combines enterprise-level power with small-business flexibility. Whether you have 25 users or 10,000+, Axis LMS delivers a reliable, scalable solution that evolves with your training needs. Starting Price: $169/month -
7
StackAI
StackAI
StackAI is an enterprise AI automation platform to build end-to-end internal tools and processes with AI agents in a fully compliant and secure way. Designed for large organizations, it enables teams to automate complex workflows across operations, compliance, finance, IT, and support without heavy engineering. With StackAI you can:
• Connect knowledge bases (SharePoint, Confluence, Notion, Google Drive, databases) with versioning, citations, and access controls.
• Deploy AI agents as chat assistants, advanced forms, or APIs integrated into Slack, Teams, Salesforce, HubSpot, or ServiceNow.
• Govern usage with enterprise security: SSO (Okta, Azure AD, Google), RBAC, audit logs, PII masking, data residency, and cost controls.
• Route across OpenAI, Anthropic, Google, or local LLMs with guardrails, evaluations, and testing.
• Start fast with templates for Contract Analyzer, Support Desk, RFP Response, Investment Memo Generator, and more.
Starting Price: $0 -
8
TensorFlow
TensorFlow
An end-to-end open source machine learning platform. TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications. Build and train ML models easily using intuitive high-level APIs like Keras with eager execution, which makes for immediate model iteration and easy debugging. Easily train and deploy models in the cloud, on-prem, in the browser, or on-device no matter what language you use. A simple and flexible architecture to take new ideas from concept to code, to state-of-the-art models, and to publication faster. Build, deploy, and experiment easily with TensorFlow.Starting Price: Free -
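As a small illustration of the high-level Keras workflow mentioned above, here is a minimal sketch that defines, trains, and evaluates a tiny classifier on synthetic data:

    # Minimal Keras sketch: define, train, and evaluate a small model on synthetic data.
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(1000, 20).astype("float32")   # synthetic features
    y = (x.sum(axis=1) > 10).astype("int32")         # synthetic binary labels

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=3, batch_size=32, verbose=0)
    print(model.evaluate(x, y, verbose=0))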
9
Dialogflow
Google
Dialogflow from Google Cloud is a natural language understanding platform that makes it easy to design and integrate a conversational user interface into your mobile app, web application, device, bot, interactive voice response system, and so on. Using Dialogflow, you can provide new and engaging ways for users to interact with your product. Dialogflow can analyze multiple types of input from your customers, including text or audio inputs (like from a phone or voice recording). It can also respond to your customers in a couple of ways, either through text or with synthetic speech. Dialogflow CX and ES provide virtual agent services for chatbots and contact centers. If you have a contact center that employs human agents, you can use Agent Assist to help your human agents. Agent Assist provides real-time suggestions for human agents while they are in conversations with end-user customers. -
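As a rough illustration of sending text input to an agent, here is a minimal detect-intent sketch assuming the google-cloud-dialogflow (ES) client library, default credentials, and placeholder project and session IDs:

    # Minimal Dialogflow ES detect-intent sketch (assumes google-cloud-dialogflow).
    from google.cloud import dialogflow

    session_client = dialogflow.SessionsClient()
    session = session_client.session_path("your-project-id", "session-123")  # placeholder IDs

    text_input = dialogflow.TextInput(text="I'd like to book a table", language_code="en-US")
    query_input = dialogflow.QueryInput(text=text_input)
    response = session_client.detect_intent(request={"session": session, "query_input": query_input})
    print(response.query_result.intent.display_name)
    print(response.query_result.fulfillment_text)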
10
Automation Anywhere
Automation Anywhere
Automation Anywhere is the leader in Agentic Process Automation (APA), putting AI to work across organizations. The company’s platform is powered by specialized AI agents and generative AI, and offers process discovery, RPA, end-to-end process orchestration, document processing, and analytics, with a security- and governance-first approach. Automation Anywhere empowers organizations worldwide to unleash productivity gains, improve customer experiences, and create new revenue streams. The company is guided by its vision to fuel the future of work by unleashing human potential through Agentic AI-powered automation. Starting Price: $750.00 -
11
Gemini
Google
Gemini is Google's advanced AI chatbot designed to enhance creativity and productivity by engaging in natural language conversations. Accessible via the web and mobile apps, Gemini integrates seamlessly with various Google services, including Docs, Drive, and Gmail, enabling users to draft content, summarize information, and manage tasks efficiently. Its multimodal capabilities allow it to process and generate diverse data types, such as text, images, and audio, providing comprehensive assistance across different contexts. As a continuously learning model, Gemini adapts to user interactions, offering personalized and context-aware responses to meet a wide range of user needs.Starting Price: Free -
12
Python
Python
The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists. Whether you're new to programming or an experienced developer, it's easy to learn and use Python. Python can be easy to pick up whether you're a first-time programmer or you're experienced with other languages. The following pages are a useful first step to get on your way to writing programs with Python! The community hosts conferences and meetups to collaborate on code, and much more. Python's documentation will help you along the way, and the mailing lists will keep you in touch. The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both Python's standard library and the community-contributed modules allow for endless possibilities.Starting Price: Free -
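The argument styles mentioned above look like this in practice, with one mandatory argument, one optional argument with a default, arbitrary positional arguments, and arbitrary keyword arguments:

    # Mandatory, optional (defaulted), arbitrary positional, and arbitrary keyword arguments.
    def describe(name, greeting="Hello", *tags, **details):
        line = f"{greeting}, {name}!"
        if tags:
            line += " Tags: " + ", ".join(tags)
        for key, value in details.items():
            line += f" {key}={value}"
        return line

    print(describe("Ada"))
    print(describe("Ada", "Hi", "pioneer", "mathematician", field="computing"))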
13
Gemini Advanced
Google
Gemini Advanced is a cutting-edge AI model designed for unparalleled performance in natural language understanding, generation, and problem-solving across diverse domains. Featuring a revolutionary neural architecture, it delivers exceptional accuracy, nuanced contextual comprehension, and deep reasoning capabilities. Gemini Advanced is engineered to handle complex, multifaceted tasks, from creating detailed technical content and writing code to conducting in-depth data analysis and providing strategic insights. Its adaptability and scalability make it a powerful solution for both individual users and enterprise-level applications. Gemini Advanced sets a new standard for intelligence, innovation, and reliability in AI-powered solutions. You'll also get access to Gemini in Gmail, Docs, and more, 2 TB storage, and other benefits from Google One. Gemini Advanced also offers access to Gemini with Deep Research. You can conduct in-depth and real-time research on almost any subject.Starting Price: $19.99 per month -
14
Java
Oracle
The Java™ Programming Language is a general-purpose, concurrent, strongly typed, class-based object-oriented language. It is normally compiled to the bytecode instruction set and binary format defined in the Java Virtual Machine Specification. In the Java programming language, all source code is first written in plain text files ending with the .java extension. Those source files are then compiled into .class files by the javac compiler. A .class file does not contain code that is native to your processor; it instead contains bytecodes — the machine language of the Java Virtual Machine (Java VM). The java launcher tool then runs your application with an instance of the Java Virtual Machine. Starting Price: Free -
15
Google Cloud Natural Language
Google
Get insightful text analysis with machine learning that extracts, analyzes, and stores text. Train high-quality machine learning custom models without a single line of code with AutoML. Apply natural language understanding (NLU) to apps with the Natural Language API. Use entity analysis to find and label fields within a document, including emails, chat, and social media, and then sentiment analysis to understand customer opinions and find actionable product and UX insights. Combined with the Speech-to-Text API, Natural Language extracts insights from audio. The Vision API adds optical character recognition (OCR) for scanned docs, and the Translation API understands sentiment in multiple languages. Use custom entity extraction to identify domain-specific entities within documents, many of which don’t appear in standard language models, without having to spend time or money on manual analysis. Train your own high-quality machine learning custom models to classify, extract, and detect sentiment.
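As a rough illustration of the sentiment analysis described above, here is a minimal sketch assuming the google-cloud-language client library and default application credentials:

    # Minimal Natural Language sentiment sketch (assumes google-cloud-language).
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="The new dashboard is fantastic, but setup took too long.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    print(sentiment.score, sentiment.magnitude)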
-
16
Claude Sonnet 3.5
Anthropic
Claude Sonnet 3.5 sets new industry benchmarks for graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding proficiency (HumanEval). It shows marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone. Claude Sonnet 3.5 operates at twice the speed of Claude Opus 3. This performance boost, combined with cost-effective pricing, makes Claude Sonnet 3.5 ideal for complex tasks such as context-sensitive customer support and orchestrating multi-step workflows. Claude Sonnet 3.5 is now available for free on Claude.ai and the Claude iOS app, while Claude Pro and Team plan subscribers can access it with significantly higher rate limits. It is also available via the Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI. The model costs $3 per million input tokens and $15 per million output tokens, with a 200K token context window.Starting Price: Free -
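Since this page focuses on Vertex AI, here is a minimal sketch of calling the model through Vertex AI with the anthropic Python SDK; the region, project ID, and Vertex model ID are assumptions and may differ for your deployment:

    # Minimal Claude-on-Vertex sketch (assumes the anthropic SDK with Vertex support).
    from anthropic import AnthropicVertex

    client = AnthropicVertex(region="us-east5", project_id="your-project-id")  # placeholders
    message = client.messages.create(
        model="claude-3-5-sonnet@20240620",  # assumed Vertex model ID
        max_tokens=256,
        messages=[{"role": "user", "content": "Draft a friendly reply about a delayed order."}],
    )
    print(message.content[0].text)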
17
Claude Opus 3
Anthropic
Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence. All Claude 3 models show increased capabilities in analysis and forecasting, nuanced content creation, code generation, and conversing in non-English languages like Spanish, Japanese, and French.Starting Price: Free -
18
Gemini Code Assist
Google
Increase software development and delivery velocity using generative AI assistance, with enterprise security and privacy protection. Gemini Code Assist completes your code as you write, and generates whole code blocks or functions on demand. Code assistance is available in many popular IDEs, such as Visual Studio Code, JetBrains IDEs (IntelliJ, PyCharm, GoLand, WebStorm, and more), Cloud Workstations, Cloud Shell Editor, and supports 20+ programming languages, including Java, JavaScript, Python, C, C++, Go, PHP, and SQL. Through a natural language chat interface, you can quickly chat with Gemini Code Assist to get answers to your coding questions, or receive guidance on coding best practices. Chat is available in all supported IDEs. Enterprises can customize Gemini Code Assist using their organization’s private codebases and knowledge sources so that Gemini Code Assist can offer more tailored assistance. Gemini Code Assist enables large-scale changes to entire codebases.Starting Price: Free -
19
Claude Sonnet 3.7
Anthropic
Claude Sonnet 3.7, developed by Anthropic, is a cutting-edge AI model that combines rapid response with deep reflective reasoning. This innovative model allows users to toggle between quick, efficient responses and more thoughtful, reflective answers, making it ideal for complex problem-solving. By allowing Claude to self-reflect before answering, it excels at tasks that require high-level reasoning and nuanced understanding. With its ability to engage in deeper thought processes, Claude Sonnet 3.7 enhances tasks such as coding, natural language processing, and critical thinking applications. Available across various platforms, it offers a powerful tool for professionals and organizations seeking a high-performance, adaptable AI.Starting Price: Free -
20
Claude Opus 4
Anthropic
Claude Opus 4 represents a revolutionary leap in AI model performance, setting a new standard for coding and reasoning capabilities. As the world’s best coding model, Opus 4 excels in handling long-running, complex tasks, and agent workflows. With sustained performance that can run for hours, it outperforms all prior models—including the Sonnet series—making it ideal for demanding coding projects, research, and AI agent applications. It’s the model of choice for organizations looking to enhance their software engineering, streamline workflows, and improve productivity with remarkable precision. Now available on Anthropic API, Amazon Bedrock, and Google Cloud’s Vertex AI, Opus 4 offers unparalleled support for coding, debugging, and collaborative agent tasks.Starting Price: $15 / 1 million tokens (input) -
21
Gemini Flash
Google
Gemini Flash is an advanced large language model (LLM) from Google, specifically designed for high-speed, low-latency language processing tasks. Part of Google DeepMind’s Gemini series, Gemini Flash is tailored to provide real-time responses and handle large-scale applications, making it ideal for interactive AI-driven experiences such as customer support, virtual assistants, and live chat solutions. Despite its speed, Gemini Flash doesn’t compromise on quality; it’s built on sophisticated neural architectures that ensure responses remain contextually relevant, coherent, and precise. Google has incorporated rigorous ethical frameworks and responsible AI practices into Gemini Flash, equipping it with guardrails to manage and mitigate biased outputs, ensuring it aligns with Google’s standards for safe and inclusive AI. With Gemini Flash, Google empowers businesses and developers to deploy responsive, intelligent language tools that can meet the demands of fast-paced environments. -
22
Gemini 2.0
Google
Gemini 2.0 is an advanced AI-powered model developed by Google, designed to offer groundbreaking capabilities in natural language understanding, reasoning, and multimodal interactions. Building on the success of its predecessor, Gemini 2.0 integrates large language processing with enhanced problem-solving and decision-making abilities, enabling it to interpret and generate human-like responses with greater accuracy and nuance. Unlike traditional AI models, Gemini 2.0 is trained to handle multiple data types simultaneously, including text, images, and code, making it a versatile tool for research, business, education, and creative industries. Its core improvements include better contextual understanding, reduced bias, and a more efficient architecture that ensures faster, more reliable outputs. Gemini 2.0 is positioned as a major step forward in the evolution of AI, pushing the boundaries of human-computer interaction.Starting Price: Free -
23
DeepSeek R1
DeepSeek
DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API, it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture-of-experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning capabilities. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation. Starting Price: Free -
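As a rough illustration of API access, here is a minimal sketch assuming DeepSeek's OpenAI-compatible endpoint and the openai Python SDK; the base URL and model name are taken from DeepSeek's public documentation and may change:

    # Minimal DeepSeek-R1 API sketch (assumes an OpenAI-compatible endpoint).
    import os
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")
    response = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed model name for DeepSeek-R1
        messages=[{"role": "user", "content": "What is the sum of the first 100 positive integers?"}],
    )
    print(response.choices[0].message.content)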
24
Claude Sonnet 4
Anthropic
Claude Sonnet 4, the latest evolution of Anthropic’s language models, offers a significant upgrade in coding, reasoning, and performance. Designed for diverse use cases, Sonnet 4 builds upon the success of its predecessor, Claude Sonnet 3.7, delivering more precise responses and better task execution. With a state-of-the-art 72.7% performance on the SWE-bench, it stands out in agentic scenarios, offering enhanced steerability and clear reasoning capabilities. Whether handling software development, multi-feature app creation, or complex problem-solving, Claude Sonnet 4 ensures higher code quality, reduced errors, and a smoother development process.Starting Price: $3 / 1 million tokens (input) -
25
Gemini 2.5 Pro
Google
Gemini 2.5 Pro is an advanced AI model designed to handle complex tasks with enhanced reasoning and coding capabilities. Leading common benchmarks, it excels in math, science, and coding, demonstrating strong performance in tasks like web app creation and code transformation. Built on the Gemini 2.5 foundation, it features a 1 million token context window, enabling it to process vast datasets from various sources such as text, images, and code repositories. Available now in Google AI Studio, Gemini 2.5 Pro is optimized for more sophisticated applications and supports advanced users with improved performance for complex problem-solving.Starting Price: $19.99/month -
26
Claude Haiku 3.5
Anthropic
Our fastest model, delivering advanced coding, tool use, and reasoning at an accessible price. Claude Haiku 3.5 is the next generation of our fastest model. For a similar speed to Claude Haiku 3, Claude Haiku 3.5 improves across every skill set and surpasses Claude Opus 3, the largest model in our previous generation, on many intelligence benchmarks. Claude Haiku 3.5 is available across our first-party API, Amazon Bedrock, and Google Cloud’s Vertex AI—initially as a text-only model and with image input to follow. -
27
Gemini-Exp-1206
Google
Gemini-Exp-1206 is an experimental AI model now available for preview to Gemini Advanced subscribers. This model significantly enhances performance in complex tasks such as coding, mathematics, reasoning, and following detailed instructions. It's designed to assist users in navigating intricate challenges with greater ease. As an early preview, some features may not function as expected, and it currently lacks access to real-time information. Users can access Gemini-Exp-1206 through the Gemini model drop-down on desktop and mobile web platforms. -
28
Gemini Pro
Google
Gemini is natively multimodal, which gives you the potential to transform any type of input into any type of output. We've built Gemini responsibly from the start, incorporating safeguards and working together with partners to make it safer and more inclusive. Integrate Gemini models into your applications with Google AI Studio and Google Cloud Vertex AI. -
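As a rough illustration of the Vertex AI integration path, here is a minimal sketch assuming the vertexai SDK (google-cloud-aiplatform), default credentials, and placeholder project, region, and model name:

    # Minimal Gemini-on-Vertex sketch (assumes the vertexai SDK and default credentials).
    import vertexai
    from vertexai.generative_models import GenerativeModel

    vertexai.init(project="your-project-id", location="us-central1")  # placeholders
    model = GenerativeModel("gemini-1.5-pro")  # example model name
    response = model.generate_content("Explain 'natively multimodal' in one sentence.")
    print(response.text)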
29
Veo 2
Google
Veo 2 is a state-of-the-art video generation model. Veo creates videos with realistic motion and high quality output, up to 4K. Explore different styles and find your own with extensive camera controls. Veo 2 is able to faithfully follow simple and complex instructions, and convincingly simulates real-world physics as well as a wide range of visual styles. Significantly improves over other AI video models in terms of detail, realism, and artifact reduction. Veo represents motion to a high degree of accuracy, thanks to its understanding of physics and its ability to follow detailed instructions. Interprets instructions precisely to create a wide range of shot styles, angles, movements – and combinations of all of these. -
30
Gemini 2.0 Flash
Google
The Gemini 2.0 Flash AI model represents the next generation of high-speed, intelligent computing, designed to set new benchmarks in real-time language processing and decision-making. Building on the robust foundation of its predecessor, it incorporates enhanced neural architecture and breakthrough advancements in optimization, enabling even faster and more accurate responses. Gemini 2.0 Flash is designed for applications requiring instantaneous processing and adaptability, such as live virtual assistants, automated trading systems, and real-time analytics. Its lightweight, efficient design ensures seamless deployment across cloud, edge, and hybrid environments, while its improved contextual understanding and multitasking capabilities make it a versatile tool for tackling complex, dynamic workflows with precision and speed. -
31
Gemini Nano
Google
Gemini Nano from Google is a lightweight, energy-efficient AI model designed for high performance in compact, resource-constrained environments. Tailored for edge computing and mobile applications, Gemini Nano combines Google's advanced AI architecture with cutting-edge optimization techniques to deliver seamless performance without compromising speed or accuracy. Despite its compact size, it excels in tasks like voice recognition, natural language processing, real-time translation, and personalized recommendations. With a focus on privacy and efficiency, Gemini Nano processes data locally, minimizing reliance on cloud infrastructure while maintaining robust security. Its adaptability and low power consumption make it an ideal choice for smart devices, IoT ecosystems, and on-the-go AI solutions. -
32
Gemini 1.5 Pro
Google
The Gemini 1.5 Pro AI model is a state-of-the-art language model designed to deliver highly accurate, context-aware, and human-like responses across a variety of applications. Built with cutting-edge neural architecture, it excels in natural language understanding, generation, and reasoning tasks. The model is fine-tuned for versatility, supporting tasks like content creation, code generation, data analysis, and complex problem-solving. Its advanced algorithms ensure nuanced comprehension, enabling it to adapt to different domains and conversational styles seamlessly. With a focus on scalability and efficiency, the Gemini 1.5 Pro is optimized for both small-scale implementations and enterprise-level integrations, making it a powerful tool for enhancing productivity and innovation. -
33
Gemini 1.5 Flash
Google
The Gemini 1.5 Flash AI model is an advanced, high-speed language model engineered for lightning-fast processing and real-time responsiveness. Designed to excel in dynamic and time-sensitive applications, it combines streamlined neural architecture with cutting-edge optimization techniques to deliver exceptional performance without compromising on accuracy. Gemini 1.5 Flash is tailored for scenarios requiring rapid data processing, instant decision-making, and seamless multitasking, making it ideal for chatbots, customer support systems, and interactive applications. Its lightweight yet powerful design ensures it can be deployed efficiently across a range of platforms, from cloud-based environments to edge devices, enabling businesses to scale their operations with unmatched agility. -
34
Datasaur
Datasaur
Welcome to the best tool for managing your labeling team, improving data quality, and working 70% faster—all in one place.Starting Price: $349/month -
35
Arize AI
Arize AI
Automatically discover issues, diagnose problems, and improve models with Arize’s machine learning observability platform. Machine learning systems address mission critical needs for businesses and their customers every day, yet often fail to perform in the real world. Arize is an end-to-end observability platform to accelerate detecting and resolving issues for your AI models at large. Seamlessly enable observability for any model, from any platform, in any environment. Lightweight SDKs to send training, validation, and production datasets. Link real-time or delayed ground truth to predictions. Gain foresight and confidence that your models will perform as expected once deployed. Proactively catch any performance degradation, data/prediction drift, and quality issues before they spiral. Reduce the time to resolution (MTTR) for even the most complex models with flexible, easy-to-use tools for root cause analysis.Starting Price: $50/month -
36
GPTConsole
GPTConsole
GPTConsole is a revolutionary command-line interface (CLI) designed to empower developers with the benefits of artificial intelligence. It transcends traditional terminal functionality, enabling you to execute complex tasks using prompts. To install GPTConsole, use either the yarn or npm package manager: yarn global add gpt-console or npm i gpt-console -g. After installation, simply type gpt-console in any terminal to get started. AI agents are specialized modules built into GPTConsole that automate specific tasks; these agents add a layer of intelligence and convenience to your development workflow. Bird: manages your Twitter activities such as tweets and replies. Pixie: generates sophisticated ReactJS landing pages based on your prompts. Chip: an upcoming agent that will answer any code-related queries. GPTConsole isn't just another command-line tool. It's an intelligent, developer-friendly terminal that brings real-time collaboration to your fingertips. Starting Price: Freemium -
37
Google Cloud IoT Core
Google
Cloud IoT Core is a fully managed service that allows you to easily and securely connect, manage, and ingest data from millions of globally dispersed devices. Cloud IoT Core, in combination with other services on Cloud IoT platform, provides a complete solution for collecting, processing, analyzing, and visualizing IoT data in real time to support improved operational efficiency. Cloud IoT Core, using Cloud Pub/Sub underneath, can aggregate dispersed device data into a single global system that integrates seamlessly with Google Cloud data analytics services. Use your IoT data stream for advanced analytics, visualizations, machine learning, and more to help improve operational efficiency, anticipate problems, and build rich models that better describe and optimize your business. Securely connect a few or millions of your globally dispersed devices through protocol endpoints that use automatic load balancing and horizontal scaling to ensure smooth data ingestion under any condition.Starting Price: $0.00045 per MB -
38
NVIDIA Triton Inference Server
NVIDIA
NVIDIA Triton™ Inference Server delivers fast and scalable AI in production. An open-source inference serving software, Triton Inference Server streamlines AI inference by enabling teams to deploy trained AI models from any framework (TensorFlow, NVIDIA TensorRT®, PyTorch, ONNX, XGBoost, Python, custom, and more) on any GPU- or CPU-based infrastructure (cloud, data center, or edge). Triton runs models concurrently on GPUs to maximize throughput and utilization, supports x86 and ARM CPU-based inferencing, and offers features like dynamic batching, model analyzer, model ensemble, and audio streaming. Triton helps developers deliver high-performance inference at scale. Triton integrates with Kubernetes for orchestration and scaling, exports Prometheus metrics for monitoring, supports live model updates, and can be used in all major public cloud machine learning (ML) and managed Kubernetes platforms. Triton helps standardize model deployment in production. Starting Price: Free
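As a rough illustration of the client side, here is a minimal sketch of sending a request to a running Triton server over HTTP, assuming the tritonclient package; the server URL, model name, and tensor names are placeholders for your deployment:

    # Minimal Triton HTTP client sketch (assumes tritonclient[http] and a running server).
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")
    data = np.random.rand(1, 3).astype(np.float32)

    infer_input = httpclient.InferInput("INPUT0", list(data.shape), "FP32")  # placeholder input name
    infer_input.set_data_from_numpy(data)

    result = client.infer(model_name="my_model", inputs=[infer_input])  # placeholder model name
    print(result.as_numpy("OUTPUT0"))  # placeholder output name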
-
39
Go
Golang
With a strong ecosystem of tools and APIs on major cloud providers, it is easier than ever to build services with Go. With popular open source packages and a robust standard library, use Go to create fast and elegant CLIs. With enhanced memory performance and support for several IDEs, Go powers fast and scalable web applications. With fast build times, lean syntax, an automatic formatter and doc generator, Go is built to support both DevOps and SRE. Everything there is to know about Go. Get started on a new project or brush up for your existing Go code. An interactive introduction to Go in three sections. Each section concludes with a few exercises so you can practice what you've learned. The Playground allows anyone with a web browser to write Go code that we immediately compile, link, and run on our servers.Starting Price: Free -
40
Vertex AI Notebooks
Google
Vertex AI Notebooks is a fully managed, scalable solution from Google Cloud that accelerates machine learning (ML) development. It provides a seamless, interactive environment for data scientists and developers to explore data, prototype models, and collaborate in real-time. With integration into Google Cloud’s vast data and ML tools, Vertex AI Notebooks supports rapid prototyping, automated workflows, and deployment, making it easier to scale ML operations. The platform’s support for both Colab Enterprise and Vertex AI Workbench ensures a flexible and secure environment for diverse enterprise needs.Starting Price: $10 per GB -
41
Vertex AI Vision
Google
Easily build, deploy, and manage computer vision applications with a fully managed, end-to-end application development environment that reduces the time to build computer vision applications from days to minutes at one-tenth the cost of current offerings. Quickly and conveniently ingest real-time video and image streams at a global scale. Easily build computer vision applications using a drag-and-drop interface. Store and search petabytes of data with built-in AI capabilities. Vertex AI Vision includes all the tools needed to manage the life cycle of computer vision applications, across ingestion, analysis, storage, and deployment. Easily connect application output to a data destination, like BigQuery for analytics, or live streaming to drive real-time business actions. Ingest thousands of video streams from across the globe. With a monthly pricing model, enjoy up to one-tenth lower costs than previous offerings.Starting Price: $0.0085 per GB -
42
Kedro
Kedro
Kedro is the foundation for clean data science code. It borrows concepts from software engineering and applies them to machine-learning projects. A Kedro project provides scaffolding for complex data and machine-learning pipelines. You spend less time on tedious "plumbing" and focus instead on solving new problems. Kedro standardizes how data science code is created and ensures teams collaborate to solve problems easily. Make a seamless transition from development to production with exploratory code that you can transition to reproducible, maintainable, and modular experiments. A series of lightweight data connectors is used to save and load data across many different file formats and file systems.Starting Price: Free -
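As a small illustration of the scaffolding idea, here is a minimal sketch of two plain Python functions wired into a Kedro pipeline as nodes; the dataset names are placeholders that would live in the project's Data Catalog:

    # Minimal Kedro pipeline sketch (assumes kedro and pandas; dataset names are placeholders).
    import pandas as pd
    from kedro.pipeline import node, pipeline

    def clean(raw_data: pd.DataFrame) -> pd.DataFrame:
        return raw_data.dropna()

    def summarize(clean_data: pd.DataFrame) -> pd.DataFrame:
        return clean_data.describe()

    data_pipeline = pipeline([
        node(clean, inputs="raw_data", outputs="clean_data", name="clean_node"),
        node(summarize, inputs="clean_data", outputs="summary", name="summarize_node"),
    ])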
43
Athina AI
Athina AI
Athina is a collaborative AI development platform that enables teams to build, test, and monitor AI applications efficiently. It offers features such as prompt management, evaluation tools, dataset handling, and observability, all designed to streamline the development of reliable AI systems. Athina supports integration with various models and services, including custom models, and ensures data privacy through fine-grained access controls and self-hosted deployment options. The platform is SOC-2 Type 2 compliant, providing a secure environment for AI development. Athina's user-friendly interface allows both technical and non-technical team members to collaborate effectively, accelerating the deployment of AI features.Starting Price: Free -
44
OpenLIT
OpenLIT
OpenLIT is an OpenTelemetry-native application observability tool. It's designed to make integrating observability into AI projects possible with just a single line of code, and whether you're working with popular LLM libraries such as OpenAI or Hugging Face, OpenLIT's native support makes adding it to your projects feel effortless and intuitive. Analyze LLM and GPU performance and costs to achieve maximum efficiency and scalability. OpenLIT streams data to let you visualize it and make quick decisions and modifications, and ensures that data is processed quickly without affecting the performance of your application. The OpenLIT UI helps you explore LLM costs, token consumption, performance indicators, and user interactions in a straightforward interface. Connect to popular observability systems with ease, including Datadog and Grafana Cloud, to export data automatically. OpenLIT ensures your applications are monitored seamlessly. Starting Price: Free -
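As a rough illustration of the single-line setup described above, here is a minimal sketch assuming the openlit package, an OpenAI API key in the environment, and a placeholder OTLP endpoint for your collector or OpenLIT backend:

    # Minimal OpenLIT sketch (assumes openlit and openai; endpoint is a placeholder).
    import openlit
    from openai import OpenAI

    openlit.init(otlp_endpoint="http://127.0.0.1:4318")  # placeholder collector endpoint

    client = OpenAI()  # subsequent LLM calls are traced automatically
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )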
45
Mistral Large
Mistral AI
Mistral Large is Mistral AI's flagship language model, designed for advanced text generation and complex multilingual reasoning tasks, including text comprehension, transformation, and code generation. It supports English, French, Spanish, German, and Italian, offering a nuanced understanding of grammar and cultural contexts. With a 32,000-token context window, it can accurately recall information from extensive documents. The model's precise instruction-following and native function-calling capabilities facilitate application development and tech stack modernization. Mistral Large is accessible through Mistral's platform, Azure AI Studio, and Azure Machine Learning, and can be self-deployed for sensitive use cases. Benchmark evaluations indicate that Mistral Large achieves strong results, making it the world's second-ranked model generally available through an API, next to GPT-4.Starting Price: Free -
46
Aider
Aider AI
Aider lets you pair program with LLMs, to edit code in your local git repository. Start a new project or work with an existing git repo. Aider works best with GPT-4o and Claude 3.5 Sonnet and can connect to almost any LLM. Aider has one of the top scores on SWE-bench, a challenging software engineering benchmark where Aider solved real GitHub issues from popular open source projects like Django, scikit-learn, Matplotlib, etc. Starting Price: Free -
47
Imagen
Google
Imagen is a text-to-image generation model developed by Google Research. It uses advanced deep learning techniques, primarily leveraging large Transformer-based architectures, to generate high-quality, photorealistic images from natural language descriptions. Imagen's core innovation lies in combining the power of large language models (like those used in Google's NLP research) with the generative capabilities of diffusion models—a class of generative models known for creating images by progressively refining noise into detailed outputs. What sets Imagen apart is its ability to produce highly detailed and coherent images, often capturing fine-grained details and textures based on complex text prompts. It builds on the advancements in image generation made by models like DALL-E, but focuses heavily on semantic understanding and fine detail generation.Starting Price: Free -
48
Restack
Restack
A framework built specifically for the challenges of autonomous intelligence. Continue to write software using your language practices, libraries, APIs, data, and models. Your proprietary autonomous product adapts and scales with your development. Autonomous AI can automate video creation by generating, editing, and optimizing content, significantly reducing manual tasks in the production process. By integrating with AI tools like Luma AI or OpenAI for video generation, and scaling text-to-speech on Azure, your autonomous system can produce high-quality video content. By integrating with platforms like YouTube, your autonomous AI can continuously improve based on feedback and engagement metrics. We believe the most promising path to AGI is in the orchestration of millions of autonomous systems. We are a small group of passionate engineers and researchers dedicated to building autonomous artificial intelligence. If this sounds interesting to you, we would love to hear from you. Starting Price: $10 per month -
49
Arize Phoenix
Arize AI
Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI engineers and data scientists to quickly visualize their data, evaluate performance, track down issues, and export data to improve. Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors. Phoenix works with OpenTelemetry and OpenInference instrumentation. The main Phoenix package is arize-phoenix, and several helper packages are offered for specific use cases. Our semantic layer adds LLM telemetry to OpenTelemetry, automatically instrumenting popular packages. Phoenix's open-source library supports tracing for AI applications, via manual instrumentation or through integrations with LlamaIndex, LangChain, OpenAI, and others. LLM tracing records the paths taken by requests as they propagate through multiple steps or components of an LLM application. Starting Price: Free -
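As a rough illustration, here is a minimal sketch of launching the Phoenix UI locally, assuming the arize-phoenix package; an instrumented application would then send OpenInference/OpenTelemetry spans to this local collector:

    # Minimal Phoenix sketch (assumes arize-phoenix).
    import phoenix as px

    session = px.launch_app()  # starts the local Phoenix server and UI
    print(session.url)         # open this URL in a browser to explore traces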
50
Lunary
Lunary
Lunary is an AI developer platform designed to help AI teams manage, improve, and protect Large Language Model (LLM) chatbots. It offers features such as conversation and feedback tracking, analytics on costs and performance, debugging tools, and a prompt directory for versioning and team collaboration. Lunary supports integration with various LLMs and frameworks, including OpenAI and LangChain, and provides SDKs for Python and JavaScript. Guardrails to deflect malicious prompts and sensitive data leaks. Deploy in your VPC with Kubernetes or Docker. Allow your team to judge responses from your LLMs. Understand what languages your users are speaking. Experiment with prompts and LLM models. Search and filter anything in milliseconds. Receive notifications when agents are not performing as expected. Lunary's core platform is 100% open-source. Self-host or in the cloud, get started in minutes.Starting Price: $20 per month