Alternatives to Magma

Compare Magma alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Magma in 2026. Compare features, ratings, user reviews, pricing, and more from Magma competitors and alternatives in order to make an informed decision for your business.

  • 1
    Vertex AI
    Build, deploy, and scale machine learning (ML) models faster, with fully managed ML tools for any use case. Through Vertex AI Workbench, Vertex AI is natively integrated with BigQuery, Dataproc, and Spark. You can use BigQuery ML to create and execute machine learning models in BigQuery using standard SQL queries on existing business intelligence tools and spreadsheets, or you can export datasets from BigQuery directly into Vertex AI Workbench and run your models from there. Use Vertex Data Labeling to generate highly accurate labels for your data collection. Vertex AI Agent Builder enables developers to create and deploy enterprise-grade generative AI applications. It offers both no-code and code-first approaches, allowing users to build AI agents using natural language instructions or by leveraging frameworks like LangChain and LlamaIndex.
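    As a rough illustration of the BigQuery ML workflow described above, the sketch below trains a model with standard SQL from Python. The client-library call is real; the dataset, table, and column names are placeholders, not from this listing.
    ```python
    # Minimal sketch: create and train a BigQuery ML model with standard SQL.
    # `mydataset.sample_model` and `mydataset.training_data` are placeholder names.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    query = """
    CREATE OR REPLACE MODEL `mydataset.sample_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['label']) AS
    SELECT feature_1, feature_2, label
    FROM `mydataset.training_data`
    """
    client.query(query).result()  # blocks until the training job completes
    ```
    Training runs entirely inside BigQuery, so no data has to leave the warehouse before the model can be exported toward Vertex AI Workbench.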
  • 2
    GPT-4o mini
    A small model with superior textual intelligence and multimodal reasoning. GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots). Today, GPT-4o mini supports text and vision in the API, with support for text, image, video and audio inputs and outputs coming in the future. The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost effective.
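    A minimal sketch of the chained-call pattern mentioned above, assuming the OpenAI Python SDK; the prompts and helper name are illustrative.
    ```python
    # Minimal sketch: two chained low-latency calls to gpt-4o-mini.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=512,  # well under the 16K output-token ceiling
        )
        return resp.choices[0].message.content

    # The 128K-token context window allows passing long histories directly.
    summary = ask("Summarize this support thread: ...")
    reply = ask(f"Draft a customer reply based on this summary:\n{summary}")
    ```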
  • 3
    LLaVA

    LLaVA (Large Language-and-Vision Assistant) is an innovative multimodal model that integrates a vision encoder with the Vicuna language model to facilitate comprehensive visual and language understanding. Through end-to-end training, LLaVA exhibits impressive chat capabilities, emulating the multimodal functionalities of models like GPT-4. Notably, LLaVA-1.5 has achieved state-of-the-art performance across 11 benchmarks, utilizing publicly available data and completing training in approximately one day on a single node with eight A100 GPUs, surpassing methods that rely on billion-scale datasets. The development of LLaVA involved the creation of a multimodal instruction-following dataset, generated using language-only GPT-4. This dataset comprises 158,000 unique language-image instruction-following samples, including conversations, detailed descriptions, and complex reasoning tasks. This data has been instrumental in training LLaVA to perform a wide array of visual and language tasks effectively.
    Starting Price: Free
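    A minimal sketch of running LLaVA-1.5 locally, assuming the community llava-hf conversion on Hugging Face and a recent transformers release; the image path and prompt are illustrative.
    ```python
    # Minimal sketch: visual question answering with LLaVA-1.5 via transformers.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model_id = "llava-hf/llava-1.5-7b-hf"  # community conversion; an assumption here
    processor = AutoProcessor.from_pretrained(model_id)
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    image = Image.open("photo.jpg")
    prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(processor.decode(output[0], skip_special_tokens=True))
    ```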
  • 4
    Qwen3.5

    Alibaba

    Qwen3.5 is a next-generation open-weight multimodal large language model designed to power native vision-language agents. The flagship release, Qwen3.5-397B-A17B, combines a hybrid linear attention architecture with sparse mixture-of-experts, activating only 17 billion parameters per forward pass out of 397 billion total to maximize efficiency. It delivers strong benchmark performance across reasoning, coding, multilingual understanding, visual reasoning, and agent-based tasks. The model expands language support from 119 to 201 languages and dialects while introducing a 1M-token context window in its hosted version, Qwen3.5-Plus. Built for multimodal tasks, it processes text, images, and video with advanced spatial reasoning and tool integration. Qwen3.5 also incorporates scalable reinforcement learning environments to improve general agent capabilities. Designed for developers and enterprises, it enables efficient, tool-augmented, multimodal AI workflows.
    Starting Price: Free
  • 5
    SmolVLM

    Hugging Face

    SmolVLM-Instruct is a compact, AI-powered multimodal model that combines the capabilities of vision and language processing, designed to handle tasks like image captioning, visual question answering, and multimodal storytelling. It works with both text and image inputs, providing highly efficient results while being optimized for smaller, resource-constrained environments. Built with SmolLM2 as its text decoder and SigLIP as its image encoder, the model offers improved performance for tasks that require integration of both textual and visual information. SmolVLM-Instruct can be fine-tuned for specific applications, offering businesses and developers a versatile tool for creating intelligent, interactive systems that require multimodal inputs.
    Starting Price: Free
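    A minimal sketch of image captioning with SmolVLM-Instruct via transformers; the checkpoint name and chat-template usage follow the model card pattern and are assumptions here.
    ```python
    # Minimal sketch: caption an image with SmolVLM-Instruct.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForVision2Seq

    model_id = "HuggingFaceTB/SmolVLM-Instruct"  # checkpoint name is an assumption
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

    image = Image.open("photo.jpg")
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=prompt, images=[image], return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128)
    print(processor.batch_decode(output, skip_special_tokens=True)[0])
    ```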
  • 6
    NVIDIA Cosmos
    NVIDIA Cosmos is a developer-first platform of state-of-the-art generative World Foundation Models (WFMs), advanced video tokenizers, guardrails, and an accelerated data processing and curation pipeline designed to supercharge physical AI development. It enables developers working on autonomous vehicles, robotics, and video analytics AI agents to generate photorealistic, physics-aware synthetic video data, trained on an immense dataset including 20 million hours of real-world and simulated video, to rapidly simulate future scenarios, train world models, and fine‑tune custom behaviors. It includes three core WFM types; Cosmos Predict, capable of generating up to 30 seconds of continuous video from multimodal inputs; Cosmos Transfer, which adapts simulations across environments and lighting for versatile domain augmentation; and Cosmos Reason, a vision-language model that applies structured reasoning to interpret spatial-temporal data for planning and decision-making.
    Starting Price: Free
  • 7
    Aya

    Cohere AI

    Aya is a new state-of-the-art, open-source, massively multilingual, generative large language research model (LLM) covering 101 different languages — more than double the number of languages covered by existing open-source models. Aya helps researchers unlock the powerful potential of LLMs for dozens of languages and cultures largely ignored by most advanced models on the market today. We are open-sourcing both the Aya model and the largest multilingual instruction fine-tuning dataset to date, with 513 million examples covering 114 languages. This data collection includes rare annotations from native and fluent speakers all around the world, ensuring that AI technology can effectively serve a broad global audience that has had limited access to date.
  • 8
    Seed1.8

    ByteDance

    Seed1.8 is ByteDance’s latest generalized agentic AI model designed to bridge understanding and real-world action by combining multimodal perception, agent-like task execution, and wide-ranging reasoning capabilities into a single foundation model that goes beyond simple language generation. It supports multimodal inputs, including text, images, and video, processes very large context windows (hundreds of thousands of tokens at once), and is optimized to handle complex workflows in real environments, such as information retrieval, code generation, GUI interaction, and multi-step decision logic, with efficient, accurate responses suitable for real-world applications. Seed1.8 unifies skills such as search, code understanding, visual context interpretation, and autonomous reasoning so developers and AI systems can build interactive agents and next-generation workflows capable of synthesizing evidence, following instructions deeply, and acting on tasks like automation.
  • 9
    Magma

    Meta Platforms

    Communications service providers leverage Magma's open network core solution to connect people using LTE, 5G, Wi-Fi, and beyond. A cost-effective, open, flexible, and commercial-grade EPC. Meta Connectivity is a development contributor to Magma, an Evolved Packet Core (EPC) that enables Communication Service Providers (CSPs) to deliver fast, reliable internet connectivity with innovative features that only a robust open-source community of developers can bring. Magma is an open-source software platform enabling operators to easily deploy mobile networks in hard-to-reach areas at a price point that makes sense. When working with qualified partners to deploy and operate Magma, CSPs can rest assured that their most stringent requirements are met. Magma is vendor, hardware, and network agnostic. CSPs can use what makes sense for them, from radio access network (RAN) equipment to commodity hardware to licensed or unlicensed spectrum.
  • 10
    Pixtral Large

    Mistral AI

    Pixtral Large is a 124-billion-parameter open-weight multimodal model developed by Mistral AI, building upon their Mistral Large 2 architecture. It integrates a 123-billion-parameter multimodal decoder with a 1-billion-parameter vision encoder, enabling advanced understanding of documents, charts, and natural images while maintaining leading text comprehension capabilities. With a context window of 128,000 tokens, Pixtral Large can process at least 30 high-resolution images simultaneously. The model has demonstrated state-of-the-art performance on benchmarks such as MathVista, DocVQA, and VQAv2, surpassing models like GPT-4o and Gemini-1.5 Pro. Pixtral Large is available under the Mistral Research License for research and educational use, and under the Mistral Commercial License for commercial applications.
    Starting Price: Free
  • 11
    GPT-4V (Vision)
    GPT-4 with vision (GPT-4V) enables users to instruct GPT-4 to analyze image inputs provided by the user, and is the latest capability we are making broadly available. Incorporating additional modalities (such as image inputs) into large language models (LLMs) is viewed by some as a key frontier in artificial intelligence research and development. Multimodal LLMs offer the possibility of expanding the impact of language-only systems with novel interfaces and capabilities, enabling them to solve new tasks and provide novel experiences for their users. In this system card, we analyze the safety properties of GPT-4V. Our work on safety for GPT-4V builds on the work done for GPT-4 and here we dive deeper into the evaluations, preparation, and mitigation work done specifically for image inputs.
  • 12
    Mistral Small

    Mistral AI

    On September 17, 2024, Mistral AI announced several key updates to enhance the accessibility and performance of their AI offerings. They introduced a free tier on "La Plateforme," their serverless platform for tuning and deploying Mistral models as API endpoints, enabling developers to experiment and prototype at no cost. Additionally, Mistral AI reduced prices across their entire model lineup, with significant cuts such as a 50% reduction for Mistral Nemo and an 80% decrease for Mistral Small and Codestral, making advanced AI more cost-effective for users. The company also unveiled Mistral Small v24.09, a 22-billion-parameter model offering a balance between performance and efficiency, suitable for tasks like translation, summarization, and sentiment analysis. Furthermore, they made Pixtral 12B, a vision-capable model with image understanding capabilities, freely available on "Le Chat," allowing users to analyze and caption images without compromising text-based performance.
    Starting Price: Free
  • 13
    Qwen2.5-VL

    Alibaba

    Qwen2.5-VL is the latest vision-language model from the Qwen series, representing a significant advancement over its predecessor, Qwen2-VL. This model excels in visual understanding, capable of recognizing a wide array of objects, including text, charts, icons, graphics, and layouts within images. It functions as a visual agent, capable of reasoning and dynamically directing tools, enabling applications such as computer and phone usage. Qwen2.5-VL can comprehend videos exceeding one hour in length and can pinpoint relevant segments within them. Additionally, it accurately localizes objects in images by generating bounding boxes or points and provides stable JSON outputs for coordinates and attributes. The model also supports structured outputs for data like scanned invoices, forms, and tables, benefiting sectors such as finance and commerce. Available in base and instruct versions across 3B, 7B, and 72B sizes, Qwen2.5-VL is accessible through platforms like Hugging Face and ModelScope.
    Starting Price: Free
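    A minimal sketch of the structured-output use case above, assuming the Qwen/Qwen2.5-VL-7B-Instruct checkpoint and a transformers release that ships the Qwen2.5-VL classes; the image and prompt are illustrative.
    ```python
    # Minimal sketch: extract structured JSON from a scanned invoice.
    from PIL import Image
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

    model_id = "Qwen/Qwen2.5-VL-7B-Instruct"  # 3B and 72B variants also exist
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    image = Image.open("invoice.png")
    messages = [{"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Extract the line items from this invoice as JSON."},
    ]}]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(processor.batch_decode(output, skip_special_tokens=True)[0])
    ```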
  • 14
    GPT-4o

    OpenAI

    GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
    Starting Price: $5.00 / 1M tokens
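    A minimal sketch of a mixed text-and-image request, assuming the OpenAI Python SDK; the image URL is a placeholder.
    ```python
    # Minimal sketch: send text plus an image to GPT-4o in one request.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)
    ```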
  • 15
    Falcon 2

    Technology Innovation Institute (TII)

    Falcon 2 11B is an open-source, multilingual, and multimodal AI model, uniquely equipped with vision-to-language capabilities. It surpasses Meta’s Llama 3 8B and delivers performance on par with Google’s Gemma 7B, as independently confirmed by the Hugging Face Leaderboard. Looking ahead, the next phase of development will integrate a 'Mixture of Experts' approach to further enhance Falcon 2’s capabilities, pushing the boundaries of AI innovation.
    Starting Price: Free
  • 16
    Ray2

    Luma AI

    Ray2 is a large-scale video generative model capable of creating realistic visuals with natural, coherent motion. It has a strong understanding of text instructions and can take images and video as input. Ray2 exhibits advanced capabilities as a result of being trained on Luma’s new multi-modal architecture scaled to 10x compute of Ray1. Ray2 marks the beginning of a new generation of video models capable of producing fast coherent motion, ultra-realistic details, and logical event sequences. This increases the success rate of usable generations and makes videos generated by Ray2 substantially more production-ready. Text-to-video generation is available in Ray2 now, with image-to-video, video-to-video, and editing capabilities coming soon. Ray2 brings a whole new level of motion fidelity: smooth, cinematic, and jaw-dropping visuals that transform your vision into reality. Tell your story with stunning, cinematic visuals; Ray2 lets you craft breathtaking scenes with precise camera movements.
    Starting Price: $9.99 per month
  • 17
    Palmyra LLM
    Palmyra is a suite of Large Language Models (LLMs) engineered for precise, dependable performance in enterprise applications. These models excel in tasks such as question-answering, image analysis, and support for over 30 languages, with fine-tuning available for industries like healthcare and finance. Notably, Palmyra models have achieved top rankings in benchmarks like Stanford HELM and PubMedQA, and Palmyra-Fin is the first model to pass the CFA Level III exam. Writer ensures data privacy by not using client data to train or modify their models, adopting a zero data retention policy. The Palmyra family includes specialized models such as Palmyra X 004, featuring tool-calling capabilities; Palmyra Med, tailored for healthcare; Palmyra Fin, designed for finance; and Palmyra Vision, which offers advanced image and video processing. These models are available through Writer's full-stack generative AI platform, which integrates graph-based Retrieval Augmented Generation (RAG).
    Starting Price: $18 per month
  • 18
    Florence-2

    Microsoft

    Florence-2-large is an advanced vision foundation model developed by Microsoft, capable of handling a wide variety of vision and vision-language tasks, such as captioning, object detection, segmentation, and OCR. Built with a sequence-to-sequence architecture, it uses the FLD-5B dataset containing over 5 billion annotations and 126 million images to master multi-task learning. Florence-2-large excels in both zero-shot and fine-tuned settings, providing high-quality results with minimal training. The model supports tasks including detailed captioning, object detection, and dense region captioning, and can process images with text prompts to generate relevant responses. It offers great flexibility by handling diverse vision-related tasks through prompt-based approaches, making it a competitive tool in AI-powered visual tasks. The model is available on Hugging Face with pre-trained weights, enabling users to quickly get started with image processing and task execution.
    Starting Price: Free
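    A minimal sketch of the prompt-based task interface, assuming the microsoft/Florence-2-large checkpoint on Hugging Face, whose custom task handling is loaded via trust_remote_code; the image is illustrative.
    ```python
    # Minimal sketch: object detection with Florence-2's task-prompt interface.
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoProcessor

    model_id = "microsoft/Florence-2-large"
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

    image = Image.open("street.jpg")
    task = "<OD>"  # object detection; other tags cover captioning, OCR, etc.
    inputs = processor(text=task, images=image, return_tensors="pt")
    ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=512,
    )
    raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
    print(processor.post_process_generation(raw, task=task, image_size=image.size))
    ```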
  • 19
    QVQ-Max

    Alibaba

    QVQ-Max is a visual reasoning model designed to analyze and understand visual content, allowing users to solve complex problems with the help of images, videos, and diagrams. By combining deep reasoning and detailed observation, QVQ-Max can identify objects in photos, process mathematical problems, and even predict the next scene in a video. It also aids in creative tasks, from generating illustrations to writing video scripts, offering a versatile tool for both work and personal use. This first iteration, though still evolving, demonstrates impressive potential in various fields like education, professional work, and everyday problem-solving.
    Starting Price: Free
  • 20
    Moondream

    Moondream is an open source vision language model designed for efficient image understanding across various devices, including servers, PCs, mobile phones, and edge devices. It offers two primary variants: Moondream 2B, a 1.9-billion-parameter model providing robust performance for general-purpose tasks, and Moondream 0.5B, a compact 500-million-parameter model optimized for resource-constrained hardware. Both models support quantization formats like fp16, int8, and int4, allowing for reduced memory usage without significant performance loss. Moondream's capabilities include generating detailed image captions, answering visual queries, performing object detection, and pinpointing specific items within images. Its design emphasizes versatility and accessibility, enabling deployment across a wide range of platforms.
    Starting Price: Free
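    A minimal sketch of a visual query with Moondream via transformers; the interface shown follows a recent vikhyatk/moondream2 revision and may differ between releases, so treat the method and return shape as assumptions.
    ```python
    # Minimal sketch: ask Moondream a question about a local image.
    from PIL import Image
    from transformers import AutoModelForCausalLM

    # trust_remote_code loads Moondream's custom query/caption interface
    model = AutoModelForCausalLM.from_pretrained(
        "vikhyatk/moondream2", trust_remote_code=True
    )
    image = Image.open("photo.jpg")
    # `query` returning a dict with an "answer" key is revision-dependent
    print(model.query(image, "What objects are on the table?")["answer"])
    ```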
  • 21
    Gemini Robotics

    Google DeepMind

    Gemini Robotics brings Gemini’s capacity for multimodal reasoning and world understanding into the physical world, allowing robots of any shape and size to perform a wide range of real-world tasks. Built on Gemini 2.0, it augments advanced vision-language-action models with the ability to reason about physical spaces, generalize to novel situations, including unseen objects, diverse instructions, and new environments, and understand and respond to everyday conversational commands while adapting to sudden changes in instructions or surroundings without further input. Its dexterity module enables complex tasks requiring fine motor skills and precise manipulation, such as folding origami, packing lunch boxes, or preparing salads, and it supports multiple embodiments, from bi-arm platforms like ALOHA 2 to humanoid robots such as Apptronik’s Apollo. It is optimized for local execution and has an SDK for seamless adaptation to new tasks and environments.
  • 22
    PaliGemma 2
    PaliGemma 2, the next evolution in tunable vision-language models, builds upon the performant Gemma 2 models, adding the power of vision and making it easier than ever to fine-tune for exceptional performance. With PaliGemma 2, these models can see, understand, and interact with visual input, opening up a world of new possibilities. It offers scalable performance with multiple model sizes (3B, 10B, 28B parameters) and resolutions (224px, 448px, 896px). PaliGemma 2 generates detailed, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene. Our research demonstrates leading performance in chemical formula recognition, music score recognition, spatial reasoning, and chest X-ray report generation, as detailed in the technical report. Upgrading to PaliGemma 2 is a breeze for existing PaliGemma users.
  • 23
    Grok 4
    Grok 4 is the latest AI model from Elon Musk’s xAI, marking a significant advancement in AI reasoning and natural language understanding. Developed on the Colossus supercomputer, Grok 4 supports multimodal inputs including text and images, with plans to add video capabilities soon. It features enhanced precision in language tasks and has demonstrated superior performance in scientific reasoning and visual problem-solving compared to other leading AI models. Designed for developers, researchers, and technical users, Grok 4 offers powerful tools for complex tasks. The model incorporates improved moderation to address previous concerns about biased or problematic outputs. Grok 4 represents a major leap forward in AI’s ability to understand and generate human-like responses.
  • 24
    Qwen2-VL

    Alibaba

    Qwen2-VL is the latest version of the vision-language models based on Qwen2 in the Qwen model family. Compared with Qwen-VL, Qwen2-VL adds: state-of-the-art understanding of images of various resolutions and aspect ratios, with top performance on visual understanding benchmarks including MathVista, DocVQA, RealWorldQA, and MTVQA; understanding of videos of 20 minutes and longer, enabling high-quality video-based question answering, dialog, and content creation; agent capabilities that can operate mobile phones, robots, and other devices, using complex reasoning and decision-making to act on visual environments and text instructions; and multilingual support, so that besides English and Chinese, Qwen2-VL now understands text in many other languages inside images, to serve global users.
    Starting Price: Free
  • 25
    Seed2.0 Lite

    ByteDance

    Seed2.0 Lite is part of ByteDance’s Seed2.0 family of general-purpose multimodal AI agent models designed to handle complex, real-world tasks with a balanced focus on performance and efficiency. It offers enhanced multimodal understanding and instruction-following capabilities compared with earlier Seed models, enabling it to process and reason about text, visual elements, and structured information reliably for production-grade applications. As a mid-sized model in the series, Lite is optimized to deliver good quality outputs with responsive performance at lower cost and faster inference than the Pro variant while surpassing the previous generation’s capabilities, making it suitable for workflows that require stable reasoning, long-context understanding, and multimodal task execution without needing the highest possible raw performance.
  • 26
    Magma

    With a community of over 2 million artists, mentors, and creative studios, we're on a mission to make art multiplayer. Magma is a browser-based digital art platform that helps creators and art teams work more collaboratively. Combining powerful drawing tools with in-app chat and call capabilities, we enable up to 50 artists to work on a canvas at once. Supporting industries from animation to game development, we empower artists to bring their boldest ideas to life. We believe that collaborative processes are key to consistently creating high-quality art on schedule. After just one month, adding Magma to your workflow will offer: 🏎️ 25% reduction in delivery time ✍️ 40% fewer revisions for creative teams ⏰ 32% less time spent in meetings
  • 27
    Claude Haiku 3
    Claude Haiku 3 is the fastest and most affordable model in its intelligence class. With state-of-the-art vision capabilities and strong performance on industry benchmarks, Haiku is a versatile solution for a wide range of enterprise applications. The model is now available alongside Sonnet and Opus in the Claude API and on claude.ai for our Claude Pro subscribers.
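    A minimal sketch of calling Haiku through the Anthropic Python SDK; the dated model ID and prompt are assumptions here.
    ```python
    # Minimal sketch: a fast, low-cost classification call to Claude Haiku 3.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-haiku-20240307",  # model ID is an assumption
        max_tokens=256,
        messages=[{"role": "user", "content": "Classify this ticket: 'My invoice total is wrong.'"}],
    )
    print(msg.content[0].text)
    ```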
  • 28
    Grok 4.20
    Grok 4.20 is an advanced artificial intelligence model developed by xAI to elevate reasoning and natural language understanding. Built on the high-performance Colossus supercomputer, it is engineered for speed, scale, and accuracy. Grok 4.20 processes multimodal inputs such as text and images, with video support planned for future releases. The model excels in scientific, technical, and linguistic tasks, delivering highly precise and context-aware responses. Its architecture supports deep reasoning and sophisticated problem-solving capabilities. Enhanced moderation improves output reliability and reduces bias compared to earlier versions. Overall, Grok 4.20 represents a significant step toward more human-like AI reasoning and interpretation.
  • 29
    AI Verse

    When real-life data capture is challenging, we generate diverse, fully labeled image datasets. Our procedural technology ensures the highest quality, unbiased, labeled synthetic datasets that will improve your computer vision model’s accuracy. AI Verse empowers users with full control over scene parameters, ensuring you can fine-tune the environments for unlimited image generation, giving you an edge in the competitive landscape of computer vision development.
  • 30
    GLM-4.1V

    Zhipu AI

    GLM-4.1V is a powerful, compact vision-language model designed for reasoning and perception across images, text, and documents. The 9-billion-parameter variant (GLM-4.1V-9B-Thinking) is built on the GLM-4-9B foundation and enhanced through a specialized training paradigm using Reinforcement Learning with Curriculum Sampling (RLCS). It supports a 64k-token context window and accepts high-resolution inputs (up to 4K images, any aspect ratio), enabling it to handle complex tasks such as optical character recognition, image captioning, chart and document parsing, video and scene understanding, GUI-agent workflows (e.g., interpreting screenshots, recognizing UI elements), and general vision-language reasoning. In benchmark evaluations at the 10B-parameter scale, GLM-4.1V-9B-Thinking achieved top performance on 23 of 28 tasks.
    Starting Price: Free
  • 31
    fullmoon

    Fullmoon is a free, open source application that enables users to interact with large language models directly on their devices, ensuring privacy and offline accessibility. Optimized for Apple silicon, it operates seamlessly across iOS, iPadOS, macOS, and visionOS platforms. Users can personalize the app by adjusting themes, fonts, and system prompts, and it integrates with Apple's Shortcuts for enhanced functionality. Fullmoon supports models like Llama-3.2-1B-Instruct-4bit and Llama-3.2-3B-Instruct-4bit, facilitating efficient on-device AI interactions without the need for an internet connection.
    Starting Price: Free
  • 32
    Seed2.0 Pro

    ByteDance

    Seed2.0 Pro is an advanced general-purpose agent model designed for large-scale production environments and complex real-world tasks. It focuses on long-chain inference capabilities and stability, making it ideal for handling multi-step workflows and intricate business applications. As part of the Seed 2.0 model series, it delivers major upgrades in multimodal understanding, including visual reasoning, motion perception, and instruction-following accuracy. The model demonstrates state-of-the-art performance across leading benchmarks in mathematics, science, coding, and visual reasoning. Seed2.0 Pro excels at interactive visual applications, such as recreating webpages from a single image and generating runnable front-end code with animations. It also supports professional workflows like CAD modeling, biotechnology research assistance, and structured data extraction from complex charts.
  • 33
    Mistral Medium 3.1
    Mistral Medium 3.1 is the latest frontier-class multimodal foundation model released in August 2025, designed to deliver advanced reasoning, coding, and multimodal capabilities while dramatically reducing deployment complexity and costs. It builds on the highly efficient architecture of Mistral Medium 3, renowned for offering state-of-the-art performance at up to 8-times lower cost than leading large models, enhancing tone consistency, responsiveness, and accuracy across diverse tasks and modalities. The model supports deployment across hybrid environments, on-premises systems, and virtual private clouds, and it achieves competitive performance relative to high-end models such as Claude Sonnet 3.7, Llama 4 Maverick, and Cohere Command A. Ideal for professional and enterprise use cases, Mistral Medium 3.1 excels in coding, STEM reasoning, language understanding, and multimodal comprehension, while maintaining broad compatibility with custom workflows and infrastructure.
  • 34
    HunyuanOCR

    Tencent

    Tencent Hunyuan is a large-scale, multimodal AI model family developed by Tencent that spans text, image, video, and 3D modalities, designed for general-purpose AI tasks like content generation, visual reasoning, and business automation. Its model lineup includes variants optimized for natural language understanding, multimodal vision-language comprehension (e.g., image & video understanding), text-to-image creation, video generation, and 3D content generation. Hunyuan models leverage a mixture-of-experts architecture and other innovations (like hybrid “mamba-transformer” designs) to deliver strong performance on reasoning, long-context understanding, cross-modal tasks, and efficient inference. For example, the vision-language model Hunyuan-Vision-1.5 supports “thinking-on-image”, enabling deep multimodal understanding and reasoning on images, video frames, diagrams, or spatial data.
  • 35
    Amazon Nova
    Amazon Nova is a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry leading price-performance, available exclusively on Amazon Bedrock. Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are understanding models that accept text, image, or video inputs and generate text output. They provide a broad selection of capability, accuracy, speed, and cost operation points. Amazon Nova Micro is a text only model that delivers the lowest latency responses at very low cost. Amazon Nova Lite is a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs. Amazon Nova Pro is a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks. Amazon Nova Pro’s capabilities, coupled with its industry-leading speed and cost efficiency, makes it a compelling model for almost any task, including video summarization, Q&A, math & more.
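    A minimal sketch of invoking a Nova model through the Bedrock Converse API with boto3; the model ID and region are assumptions.
    ```python
    # Minimal sketch: text generation with Amazon Nova Lite via Bedrock.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.converse(
        modelId="amazon.nova-lite-v1:0",  # assumed ID; Micro/Pro follow the same pattern
        messages=[{"role": "user", "content": [{"text": "Summarize this transcript: ..."}]}],
    )
    print(resp["output"]["message"]["content"][0]["text"])
    ```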
  • 36
    GLM-4.5V-Flash
    GLM-4.5V-Flash is an open source vision-language model, designed to bring strong multimodal capabilities into a lightweight, deployable package. It supports image, video, document, and GUI inputs, enabling tasks such as scene understanding, chart and document parsing, screen reading, and multi-image analysis. Compared to larger models in the series, GLM-4.5V-Flash offers a compact footprint while retaining core VLM capabilities like visual reasoning, video understanding, GUI task handling, and complex document parsing. It can serve in “GUI agent” workflows, meaning it can interpret screenshots or desktop captures, recognize icons or UI elements, and assist with automated desktop or web-based tasks. Although it forgoes some of the largest-model performance gains, GLM-4.5V-Flash remains versatile for real-world multimodal tasks where efficiency, lower resource usage, and broad modality support are prioritized.
    Starting Price: Free
  • 37
    GWM-1

    Runway AI

    GWM-1 is Runway’s state-of-the-art General World Model designed to simulate the real world in real time. It is an interactive, controllable, and general-purpose model built on top of Runway’s Gen-4.5 architecture. GWM-1 generates high-fidelity video frame by frame while maintaining long-term spatial and behavioral consistency. The model supports action-conditioning through inputs such as camera movement, robot actions, events, and speech. GWM-1 enables realistic visual simulation paired with synchronized video and audio outputs. It is designed to help AI systems experience environments rather than just describe them. GWM-1 represents a major step toward general-purpose simulation beyond language-only models.
  • 38
    LFM2.5

    Liquid AI

    Liquid AI’s LFM2.5 is the next generation of on-device AI foundation models designed to deliver high-performance, efficient AI inference on edge devices such as phones, laptops, vehicles, IoT systems, and embedded hardware without relying on cloud compute. It extends the previous LFM2 architecture by significantly increasing the pretraining scale and reinforcement learning stages, yielding a family of hybrid models around 1.2 billion parameters that balance instruction following, reasoning, and multimodal capabilities for real-world agentic use cases. The LFM2.5 family includes Base (for fine-tuning and customization), Instruct (general-purpose instruction-tuned), Japanese-optimized, Vision-Language, and Audio-Language variants, all optimized for fast, on-device inference under tight memory constraints and available as open-weight models deployable via frameworks like llama.cpp, MLX, vLLM, and ONNX.
    Starting Price: Free
  • 39
    DeepSeek-VL

    DeepSeek

    DeepSeek-VL is an open source Vision-Language (VL) model designed for real-world vision and language understanding applications. Our approach is structured around three key dimensions: We strive to ensure our data is diverse, scalable, and extensively covers real-world scenarios, including web screenshots, PDFs, OCR, charts, and knowledge-based content, aiming for a comprehensive representation of practical contexts. Further, we create a use case taxonomy from real user scenarios and construct an instruction tuning dataset accordingly. The fine-tuning with this dataset substantially improves the model's user experience in practical applications. Considering efficiency and the demands of most real-world scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently processes high-resolution images (1024 x 1024), while maintaining a relatively low computational overhead.
    Starting Price: Free
  • 40
    Jurassic-2
    Announcing the launch of Jurassic-2, the latest generation of AI21 Studio’s foundation models, a game-changer in the field of AI, with top-tier quality and new capabilities. And that's not all, we're also releasing our task-specific APIs, with plug-and-play reading and writing capabilities that outperform competitors. Our focus at AI21 Studio is to help developers and businesses leverage reading and writing AI to build real-world products with tangible value. Today marks two important milestones with the release of Jurassic-2 and Task-Specific APIs, empowering you to bring generative AI to production. Jurassic-2 (or J2, as we like to call it) is the next generation of our foundation models with significant improvements in quality and new capabilities including zero-shot instruction-following, reduced latency, and multi-language support. Task-specific APIs provide developers with industry-leading APIs that perform specialized reading and writing tasks out-of-the-box.
    Starting Price: $29 per month
  • 41
    Uni-1

    Luma AI

    UNI-1 is a multimodal artificial intelligence model developed by Luma AI that unifies visual generation and reasoning capabilities within a single architecture, representing a step toward multimodal general intelligence. It was designed to overcome the limitations of traditional AI pipelines, where language models, image generators, and other systems operate independently without shared reasoning. UNI-1 integrates these capabilities so that language, visual understanding, and image generation work together inside one system, allowing the model to reason about scenes, interpret instructions, and generate visual outputs that follow logical and spatial constraints. At its core, UNI-1 is a decoder-only autoregressive transformer that processes text and images as a single interleaved sequence of tokens, enabling the model to treat language and visual information within the same computational framework rather than through separate encoders.
  • 42
    Amazon Titan
    Amazon Titan is a series of advanced foundation models (FMs) from AWS, designed to enhance generative AI applications with high performance and flexibility. Built on AWS's 25 years of AI and machine learning experience, Titan models support a range of use cases such as text generation, summarization, semantic search, and image generation. Titan models are optimized for responsible AI use, incorporating built-in safety features and fine-tuning capabilities. They can be customized with your own data through Retrieval Augmented Generation (RAG) to improve accuracy and relevance, making them ideal for both general-purpose and specialized AI tasks.
  • 43
    Azure AI Services
    Build cutting-edge, market-ready AI applications with out-of-the-box and customizable APIs and models. Quickly infuse generative AI into production workloads using studios, SDKs, and APIs. Gain a competitive edge by building AI apps powered by foundation models, including those from OpenAI, Meta, and Microsoft. Detect and mitigate harmful use with built-in responsible AI, enterprise-grade Azure security, and responsible AI tooling. Build your own copilot and generative AI applications with cutting-edge language and vision models. Retrieve the most relevant data using keyword, vector, and hybrid search. Monitor text and images to detect offensive or inappropriate content. Translate documents and text in real time across more than 100 languages.
  • 44
    Marengo

    TwelveLabs

    Marengo is a multimodal video foundation model that transforms video, audio, image, and text inputs into unified embeddings, enabling powerful “any-to-any” search, retrieval, classification, and analysis across vast video and multimedia libraries. It integrates visual frames (with spatial and temporal dynamics), audio (speech, ambient sound, music), and textual content (subtitles, overlays, metadata) to create a rich, multidimensional representation of each media item. With this embedding architecture, Marengo supports robust tasks such as search (text-to-video, image-to-video, video-to-audio, etc.), semantic content discovery, anomaly detection, hybrid search, clustering, and similarity-based recommendation. The latest versions introduce multi-vector embeddings, separating representations for appearance, motion, and audio/text features, which significantly improve precision and context awareness, especially for complex or long-form content.
    Starting Price: $0.042 per minute
  • 45
    Veo 2

    Google

    Veo 2 is a state-of-the-art video generation model. Veo creates videos with realistic motion and high quality output, up to 4K. Explore different styles and find your own with extensive camera controls. Veo 2 is able to faithfully follow simple and complex instructions, and convincingly simulates real-world physics as well as a wide range of visual styles. Significantly improves over other AI video models in terms of detail, realism, and artifact reduction. Veo represents motion to a high degree of accuracy, thanks to its understanding of physics and its ability to follow detailed instructions. Interprets instructions precisely to create a wide range of shot styles, angles, movements – and combinations of all of these.
  • 46
    AskUI

    AskUI is an innovative platform that enables AI agents to visually perceive and interact with any computer interface, facilitating seamless automation across various operating systems and applications. Leveraging advanced vision models, AskUI's PTA-1 prompt-to-action model allows users to execute AI-driven actions on Windows, macOS, Linux, and mobile devices without the need for jailbreaking. This technology is particularly beneficial for tasks such as desktop and mobile automation, visual testing, and document or data processing. By integrating with tools like Jira, Jenkins, GitLab, and Docker, AskUI enhances workflow efficiency and reduces the burden on developers. Companies like Deutsche Bahn have reported significant improvements in internal processes, citing over a 90% increase in efficiency through the use of AskUI's test automation capabilities.
  • 47
    Janus-Pro-7B
    Janus-Pro-7B is an innovative open-source multimodal AI model from DeepSeek, designed to excel in both understanding and generating content across text, images, and videos. It leverages a unique autoregressive architecture with separate pathways for visual encoding, enabling high performance in tasks ranging from text-to-image generation to complex visual comprehension. This model outperforms competitors like DALL-E 3 and Stable Diffusion in various benchmarks, offering scalability with versions from 1 billion to 7 billion parameters. Licensed under the MIT License, Janus-Pro-7B is freely available for both academic and commercial use, providing a significant leap in AI capabilities while being accessible on major operating systems like Linux, MacOS, and Windows through Docker.
    Starting Price: Free
  • 48
    Muse

    Microsoft

    Microsoft has unveiled Muse, a groundbreaking generative AI model designed to revolutionize gameplay ideation. Developed in collaboration with Ninja Theory, Muse is a World and Human Action Model (WHAM) trained on data from the game Bleeding Edge. This AI model possesses a comprehensive understanding of 3D game environments, including physics and player interactions, enabling it to generate consistent and diverse gameplay sequences. Muse can produce game visuals and predict controller actions, facilitating rapid prototyping and creative exploration for game developers. By analyzing over 1 billion images and actions, Muse demonstrates the potential to assist in game preservation by recreating classic titles for modern platforms. While still in the early stages, with current outputs at a resolution of 300×180 pixels, Muse represents a significant advancement in integrating AI into the game development process, aiming to enhance, not replace, human creativity.
  • 49
    Codestral

    Mistral AI

    We introduce Codestral, our first-ever code model. Codestral is an open-weight generative AI model explicitly designed for code generation tasks. It helps developers write and interact with code through a shared instruction and completion API endpoint. As it masters code and English, it can be used to design advanced AI applications for software developers. Codestral is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash. It also performs well on more specific ones like Swift and Fortran. This broad language base ensures Codestral can assist developers in various coding environments and projects.
    Starting Price: Free
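    A minimal sketch of a code-generation call through the Mistral Python SDK; the model alias is an assumption (Codestral has also been served from a dedicated endpoint with its own API key).
    ```python
    # Minimal sketch: ask Codestral to generate a function.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    resp = client.chat.complete(
        model="codestral-latest",  # assumed alias
        messages=[{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}],
    )
    print(resp.choices[0].message.content)
    ```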
  • 50
    GPT-4.1

    OpenAI

    GPT-4.1 is an advanced AI model from OpenAI, designed to enhance performance across key tasks such as coding, instruction following, and long-context comprehension. With a large context window of up to 1 million tokens, GPT-4.1 can process and understand extensive datasets, making it ideal for tasks like software development, document analysis, and AI agent workflows. Available through the API, GPT-4.1 offers significant improvements over previous models, excelling at real-world applications where efficiency and accuracy are crucial.
    Starting Price: $2 per 1M tokens (input)