AI Image Generators for ChromeOS

Browse free, open source AI Image Generators for ChromeOS below. Use the toggles on the left to filter the list by OS, license, language, programming language, and project status.

  • 1
    Stable Diffusion

    High-Resolution Image Synthesis with Latent Diffusion Models

    This repository hosts Stable Diffusion Version 2, Stability AI's latent diffusion model for high-resolution image synthesis. It generates detailed images from text prompts, making it flexible for a wide range of creative applications, and its latent-space design keeps generation efficient even at high resolutions. The repository contains pretrained checkpoints and tools for common image generation tasks, including fine-tuning and modifying the models. A minimal usage sketch follows this entry.
    Downloads: 81 This Week
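
    The repository ships its own sampling scripts, but the Version 2 checkpoints are also published for the Hugging Face Diffusers library. A minimal text-to-image sketch, assuming the stabilityai/stable-diffusion-2 checkpoint on the Hugging Face Hub and a CUDA GPU:

        # Load the 768-v Stable Diffusion 2 checkpoint via Diffusers.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
        ).to("cuda")

        # The 768-v model was trained at 768x768.
        image = pipe("a photograph of a mountain lake at dawn",
                     height=768, width=768).images[0]
        image.save("lake.png")
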
  • 2
    Janus-Pro

    Janus-Series: Unified Multimodal Understanding and Generation Models

    Janus is a cutting-edge, unified multimodal model designed to advance both multimodal understanding and generation. It decouples visual encoding, processing understanding and generation through separate visual pathways while sharing a single transformer backbone, which improves flexibility and performance. This design lets Janus match or surpass specialized task-specific models across diverse multimodal inputs while generating high-quality outputs. Its latest iteration, Janus-Pro, adds a more optimized training strategy, expanded data, and larger model scaling, leading to significant advances in both multimodal understanding and text-to-image generation. A toy sketch of the decoupled-encoding idea follows this entry.
    Downloads: 1 This Week
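
    Janus ships with its own inference code rather than a standard pipeline API, so no loading snippet is attempted here. Instead, a toy PyTorch sketch (illustrative only, not DeepSeek's implementation) of the decoupled visual encoding described above: one pathway embeds images for understanding, a separate discrete-code pathway serves generation, and both feed a single shared transformer.

        # Toy illustration of decoupled visual encoding (not the real Janus code).
        import torch
        import torch.nn as nn

        class DecoupledMultimodal(nn.Module):
            def __init__(self, dim=64, vocab=1024):
                super().__init__()
                # Understanding pathway: ViT-style patch embedding of raw pixels.
                self.understand_enc = nn.Conv2d(3, dim, kernel_size=16, stride=16)
                # Generation pathway: embeddings of discrete (VQ-style) image codes.
                self.gen_codebook = nn.Embedding(vocab, dim)
                self.text_emb = nn.Embedding(vocab, dim)
                layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
                self.backbone = nn.TransformerEncoder(layer, num_layers=2)  # shared

            def forward(self, text_ids, image=None, image_codes=None):
                parts = [self.text_emb(text_ids)]
                if image is not None:        # understanding input
                    parts.append(self.understand_enc(image).flatten(2).transpose(1, 2))
                if image_codes is not None:  # generation input
                    parts.append(self.gen_codebook(image_codes))
                return self.backbone(torch.cat(parts, dim=1))

        model = DecoupledMultimodal()
        out = model(torch.randint(0, 1024, (1, 8)), image=torch.randn(1, 3, 64, 64))
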
  • 3
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s Diffusers library and supports both PyTorch and JAX/Flax frameworks, offering flexibility across GPUs and TPUs. Though powerful, the model has limitations with compositional logic, photorealism, non-English prompts, and rendering accurate text or faces. Intended for research and creative exploration, it includes safety tools to detect NSFW content but may still reflect dataset biases. Users are advised to follow responsible AI practices and avoid harmful, unethical, or out-of-scope applications. A minimal loading sketch follows this entry.
    Downloads: 0 This Week
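
    A minimal sketch of the Diffusers workflow described above, assuming the CompVis/stable-diffusion-v1-4 checkpoint and a CUDA GPU (the JAX/Flax path follows the same pattern via FlaxStableDiffusionPipeline):

        # Load the v1-4 weights through Diffusers and run one prompt.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")

        image = pipe("a watercolor painting of a fox in a forest").images[0]
        image.save("fox.png")
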
  • 4
    ControlNet

    Extension for Stable Diffusion using edge, depth, pose, and more

    ControlNet is a neural network architecture that enhances Stable Diffusion by enabling image generation conditioned on specific visual structures such as edges, poses, depth maps, and segmentation masks. By injecting these auxiliary inputs into the diffusion process, ControlNet gives users powerful control over the layout and composition of generated images while preserving the style and flexibility of generative models. It supports a wide range of conditioning types through pretrained modules, including Canny edges, HED (soft edges), Midas depth, OpenPose skeletons, normal maps, MLSD lines, scribbles, and ADE20k-based semantic segmentation. The system includes both ControlNet+SD1.5 model weights and compatible third-party detectors like Midas and OpenPose to extract input features. Each conditioning type is matched with a specific .pth model file to be used alongside Stable Diffusion for fine-grained control. A Diffusers-based Canny conditioning sketch follows this entry.
    Downloads: 0 This Week
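
    The repository pairs each conditioning type with its own .pth file for the bundled scripts; the same idea is exposed in Diffusers, sketched below with the separately published lllyasviel/sd-controlnet-canny conversion and the runwayml/stable-diffusion-v1-5 base (both assumptions, not files from this repo):

        # Canny-edge conditioning: extracted edges fix the layout of the
        # generated image while the text prompt sets content and style.
        import cv2
        import numpy as np
        import torch
        from PIL import Image
        from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
        )
        pipe = StableDiffusionControlNetPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
            torch_dtype=torch.float16,
        ).to("cuda")

        # input.png is a hypothetical local photo to take structure from.
        edges = cv2.Canny(np.array(Image.open("input.png")), 100, 200)
        control = Image.fromarray(np.stack([edges] * 3, axis=-1))
        image = pipe("a futuristic city at sunset", image=control).images[0]
        image.save("controlled.png")
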
  • 5
    ERNIE-4.5-VL-424B-A47B-Base-Paddle

    Multimodal Mixture-of-Experts model for text and vision tasks

    ERNIE-4.5-VL-424B-A47B-Base-Paddle is a multimodal Mixture-of-Experts (MoE) model developed by Baidu, designed to understand and generate both text and image-based information. It utilizes a heterogeneous MoE architecture with modality-isolated routing and specialized loss functions to ensure effective learning across both modalities. Pretrained with trillions of tokens, the model activates 47B parameters per token out of a total of 424B, optimizing for scalability and precision. Its training incorporates a staged approach, first focusing on language, then extending to vision with additional modules like ViT and visual experts. The model supports extremely long contexts (up to 131,072 tokens), enabling complex reasoning and narrative generation. Built on the PaddlePaddle framework, it leverages FP8 mixed precision, hybrid parallelism, and quantization techniques for efficient performance. A toy sketch of modality-isolated routing follows this entry.
    Downloads: 0 This Week
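
    Baidu distributes this model through the PaddlePaddle stack, so no loading snippet is attempted here. Instead, a toy PyTorch sketch (illustrative only, not Baidu's code) of the modality-isolated routing described above: each token is routed to a top-1 expert drawn only from its own modality's pool.

        # Toy modality-isolated MoE routing: text and vision experts never mix.
        import torch
        import torch.nn as nn

        class ModalityIsolatedMoE(nn.Module):
            def __init__(self, dim=64, n_experts=4):
                super().__init__()
                self.experts = nn.ModuleDict({
                    "text": nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts)),
                    "vision": nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts)),
                })
                self.routers = nn.ModuleDict({
                    "text": nn.Linear(dim, n_experts),
                    "vision": nn.Linear(dim, n_experts),
                })

            def forward(self, tokens, is_vision):
                out = torch.empty_like(tokens)
                for name, mask in (("text", ~is_vision), ("vision", is_vision)):
                    if mask.any():
                        x = tokens[mask]
                        top1 = self.routers[name](x).argmax(dim=-1)  # top-1 gate
                        out[mask] = torch.stack(
                            [self.experts[name][int(e)](t) for t, e in zip(x, top1)]
                        )
                return out

        moe = ModalityIsolatedMoE()
        y = moe(torch.randn(10, 64), is_vision=torch.rand(10) > 0.5)
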
  • 6
    FLUX.1-dev

    Powerful 12B parameter model for top-tier text-to-image creation

    FLUX.1-dev is a powerful 12-billion parameter rectified flow transformer designed for generating high-quality images from text prompts. It delivers cutting-edge output quality, just slightly below the flagship FLUX.1 [pro] model, and matches or exceeds many closed-source competitors in prompt adherence. The model is trained using guidance distillation, making it more efficient and accessible for developers and artists alike. FLUX.1-dev is openly available with weights provided to support scientific research and innovative creative workflows under a non-commercial license. It integrates smoothly with popular tools like the Diffusers library and ComfyUI, offering flexible options for local and cloud-based inference. The model supports generating large, detailed images up to 1024x1024 resolution with customizable parameters such as guidance scale and inference steps. A minimal Diffusers sketch follows this entry.
    Downloads: 0 This Week
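
    A minimal Diffusers sketch using the parameters called out above (guidance scale, inference steps), assuming the black-forest-labs/FLUX.1-dev weights, which are gated behind a license acceptance on the Hugging Face Hub:

        # Guidance-distilled model: modest guidance_scale values work well.
        import torch
        from diffusers import FluxPipeline

        pipe = FluxPipeline.from_pretrained(
            "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
        )
        pipe.enable_model_cpu_offload()  # helps fit the 12B model on smaller GPUs

        image = pipe(
            "a cat holding a sign that says hello world",
            height=1024, width=1024,
            guidance_scale=3.5,
            num_inference_steps=50,
        ).images[0]
        image.save("flux-dev.png")
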
  • 7
    OrangeMixs

    Merged Stable Diffusion models for anime and photorealistic styles

    OrangeMixs is a collection of popular merged Stable Diffusion models widely used in the Japanese AI art community, curated and maintained by WarriorMama777. The repository provides various high-quality anime-style and photorealistic merge models, designed to work seamlessly with StableDiffusionWebui:Automatic1111 and similar tools. OrangeMixs models are known for blending anime aesthetics with improved anatomical accuracy, vivid colors, and diverse artistic styles, including flat anime shading and oil painting textures. The project regularly updates models, offering detailed merge recipes and instructions using tools like the SuperMerger extension. It includes variants suitable for safe-for-work (SFW), soft NSFW, and hardcore NSFW content, giving users control over output style and content. The models are open access under the CreativeML OpenRAIL-M license, allowing commercial use with clear usage guidelines. A single-file loading sketch follows this entry.
    Downloads: 0 This Week
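
    The merges are distributed as single checkpoint files aimed at the WebUI; with Diffusers they can be loaded via from_single_file. A sketch, where the filename is hypothetical (download the merge you want from the repository first):

        # Load a merged checkpoint from one .safetensors file.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_single_file(
            "AOM3_orangemixs.safetensors",  # hypothetical local path
            torch_dtype=torch.float16,
        ).to("cuda")

        image = pipe("masterpiece, best quality, 1girl, cherry blossoms").images[0]
        image.save("orangemix.png")
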
  • 8
    stable-diffusion-2-1

    Latent diffusion model for high-quality text-to-image generation

    Stable Diffusion 2.1 is a text-to-image generation model developed by Stability AI, building on the 768-v architecture with additional fine-tuning for improved safety and image quality. It uses a latent diffusion framework that operates in a compressed image space, enabling faster and more efficient image synthesis while preserving detail. The model is conditioned on text prompts via the OpenCLIP-ViT/H encoder and supports generation at resolutions up to 768×768. Released under the OpenRAIL++ license, it permits research and commercial use with specific content restrictions. Stable Diffusion 2.1 is designed for creative tasks such as digital art, design prototyping, and educational tools, but is not suitable for generating factual representations or non-English content. The model was trained on filtered subsets of LAION-5B, with additional steps to reduce NSFW content. A minimal Diffusers sketch with a faster scheduler follows this entry.
    Downloads: 0 This Week
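
    A minimal Diffusers sketch, assuming the stabilityai/stable-diffusion-2-1 checkpoint; swapping in DPMSolverMultistepScheduler is a common way to cut the number of sampling steps:

        import torch
        from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
        )
        # Multistep solver: good results in ~25 steps instead of 50.
        pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
        pipe = pipe.to("cuda")

        image = pipe("a professional photograph of an astronaut riding a horse",
                     num_inference_steps=25).images[0]
        image.save("astronaut.png")
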
  • 9
    stable-diffusion-3-medium

    Efficient text-to-image model with enhanced quality and typography

    Stable Diffusion 3 Medium is a next-generation text-to-image model by Stability AI, designed using a Multimodal Diffusion Transformer (MMDiT) architecture. It offers notable improvements in image quality, prompt comprehension, typography, and computational efficiency over previous versions. The model integrates three fixed, pretrained text encoders—OpenCLIP-ViT/G, CLIP-ViT/L, and T5-XXL—to interpret complex prompts more effectively. Trained on 1 billion synthetic and filtered public images, it was fine-tuned on 30 million high-quality aesthetic images and 3 million preference-labeled samples. SD3 Medium is optimized for both local deployment and cloud API use, with support via ComfyUI, Diffusers, and other tooling. It is distributed under the Stability AI Community License, permitting research and commercial use for organizations under $1M in annual revenue. While equipped with safety mitigations, developers are encouraged to apply additional safeguards. A minimal loading sketch follows this entry.
    Downloads: 0 This Week
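
    A minimal sketch, assuming the stabilityai/stable-diffusion-3-medium-diffusers repackaging of the weights on the Hugging Face Hub (gated behind the Community License):

        import torch
        from diffusers import StableDiffusion3Pipeline

        pipe = StableDiffusion3Pipeline.from_pretrained(
            "stabilityai/stable-diffusion-3-medium-diffusers",
            torch_dtype=torch.float16,
        ).to("cuda")

        # MMDiT is notably better at typography, so spelling out text can work.
        image = pipe(
            'a red apple on a chalkboard with "SD3" written in chalk',
            num_inference_steps=28,
            guidance_scale=7.0,
        ).images[0]
        image.save("sd3.png")
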
  • 10
    stable-diffusion-v-1-4-original

    Stable Diffusion v1.4 generates photorealistic images from text prompts

    Stable Diffusion v1.4 is a latent diffusion model that generates images from text, trained at 512×512 resolution using the LAION-Aesthetics v2 5+ dataset. Built on the weights of v1.2, it uses a CLIP ViT-L/14 encoder to guide image generation through cross-attention mechanisms. It supports classifier-free guidance by dropping 10% of text conditioning during training, enhancing creative control. The model runs efficiently while producing visually coherent and high-quality results, though it struggles with compositional prompts, fine details, and photorealistic faces. Stable Diffusion v1.4 primarily supports English and may underperform in other languages. It is licensed under CreativeML OpenRAIL-M and is intended for research and creative use, not for generating factual or identity-representative content. Developers emphasize safety, bias awareness, and the importance of responsible deployment due to its training on unfiltered web data. A sketch varying the guidance scale follows this entry.
    Downloads: 0 This Week
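
    The 10% conditioning dropout mentioned above is what makes classifier-free guidance possible at sampling time. A Diffusers sketch (assuming the CompVis/stable-diffusion-v1-4 checkpoint) that varies guidance_scale, which trades prompt adherence against diversity:

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")

        # Higher scales follow the prompt more literally; lower scales vary more.
        for scale in (3.0, 7.5, 12.0):
            image = pipe("an oil painting of a lighthouse in a storm",
                         guidance_scale=scale).images[0]
            image.save(f"lighthouse_cfg_{scale}.png")
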
  • 11
    stable-diffusion-xl-base-1.0

    Advanced base model for high-quality text-to-image generation

    stable-diffusion-xl-base-1.0 is a next-generation latent diffusion model developed by Stability AI for producing highly detailed images from text prompts. It forms the core of the SDXL pipeline and can be used on its own or paired with a refinement model for enhanced results. This base model utilizes two pretrained text encoders—OpenCLIP-ViT/G and CLIP-ViT/L—for richer text understanding and improved image quality. The model supports two-stage generation, where the base model creates initial latents and the refiner further denoises them using techniques like SDEdit for sharper outputs. SDXL-base shows significant performance improvement over previous versions such as Stable Diffusion 1.5 and 2.1, especially when paired with the refiner. It is compatible with PyTorch, ONNX, and OpenVINO runtimes, offering flexibility for various hardware setups. Although it delivers high visual fidelity, it still faces challenges with complex composition, photorealism, and rendering legible text. A two-stage base-plus-refiner sketch follows this entry.
    Downloads: 0 This Week
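
    A sketch of the two-stage generation described above, assuming the stabilityai/stable-diffusion-xl-base-1.0 and stabilityai/stable-diffusion-xl-refiner-1.0 checkpoints: the base model stops denoising partway and hands its latents to the refiner.

        import torch
        from diffusers import DiffusionPipeline

        base = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0",
            torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
        ).to("cuda")
        refiner = DiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-refiner-1.0",
            text_encoder_2=base.text_encoder_2, vae=base.vae,  # share components
            torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
        ).to("cuda")

        prompt = "a majestic lion jumping from a big stone at night"
        # Base model handles the first 80% of the denoising schedule...
        latents = base(prompt=prompt, denoising_end=0.8,
                       output_type="latent").images
        # ...and the refiner finishes the last 20% for sharper detail.
        image = refiner(prompt=prompt, denoising_start=0.8,
                        image=latents).images[0]
        image.save("sdxl.png")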