Browse free open source Python AI Image Generators and projects below. Use the toggles on the left to filter open source Python AI Image Generators by OS, license, programming language, and project status.

  • 1
    Fooocus

    Focus on prompting and generating

    Fooocus is an open-source image generation software that simplifies the process of creating images from text prompts. Built on Gradio and leveraging Stable Diffusion XL, Fooocus eliminates the need for manual parameter tweaking, allowing users to focus solely on crafting prompts. It offers a user-friendly interface with minimal setup, making advanced image synthesis accessible to a broader audience.
    Downloads: 120 This Week
  • 2
    ComfyUI

    The most powerful and modular diffusion model GUI, API, and backend

    ComfyUI is a powerful, modular diffusion model GUI, API, and backend. This UI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface (a minimal API sketch follows this entry). We are a team dedicated to iterating on and improving ComfyUI, supporting the ComfyUI ecosystem with tools like a node manager, node registry, CLI, automated testing, and public documentation. Our core mission is to advance and democratize AI tooling: we believe the future of AI tooling is open-source and community-driven, that open source AI models will win in the long run against closed models, and that we are only at the beginning.
    Downloads: 110 This Week
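
    ComfyUI also exposes an HTTP API. Below is a minimal sketch, assuming a ComfyUI server running locally on the default port 8188 and a workflow previously exported from the editor in API format; the filename workflow_api.json is a placeholder:

    ```python
    import json
    import urllib.request

    # Load a workflow graph previously exported from the ComfyUI editor
    # via "Save (API Format)". The filename here is a placeholder.
    with open("workflow_api.json") as f:
        workflow = json.load(f)

    # Queue the workflow on a locally running ComfyUI server
    # (default address 127.0.0.1:8188).
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload)
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # response includes the queued prompt_id
    ```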
  • 3
    AUTOMATIC1111 Stable Diffusion web UI
    AUTOMATIC1111's stable-diffusion-webui is a powerful, user-friendly web interface built on the Gradio library that allows users to easily interact with Stable Diffusion models for AI-powered image generation. Supporting both text-to-image (txt2img) and image-to-image (img2img) generation, this open-source UI offers a rich feature set including inpainting, outpainting, attention control, and multiple advanced upscaling options. With a flexible installation process across Windows, Linux, and Apple Silicon, plus support for GPUs and CPUs, it caters to a wide range of users—from hobbyists to professionals. The interface also supports prompt editing, batch processing, custom scripts, and many community extensions, making it a highly customizable and continually evolving platform for creative AI art generation. A minimal sketch of its txt2img API follows this entry.
    Downloads: 45 This Week
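
    When the web UI is launched with the --api flag, it serves a REST endpoint on the default port 7860. A minimal txt2img sketch; the prompt and parameters are illustrative:

    ```python
    import base64
    import requests  # third-party: pip install requests

    # Assumes the web UI was launched with --api on the default address.
    url = "http://127.0.0.1:7860/sdapi/v1/txt2img"
    payload = {
        "prompt": "a watercolor lighthouse at dawn",  # illustrative prompt
        "steps": 20,
        "width": 512,
        "height": 512,
    }
    r = requests.post(url, json=payload, timeout=300)
    r.raise_for_status()

    # The API returns generated images as base64-encoded PNGs.
    image_b64 = r.json()["images"][0]
    with open("output.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
    ```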
  • 4
    InvokeAI

    InvokeAI is a leading creative engine for Stable Diffusion models

    InvokeAI is an implementation of Stable Diffusion, the open source text-to-image and image-to-image generator. It provides a streamlined process with various new features and options to aid the image generation process. It runs on Windows, Mac, and Linux machines, and on GPU cards with as little as 4 GB of RAM. InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike: generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and an interactive command-line interface, and also serves as the foundation for multiple commercial products. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver). We do not recommend the GTX 1650 or 1660 series video cards: they are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.
    Downloads: 27 This Week
  • 5
    Stable Diffusion WebUI

    Web interface for generating images using Stable Diffusion models

    This project provides a powerful web-based interface for running Stable Diffusion, a text-to-image generation model. Developed by AUTOMATIC1111, it supports numerous features like model customization, prompt history, image upscaling, inpainting, and batch processing. The WebUI is beginner-friendly yet powerful enough for advanced users, and has become one of the most popular community-run UIs for AI image generation.
    Downloads: 20 This Week
  • 6
    Diffusers

    State-of-the-art diffusion models for image and audio generation

    Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. It offers state-of-the-art diffusion pipelines that can be run in inference with just a few lines of code (see the sketch after this entry), interchangeable noise schedulers for different diffusion speeds and output quality, and pretrained models that can be used as building blocks and combined with schedulers to create your own end-to-end diffusion systems. We recommend installing Diffusers in a virtual environment from PyPI or Conda. For more details about installing PyTorch and Flax, please refer to their official documentation.
    Downloads: 17 This Week
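
    A minimal sketch of that few-lines-of-code inference path, including a scheduler swap; it assumes a CUDA GPU and downloads the CompVis/stable-diffusion-v1-4 weights from the Hub on first run:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline, DDIMScheduler

    # Load a pretrained text-to-image pipeline from the Hugging Face Hub.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

    # Schedulers are interchangeable: swap one in to trade speed for quality.
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

    image = pipe("an astronaut riding a horse on mars").images[0]
    image.save("astronaut.png")
    ```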
  • 7
    Dream Textures

    Stable Diffusion built-in to Blender

    Create textures, concept art, background assets, and more with a simple text prompt. Use the 'Seamless' option to create textures that tile perfectly with no visible seam, or convert existing textures into seamless ones automatically. Texture entire models and scenes with 'Project Dream Texture' and depth to image. Inpaint to fix up images; outpaint to increase the size of an image by extending it in any direction. Re-style animations with the Cycles render pass, and perform style transfer and create novel animations with Stable Diffusion as a post-processing step. Run the models on your own machine to iterate without slowdowns from a service, and learn how to use the various configuration options to get exactly what you're looking for. Dream Textures has been tested with CUDA and Apple Silicon GPUs; over 4 GB of VRAM is recommended.
    Downloads: 17 This Week
  • 8
    DALL·E Mini

    Generate images from a text prompt

    DALL·E Mini generates images from a text prompt. OpenAI had the first impressive model for generating images with DALL·E; Craiyon/DALL·E mini is an attempt at reproducing those results with an open-source model. The model is trained by looking at millions of images from the internet with their associated captions. Over time, it learns how to draw an image from a text prompt. Some concepts are learned from memory, as it may have seen similar images. However, it can also learn how to create unique images that don't exist, such as "the Eiffel tower is landing on the moon," by combining multiple concepts together. The optimizer was updated to Distributed Shampoo, which proved more efficient in a comparison of different optimizers. The new architecture is based on NormFormer and GLU variants, following a comparison of transformer variants including DeepNet, Swin v2, NormFormer, Sandwich-LN, and RMSNorm with GeLU/Swish/SmeLU.
    Downloads: 9 This Week
  • 9
    KoboldCpp

    Run GGUF models easily with a UI or API. One File. Zero Install.

    KoboldCpp is an easy-to-use AI text-generation software for GGML and GGUF models, inspired by the original KoboldAI. It's a single self-contained distributable that builds on llama.cpp and adds many additional powerful features. A minimal sketch of its HTTP generation API follows this entry.
    Downloads: 192 This Week
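
    KoboldCpp serves a KoboldAI-compatible HTTP API. A minimal sketch, assuming a local instance on the default port 5001; the prompt and length are illustrative:

    ```python
    import json
    import urllib.request

    # Assumes a running KoboldCpp instance on its default port.
    payload = json.dumps({
        "prompt": "Once upon a time,",  # illustrative prompt
        "max_length": 80,
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://127.0.0.1:5001/api/v1/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # The Kobold API returns generations under results[0].text.
    print(result["results"][0]["text"])
    ```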
  • 10
    Stable Diffusion

    High-Resolution Image Synthesis with Latent Diffusion Models

    This repository hosts Stable Diffusion Version 2. The Stable Diffusion project, developed by Stability AI, is a cutting-edge image synthesis model that utilizes latent diffusion techniques for high-resolution image generation. It offers an advanced method of generating images based on text input, making it highly flexible for various creative applications. The repository contains pretrained models, various checkpoints, and tools to facilitate image generation tasks, such as fine-tuning and modifying the models. Stability AI's approach to image synthesis has contributed to creating detailed, scalable images while maintaining efficiency.
    Downloads: 56 This Week
  • 11
    Core ML Stable Diffusion

    Stable Diffusion with Core ML on Apple Silicon

    Run Stable Diffusion on Apple Silicon with Core ML. The repository comprises python_coreml_stable_diffusion, a Python package for converting PyTorch models to Core ML format and performing image generation with Hugging Face diffusers in Python, and StableDiffusion, a Swift package that developers can add to their Xcode projects as a dependency to deploy image generation capabilities in their apps. The Swift package relies on the Core ML model files generated by python_coreml_stable_diffusion. Hugging Face ran the conversion procedure on several models and made the Core ML weights publicly available on the Hub. If you would like to convert a version of Stable Diffusion that is not already available on the Hub, please refer to the Converting Models to Core ML section. Log in to or register for your Hugging Face account, generate a User Access Token, and use this token to set up Hugging Face API access by running huggingface-cli login in a Terminal window.
    Downloads: 3 This Week
  • 12
    texturize

    Generate photo-realistic textures based on source images

    Generate photo-realistic textures based on source images: remix, remake, mashup! texturize is a command-line tool and Python library that automatically generates new textures similar to a source image or photograph. It's useful in the context of computer graphics when you want to create variations on a theme or expand the size of an existing texture. The software is powered by deep learning technology, using a combination of convolutional networks and example-based optimization to synthesize images. We're building texturize as the highest-quality open source library available! The examples are available as notebooks, and you can run them directly in-browser thanks to Jupyter and Google Colab.
    Downloads: 3 This Week
  • 13
    Deep Daze

    Simple command line tool for text to image generation

    Simple command-line tool for text-to-image generation using OpenAI's CLIP and SIREN (an implicit neural representation network). In true deep learning fashion, more layers yield better results: the default is 16, but it can be increased to 32 depending on your resources (see the sketch after this entry). A technique first devised and shared by Mario Klingemann lets you prime the generator network with a starting image before it is steered towards the text; simply specify the path to the image you wish to use, and optionally the number of initial training steps. You can also feed in an image as an optimization goal instead of only priming the generator network, and Deep Daze will then render its own interpretation of that image. The regular mode for texts only allows 77 tokens; if you want to visualize a full story/paragraph/song/poem, set create_story to True.
    Downloads: 2 This Week
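
    A minimal sketch of the package's Python entry point, with num_layers raised from the default 16 as discussed above; the prompt and save interval are illustrative:

    ```python
    from deep_daze import Imagine

    # More SIREN layers generally yield better results at higher memory
    # cost; the package default is 16.
    imagine = Imagine(
        text="a pond surrounded by cherry blossoms",  # illustrative prompt
        num_layers=24,
        save_every=100,  # save an intermediate image every 100 iterations
    )
    imagine()  # trains and writes images to the current working directory
    ```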
  • 14
    Stable-Dreamfusion

    Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion

    A PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model. This project is a work in progress and contains many differences from the paper. The current generation quality cannot match the results from the original paper, and many prompts still fail badly! Since the Imagen model is not publicly available, we use Stable Diffusion to replace it (implementation from diffusers). Unlike Imagen, Stable Diffusion is a latent diffusion model, which diffuses in a latent space instead of the original image space. Therefore, the loss needs to propagate back through the VAE's encoder as well, which introduces extra time costs in training. We use the multi-resolution grid encoder to implement the NeRF backbone (implementation from torch-ngp), which enables much faster rendering.
    Downloads: 2 This Week
  • 15
    Big Sleep

    A simple command line tool for text to image generation

    A simple command line tool for text-to-image generation, using OpenAI's CLIP and a BigGAN. Ryan Murdock has done it again, combining OpenAI's CLIP and the generator from a BigGAN! This repository wraps up his work so it is easily accessible to anyone who owns a GPU: you can have the GAN dream up images from natural language with a one-line command in the terminal (a Python sketch follows this entry). A user-made notebook with bug fixes and added features, like Google Drive integration, is also available. Images are saved to wherever the command is invoked. If you have enough memory, you can also try using a bigger vision model released by OpenAI for improved generations. You can restrict the number of classes Big Sleep uses for the BigGAN with the --max-classes flag (e.g., 15 classes); this may lead to extra stability during training, at the cost of some expressivity.
    Downloads: 1 This Week
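
    A minimal sketch of the Python interface; the prompt is illustrative, and passing max_classes to the class is assumed here to mirror the --max-classes CLI flag described above:

    ```python
    from big_sleep import Imagine

    dream = Imagine(
        text="fire in the sky",  # illustrative prompt
        lr=5e-2,
        save_every=25,       # save an image every 25 iterations
        save_progress=True,  # keep the intermediate images
        max_classes=15,      # assumed Python twin of the --max-classes flag
    )
    dream()  # images are saved to the directory the command is invoked from
    ```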
  • 16
    DALL-E 2 - Pytorch

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch. The main novelty seems to be an extra layer of indirection with the prior network (whether it is an autoregressive transformer or a diffusion network), which predicts an image embedding based on the text embedding from CLIP. Specifically, this repository only builds out the diffusion prior network, as it is the best-performing variant (which incidentally involves a causal transformer as the denoising network). Training DALLE-2 is a three-step process, with the training of CLIP being the most important. To train CLIP, you can either use the x-clip package or join the LAION discord, where a lot of replication efforts are already underway. Then you will need to train the decoder, which learns to generate images based on the image embedding coming from the trained CLIP.
    Downloads: 1 This Week
  • 17
    Deep Exemplar-based Video Colorization

    The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization"

    The source code of the CVPR 2019 paper "Deep Exemplar-based Video Colorization", an end-to-end network for exemplar-based video colorization. The main challenge is to achieve temporal consistency while remaining faithful to the reference style. To address this issue, we introduce a recurrent framework that unifies the semantic correspondence and color propagation steps. Both steps allow a provided reference image to guide the colorization of every frame, thus reducing accumulated propagation errors. Video frames are colorized in sequence based on the colorization history, and their coherency is further enforced by the temporal consistency loss. All of these components, learned end-to-end, help produce realistic videos with good temporal stability. Experiments show our results are superior to state-of-the-art methods both quantitatively and qualitatively. To colorize your own video, you need to extract the video frames and provide a reference image as an example.
    Downloads: 1 This Week
  • 18
    Stable Diffusion in Docker

    Run the Stable Diffusion releases in a Docker container

    Run the Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint. The image runs the Stable Diffusion releases from Huggingface in a GPU-accelerated Docker container. By default, the pipeline uses the full model and weights, which requires a CUDA-capable GPU with 8 GB+ of VRAM; it should take a few seconds to create one image. On less powerful GPUs you may need to modify some of the options; see the Examples section for more details. If you lack a suitable GPU, you can set the options --device cpu and --onnx instead. Since the model is pulled from Huggingface, you will need to create a user access token in your Huggingface account; save the token in a file called token.txt and make sure it is available when building the container. You can create an image from an existing image and a text prompt, or modify an existing image with its depth map and a text prompt.
    Downloads: 1 This Week
  • 19
    ruDALL-E

    Generate images from texts. In Russian

    We present a family of generative models from SberDevices and Sber AI! These models let you create images that did not exist before; all you need is a text description in Russian or another language. Try creating unique images together with generative artists using your own formulations, or ask the generative artists to depict something special for you. The Kandinsky 2.0 model uses the reverse diffusion method and creates colorful images on various topics in a matter of seconds from a text query in Russian or other languages; you can even combine different languages within a single query. This neural network has been developed and trained by Sber AI researchers in close collaboration with scientists from the Artificial Intelligence Research Institute, using joint datasets from Sber AI and SberDevices. ruDALL-E is a Russian text-to-image model that generates images from text; the architecture is the same as ruDALL-E XL, with even more parameters in the new version.
    Downloads: 1 This Week
  • 20
    Janus-Pro

    Janus-Series: Unified Multimodal Understanding and Generation Models

    Janus is a cutting-edge, unified multimodal model designed to advance both multimodal understanding and generation. It features a decoupled visual encoding approach that allows it to handle visual tasks separately from generative tasks, resulting in enhanced flexibility and performance. With a single unified transformer architecture, Janus surpasses specialized task-specific models in its ability to handle diverse multimodal inputs and generate high-quality outputs. Its latest iteration, Janus-Pro, improves on this with a more optimized training strategy, expanded data, and larger model scaling, leading to significant advancements in both multimodal understanding and text-to-image generation.
    Downloads: 1 This Week
  • 21
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s Diffusers library and supports both PyTorch and JAX/Flax frameworks, offering flexibility across GPUs and TPUs. Though powerful, the model has limitations with compositional logic, photorealism, non-English prompts, and rendering accurate text or faces. Intended for research and creative exploration, it includes safety tools to detect NSFW content but may still reflect dataset biases. Users are advised to follow responsible AI practices and avoid harmful, unethical, or out-of-scope applications. A minimal inference sketch follows this entry.
    Downloads: 0 This Week
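
    A minimal Diffusers-based sketch for this checkpoint, assuming a CUDA GPU; the seed, prompt, and guidance scale are illustrative:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")  # assumes an NVIDIA GPU

    # guidance_scale steers classifier-free guidance, which the checkpoint's
    # 10% text-conditioning dropout was trained to support; a seeded
    # generator makes the sample reproducible.
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(
        "a photograph of an astronaut riding a horse",  # illustrative prompt
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    image.save("sd_v1_4_sample.png")
    ```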
  • 22
    AI Atelier

    A Chinese & English version of the AI art creation software, based on Disco Diffusion

    Based on Disco Diffusion, we have developed a Chinese & English version of the AI art creation software "AI Atelier". We offer both Text-To-Image models (Disco Diffusion and VQGAN+CLIP) and Text-To-Text models (GPT-J-6B and GPT-NEOX-20B) as options. The software is released under a copyleft license: complete source code of licensed works and modifications, including larger works using a licensed work, must be made available under the same license; copyright and license notices must be preserved; and when a modified version is used to provide a service over a network, the complete source code of the modified version must be made available. Create 2D and 3D animations, not only still frames (from Disco Diffusion v5 and VQGAN Animations). Input audio and images for generation instead of just text. The tool setup process on Colab has been simplified, with 'one-click' sharing of the generated link to other users and experiments with multi-user access to the same link.
    Downloads: 0 This Week
  • 23
    BCI

    BCI: Breast Cancer Immunohistochemical Image Generation

    Breast Cancer Immunohistochemical (BCI) Image Generation through pyramid pix2pix. We have released the trained models on the BCI and LLVIP datasets, and we host a competition for breast cancer immunohistochemistry image generation on Grand Challenge. The evaluation of human epidermal growth factor receptor 2 (HER2) expression is essential to formulate a precise treatment for breast cancer, but the routine evaluation of HER2 is conducted with immunohistochemical techniques (IHC), which is very expensive. Therefore, for the first time, we propose a breast cancer immunohistochemical (BCI) benchmark attempting to synthesize IHC data directly from paired hematoxylin and eosin (HE) stained images. The pix2pix project provides a Python script to generate training data in the form of pairs of images {A,B}, where A and B are two different depictions of the same underlying scene; here these are {HE, IHC} pairs, and we learn to translate A (HE images) to B (IHC images). An illustrative pairing sketch follows this entry.
    Downloads: 0 This Week
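
    Pix2pix-style training pairs are simply the two depictions of a scene concatenated side by side. The following is an illustrative sketch (not the project's own script) of assembling such {HE, IHC} pairs with Pillow; the folder layout and matching filenames are assumptions:

    ```python
    from pathlib import Path
    from PIL import Image  # third-party: pip install Pillow

    # Hypothetical layout: matching filenames in an HE folder (domain A)
    # and an IHC folder (domain B).
    he_dir, ihc_dir, out_dir = Path("HE"), Path("IHC"), Path("pairs")
    out_dir.mkdir(exist_ok=True)

    for he_path in sorted(he_dir.glob("*.png")):
        ihc_path = ihc_dir / he_path.name
        a = Image.open(he_path).convert("RGB")
        b = Image.open(ihc_path).convert("RGB").resize(a.size)

        # Concatenate A and B horizontally into one {A,B} training image,
        # the format pix2pix-style data loaders expect.
        pair = Image.new("RGB", (a.width * 2, a.height))
        pair.paste(a, (0, 0))
        pair.paste(b, (a.width, 0))
        pair.save(out_dir / he_path.name)
    ```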
  • 24
    CLIP Guided Diffusion

    A CLI tool/python module for generating images from text

    A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI. Supports text-to-image generation with multiple prompts and weights. Non-square generations (experimental): generate portrait or landscape images by specifying a number to offset the width and/or height. Timestep respacing uses fewer timesteps over the same diffusion schedule, sacrificing accuracy/alignment for quicker runtime; the options are 25, 50, 150, 250, 500, 1000, ddim25, ddim50, ddim150, ddim250, ddim500, and ddim1000 (default: 1000). Prepending a number with ddim will use the DDIM scheduler, e.g. ddim25 will use the 25-timestep DDIM scheduler; this method may be better at shorter timestep_respacing values. Multiple prompts can be specified with the | character, and you may optionally specify a weight for each prompt (a parsing sketch follows this entry).
    Downloads: 0 This Week
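
    A small illustrative parser for that pipe-separated prompt syntax, assuming weights are attached in a 'text:weight' colon form (the exact delimiter is an assumption here, not taken verbatim from the project docs):

    ```python
    def _is_number(s: str) -> bool:
        try:
            float(s)
            return True
        except ValueError:
            return False

    def parse_prompts(spec: str, default_weight: float = 1.0):
        """Split 'text:weight|text:weight' into (text, weight) pairs.

        The colon-separated weight is treated as optional; prompts
        without one fall back to default_weight.
        """
        parsed = []
        for part in spec.split("|"):
            text, sep, weight = part.rpartition(":")
            if sep and _is_number(weight):
                parsed.append((text.strip(), float(weight)))
            else:
                parsed.append((part.strip(), default_weight))
        return parsed

    print(parse_prompts("a misty forest:1.5|poorly drawn:-0.3"))
    # [('a misty forest', 1.5), ('poorly drawn', -0.3)]
    ```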
  • 25
    ChatFred

    Alfred workflow using ChatGPT, DALL·E 2 and other models for chatting

    Alfred workflow using ChatGPT, DALL·E 2, and other OpenAI models for chatting, image generation, and more. Language models often give wrong information; verify answers if they are important. Talk with ChatGPT via the cf keyword, and answers will show as Large Type. Alternatively, use the Universal Action, Fallback Search, or Hotkey. To generate text with InstructGPT models and see results in-line, use the cft keyword. Install it from the Alfred Gallery or download it from GitHub and add your OpenAI API key. If you have used ChatGPT or DALL·E 2, you already have an OpenAI account; otherwise, you can sign up (you will receive $5 in free credit, and no payment data is required) and then create your API key. To start a conversation with ChatGPT, either use the keyword cf, set up the workflow as a fallback search in Alfred, or create a custom hotkey to send the clipboard content directly to ChatGPT (an API sketch follows this entry).
    Downloads: 0 This Week
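
    Under the hood, such workflows call the OpenAI API. A minimal sketch of the kind of DALL·E 2 image request involved, using the official openai Python package's v1 client; it assumes OPENAI_API_KEY is set in the environment:

    ```python
    from openai import OpenAI  # third-party: pip install openai

    # Reads the OPENAI_API_KEY environment variable by default.
    client = OpenAI()

    # Roughly the kind of request a DALL·E 2 workflow issues.
    result = client.images.generate(
        model="dall-e-2",
        prompt="an isometric pixel-art workshop",  # illustrative prompt
        n=1,
        size="1024x1024",
    )
    print(result.data[0].url)  # temporary URL of the generated image
    ```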