AI Models for BSD

Browse free, open source AI models and projects for BSD below.

  • 1
    DeepSeek-V3

    Powerful AI language model (MoE) optimized for efficiency and performance

    DeepSeek-V3 is a robust Mixture-of-Experts (MoE) language model developed by DeepSeek, featuring a total of 671 billion parameters, with 37 billion activated per token. It employs Multi-head Latent Attention (MLA) and the DeepSeekMoE architecture to enhance computational efficiency. The model introduces an auxiliary-loss-free load balancing strategy and a multi-token prediction training objective to boost performance. Trained on 14.8 trillion diverse, high-quality tokens, DeepSeek-V3 underwent supervised fine-tuning and reinforcement learning to fully realize its capabilities. Evaluations indicate that it outperforms other open-source models and rivals leading closed-source models, achieving this with a training duration of 55 days on 2,048 Nvidia H800 GPUs, costing approximately $5.58 million.
    Downloads: 76 This Week
  • 2
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    The llama.cpp project enables inference of Meta's LLaMA model (and many other models) in pure C/C++, with no Python runtime required. It is designed for efficient, fast model execution and offers easy integration for applications that need LLM-based capabilities. The repository focuses on a highly optimized, portable implementation for running large language models directly within C/C++ environments. A minimal sketch using the community llama-cpp-python binding follows below.
    Downloads: 76 This Week
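The project itself is plain C/C++, but the community-maintained llama-cpp-python binding wraps the same runtime for quick experiments. A minimal sketch, assuming you have a locally downloaded GGUF model file (the path below is a placeholder):

```python
# pip install llama-cpp-python  (community Python binding over llama.cpp)
from llama_cpp import Llama

# Load a local GGUF model; the file path is hypothetical.
llm = Llama(model_path="models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

# Run a single completion; stop generating at the next "Q:" turn marker.
out = llm("Q: What does llama.cpp do? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```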
  • 3
    DeepSeek R1

    Open-source, high-performance AI model with advanced reasoning

    DeepSeek-R1 is an open-source large language model developed by DeepSeek, designed to excel at complex reasoning tasks across domains such as mathematics, coding, and language. It is released for unrestricted commercial and academic use. The model employs a Mixture-of-Experts (MoE) architecture with 671 billion total parameters (37 billion active per token) and supports context lengths up to 128,000 tokens. Its training regimen centers on large-scale reinforcement learning (RL) rather than heavy supervised fine-tuning, enabling the model to develop advanced reasoning capabilities with performance comparable to leading models such as OpenAI's o1, while maintaining cost-efficiency. To further support the research community, DeepSeek has also released distilled versions of the model based on the LLaMA and Qwen architectures.
    Downloads: 72 This Week
  • 4
    Qwen2.5

    Open source large language model by Alibaba

    Qwen2.5 is a series of large language models developed by the Qwen team at Alibaba Cloud, designed to improve natural language understanding and generation across many languages. The models come in several sizes (0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B parameters) to suit diverse computational budgets. Trained on a dataset of up to 18 trillion tokens, Qwen2.5 models show significant improvements in instruction following, long-text generation (exceeding 8,000 tokens), and structured-data comprehension such as tables and JSON. They support context lengths up to 128,000 tokens and offer multilingual capabilities in over 29 languages, including Chinese, English, French, and Spanish. The models are open source under the Apache 2.0 license, with resources and documentation available on platforms like Hugging Face and ModelScope. This is a full ZIP snapshot of the Qwen2.5 code. A minimal Transformers usage sketch follows below.
    Downloads: 136 This Week
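A minimal generation sketch with Hugging Face Transformers. The instruct checkpoint name below (Qwen/Qwen2.5-7B-Instruct) is an assumption; any Qwen2.5 instruct size should follow the same pattern, given enough GPU memory:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a reply.
messages = [{"role": "user", "content": "Give me a one-sentence summary of Qwen2.5."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```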
  • 5
    Stable Diffusion

    High-Resolution Image Synthesis with Latent Diffusion Models

    This entry tracks Stable Diffusion version 2, developed by Stability AI: a cutting-edge image synthesis model that uses latent diffusion for high-resolution image generation. It generates images from text input, making it highly flexible for a range of creative applications. The repository contains pretrained models, various checkpoints, and tools for image generation tasks such as fine-tuning and modifying the models. Stability AI's approach to image synthesis produces detailed, scalable images while maintaining efficiency.
    Downloads: 81 This Week
  • 6
    Phi-3-MLX

    Phi-3.5 for Mac: Locally-run Vision and Language Models

    Phi-3-Vision-MLX is an Apple MLX (machine learning on Apple silicon) implementation of Phi-3 Vision, a lightweight multi-modal model designed for vision and language tasks. It focuses on running vision-language AI efficiently on Apple hardware like M1 and M2 chips.
    Downloads: 3 This Week
  • 7
    FinGPT

    Open-Source Financial Large Language Models!

    FinGPT is an open-source large language model tailored to financial tasks. Developed by the AI4Finance Foundation, it is designed to assist with financial applications such as forecasting, sentiment analysis, and portfolio management. FinGPT has been trained on a diverse range of financial datasets, making it a powerful tool for finance professionals looking to leverage AI for data-driven decision-making. The model is freely available on platforms like Hugging Face, allowing easy access and customization, and it can integrate with existing financial systems to enhance predictive analytics in finance.
    Downloads: 36 This Week
  • 8
    Qwen

    Qwen (通义千问): chat and pretrained large language model by Alibaba Cloud

    Qwen is a series of large language models developed by Alibaba Cloud, consisting of various pretrained versions like Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B. These models, which range from smaller to larger configurations, are designed for a wide range of natural language processing tasks. They are openly available for research and commercial use, with Qwen's code and model weights shared on GitHub. Qwen's capabilities include text generation, comprehension, and conversation, making it a versatile tool for developers looking to integrate advanced AI functionalities into their applications.
    Downloads: 29 This Week
  • 9
    DiffRhythm

    Di♪♪Rhythm: Blazingly Fast & Simple End-to-End Song Generation

    DiffRhythm is an open-source, diffusion-based model designed to generate full-length songs. Focused on music creation, it combines advanced AI techniques to produce coherent and creative audio compositions. The model uses a latent diffusion architecture, making it capable of producing high-quality, long-form music. It can be accessed on Hugging Face, where users can interact with a demo or download the model for further use. DiffRhythm offers tools for both training and inference, and its flexibility makes it well suited to AI-based music production and research in music generation.
    Downloads: 18 This Week
  • 10
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository contains EleutherAI's library for training large-scale language models on GPUs. The framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. It aims to be a centralized, accessible place to gather techniques for training large-scale autoregressive language models and to accelerate research into large-scale training. For a TPU-centric codebase, the authors recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use; for generic inference needs, the Hugging Face transformers library supports GPT-NeoX models (see the sketch below).
    Downloads: 1 This Week
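Since the description points to Transformers for inference, here is a minimal sketch using EleutherAI's publicly released gpt-neox-20b checkpoint. Note the 20B weights need tens of GB of memory; device_map="auto" spreads them across available devices:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b", torch_dtype="auto", device_map="auto"
)

# Simple greedy continuation of a prompt.
inputs = tokenizer("GPT-NeoX is a library for", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```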
  • 11
    Grok-1

    Open-source, high-performance Mixture-of-Experts large language model

    Grok-1 is a 314-billion-parameter Mixture-of-Experts (MoE) large language model developed by xAI. Designed to optimize computational efficiency, it activates only 25% of its weights for each input token. In March 2024, xAI released Grok-1's model weights and architecture under the Apache 2.0 license, making them openly accessible to developers. The accompanying GitHub repository provides JAX example code for loading and running the model. Due to its substantial size, utilizing Grok-1 requires a machine with significant GPU memory. The repository's MoE layer implementation prioritizes correctness over efficiency, avoiding the need for custom kernels. This is a full repo snapshot ZIP file of the Grok-1 code.
    Downloads: 11 This Week
  • 12
    Hunyuan3D 2.0

    High-Resolution 3D Assets Generation with Large Scale Diffusion Models

    The Hunyuan3D-2 model, developed by Tencent, generates high-resolution 3D assets using large-scale diffusion models. It offers advanced capabilities for creating detailed 3D models, including texture enhancement, multi-view shape generation, and rapid inference for real-time applications. It is particularly useful for industries that require high-quality 3D content, such as gaming, film, and virtual reality. Hunyuan3D-2 is distributed via Hugging Face and integrates with tools such as Blender.
    Downloads: 18 This Week
  • 13
    Qwen2.5-Coder

    Qwen2.5-Coder is the code version of Qwen2.5, the large language model

    Qwen2.5-Coder, developed by QwenLM, is an advanced open-source code generation model designed for developers seeking powerful and diverse coding capabilities. It comes in multiple model sizes, from 0.5B to 32B parameters, covering a wide array of coding needs. The model supports 92 programming languages and offers strong performance in code generation, debugging, and mathematical problem-solving. With a long context length of 128K tokens, Qwen2.5-Coder suits use cases from simple code assistants to complex programming scenarios, matching the capabilities of models like GPT-4o.
    Downloads: 4 This Week
  • 14
    MoveNet

    A CNN model that predicts human joints from RGB images of a person

    The MoveNet model is an efficient, real-time human pose estimation system designed for detecting and tracking keypoints of human bodies. It uses deep learning to accurately locate 17 key points across the body, providing precise tracking even with fast movements. Optimized for mobile and embedded devices, MoveNet can be integrated into applications for fitness tracking, augmented reality, and interactive systems. A minimal TensorFlow Hub usage sketch follows below.
    Downloads: 2 This Week
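A minimal pose-estimation sketch via TensorFlow Hub, assuming the single-pose "Lightning" variant at the handle below; the 192×192 int32 input and the [1, 1, 17, 3] output layout follow the published model card, and the image path is a placeholder:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load the single-pose "Lightning" variant from TF Hub.
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

# Read an image and resize/pad to the expected 192x192 int32 input.
image = tf.io.decode_jpeg(tf.io.read_file("person.jpg"))  # placeholder path
inp = tf.cast(tf.image.resize_with_pad(tf.expand_dims(image, 0), 192, 192), tf.int32)

# Output shape [1, 1, 17, 3]: (y, x, confidence) for each of the 17 keypoints.
keypoints = movenet(inp)["output_0"]
print(keypoints.shape)
```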
  • 15
    Janus-Pro

    Janus-Series: Unified Multimodal Understanding and Generation Models

    Janus is a cutting-edge, unified multimodal model designed to advance both multimodal understanding and generation. It features a decoupled visual encoding approach that handles understanding tasks separately from generative tasks, resulting in greater flexibility and performance. Built on a single transformer architecture, Janus surpasses previous unified models and matches or exceeds specialized task-specific models in handling diverse multimodal inputs and generating high-quality outputs. Its latest iteration, Janus-Pro, improves on this with an optimized training strategy, expanded data, and larger model scaling, leading to significant advances in both multimodal understanding and text-to-image generation.
    Downloads: 1 This Week
  • 16
    CSM (Conversational Speech Model)

    A Conversational Speech Generation Model

    The CSM (Conversational Speech Model) is a speech generation model developed by Sesame AI that creates RVQ audio codes from text and audio inputs. It uses a Llama backbone and a smaller audio decoder to produce audio codes for realistic speech synthesis. The model has been fine-tuned for interactive voice demos and is hosted on platforms like Hugging Face for testing. CSM offers a flexible setup and is compatible with CUDA-enabled GPUs for efficient execution.
    Downloads: 1 This Week
  • 17
    MediaPipe Face Detection

    Detect faces in an image

    The MediaPipe Face Detection model is a high-performance, real-time face detection solution that uses machine learning to identify faces in images and video streams. It is optimized for mobile and embedded platforms, offering fast and accurate face detection while maintaining a small memory footprint. This model supports multiple face detections and is highly efficient, making it suitable for a variety of applications such as augmented reality, user authentication, and facial expression analysis. A minimal Python usage sketch follows below.
    Downloads: 1 This Week
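A minimal sketch with the MediaPipe Python solutions API (the legacy mp.solutions interface); the image path is a placeholder:

```python
import cv2
import mediapipe as mp

image = cv2.imread("photo.jpg")  # placeholder path

# The solutions API expects RGB input; OpenCV loads BGR.
with mp.solutions.face_detection.FaceDetection(
    model_selection=0, min_detection_confidence=0.5
) as detector:
    results = detector.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# Each detection carries a relative (0..1) bounding box and keypoints.
for det in results.detections or []:
    box = det.location_data.relative_bounding_box
    print(box.xmin, box.ymin, box.width, box.height)
```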
  • 18
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s Diffusers library and supports both PyTorch and JAX/Flax frameworks, offering flexibility across GPUs and TPUs. Though powerful, the model has limitations with compositional logic, photorealism, non-English prompts, and rendering accurate text or faces. Intended for research and creative exploration, it includes safety tools to detect NSFW content but may still reflect dataset biases. Users are advised to follow responsible AI practices and avoid harmful, unethical, or out-of-scope applications. A minimal Diffusers sketch follows below.
    Downloads: 0 This Week
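A minimal Diffusers sketch. The CompVis/stable-diffusion-v1-4 checkpoint id comes from the model card, and guidance_scale is the classifier-free guidance weight mentioned above:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# guidance_scale controls the strength of classifier-free guidance.
image = pipe(
    "a photograph of an astronaut riding a horse", guidance_scale=7.5
).images[0]
image.save("astronaut.png")
```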
  • 19
    BLEURT-20-D12

    Custom BLEURT model for evaluating text similarity using PyTorch

    BLEURT-20-D12 is a PyTorch implementation of BLEURT, a model designed to assess the semantic similarity between two text sequences. It serves as an automatic evaluation metric for natural language generation tasks such as summarization and translation. The model predicts a score indicating how similar a candidate sentence is to a reference sentence, with higher scores indicating greater semantic overlap. Unlike the standard TensorFlow BLEURT models, this version is built on a custom PyTorch transformer library, which must be installed from GitHub for the model to work. Once set up, it can compute similarity scores with minimal code (see the sketch below), enabling more flexible deployment in PyTorch-based workflows for evaluating language generation outputs.
    Downloads: 0 This Week
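A minimal scoring sketch. The package name (bleurt_pytorch), class names, and checkpoint id below are assumptions based on the model card's description of a custom PyTorch library; check the linked GitHub repository for the exact API before relying on this:

```python
import torch
from bleurt_pytorch import (  # assumed package, installed from GitHub
    BleurtForSequenceClassification,
    BleurtTokenizer,
)

ckpt = "lucadiliello/BLEURT-20-D12"  # assumed checkpoint id
model = BleurtForSequenceClassification.from_pretrained(ckpt).eval()
tokenizer = BleurtTokenizer.from_pretrained(ckpt)

references = ["the cat sat on the mat"]
candidates = ["a cat was sitting on the mat"]
with torch.no_grad():
    inputs = tokenizer(references, candidates, padding="longest", return_tensors="pt")
    scores = model(**inputs).logits.flatten().tolist()  # higher = more similar
print(scores)
```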
  • 20
    Bio_ClinicalBERT

    ClinicalBERT model trained on MIMIC notes for clinical NLP tasks

    Bio_ClinicalBERT is a domain-specific language model tailored for clinical natural language processing (NLP), extending BioBERT with additional training on clinical notes. It was initialized from BioBERT-Base v1.0 and further pre-trained on all clinical notes from the MIMIC-III database (~880M words), which includes ICU patient records. The training focused on improving performance in tasks like named entity recognition and natural language inference within the healthcare domain. Notes were processed using rule-based sectioning and tokenized with SciSpacy. Training was done for 150,000 steps using a batch size of 32, max sequence length of 128, and a masked language modeling objective with a 0.15 mask probability. Bio_ClinicalBERT is available through Hugging Face's Transformers library for easy integration. It supports medical AI research and applications involving electronic health record understanding, clinical decision support, and biomedical information extraction. A minimal embedding sketch follows below.
    Downloads: 0 This Week
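A minimal embedding sketch with Transformers, using the emilyalsentzer/Bio_ClinicalBERT checkpoint id from the model card. Mean-pooling the last hidden state is one simple (assumed, not prescribed) way to get a sentence vector:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

inputs = tokenizer("Patient denies chest pain or shortness of breath.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # [1, seq_len, 768]

embedding = hidden.mean(dim=1)  # simple mean-pooled sentence embedding
print(embedding.shape)
```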
  • 21
    Blazeface

    Blazeface is a lightweight model that detects faces in images

    Blazeface is a lightweight, high-performance face detection model designed for mobile and embedded devices, developed by Google and published for TensorFlow-based runtimes. It is optimized for real-time face detection and runs efficiently on mobile hardware, with minimal latency and power consumption. Blazeface uses a compact single-shot detector architecture to find faces with high accuracy, even in challenging conditions. It supports detecting multiple faces under varying lighting and poses, and is designed for real-world applications like mobile apps, robotics, and other resource-constrained environments.
    Downloads: 0 This Week
  • 22
    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B

    CLIP-ViT-bigG-14-laion2B-39B-b160k is a powerful vision-language model trained on the English subset of the LAION-5B dataset using the OpenCLIP framework. Developed by LAION and trained by Mitchell Wortsman on Stability AI’s compute infrastructure, it pairs a ViT-bigG/14 vision transformer with a text encoder to perform contrastive learning on image-text pairs. This model excels at zero-shot image classification, image-to-text and text-to-image retrieval, and can be adapted for tasks such as image captioning or generation guidance. It achieves an impressive 80.1% top-1 accuracy on ImageNet-1k without any fine-tuning, showcasing its robustness in open-domain settings. Its training dataset is uncurated and web-sourced, meaning it reflects the biases and risks of large-scale internet data. The model is intended for research use and is not recommended for real-world deployment without domain-specific testing and safety evaluations. A minimal zero-shot classification sketch follows below.
    Downloads: 0 This Week
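A minimal zero-shot classification sketch with OpenCLIP; the model and pretrained tags below follow OpenCLIP's naming for this checkpoint, and the image path is a placeholder:

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-bigG-14", pretrained="laion2b_s39b_b160k"
)
tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # placeholder image
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    # Embed both modalities, L2-normalize, and compare by cosine similarity.
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(text)
    img_feat /= img_feat.norm(dim=-1, keepdim=True)
    txt_feat /= txt_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * img_feat @ txt_feat.T).softmax(dim=-1)

print(probs)  # zero-shot label probabilities
```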
  • 23
    ChatGLM2-6B

    An Open Bilingual Chat LLM

    ChatGLM2-6B is an advanced open-source bilingual dialogue model developed by THUDM. It is the second iteration of the ChatGLM series, designed to offer enhanced performance while maintaining the strengths of its predecessor, including smooth conversation flow and low deployment barriers. The model is fine-tuned for both Chinese and English, making it a versatile tool for multilingual applications. ChatGLM2-6B aims to push the boundaries of natural language understanding and generation, offering improved accuracy and user experience compared to earlier models. A minimal chat sketch follows below.
    Downloads: 0 This Week
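A minimal chat sketch following the pattern in the THUDM repository README; trust_remote_code is required because the checkpoint ships custom model code, and the .half().cuda() placement assumes a CUDA GPU:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = (
    AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
    .half()
    .cuda()
    .eval()
)

# model.chat keeps a running history for multi-turn dialogue.
response, history = model.chat(tokenizer, "Hello! What can you do?", history=[])
print(response)
```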
  • 24
    ControlNet

    Extension for Stable Diffusion using edge, depth, pose, and more

    ControlNet is a neural network architecture that enhances Stable Diffusion by enabling image generation conditioned on specific visual structures such as edges, poses, depth maps, and segmentation masks. By injecting these auxiliary inputs into the diffusion process, ControlNet gives users powerful control over the layout and composition of generated images while preserving the style and flexibility of generative models. It supports a wide range of conditioning types through pretrained modules, including Canny edges, HED (soft edges), MiDaS depth, OpenPose skeletons, normal maps, MLSD lines, scribbles, and ADE20k-based semantic segmentation. The system includes both ControlNet+SD1.5 model weights and compatible third-party detectors like MiDaS and OpenPose to extract input features. Each conditioning type is matched with a specific .pth model file to be used alongside Stable Diffusion for fine-grained control. A minimal Diffusers sketch follows below.
    Downloads: 0 This Week
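A minimal Canny-edge conditioning sketch with Diffusers, one common way to run ControlNet rather than the repository's own scripts; the lllyasviel/sd-controlnet-canny and runwayml/stable-diffusion-v1-5 checkpoint pairing is the widely used default, and the input path is a placeholder:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Extract Canny edges from a reference image as the structural condition.
source = np.array(Image.open("input.png").convert("RGB"))  # placeholder path
edges = cv2.Canny(source, 100, 200)
condition = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe("a futuristic city at dusk", image=condition, num_inference_steps=20).images[0]
image.save("controlled.png")
```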
  • 25
    ControlNet-v1-1

    ControlNet-v1-1 enables precise image generation via input conditioning

    ControlNet-v1-1 is an updated version of the ControlNet architecture designed to enhance control over image generation by conditioning diffusion models with additional inputs such as edges, depth, poses, or other structural cues. Built to work alongside models like Stable Diffusion, it allows users to guide the generation process more precisely while maintaining high visual fidelity. This version improves upon the original with refinements to stability, performance, and compatibility. ControlNet-v1-1 enables use cases like pose transfer, depth-aware rendering, and detailed sketch-to-image workflows. It is especially useful for tasks that require consistency between input structure and output style or content. While official documentation is pending, it is actively integrated into community spaces and tools across Hugging Face. The model is released under the OpenRAIL license and is suitable for creative and research applications with responsible use constraints.
    Downloads: 0 This Week