Open Source ChromeOS Computer Vision Libraries

Computer Vision Libraries for ChromeOS

Browse free open source Computer Vision Libraries and projects for ChromeOS below. Use the toggles on the left to filter open source Computer Vision Libraries by OS, license, programming language, and project status.

  • 1
    Armadillo

    fast C++ library for linear algebra & scientific computing

    * Fast C++ library for linear algebra (matrix maths) and scientific computing
    * Easy to use functions and syntax, deliberately similar to Matlab / Octave
    * Uses template meta-programming techniques to increase efficiency
    * Provides user-friendly wrappers for OpenBLAS, Intel MKL, LAPACK, ATLAS, ARPACK, SuperLU and FFTW libraries
    * Useful for machine learning, pattern recognition, signal processing, bioinformatics, statistics, finance, etc.
    * Downloads: http://arma.sourceforge.net/download.html
    * Documentation: http://arma.sourceforge.net/docs.html
    * Bug reports: http://arma.sourceforge.net/faq.html
    * Git repo: https://gitlab.com/conradsnicta/armadillo-code
    Downloads: 2,239 This Week
    Last Update:
    See Project
  • 2
    COLMAP

    Structure-from-Motion and Multi-View Stereo

    COLMAP is a general-purpose Structure-from-Motion (SfM) and Multi-View Stereo (MVS) pipeline with a graphical and command-line interface. It offers a wide range of features for the reconstruction of ordered and unordered image collections. The software is licensed under the new BSD license.
    Downloads: 45 This Week
    Last Update:
    See Project
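    A minimal sketch of driving COLMAP's documented command-line pipeline (feature extraction, exhaustive matching, incremental mapping) from Python. The workspace layout and paths are assumptions for illustration; it requires the `colmap` binary to be on PATH.

```python
# Hypothetical workspace layout; assumes the `colmap` binary is installed and on PATH.
import subprocess
from pathlib import Path

workspace = Path("workspace")            # assumed: workspace/images holds the input photos
database = workspace / "database.db"
images = workspace / "images"
sparse = workspace / "sparse"
sparse.mkdir(parents=True, exist_ok=True)

# Typical sparse-reconstruction sequence from the COLMAP CLI documentation:
# feature extraction -> pairwise matching -> incremental mapping (SfM).
subprocess.run(["colmap", "feature_extractor",
                "--database_path", str(database),
                "--image_path", str(images)], check=True)
subprocess.run(["colmap", "exhaustive_matcher",
                "--database_path", str(database)], check=True)
subprocess.run(["colmap", "mapper",
                "--database_path", str(database),
                "--image_path", str(images),
                "--output_path", str(sparse)], check=True)
```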
  • 3
    DensePose

    A real-time approach for mapping all human pixels of 2D RGB images

    DensePose is a computer vision system that maps all human pixels in an RGB image to the 3D surface of a human body model. It extends human pose estimation from predicting joint keypoints to providing dense correspondences between 2D images and a canonical 3D mesh (such as the SMPL model). This enables detailed understanding of human shape, motion, and surface appearance directly from images or videos. The repository includes the DensePose network architecture, training code, pretrained models, and dataset tools for annotation and visualization. DensePose is widely used in augmented reality, motion capture, virtual try-on, and visual effects applications because it enables real-time 3D human mapping from 2D inputs. The model architecture builds on Mask R-CNN, using additional regression heads to predict UV coordinates that map image pixels to 3D surfaces.
    Downloads: 3 This Week
    Last Update:
    See Project
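    To make the architectural idea concrete, here is an illustrative PyTorch toy head in the spirit of DensePose's dense regression: per-pixel body-part logits plus per-part UV coordinates. This is not the repository's Mask R-CNN-based implementation; the class, channel sizes, and feature shapes are assumptions for illustration only.

```python
# Illustrative only: a toy per-pixel head sketching the DensePose idea of predicting a
# body-part index (I) and UV coordinates per pixel; not the repository's actual code.
import torch
import torch.nn as nn

class ToyDensePoseHead(nn.Module):
    def __init__(self, in_channels: int = 256, num_parts: int = 24):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(inplace=True))
        self.part_logits = nn.Conv2d(128, num_parts + 1, 1)  # part index I (+ background)
        self.uv = nn.Conv2d(128, 2 * num_parts, 1)           # U,V coordinates per part

    def forward(self, features: torch.Tensor):
        x = self.trunk(features)
        return self.part_logits(x), self.uv(x)

feats = torch.randn(1, 256, 56, 56)        # stand-in for RoI features from a detector backbone
part_logits, uv = ToyDensePoseHead()(feats)
print(part_logits.shape, uv.shape)          # (1, 25, 56, 56) and (1, 48, 56, 56)
```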
  • 4
    PIFuHD

    High-Resolution 3D Human Digitization from A Single Image

    PIFuHD (Pixel-Aligned Implicit Function for 3D human reconstruction at high resolution) is a method and codebase to reconstruct high-fidelity 3D human meshes from a single image. It extends prior PIFu work by increasing resolution and detail, enabling fine geometry in cloth folds, hair, and subtle surface features. The method operates by learning an implicit occupancy / surface function conditioned on the image and camera projection; at inference time it queries dense points to reconstruct a mesh via marching cubes. It also uses a two-stage architecture: a coarse global model followed by local refinement patches to capture fine detail, balancing global consistency and local detail. The repo includes training pipelines, dataset loaders (for Multi-POP, etc.), and inference scripts for mesh output including depth maps for postprocessing. To help practical use, there are utilities for normal estimation, texture back-projection, mesh cleanup, and integration with rendering pipelines.
    Downloads: 3 This Week
    Last Update:
    See Project
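    The "query dense points, then extract a mesh with marching cubes" recipe described above can be sketched in a few lines. The snippet below is illustrative only: it substitutes an analytic sphere for PIFuHD's learned, image-conditioned occupancy network, and only shows the iso-surface extraction step.

```python
# Illustrative only: dense occupancy queries on a 3D grid followed by marching cubes,
# with an analytic sphere standing in for PIFuHD's learned occupancy function.
import numpy as np
from skimage import measure

res = 64
grid = np.linspace(-1.0, 1.0, res)
xs, ys, zs = np.meshgrid(grid, grid, grid, indexing="ij")

# Stand-in occupancy field: 1 inside a sphere, 0 outside.
occupancy = (xs**2 + ys**2 + zs**2 < 0.5).astype(np.float32)

# Extract the iso-surface exactly as one would from a predicted occupancy volume.
verts, faces, normals, _ = measure.marching_cubes(occupancy, level=0.5)
print(verts.shape, faces.shape)
```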
  • 5
    Vision Transformer Pytorch

    Implementation of Vision Transformer, a simple way to achieve SOTA

    This repository provides a from-scratch, minimalist implementation of the Vision Transformer (ViT) in PyTorch, focusing on the core architectural pieces needed for image classification. It breaks down the model into patch embedding, positional encoding, multi-head self-attention, feed-forward blocks, and a classification head so you can understand each component in isolation. The code is intentionally compact and modular, which makes it easy to tinker with hyperparameters, depth, width, and attention dimensions. Because it stays close to vanilla PyTorch, you can integrate custom datasets and training loops without framework lock-in. It’s widely used as an educational reference for people learning transformers in vision and as a lightweight baseline for research prototypes. The project encourages experimentation—swap optimizers, change augmentations, or plug the transformer backbone into downstream tasks.
    Downloads: 2 This Week
    Last Update:
    See Project
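    A minimal usage sketch, assuming this listing refers to the vit-pytorch package: instantiate a small ViT and classify a random image tensor. The hyperparameter values are arbitrary examples.

```python
# Minimal sketch assuming the vit-pytorch package (pip install vit-pytorch).
import torch
from vit_pytorch import ViT

model = ViT(
    image_size=256,       # input resolution
    patch_size=32,        # each image is split into 8x8 = 64 patches
    num_classes=1000,
    dim=1024,             # token embedding width
    depth=6,              # number of transformer blocks
    heads=16,
    mlp_dim=2048,
    dropout=0.1,
    emb_dropout=0.1,
)

img = torch.randn(1, 3, 256, 256)   # stand-in for a normalized image batch
preds = model(img)                  # (1, 1000) class logits
print(preds.shape)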
  • 6
    Segment Anything

    Provides code for running inference with the SegmentAnything Model

    Segment Anything (SAM) is a foundation model for image segmentation that’s designed to work “out of the box” on a wide variety of images without task-specific fine-tuning. It’s a promptable segmenter: you guide it with points, boxes, or rough masks, and it predicts high-quality object masks consistent with the prompt. The architecture separates a powerful image encoder from a lightweight mask decoder, so the heavy vision work can be computed once and the interactive part stays fast. A bundled automatic mask generator can sweep an image and propose many object masks, which is useful for dataset bootstrapping or bulk annotation. The repository includes ready-to-use weights, Python APIs, and example notebooks demonstrating both interactive and automatic modes. Because SAM was trained with an extremely large and diverse mask dataset, it tends to generalize well to new domains, making it a practical starting point for research and production annotation tools.
    Downloads: 1 This Week
    Last Update:
    See Project
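    A minimal sketch of the interactive (promptable) mode using the repository's Python API; the checkpoint filename and the all-zeros image are placeholders.

```python
# Point-prompted segmentation with the segment_anything package; checkpoint path is a placeholder.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")   # assumed local checkpoint file
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for an RGB image (H, W, 3)
predictor.set_image(image)                        # heavy image encoding happens once here

# Prompt with a single foreground point; SAM returns candidate masks with quality scores.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)
```

    The bundled `SamAutomaticMaskGenerator` covers the non-interactive sweep mode mentioned above.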
  • 7
    Self-learning-Computer-Science

    Resources to learn computer science in your spare time

    Self-learning Computer Science is a curated, open-source guide repository designed to help learners independently study computer science topics using high-quality university-level resources. The author (an undergraduate CS student) assembled links to courses from institutions like MIT, UC Berkeley, Stanford, etc., covering mathematics, programming, data structures/algorithms, computer architecture, machine learning, software engineering and more. It’s aimed at learners who find traditional course structures restrictive and want a flexible, self-paced path through CS, with a focus on building depth and breadth rather than shortcut exam skills. The repository provides a roadmap, references, teaching materials, and sometimes the author’s own project examples, offering both guidance and community support. Because the CS field is broad, the structure helps learners allocate study time, avoid duplication, and benefit from “best in class” resources instead of randomly browsing.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 8
    OpenNN - Open Neural Networks Library

    Machine learning algorithms for advanced analytics

    OpenNN is a software library written in C++ for advanced analytics. It implements neural networks, one of the most successful machine learning methods. Typical applications of OpenNN include business intelligence (customer segmentation, churn prevention…), health care (early diagnosis, microarray analysis…) and engineering (performance optimization, predictive maintenance…). OpenNN does not deal with computer vision or natural language processing. The main advantage of OpenNN is its high performance: the library stands out in execution speed and memory usage, and is constantly optimized and parallelized to maximize efficiency. The documentation consists of tutorials and examples that give a complete overview of the library. OpenNN is developed by Artelnics, a company specialized in artificial intelligence.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 9
    Blazeface

    Blazeface is a lightweight model that detects faces in images

    Blazeface is a lightweight, high-performance face detection model developed by Google for mobile and embedded devices. It is optimized for real-time face detection and runs efficiently on mobile hardware, keeping latency and power consumption low. Based on a compact single-shot detector architecture, it detects faces with high accuracy even in challenging conditions, handling multiple faces across varying lighting and poses. It is designed for real-world applications such as mobile apps, robotics, and other resource-constrained environments; a usage sketch follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
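    This model is usually consumed through TensorFlow.js or MediaPipe rather than called directly from Python. The sketch below uses MediaPipe's Python face-detection solution, which runs a BlazeFace-class short-range detector, as a stand-in; the all-zeros image is a placeholder.

```python
# Hedged sketch using MediaPipe's face-detection solution (a BlazeFace-style detector),
# not this listing's repository code directly.
import numpy as np
import mediapipe as mp

mp_face = mp.solutions.face_detection
rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for an RGB frame

with mp_face.FaceDetection(model_selection=0, min_detection_confidence=0.5) as detector:
    results = detector.process(rgb)             # expects an RGB numpy array
    for det in (results.detections or []):
        box = det.location_data.relative_bounding_box   # normalized [0, 1] coordinates
        print(box.xmin, box.ymin, box.width, box.height, det.score[0])
```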
  • 10
    OpenPR
    OpenPR stands for Open Pattern Recognition project and is intended to be an open source library for algorithms of image processing, computer vision, natural language processing, pattern recognition, machine learning and the related fields.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 11
    pipeless

    A computer vision framework to create and deploy apps in minutes

    Pipeless is an open-source computer vision framework to create and deploy applications without the complexity of building and maintaining multimedia pipelines. It ships everything you need to create and deploy efficient computer vision applications that work in real-time in just minutes. Pipeless is inspired by modern serverless technologies. It provides the development experience of serverless frameworks applied to computer vision. You provide some functions that are executed for new video frames and Pipeless takes care of everything else. You can easily use industry-standard models, such as YOLO, or load your custom model in one of the supported inference runtimes. Pipeless ships some of the most popular inference runtimes, such as the ONNX Runtime, allowing you to run inference with high performance on CPU or GPU out-of-the-box. You can deploy your Pipeless application with a single command to edge and IoT devices or the cloud.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 12
    CAM

    Class Activation Mapping

    This repository implements Class Activation Mapping (CAM), a technique that exposes the implicit attention of convolutional neural networks by generating heatmaps highlighting the image regions most responsible for a network's class prediction. The method requires only a light modification of the CNN (e.g., global average pooling before the final layer) so that a class-specific weighted combination of the last convolutional feature maps yields the class activation map. The repository provides example code, sample scripts for standard architectures, and instructions for applying CAM to existing CNNs, along with visualization of the discriminative regions per class; a sketch of the computation follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
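    A minimal sketch of the CAM computation using torchvision's ResNet-18 (which already ends in global average pooling plus a single fully connected layer), rather than the repository's own scripts; the random input tensor is a placeholder for a normalized image.

```python
# CAM = class-specific weighted sum of the last conv feature maps.
# Requires torchvision >= 0.13 for the weights enum.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

features = {}
def save_features(module, inp, out):
    features["maps"] = out                       # (1, 512, 7, 7) maps before global average pooling

model.layer4.register_forward_hook(save_features)

img = torch.randn(1, 3, 224, 224)                # stand-in for a normalized input image
with torch.no_grad():
    logits = model(img)
cls = logits.argmax(dim=1).item()

fc_weights = model.fc.weight[cls]                # (512,) weights of the predicted class
cam = torch.einsum("c,bchw->bhw", fc_weights, features["maps"])
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalize to [0, 1] for overlaying
```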
  • 13
    CoTracker

    CoTracker is a model for tracking any point (pixel) on a video

    CoTracker is a learning-based point tracking system that jointly follows many user-specified points across a video, rather than tracking each point independently. By reasoning about all tracks together, it can maintain temporal consistency, handle mutual occlusions, and reduce identity swaps when trajectories cross. The model takes sparse point queries on one frame and predicts their sub-pixel locations and a visibility score for every subsequent frame, producing long, coherent trajectories. Its transformer-style architecture aggregates information both along time and across points, allowing it to recover tracks even after brief disappearances. The repository ships with inference scripts, pretrained weights, and simple interfaces to seed points, run tracking, and export trajectories for downstream tasks. Typical uses include correspondence building, motion analysis, dynamic SLAM priors, video editing masks, and evaluation of geometric consistency in real scenes.
    Downloads: 0 This Week
    Last Update:
    See Project
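    A hedged sketch of the query-and-track interface described above, loaded via Torch Hub. The hub entrypoint name ("cotracker2") and the (frame, x, y) query layout follow the project README as recalled and may differ across releases; the random video tensor is a placeholder.

```python
# Hedged sketch: entrypoint name and call signature are assumptions based on the README.
import torch

cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")

video = torch.randn(1, 24, 3, 256, 256)            # (B, T, C, H, W) stand-in clip
queries = torch.tensor([[[0., 100., 120.],          # (frame index, x, y) per query point
                         [0., 180.,  60.]]])        # shape (B, N, 3)

pred_tracks, pred_visibility = cotracker(video, queries=queries)
print(pred_tracks.shape, pred_visibility.shape)     # per-frame locations and visibility scores
```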
  • 14
    CVSharp (aka Computer Vision in C#) is a computer vision project. To date, only one part of the whole project has been developed: CVSharp Lab, an image processing tool.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    ConvNeXt

    Code release for ConvNeXt model

    ConvNeXt is a modernized convolutional neural network (CNN) architecture designed to rival Vision Transformers (ViTs) in accuracy and scalability while retaining the simplicity and efficiency of CNNs. It revisits classic ResNet-style backbones through the lens of transformer design trends—large kernel sizes, inverted bottlenecks, layer normalization, and GELU activations—to bridge the performance gap between convolutions and attention-based models. ConvNeXt’s clean, hierarchical structure makes it efficient for both pretraining and fine-tuning across a wide range of visual recognition tasks. It achieves competitive or superior results on ImageNet and downstream datasets while being easier to deploy and train than transformers. The repository provides pretrained models, training recipes, and ablation studies demonstrating how incremental design choices collectively yield state-of-the-art performance.
    Downloads: 0 This Week
    Last Update:
    See Project
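    A minimal inference sketch using the timm re-implementation of ConvNeXt rather than this repository's own training scripts; the random tensor stands in for a normalized image batch.

```python
# Sketch using timm's ConvNeXt-Tiny (pip install timm), not this repo's own scripts.
import timm
import torch

model = timm.create_model("convnext_tiny", pretrained=True).eval()

x = torch.randn(1, 3, 224, 224)        # stand-in for a normalized image batch
with torch.no_grad():
    logits = model(x)                  # (1, 1000) ImageNet class scores
print(logits.argmax(dim=1))
```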
  • 16
    The Data Fusion Peer is a multitier computer vision internet application. The system provides image processing, motion tracking, and visualization information. Application will convert data into 3-Deminsional and other digital environments.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Detectron

    FAIR's research platform for object detection research

    Detectron is an object detection and instance segmentation research framework that popularized many modern detection models in a single, reproducible codebase. Built on Caffe2 with custom CUDA/C++ operators, it provided reference implementations for models like Faster R-CNN, Mask R-CNN, RetinaNet, and Feature Pyramid Networks. The framework emphasized a clean configuration system, strong baselines, and a “model zoo” so researchers could compare results under consistent settings. It includes training and evaluation pipelines that handle multi-GPU setups, standard datasets, and common augmentations, which helped standardize experimental practice in detection research. Visualization utilities and diagnostic scripts make it straightforward to inspect predictions, proposals, and losses while training. Although the project has since been superseded by Detectron2 (see the sketch after this entry), the original Detectron remains a historically important, reproducible reference that still informs many production systems and later frameworks.
    Downloads: 0 This Week
    Last Update:
    See Project
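    Since the original Caffe2-based Detectron is driven through its own CLI tools, the sketch below instead shows the equivalent inference flow in Detectron2, the successor mentioned above; the config name is a standard model-zoo entry and the all-zeros image is a placeholder.

```python
# Mask R-CNN inference with Detectron2 (the successor project), not the original Caffe2 codebase.
import numpy as np
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.MODEL.DEVICE = "cpu"                          # drop this line to run on GPU

predictor = DefaultPredictor(cfg)
image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a BGR image
outputs = predictor(image)
print(outputs["instances"].pred_classes, outputs["instances"].pred_boxes)
```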
  • 18
    Diglo is a music information retrieval system based on computer vision and audio spectrum analysis, using algorithmic operations to find emergent patterns in musical performance. It also functions as a low-cost motion capture analysis system.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Computer vision library using GPU acceleration, based on OpenCV and OpenGL.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    Geometric computer vision library in C++. Provides functions and structures of projective geometry, tailored for 3D computer vision.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    Hiera

    A fast, powerful, and simple hierarchical vision transformer

    Hiera is a hierarchical vision transformer designed to be fast, simple, and strong across image and video recognition tasks. The core idea is to use straightforward hierarchical attention with a minimal set of architectural “bells and whistles,” achieving competitive or superior accuracy while being markedly faster at inference and often faster to train. The repository provides installation options (from source or Torch Hub), a model zoo with pre-trained checkpoints, and code for evaluation and fine-tuning on standard benchmarks. Documentation emphasizes that model weights may have separate licensing and that the code targets practical experimentation for both research and downstream tasks. Community discussions cover topics like dataset pretrains, integration in other frameworks, and comparisons with related implementations. Security and contribution guidelines follow Meta’s open-source practices, and activity shows ongoing interest and usage across the community.
    Downloads: 0 This Week
    Last Update:
    See Project
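    A hedged sketch of the Torch Hub path mentioned in the description; the model name ("hiera_base_224") and checkpoint tag ("mae_in1k_ft_in1k") are taken from the README as recalled and may differ between releases.

```python
# Hedged sketch: hub repo, model, and checkpoint names are assumptions based on the README.
import torch

model = torch.hub.load("facebookresearch/hiera", model="hiera_base_224",
                       pretrained=True, checkpoint="mae_in1k_ft_in1k").eval()

x = torch.randn(1, 3, 224, 224)        # stand-in for a normalized image batch
with torch.no_grad():
    logits = model(x)                  # ImageNet-1k class scores for this fine-tuned checkpoint
print(logits.shape)
```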
  • 22
    i3D-converter creates a 3D representation from a couple of images (or a pair of stereo images). The program also performs other computer vision operations such as edge and corner detection, image filtering, and extraction of geometric shapes.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    According to a CBS News report, "if you use a computer more than two hours a day, you could be suffering from Computer Vision Syndrome (CVS)". The project's main objective is to build software that helps protect our eyes from CVS.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    MTCNN Face Detection Alignment

    Joint Face Detection and Alignment

    MTCNN_face_detection_alignment is an implementation of the “Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks” algorithm. The algorithm uses a cascade of three convolutional networks (P-Net, R-Net, O-Net) to jointly detect face bounding boxes and align facial landmarks in a coarse-to-fine manner, leveraging multi-task learning. Each stage applies non-maximum suppression and bounding box regression, and training uses online hard sample mining to improve robustness. The repository includes Caffe / MATLAB code, support scripts, and instructions for dependencies; a Python usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
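    The original repository is Caffe/MATLAB, so the sketch below uses the separate Python `mtcnn` package, a TensorFlow re-implementation of the same three-stage cascade; the all-zeros image is a placeholder.

```python
# Face detection and landmark alignment with the Python `mtcnn` package (pip install mtcnn),
# a re-implementation of the cascade described above, not this repo's Caffe/MATLAB code.
import numpy as np
from mtcnn import MTCNN

detector = MTCNN()
image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for an RGB image

for face in detector.detect_faces(image):
    # Each result carries a bounding box, a confidence score, and 5 facial landmarks.
    print(face["box"], face["confidence"], face["keypoints"])
```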
  • 25
    Math Transformations Library
    A library analogous to those included in Matlab, without the need for external libraries; just right for embedded use or static linking. MTL was used to build a 3D scanner. MTL consists of these parts: B - basic functions, matrices, images, hypermodels (3D models and up); N - numeric functions ranging from linear regression over nonlinear optimization to singular-value computation; I - image filters and image enhancement; H - hardware-related functions (optional), which require additional libraries and are only useful on certain hosts; G - hyper-model functions such as ray-plane intersections.
    Downloads: 0 This Week
    Last Update:
    See Project