A real-time inference engine for temporal logic specifications
A library for accelerating Transformer models on NVIDIA GPUs
A high-throughput and memory-efficient inference and serving engine
RGBD video generation model conditioned on camera input
User-friendly AI Interface
High-performance, reactive message-passing-based Bayesian inference engine
Lightweight inference library for ONNX files, written in C++
Fast inference engine for Transformer models
Lightweight, standalone C++ inference engine for Google's Gemma models
TypeDB: a strongly-typed database
Pruna is a model optimization framework built for developers
Open-source AI camera: empower any camera/CCTV
Enables the best performance on NVIDIA RTX graphics cards
Run Llama 2 inference in one file of pure C
Supercharge Your LLM with the Fastest KV Cache Layer
Hardware-accelerated video transcoding using Android MediaCodec APIs
Tensor search for humans
A GPU-accelerated library containing highly optimized building blocks
Superduper: Integrate AI models and machine learning workflows
A library for adding scripting to .NET applications
Toolbox of models, callbacks, and datasets for AI/ML researchers
A strong and easy-to-use single-view 3D hand+body pose estimator
Deep learning inference framework optimized for mobile platforms
Auto-diff neural network library for high-dimensional sparse tensors
Open-source embedded speech-to-text engine