DeepSparse is a sparsity-aware enterprise inference runtime for AI models on CPUs. It maximizes your existing CPU infrastructure by running computer vision (CV), natural language processing (NLP), and large language model (LLM) workloads at high performance.
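
To make the description concrete, here is a minimal sketch of CPU inference through the DeepSparse Python API (installable with pip install deepsparse). The Pipeline interface is part of DeepSparse, but the task choice and the SparseZoo model stub are illustrative assumptions; substitute a stub or local ONNX path valid for your DeepSparse version.

    # Minimal sketch: sentiment analysis on CPU with a DeepSparse Pipeline.
    # The SparseZoo stub below is illustrative; replace it with a stub or a
    # local ONNX model path that exists for your DeepSparse version.
    from deepsparse import Pipeline

    pipeline = Pipeline.create(
        task="sentiment-analysis",
        model_path="zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quant-none",
    )

    # The pipeline handles tokenization and post-processing internally.
    print(pipeline("DeepSparse makes CPU inference fast."))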

Features

  • Optimized for sparse deep learning models
  • Enables high-speed inference on CPUs
  • Supports ONNX model format for broad compatibility
  • Works with sparsified versions of popular deep learning models
  • Scales from edge devices to cloud deployments
  • Integrates with PyTorch and TensorFlow models via ONNX export (see the sketch after this list)
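
As a sketch of the PyTorch-to-ONNX path mentioned above, the snippet below exports a model to ONNX and runs it with DeepSparse's engine API. The compile_model function is part of DeepSparse; the model choice (a torchvision ResNet-50), file name, and input shape are illustrative assumptions.

    # Sketch: export a PyTorch model to ONNX, then run it on CPU with DeepSparse.
    # Model choice, file name, and input shape are assumptions for illustration.
    import numpy as np
    import torch
    import torchvision.models as models
    from deepsparse import compile_model

    # Export a PyTorch model to ONNX (sparsified models export the same way).
    model = models.resnet50(weights=None).eval()
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, "resnet50.onnx",
                      input_names=["input"], output_names=["output"])

    # Compile the ONNX file into a DeepSparse engine at a fixed batch size.
    engine = compile_model("resnet50.onnx", batch_size=1)

    # The engine consumes and returns lists of NumPy arrays.
    outputs = engine.run([np.random.rand(1, 3, 224, 224).astype(np.float32)])
    print(outputs[0].shape)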

License

MIT License

Additional Project Details

Operating Systems

Linux, macOS, Windows

Programming Language

Python

Related Categories

Python Natural Language Processing (NLP) Tool, Python LLM Inference Tool

Registered

2025-01-21