About (DeePhi Quantization Tool)

DeePhi Quantization Tool is a model quantization tool for convolutional neural networks (CNNs). It quantizes both weights/biases and activations from 32-bit floating-point (FP32) format to 8-bit integer (INT8) format, or to other bit depths, which can significantly boost inference performance and efficiency while maintaining accuracy. The tool supports common layer types, including convolution, pooling, fully connected, and batch normalization. Quantization requires neither retraining of the network nor labeled datasets; a single batch of images is enough for calibration. Processing takes from a few seconds to several minutes depending on the size of the network, which makes rapid model updates possible. The tool is co-optimized for the DeePhi DPU and generates the INT8 model files required by DNNC.
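
To make the calibration-based flow concrete, here is a minimal, generic sketch of symmetric post-training quantization in NumPy. This is not DeePhi's actual implementation; it only illustrates the idea that scales are derived from value ranges observed on a calibration batch, with no retraining and no labels. The function names and the toy weight tensor are illustrative.

```python
import numpy as np

def quantize_tensor(x: np.ndarray):
    """Symmetric per-tensor FP32 -> INT8 quantization.

    The scale is derived from the observed dynamic range (what a
    calibration batch provides); no retraining or labels are needed.
    """
    qmax = 127                                        # INT8 range is [-128, 127]
    scale = max(float(np.max(np.abs(x))), 1e-12) / qmax  # map max magnitude to 127
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize_tensor(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation from the quantized values."""
    return q.astype(np.float32) * scale

# Toy convolution weights standing in for a real layer's parameters.
weights = np.random.randn(64, 3, 3, 3).astype(np.float32)
q_weights, w_scale = quantize_tensor(weights)
error = np.max(np.abs(weights - dequantize_tensor(q_weights, w_scale)))
print(f"scale={w_scale:.6f}, max abs reconstruction error={error:.6f}")
```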

About (vLLM)

vLLM is a high-performance library designed to facilitate efficient inference and serving of Large Language Models (LLMs). Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry. It offers state-of-the-art serving throughput by efficiently managing attention key and value memory through its PagedAttention mechanism. It supports continuous batching of incoming requests and utilizes optimized CUDA kernels, including integration with FlashAttention and FlashInfer, to enhance model execution speed. Additionally, vLLM provides quantization support for GPTQ, AWQ, INT4, INT8, and FP8, as well as speculative decoding capabilities. Users benefit from seamless integration with popular Hugging Face models, support for various decoding algorithms such as parallel sampling and beam search, and compatibility with NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, and more.
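
As a concrete illustration of the serving workflow described above, the sketch below uses vLLM's documented offline inference API (LLM and SamplingParams). The model ID facebook/opt-125m is simply the small model used in vLLM's quickstart; any supported Hugging Face model ID works, and the sampling values here are arbitrary.

```python
from vllm import LLM, SamplingParams

# Load a Hugging Face model; vLLM manages KV-cache memory with
# PagedAttention and batches incoming requests automatically.
llm = LLM(model="facebook/opt-125m")

# Arbitrary sampling settings for this sketch.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "The capital of France is",
    "Briefly explain paged attention:",
]

# generate() returns one RequestOutput per prompt, in order.
outputs = llm.generate(prompts, sampling_params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```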

Platforms Supported (DeePhi Quantization Tool)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Platforms Supported (vLLM)

Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook

Audience (DeePhi Quantization Tool)

Anyone searching for a neural network quantization solution

Audience (vLLM)

AI infrastructure engineers looking for a solution to optimize the deployment and serving of large-scale language models in production environments

Support (DeePhi Quantization Tool)

Phone Support
24/7 Live Support
Online

Support (vLLM)

Phone Support
24/7 Live Support
Online

API (DeePhi Quantization Tool)

Offers API

API (vLLM)

Offers API

Pricing (DeePhi Quantization Tool)

$0.90 per hour
Free Version
Free Trial

Pricing (vLLM)

No information available.
Free Version
Free Trial

Training (DeePhi Quantization Tool)

Documentation
Webinars
Live Online
In Person

Training (vLLM)

Documentation
Webinars
Live Online
In Person

Company Information (DeePhi Quantization Tool)

DeePhi Quantization Tool
aws.amazon.com/marketplace/pp/prodview-bwtx6kzwg3gva

Company Information (vLLM)

vLLM
United States
vllm.ai

Alternatives

OpenVINO (Intel)
Deci (Deci AI)

Integrations (DeePhi Quantization Tool)

Database Mart
Docker
Hugging Face
KServe
Kubernetes
NGINX
NVIDIA DRIVE
OpenAI
PyTorch

Integrations (vLLM)

Database Mart
Docker
Hugging Face
KServe
Kubernetes
NGINX
NVIDIA DRIVE
OpenAI
PyTorch