vLLM vs. Vespa.ai
About vLLM
vLLM is a high-performance library for efficient inference and serving of large language models (LLMs). Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry. It offers state-of-the-art serving throughput by efficiently managing attention key and value memory through its PagedAttention mechanism. It supports continuous batching of incoming requests and uses optimized CUDA kernels, including integration with FlashAttention and FlashInfer, to speed up model execution. vLLM also provides quantization support for GPTQ, AWQ, INT4, INT8, and FP8, as well as speculative decoding. Users benefit from seamless integration with popular Hugging Face models, support for various decoding algorithms such as parallel sampling and beam search, and compatibility with NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, and more.
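The PagedAttention idea mentioned above can be illustrated with a minimal sketch: instead of reserving contiguous KV-cache memory for each sequence's maximum length, fixed-size blocks are allocated on demand from a shared pool and tracked in a per-sequence block table. The class and method names below are hypothetical, chosen for illustration; this is not vLLM's actual implementation.

```python
BLOCK_SIZE = 16  # tokens stored per KV-cache block

class BlockPool:
    """A shared pool of physical KV-cache blocks."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))

    def allocate(self):
        if not self.free:
            raise MemoryError("KV cache exhausted")
        return self.free.pop()

    def release(self, blocks):
        # freed blocks become available to other sequences
        self.free.extend(blocks)

class Sequence:
    """Tracks one request's logical-to-physical block mapping."""
    def __init__(self, pool):
        self.pool = pool
        self.block_table = []  # physical block IDs, in logical order
        self.num_tokens = 0

    def append_token(self):
        # a new block is allocated only when the current one fills up,
        # so memory grows with actual sequence length
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.pool.allocate())
        self.num_tokens += 1

pool = BlockPool(num_blocks=64)
seq = Sequence(pool)
for _ in range(40):          # generate 40 tokens
    seq.append_token()
# 40 tokens occupy ceil(40 / 16) = 3 blocks
print(len(seq.block_table))  # prints 3
```

Because blocks are only claimed as tokens are produced, many concurrent sequences can share the pool without each reserving worst-case memory, which is the intuition behind vLLM's high serving throughput.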
About Vespa.ai
Vespa is for Big Data + AI, online. At any scale, with unbeatable performance.
To build production-worthy online applications that combine data and AI, you need more than point solutions: You need a platform that integrates data and compute to achieve true scalability and availability - and which does this without limiting your freedom to innovate. Only Vespa does this.
Vespa is a fully featured search engine and vector database. It supports vector search (ANN), lexical search, and search in structured data, all in the same query. Users can easily build recommendation applications on Vespa. Integrated machine-learned model inference allows you to apply AI to make sense of your data in real-time.
Together with Vespa's proven scaling and high availability, this empowers you to create production-ready search applications at any scale and with any combination of features.
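As a concrete illustration of combining vector search and lexical search in one query, here is a sketch of a request body for Vespa's HTTP query API. The document type (`doc`), tensor field (`embedding`), query input name (`q_vec`), and the `hybrid` rank profile are assumptions about the application's schema, not fixed names.

```python
# Hypothetical hybrid retrieval request: lexical matching (userQuery)
# OR'ed with approximate nearest-neighbor search over a tensor field.
request_body = {
    # one YQL expression mixes lexical and vector retrieval
    "yql": (
        "select * from doc where userQuery() or "
        "({targetHits: 100}nearestNeighbor(embedding, q_vec))"
    ),
    "query": "how do I serve large language models",  # lexical terms
    "input.query(q_vec)": [0.12, -0.08, 0.33, 0.91],  # toy query embedding
    "ranking.profile": "hybrid",  # e.g. blends bm25() with closeness()
    "hits": 10,
}
print(request_body["yql"])
```

A rank profile defined in the schema would then decide how lexical and vector scores are combined, which is where Vespa's integrated machine-learned model inference comes in.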
Platforms Supported (vLLM)
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Platforms Supported (Vespa.ai)
Windows
Mac
Linux
Cloud
On-Premises
iPhone
iPad
Android
Chromebook
Audience (vLLM)
AI infrastructure engineers looking for a solution to optimize the deployment and serving of large-scale language models in production environments
Audience (Vespa.ai)
Developers interested in a fully featured search engine and vector database
Support (vLLM)
Phone Support
24/7 Live Support
Online
Support (Vespa.ai)
Phone Support
24/7 Live Support
Online
API (vLLM)
Offers API
API (Vespa.ai)
Offers API
Pricing (vLLM)
No information available.
Free Version
Free Trial
Pricing (Vespa.ai)
Free
Free Version
Free Trial
Training (vLLM)
Documentation
Webinars
Live Online
In Person
Training (Vespa.ai)
Documentation
Webinars
Live Online
In Person
Company Information (vLLM)
vLLM
United States
docs.vllm.ai/en/latest/
Company Information (Vespa.ai)
Vespa.ai
Founded: 2023
Trondheim, Norway
vespa.ai/
Integrations (vLLM)
Coral
Database Mart
Docker
Hugging Face
IBM watsonx.data
KServe
Kong AI Gateway
Kubernetes
NGINX
NVIDIA DRIVE
Integrations (Vespa.ai)
Coral
Database Mart
Docker
Hugging Face
IBM watsonx.data
KServe
Kong AI Gateway
Kubernetes
NGINX
NVIDIA DRIVE