Get inference running on Kubernetes: LLMs, embeddings, and speech-to-text. KubeAI serves an OpenAI-compatible HTTP API. Admins configure ML models using the Model Kubernetes Custom Resource. KubeAI can be thought of as a Model Operator (see the Operator pattern) that manages vLLM and Ollama servers.
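For illustration, a Model Custom Resource might look like the sketch below. The field names and values (`features`, `url`, `engine`, `resourceProfile`, replica counts, and the example model) are assumptions based on the project's described Model CRD; consult the KubeAI documentation for the authoritative schema.

```yaml
# Hypothetical Model resource declaring an LLM served by vLLM.
apiVersion: kubeai.org/v1
kind: Model
metadata:
  name: llama-3.1-8b-instruct
spec:
  features: [TextGeneration]
  url: hf://meta-llama/Llama-3.1-8B-Instruct   # model source (assumed syntax)
  engine: VLLM                                 # or OLlama for Ollama-served models
  resourceProfile: nvidia-gpu-l4:1             # assumed profile name
  minReplicas: 0                               # scale to zero when idle
  maxReplicas: 3
```

Applying such a manifest (e.g. with `kubectl apply -f model.yaml`) would let the operator reconcile the declared model into a running vLLM or Ollama server.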

Features

  • Drop-in replacement for the OpenAI API
  • Serves top OSS models (LLMs, Whisper, etc.)
  • Multi-platform: CPU-only and GPU (TPU support coming soon)
  • Scale from zero; autoscale based on load
  • Zero dependencies (does not depend on Istio, Knative, etc.)
  • Chat UI included (OpenWebUI)
  • Operates OSS model servers (vLLM, Ollama, FasterWhisper, Infinity)
  • Stream/batch inference via messaging integrations (Kafka, PubSub, etc.)
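Because the API is OpenAI-compatible, any OpenAI client can be pointed at the KubeAI service. The sketch below builds such a request with only the Python standard library; the in-cluster service name `kubeai`, the `/openai/v1` path prefix, and the model name are assumptions for illustration, so check your deployment for the actual values.

```python
# Sketch: building an OpenAI-style chat completion request for a
# KubeAI-served model, using only the standard library.
import json
import urllib.request

KUBEAI_BASE = "http://kubeai/openai/v1"  # assumed in-cluster service URL


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request against the OpenAI-compatible chat endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{KUBEAI_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# To actually send it from inside the cluster (not executed here):
# with urllib.request.urlopen(build_chat_request("llama-3.1-8b-instruct", "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because KubeAI scales models from zero, the first request after idle may wait for a server pod to start.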


License

Apache License 2.0



Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Go

Related Categories

Go Large Language Models (LLM), Go LLM Inference Tool

Registered

2024-09-25