Seldon Server is a machine learning platform and recommendation engine built on Kubernetes. It helps your data science team deploy models into production by providing an open-source data science stack that runs within a Kubernetes cluster. You can use Seldon to deploy machine learning and deep learning models into production on-premise or in the cloud (e.g. GCP, AWS, Azure).

Seldon Core is a progression of the goals of the Seldon-Server project, with a more restricted focus on solving the final step in a machine learning project: serving models in production. It focuses purely on deploying a wide range of ML models on Kubernetes, allowing complex runtime serving graphs to be managed in production.
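As a rough illustration of what "deploying a model" looks like in practice, the sketch below follows the seldon-core Python microservice convention, where a model is exposed through a small wrapper class with a `predict` method. The class name, model file path, and use of scikit-learn/joblib are illustrative assumptions, not part of this repository.

```python
# Minimal sketch of a Python model wrapper in the seldon-core microservice style.
# The class name, model file, and scikit-learn model are illustrative assumptions.
import joblib


class IrisClassifier:
    """Wraps a pre-trained model so it can be served as a prediction microservice."""

    def __init__(self):
        # Load the serialised model once at start-up (path is an assumption).
        self.model = joblib.load("model.joblib")

    def predict(self, X, features_names=None):
        # Called with a numpy array of feature rows and (optionally) the
        # feature names; returns class probabilities for each row.
        return self.model.predict_proba(X)
```

A wrapper like this can then be containerised and deployed into the Kubernetes cluster, where Seldon routes prediction requests to it as part of a serving graph.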
Features
- Deploy machine learning models at scale with increased accuracy
- Deploy models 85% faster
- Turn R&D into ROI by getting more models into production
- Reduce time-to-value so models can get to work faster
- Scale with confidence and minimise risk through interpretable results and transparent model performance