Optimize and deploy Hugging Face Transformer models in production with a single command line.

At Lefebvre Dalloz we run an in-production semantic search engine in the legal domain (in non-marketing language: a re-ranker), and we built it on Transformer models. In that setup, latency is key to a good user experience, and relevancy inference is done online for hundreds of snippets per user query. Most tutorials on Transformer deployment in production are built over PyTorch and FastAPI. Both are great tools, but not very performant for inference. With some extra work, you can build something over ONNX Runtime and the Triton inference server, and you will usually get 2X to 4X faster inference compared to vanilla PyTorch. It's cool! However, if you want best-in-class performance on GPU, there is only one possible combination: Nvidia TensorRT and Triton. It will usually give you 5X faster inference compared to vanilla PyTorch.
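For context, here is a minimal sketch of the ONNX Runtime path mentioned above (not this project's own single-command pipeline). The model name, file name and export settings are illustrative assumptions:

```python
# Sketch: export a Hugging Face classifier to ONNX with dynamic shapes,
# then run it with ONNX Runtime. Model name, file path and opset are
# illustrative choices, not requirements of this project.
import torch
import onnxruntime as ort
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# torchscript=True makes the model return plain tuples, which keeps the export simple
model = AutoModelForSequenceClassification.from_pretrained(model_name, torchscript=True).eval()

encoded = tokenizer("a user query / snippet pair", return_tensors="pt")

# export once with dynamic batch and sequence axes so a single ONNX file serves every shape
torch.onnx.export(
    model,
    (encoded["input_ids"], encoded["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch"},
    },
    opset_version=13,
)

# CUDAExecutionProvider needs the onnxruntime-gpu package; ONNX Runtime falls
# back to the next provider in the list when one is unavailable
session = ort.InferenceSession(
    "model.onnx", providers=["CUDAExecutionProvider", "CPUExecutionProvider"]
)
logits = session.run(
    output_names=["logits"],
    input_feed={
        "input_ids": encoded["input_ids"].numpy(),
        "attention_mask": encoded["attention_mask"].numpy(),
    },
)[0]
print(logits)
```

The same ONNX file is also the usual entry point for Triton's ONNX Runtime backend and for building a TensorRT engine.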
Features
- Heavily optimize transformer models for inference (CPU and GPU) -> between 5X and 10X speedup
- Deploy models on the Nvidia Triton inference server (enterprise grade), 6X faster than FastAPI (see the client sketch after this list)
- Add quantization support for both CPU and GPU
- Simple to use: optimization done in a single command line!
- Supported models: any model that can be exported to ONNX (-> most of them)
- Supported tasks: document classification, token classification (NER), feature extraction (aka sentence-transformers dense embeddings), text generation
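As a rough client-side sketch (not part of this project), here is how an application could query a Transformer model once it is deployed on a Triton inference server. The model name, input/output tensor names and shapes are assumptions that depend on your Triton model configuration:

```python
# Query a model served by Triton over HTTP using the official tritonclient package.
# "transformer_onnx_model", "input_ids", "attention_mask" and "logits" are
# placeholder names that must match your Triton model repository configuration.
import numpy as np
import tritonclient.http as triton_http
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
encoded = tokenizer("is this snippet relevant to the query?", return_tensors="np")

client = triton_http.InferenceServerClient(url="localhost:8000", verbose=False)

# one InferInput per tensor declared in the model configuration
inputs = []
for name in ("input_ids", "attention_mask"):
    tensor = encoded[name].astype(np.int64)
    infer_input = triton_http.InferInput(name, list(tensor.shape), "INT64")
    infer_input.set_data_from_numpy(tensor, binary_data=True)
    inputs.append(infer_input)

outputs = [triton_http.InferRequestedOutput("logits", binary_data=True)]

response = client.infer(model_name="transformer_onnx_model", inputs=inputs, outputs=outputs)
print(response.as_numpy("logits"))
```

In a re-ranking setup like the one described above, the client would batch the query/snippet pairs and send them in a single request, letting Triton handle dynamic batching and scheduling on the GPU.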