# Using TGI with Nvidia GPUs
TGI's optimized models are supported on NVIDIA H100, A100, A10G, and T4 GPUs with CUDA 12.2+. Note that you have to install the NVIDIA Container Toolkit to use them.
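Before launching TGI, you can verify that Docker can actually see your GPUs. A minimal sanity check is to run `nvidia-smi` inside a container (the CUDA base image tag below is illustrative; any CUDA base image works):

```bash
# If the NVIDIA Container Toolkit is set up correctly, this prints the GPU table.
# The image tag is an assumption for illustration; substitute any CUDA base image.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```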
For other NVIDIA GPUs, continuous batching will still apply, but some operations like flash attention and paged attention will not be executed.
TGI can be used on NVIDIA GPUs through its official Docker image:
```bash
model=teknium/OpenHermes-2.5-Mistral-7B
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all --shm-size 64g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:3.3.5 \
    --model-id $model
```

The launched TGI server can then be queried from clients; make sure to check out the Consuming TGI guide.
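Once the container is up, you can sanity-check it from another terminal with a plain HTTP request against TGI's `/generate` endpoint. A minimal sketch (the prompt and `max_new_tokens` value here are arbitrary):

```bash
# Send a simple generation request to the running TGI server on port 8080.
curl 127.0.0.1:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 32}}'
```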