TRT-LLM with PyTorch

Run the quick start script:

python3 quickstart.py

Run the advanced usage example script:

# BF16
python3 quickstart_advanced.py --model_dir meta-llama/Llama-3.1-8B-Instruct

# FP8
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8

# BF16 + TP=2
python3 quickstart_advanced.py --model_dir meta-llama/Llama-3.1-8B-Instruct --tp_size 2

# FP8 + TP=2
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) KV cache
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8
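
The flags used above (`--model_dir`, `--tp_size`, `--kv_cache_dtype`) combine freely. As a rough sketch of how such a flag set could be parsed, here is an illustrative argparse parser; it is an assumption for demonstration only, not the actual argument parser in `quickstart_advanced.py`:

```python
# Illustrative sketch (not the script's real parser) of the flags
# exercised by the commands above.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Advanced quickstart flags (illustrative sketch)")
    parser.add_argument("--model_dir", required=True,
                        help="Hugging Face model ID or local checkpoint path")
    parser.add_argument("--tp_size", type=int, default=1,
                        help="Tensor-parallel size")
    parser.add_argument("--kv_cache_dtype", default="auto",
                        help="KV-cache dtype; 'fp8' selects e4m3 storage")
    return parser

# Example: the BF16 + TP=2 invocation shown above.
args = build_parser().parse_args(
    ["--model_dir", "meta-llama/Llama-3.1-8B-Instruct", "--tp_size", "2"])
print(args.model_dir, args.tp_size, args.kv_cache_dtype)
```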

Run the multimodal example script:

# default inputs
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality image [--use_cuda_graph]

# user inputs
# supported modes:
# (1) N prompts, N media (the N requests are in-flight batched)
# (2) 1 prompt, N media
# Note: media must be all images or all videos; mixing images and videos is not supported.
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality video --prompt "Tell me what you see in the video briefly." "Describe the scene in the video briefly." --media "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4" "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/world.mp4" --max_tokens 128 [--use_cuda_graph]
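
The two supported prompt/media pairing modes can be sketched as a small helper. `pair_requests` below is a hypothetical function for illustration only; the actual request construction and in-flight batching happen inside `quickstart_multimodal.py`:

```python
# Sketch of the two supported prompt/media pairing modes.
# pair_requests is a hypothetical helper, not part of TensorRT-LLM.
def pair_requests(prompts, media):
    if len(prompts) == len(media):
        # Mode (1): N prompts, N media -> N independent requests,
        # which the runtime serves together via in-flight batching.
        return [(p, [m]) for p, m in zip(prompts, media)]
    if len(prompts) == 1:
        # Mode (2): 1 prompt, N media -> a single request that
        # attaches every media item to the one prompt.
        return [(prompts[0], list(media))]
    raise ValueError(
        "Expected N prompts with N media, or 1 prompt with N media")
```

Note that per the restriction above, all items in `media` must share one modality (all images or all videos).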