# TRT-LLM with PyTorch

Run the examples:

```bash
# BF16
python3 quickstart.py --model_dir meta-llama/Llama-3.1-8B-Instruct

# FP8
python3 quickstart.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8

# BF16 + TP=2
python3 quickstart.py --model_dir meta-llama/Llama-3.1-8B-Instruct --tp_size 2

# FP8 + TP=2
python3 quickstart.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) KV cache
python3 quickstart.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8
```
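The commands above all drive the same script through three flags: `--model_dir` (a Hugging Face model ID or local checkpoint path), `--tp_size` (tensor-parallel degree), and `--kv_cache_dtype` (KV-cache precision). As a rough sketch of that CLI surface, here is a hypothetical argparse setup mirroring only the flags shown; the actual `quickstart.py` may define them differently (defaults and choices below are assumptions):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the CLI implied by the commands above;
    # flag names come from the README, defaults/choices are assumed.
    parser = argparse.ArgumentParser(
        description="TRT-LLM PyTorch quickstart (sketch)")
    parser.add_argument("--model_dir", required=True,
                        help="Hugging Face model ID or local checkpoint path")
    parser.add_argument("--tp_size", type=int, default=1,
                        help="Tensor-parallel degree (number of GPUs)")
    parser.add_argument("--kv_cache_dtype", default="auto",
                        choices=["auto", "fp8"],
                        help="KV-cache precision; fp8 is the e4m3 format")
    return parser

# Example: parse the "FP8 + TP=2 + FP8 KV cache" invocation.
args = build_parser().parse_args(
    ["--model_dir", "nvidia/Llama-3.1-8B-Instruct-FP8",
     "--tp_size", "2",
     "--kv_cache_dtype", "fp8"])
print(args.model_dir, args.tp_size, args.kv_cache_dtype)
```

Note that FP8 weights and an FP8 KV cache are independent choices: the last command in the list above quantizes the cache on top of an already FP8-quantized checkpoint.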