# TRT-LLM with PyTorch

Run the quickstart example:

```bash
# BF16
python3 quickstart.py --model_dir meta-llama/Llama-3.1-8B-Instruct

# FP8
python3 quickstart.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8

# BF16 + TP=2 (tensor parallelism across 2 GPUs)
python3 quickstart.py --model_dir meta-llama/Llama-3.1-8B-Instruct --tp_size 2

# FP8 + TP=2
python3 quickstart.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) KV cache
python3 quickstart.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8
```
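The commands above combine three independent knobs: the checkpoint (`--model_dir`), the tensor-parallel degree (`--tp_size`), and the KV-cache dtype (`--kv_cache_dtype`). As a rough, hypothetical sketch (not the actual `quickstart.py`, which additionally constructs a TensorRT-LLM engine from these arguments and runs generation), the flag handling might be wired like this; all names and defaults here are assumptions for illustration:

```python
import argparse


def parse_args(argv=None):
    # Hypothetical flag parsing mirroring the options shown in the
    # commands above; the real quickstart.py may define these differently.
    parser = argparse.ArgumentParser(
        description="TRT-LLM PyTorch quickstart flags (illustrative sketch)")
    parser.add_argument(
        "--model_dir", required=True,
        help="Hugging Face model ID or local checkpoint path")
    parser.add_argument(
        "--tp_size", type=int, default=1,
        help="Tensor-parallel degree: number of GPUs to shard the model across")
    parser.add_argument(
        "--kv_cache_dtype", default="auto", choices=["auto", "fp8"],
        help="KV-cache storage dtype; fp8 (e4m3) roughly halves cache "
             "memory relative to 16-bit storage")
    return parser.parse_args(argv)


if __name__ == "__main__":
    # Mirrors the last command above: FP8 checkpoint with an FP8 KV cache.
    args = parse_args(["--model_dir", "nvidia/Llama-3.1-8B-Instruct-FP8",
                       "--kv_cache_dtype", "fp8"])
    print(args.model_dir, args.tp_size, args.kv_cache_dtype)
```

The three flags compose freely, which is why the README lists each combination as a one-line variant of the same command.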