# LLM API Examples

Please refer to the official documentation, including the customization guide, for detailed information and usage guidelines regarding the LLM API.
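
All of the `llm_*.py` examples in this directory share one core pattern: construct an `LLM`, then call `generate()` with prompts and `SamplingParams`. Below is a minimal sketch along the lines of `llm_inference.py`; the model ID is only an illustrative choice:

```python
from tensorrt_llm import LLM, SamplingParams


def main():
    # Create an LLM from a Hugging Face model ID or a local checkpoint path.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # generate() batches all prompts in flight and returns one output per prompt.
    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```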

Run the advanced usage example script:

```bash
# FP8 + TP=2
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) KV cache
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8

# BF16 + TP=8
python3 quickstart_advanced.py --model_dir nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 --tp_size 8

# Nemotron-H requires disabling KV cache reuse
python3 quickstart_advanced.py --model_dir nvidia/Nemotron-H-8B-Base-8K --disable_kv_cache_reuse --max_batch_size 8
```
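
These flags map onto `LLM` constructor arguments. The following is a hedged sketch of roughly equivalent LLM API settings for the FP8 + TP=2 and Nemotron-H cases; argument and field names follow the current `LlmArgs`/`KvCacheConfig` definitions, so verify them against your installed version:

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

# Roughly equivalent to: --tp_size 2 --kv_cache_dtype fp8
llm_fp8 = LLM(
    model="nvidia/Llama-3.1-8B-Instruct-FP8",
    tensor_parallel_size=2,
    kv_cache_config=KvCacheConfig(dtype="fp8"),  # assumed field; check your version
)

# Roughly equivalent to: --disable_kv_cache_reuse --max_batch_size 8
llm_nemotron = LLM(
    model="nvidia/Nemotron-H-8B-Base-8K",
    max_batch_size=8,
    kv_cache_config=KvCacheConfig(enable_block_reuse=False),
)
```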

Run the multimodal example script:

```bash
# default inputs
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality image [--use_cuda_graph]

# user inputs
# Supported modes:
#   (1) N prompts, N media (the N requests are in-flight batched)
#   (2) 1 prompt, N media
# Note: media must be all images or all videos; mixing the two is not supported.
python3 quickstart_multimodal.py \
    --model_dir Efficient-Large-Model/NVILA-8B \
    --modality video \
    --prompt "Tell me what you see in the video briefly." \
             "Describe the scene in the video briefly." \
    --media "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4" \
            "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/world.mp4" \
    --max_tokens 128 [--use_cuda_graph]
```
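
For programmatic use, media can be attached to each prompt alongside the text. The sketch below assumes the dict-style prompt input with a `multi_modal_data` field accepted by the PyTorch-backend LLM API; the exact preprocessing utilities vary per model, so treat this as a shape illustration rather than a drop-in script:

```python
from PIL import Image

from tensorrt_llm import LLM

llm = LLM(model="Efficient-Large-Model/NVILA-8B")

# Placeholder local image; substitute your own file.
image = Image.open("example.jpg")

# One request pairing a text prompt with one image (shape illustration only;
# some models additionally require chat-template/image-token formatting).
outputs = llm.generate([
    {
        "prompt": "Describe the image briefly.",
        "multi_modal_data": {"image": [image]},
    }
])
print(outputs[0].outputs[0].text)
```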

Run the speculative decoding script:

```bash
# NGram drafter
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo NGRAM \
    --spec_decode_max_draft_len 4 \
    --max_matching_ngram_size 2 \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse

# Draft-Target
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo draft_target \
    --spec_decode_max_draft_len 5 \
    --draft_model_dir meta-llama/Llama-3.2-1B-Instruct \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse
```
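
The script turns these flags into a `speculative_config` passed to the `LLM` constructor. Below is a hedged sketch of the corresponding LLM API objects; the config class names follow `tensorrt_llm.llmapi`, but double-check the field names against your installed version:

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import (DraftTargetDecodingConfig, KvCacheConfig,
                                 NGramDecodingConfig)

# NGram drafter: draft tokens are proposed from n-gram matches in prior text.
ngram_config = NGramDecodingConfig(
    max_draft_len=4,
    max_matching_ngram_size=2,
)

# Draft-target: a small draft model proposes tokens for the target to verify.
draft_target_config = DraftTargetDecodingConfig(
    max_draft_len=5,
    speculative_model_dir="meta-llama/Llama-3.2-1B-Instruct",
)

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config=ngram_config,  # or draft_target_config
    disable_overlap_scheduler=True,
    kv_cache_config=KvCacheConfig(enable_block_reuse=False),
)
```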