LLM API Examples

Please refer to the official LLM API documentation, including the customization guide, for detailed information and usage guidelines.
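
The scripts in this directory are thin wrappers around the LLM API. As a point of reference, a minimal sketch of that API (mirroring quickstart_example.py; the model name is only an illustration and any Hugging Face checkpoint or local path works):

from tensorrt_llm import LLM, SamplingParams

def main():
    # Build or load the model; TinyLlama is used here purely as a small example.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # generate() batches the prompts through the runtime and returns one result per prompt.
    for output in llm.generate(prompts, sampling_params):
        print(f"{output.prompt!r} -> {output.outputs[0].text!r}")

if __name__ == "__main__":
    main()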

Run the advanced usage example script:

# FP8 + TP=2
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) KV cache
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8

# BF16 + TP=8
python3 quickstart_advanced.py --model_dir nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 --tp_size 8

# Nemotron-H requires disabling KV cache block reuse
python3 quickstart_advanced.py --model_dir nvidia/Nemotron-H-8B-Base-8K --disable_kv_cache_reuse --max_batch_size 8
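
These flags map onto LLM API configuration objects inside quickstart_advanced.py. A hedged sketch of the FP8 + TP=2 + FP8 KV cache case (the KvCacheConfig field names below are assumptions based on the current API; check quickstart_advanced.py for the exact mapping):

from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

# Rough equivalent of:
#   --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2 --kv_cache_dtype fp8 --disable_kv_cache_reuse
kv_cache_config = KvCacheConfig(
    dtype="fp8",               # assumed field for --kv_cache_dtype (e4m3 KV cache)
    enable_block_reuse=False,  # what --disable_kv_cache_reuse toggles
)

llm = LLM(
    model="nvidia/Llama-3.1-8B-Instruct-FP8",
    tensor_parallel_size=2,
    kv_cache_config=kv_cache_config,
)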

Run the multimodal example script:

# default inputs
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality image [--use_cuda_graph]

# user inputs
# supported modes:
# (1) N prompts, N media (the N requests are in-flight batched)
# (2) 1 prompt, N media
# Note: all media must be either images or videos; mixing images and videos in one run is not supported.
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality video --prompt "Tell me what you see in the video briefly." "Describe the scene in the video briefly." --media "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4" "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/world.mp4" --max_tokens 128 [--use_cuda_graph]
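
Under the hood, quickstart_multimodal.py builds multimodal requests for the LLM API. A rough, non-authoritative sketch of a single image request (the multi_modal_data prompt format and the file path are assumptions for illustration; the script itself is the reference for how media is actually loaded and attached):

from PIL import Image
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="Efficient-Large-Model/NVILA-8B")

# One prompt paired with one image; "path/to/local_image.png" is a placeholder.
image = Image.open("path/to/local_image.png")

outputs = llm.generate(
    [{"prompt": "Describe the image briefly.", "multi_modal_data": {"image": [image]}}],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)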

Run the speculative decoding script:

# NGram drafter
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo NGRAM \
    --spec_decode_max_draft_len 4 \
    --max_matching_ngram_size 2 \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse

# Draft Target
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo draft_target \
    --spec_decode_max_draft_len 5 \
    --draft_model_dir meta-llama/Llama-3.2-1B-Instruct \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse
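
The CLI flags above correspond to a speculative-decoding config object passed to the LLM constructor. A hedged sketch of the NGram case (class and field names are assumptions based on the current llmapi; quickstart_advanced.py shows the authoritative wiring, including how the overlap scheduler is disabled):

from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig, NGramDecodingConfig

# Rough equivalent of the NGram drafter command above.
spec_config = NGramDecodingConfig(
    max_draft_len=4,            # --spec_decode_max_draft_len
    max_matching_ngram_size=2,  # --max_matching_ngram_size
)

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config=spec_config,
    # NGram drafting needs KV cache block reuse off, matching --disable_kv_cache_reuse.
    kv_cache_config=KvCacheConfig(enable_block_reuse=False),
)

The draft-target variant is configured analogously with a separate draft model directory; see quickstart_advanced.py for the exact option names.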