# LLM API Examples

Please refer to the official documentation, including the customization guide, for detailed information and usage guidelines on the LLM API.
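
If you are new to the API, a minimal sketch in the spirit of `quickstart_example.py` looks like the following (the model name and sampling values here are illustrative placeholders):

```python
from tensorrt_llm import LLM, SamplingParams


def main():
    # Illustrative prompts; any list of strings works.
    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # `model` accepts a Hugging Face model ID or a local checkpoint path.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```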

Run the advanced usage example script:

```bash
# FP8 + TP=2
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) KV cache
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8

# BF16 + TP=8
python3 quickstart_advanced.py --model_dir nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 --tp_size 8

# Nemotron-H requires disabling KV cache reuse
python3 quickstart_advanced.py --model_dir nvidia/Nemotron-H-8B-Base-8K --disable_kv_cache_reuse --max_batch_size 8
```
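
The CLI flags above correspond to constructor arguments of the `LLM` class. Here is a hedged sketch of the rough equivalents (the `KvCacheConfig` field names, in particular `dtype`, are assumptions based on the current `tensorrt_llm.llmapi`):

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

# Rough LLM API equivalent of:
#   quickstart_advanced.py --tp_size 2 --kv_cache_dtype fp8 --disable_kv_cache_reuse
llm = LLM(
    model="nvidia/Llama-3.1-8B-Instruct-FP8",
    tensor_parallel_size=2,          # --tp_size
    kv_cache_config=KvCacheConfig(
        dtype="fp8",                 # --kv_cache_dtype (field name assumed)
        enable_block_reuse=False,    # --disable_kv_cache_reuse
    ),
)
```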

Run the multimodal example script:

```bash
# Default inputs
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality image [--use_cuda_graph]

# User-specified inputs
# Supported modes:
#   (1) N prompts, N media items (the N requests are in-flight batched)
#   (2) 1 prompt, N media items
# Note: media must be all images or all videos; mixing images and videos is not supported.
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality video --prompt "Tell me what you see in the video briefly." "Describe the scene in the video briefly." --media "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4" "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/world.mp4" --max_tokens 128 [--use_cuda_graph]
```

Run the speculative decoding script:

```bash
# NGram drafter
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo NGRAM \
    --spec_decode_max_draft_len 4 \
    --max_matching_ngram_size 2 \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse

# Draft-target
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo draft_target \
    --spec_decode_max_draft_len 5 \
    --draft_model_dir meta-llama/Llama-3.2-1B-Instruct \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse
```
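
In the LLM API, these flags correspond to a speculative-decoding config passed via `speculative_config`. A hedged sketch of the NGram variant follows (class and field names reflect the simplified configs from #5639 and should be treated as assumptions):

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig, NGramDecodingConfig

# Rough equivalent of the NGram drafter command above (names assumed).
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config=NGramDecodingConfig(
        max_draft_len=4,               # --spec_decode_max_draft_len
        max_matching_ngram_size=2,     # --max_matching_ngram_size
    ),
    disable_overlap_scheduler=True,    # --disable_overlap_scheduler
    kv_cache_config=KvCacheConfig(enable_block_reuse=False),  # --disable_kv_cache_reuse
)
```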