LLM API Examples

Please refer to the official documentation, including the customization guide, for detailed information and usage guidelines on the LLM API.
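
All of the scripts below build on the same core API. As a minimal sketch of that pattern (the model identifier and prompts here are placeholders; see quickstart_example.py for the maintained version):

# Minimal LLM API usage, mirroring quickstart_example.py.
# The model identifier is a placeholder; substitute any supported checkpoint.
from tensorrt_llm import LLM, SamplingParams

def main():
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)
    # generate() batches the prompts and returns one output per prompt.
    for output in llm.generate(prompts, sampling_params):
        print(output.outputs[0].text)

if __name__ == "__main__":
    main()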

Run the advanced usage example script:

# FP8 + TP=2
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) kvcache
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8

# BF16 + TP=8
python3 quickstart_advanced.py --model_dir nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 --tp_size 8

# Nemotron-H requires disabling KV cache block reuse
python3 quickstart_advanced.py --model_dir nvidia/Nemotron-H-8B-Base-8K --disable_kv_cache_reuse --max_batch_size 8
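
These CLI flags map onto LLM API configuration. A hedged sketch of the rough equivalent in Python, assuming the current KvCacheConfig fields (the model path is a placeholder):

# Rough LLM API equivalent of the flags above (a sketch, not the script itself).
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

llm = LLM(
    model="nvidia/Llama-3.1-8B-Instruct-FP8",  # pre-quantized FP8 checkpoint
    tensor_parallel_size=2,                    # --tp_size 2
    kv_cache_config=KvCacheConfig(
        dtype="fp8",                           # --kv_cache_dtype fp8
        enable_block_reuse=False,              # --disable_kv_cache_reuse
    ),
)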

Run the multimodal example script:

# default inputs
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality image [--use_cuda_graph]

# user inputs
# supported modes:
# (1) N prompts, N media items (the N requests are in-flight batched)
# (2) 1 prompt, N media items
# Note: the media must be all images or all videos; mixing images and videos is not supported.
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality video --prompt "Tell me what you see in the video briefly." "Describe the scene in the video briefly." --media "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4" "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/world.mp4" --max_tokens 128 [--use_cuda_graph]
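
The two modes pair prompts with media as illustrated below. This is a small sketch of the pairing rule only; pair_inputs is a hypothetical helper, not part of quickstart_multimodal.py:

# Illustration of how prompts and media pair up in the two modes above.
def pair_inputs(prompts: list[str], media: list[str]) -> list[tuple[str, str]]:
    if len(prompts) == len(media):         # mode (1): N prompts, N media
        return list(zip(prompts, media))   # N independent, in-flight-batched requests
    if len(prompts) == 1:                  # mode (2): 1 prompt broadcast over N media
        return [(prompts[0], m) for m in media]
    raise ValueError("Expected N prompts + N media, or 1 prompt + N media")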

Run the speculative decoding script:

# NGram drafter
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo NGRAM \
    --spec_decode_max_draft_len 4 \
    --max_matching_ngram_size 2 \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse

# Draft Target
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo draft_target \
    --spec_decode_max_draft_len 5 \
    --draft_model_dir meta-llama/Llama-3.2-1B-Instruct \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse
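
Internally, these flags build a speculative-decoding config for the LLM API. A hedged sketch of the NGram case, assuming the NGramDecodingConfig interface and that disable_overlap_scheduler is accepted as an LLM argument (the model path is a placeholder):

# Rough LLM API equivalent of the NGram command above (a sketch).
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig, NGramDecodingConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config=NGramDecodingConfig(
        max_draft_len=4,             # --spec_decode_max_draft_len 4
        max_matching_ngram_size=2,   # --max_matching_ngram_size 2
    ),
    disable_overlap_scheduler=True,  # --disable_overlap_scheduler
    kv_cache_config=KvCacheConfig(enable_block_reuse=False),  # --disable_kv_cache_reuse
)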