LLM API Examples

Please refer to the official documentation, including the customization guide, for detailed information and usage guidelines on the LLM API.
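
The smallest end-to-end example is quickstart_example.py; a minimal sketch of the same flow is shown below. The TinyLlama checkpoint is just a placeholder here — any supported Hugging Face model ID or local checkpoint directory works.

from tensorrt_llm import LLM, SamplingParams

def main():
    # Placeholder model; substitute any supported HF model ID or local checkpoint path.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # generate() batches the prompts and returns one RequestOutput per prompt.
    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")

if __name__ == "__main__":
    main()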

Run the advanced usage example script:

# FP8 + TP=2
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --tp_size 2

# FP8 (e4m3) kvcache
python3 quickstart_advanced.py --model_dir nvidia/Llama-3.1-8B-Instruct-FP8 --kv_cache_dtype fp8

# BF16 + TP=8
python3 quickstart_advanced.py --model_dir nvidia/Llama-3_1-Nemotron-Ultra-253B-v1 --tp_size 8

# Nemotron-H requires disabling KV cache reuse
python3 quickstart_advanced.py --model_dir nvidia/Nemotron-H-8B-Base-8K --disable_kv_cache_reuse --max_batch_size 8
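
The quickstart_advanced.py flags above are thin wrappers over LLM API arguments. Below is a rough sketch of the equivalent constructor call; the argument names should be verified against quickstart_advanced.py, and the KV cache dtype field in particular is an assumption for how --kv_cache_dtype is applied.

from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

llm = LLM(
    model="nvidia/Llama-3.1-8B-Instruct-FP8",  # pre-quantized FP8 checkpoint
    tensor_parallel_size=2,                    # --tp_size 2
    kv_cache_config=KvCacheConfig(
        enable_block_reuse=False,              # --disable_kv_cache_reuse
        dtype="fp8",                           # --kv_cache_dtype fp8 (assumed field name)
    ),
)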

Run the multimodal example script:

# default inputs
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality image [--use_cuda_graph]

# user-specified inputs
# supported modes:
# (1) N prompts, N media items (the N requests are in-flight batched)
# (2) 1 prompt, N media items
# Note: all media items must be either images or videos; mixing images and videos is not supported.
python3 quickstart_multimodal.py --model_dir Efficient-Large-Model/NVILA-8B --modality video --prompt "Tell me what you see in the video briefly." "Describe the scene in the video briefly." --media "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4" "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/world.mp4" --max_tokens 128 [--use_cuda_graph]

Run the speculative decoding script:

# NGram drafter
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo NGRAM \
    --spec_decode_max_draft_len 4 \
    --max_matching_ngram_size 2 \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse

# Draft Target
python3 quickstart_advanced.py \
    --model_dir meta-llama/Llama-3.1-8B-Instruct \
    --spec_decode_algo draft_target \
    --spec_decode_max_draft_len 5 \
    --draft_model_dir meta-llama/Llama-3.2-1B-Instruct \
    --disable_overlap_scheduler \
    --disable_kv_cache_reuse
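
Both speculative decoding modes can also be configured directly through the LLM API, as llm_speculative_decoding.py does. The sketch below mirrors the NGram command above; the config class and field names are assumptions to check against that script.

from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig, NGramDecodingConfig

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config=NGramDecodingConfig(
        max_draft_len=4,              # --spec_decode_max_draft_len 4
        max_matching_ngram_size=2,    # --max_matching_ngram_size 2
    ),
    disable_overlap_scheduler=True,   # --disable_overlap_scheduler
    kv_cache_config=KvCacheConfig(enable_block_reuse=False),  # --disable_kv_cache_reuse
)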