TensorRT-LLM/examples/llm-api
Example scripts in this directory:

- llm_auto_parallel.py: automatic parallelism
- llm_eagle_decoding.py: EAGLE speculative decoding
- llm_guided_decoding.py: guided decoding
- llm_inference_async_streaming.py: asynchronous streaming inference
- llm_inference_async.py: asynchronous inference
- llm_inference_customize.py: customized inference configuration
- llm_inference_distributed.py: multi-GPU distributed inference
- llm_inference_kv_events.py: KV cache events
- llm_inference.py: basic offline inference
- llm_logits_processor.py: custom logits processor
- llm_lookahead_decoding.py: lookahead speculative decoding
- llm_medusa_decoding.py: Medusa speculative decoding
- llm_mgmn_llm_distributed.sh: multi-GPU, multi-node inference via Slurm
- llm_mgmn_trtllm_bench.sh: multi-node benchmarking with trtllm-bench via Slurm
- llm_mgmn_trtllm_serve.sh: multi-node serving with trtllm-serve via Slurm
- llm_multilora.py: serving multiple LoRA adapters
- llm_quantization.py: quantized inference
- quickstart_example.py: minimal quickstart
- README.md: this file

LLM API Examples

Please refer to the official documentation, including the examples and customization guides, for detailed information and usage guidelines for the LLM API.
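
For a first taste of the API, here is a minimal sketch along the lines of quickstart_example.py: construct an `LLM`, then call `generate()` with a batch of prompts. The model name is only a placeholder; substitute any Hugging Face checkpoint or local path supported by your TensorRT-LLM installation.

```python
from tensorrt_llm import LLM, SamplingParams


def main():
    # Prompts to complete in a single batched generate() call.
    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    # Nucleus sampling; adjust to taste.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Placeholder model -- any supported Hugging Face model ID or
    # local checkpoint path works here.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```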
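
For token-by-token output, llm_inference_async_streaming.py covers the asynchronous path. A condensed sketch follows, assuming the `generate_async(..., streaming=True)` interface used by that example, with the same placeholder model as above:

```python
import asyncio

from tensorrt_llm import LLM, SamplingParams


async def main():
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # placeholder model
    sampling_params = SamplingParams(max_tokens=64)

    # With streaming=True, partial outputs are yielded as tokens arrive.
    async for output in llm.generate_async(
            "The future of AI is",
            sampling_params=sampling_params,
            streaming=True):
        print(output.outputs[0].text)


if __name__ == "__main__":
    asyncio.run(main())
```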