Examples in this directory:

- `llm_auto_parallel.py`
- `llm_eagle2_decoding.py`
- `llm_eagle_decoding.py`
- `llm_guided_decoding.py`
- `llm_inference.py`
- `llm_inference_async.py`
- `llm_inference_async_streaming.py`
- `llm_inference_customize.py`
- `llm_inference_distributed.py`
- `llm_inference_kv_events.py`
- `llm_logits_processor.py`
- `llm_lookahead_decoding.py`
- `llm_medusa_decoding.py`
- `llm_mgmn_llm_distributed.sh`
- `llm_mgmn_trtllm_bench.sh`
- `llm_mgmn_trtllm_serve.sh`
- `llm_multilora.py`
- `llm_quantization.py`
- `quickstart_example.py`

# LLM API Examples

For detailed information and usage guidelines on the LLM API, please refer to the official documentation, the examples, and the customization guide.
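
As a rough orientation, the examples share a common entry point. Below is a minimal sketch in the spirit of `quickstart_example.py`; the model identifier and sampling values are illustrative placeholders, and the exact API surface should be verified against the installed `tensorrt_llm` version:

```python
def main():
    # Imported lazily so the sketch can be read without a GPU environment;
    # actually running it requires tensorrt_llm and a supported NVIDIA GPU.
    from tensorrt_llm import LLM, SamplingParams

    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    # Sampling knobs here are illustrative, not tuned recommendations.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Model identifier is a placeholder; substitute any supported checkpoint.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # generate() returns one output per prompt.
    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r} -> {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```

The other examples in this directory build on the same pattern, swapping in async/streaming calls, distributed execution, quantization, or alternative decoding strategies.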