Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00.
| File |
|---|
| llm_auto_parallel.py |
| llm_eagle_decoding.py |
| llm_guided_decoding.py |
| llm_inference_async_streaming.py |
| llm_inference_async.py |
| llm_inference_customize.py |
| llm_inference_distributed.py |
| llm_inference_kv_events.py |
| llm_inference.py |
| llm_logits_processor.py |
| llm_lookahead_decoding.py |
| llm_medusa_decoding.py |
| llm_mgmn_llm_distributed.sh |
| llm_mgmn_trtllm_bench.sh |
| llm_mgmn_trtllm_serve.sh |
| llm_multilora.py |
| llm_quantization.py |
| quickstart_example.py |
| README.md |
# LLM API Examples

Please refer to the official documentation, the examples, and the customization guide for detailed information and usage guidelines on the LLM API.
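To give a flavor of what the listed examples cover, below is a minimal sketch in the spirit of `quickstart_example.py`, using the `LLM` and `SamplingParams` entry points of the `tensorrt_llm` package. The model name is an illustrative choice, not a requirement; running this assumes `tensorrt_llm` is installed and a supported GPU is available.

```python
# Minimal LLM API sketch (assumes tensorrt_llm is installed and a GPU is present).
from tensorrt_llm import LLM, SamplingParams


def main():
    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    # Sampling knobs; values here are illustrative.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Any supported Hugging Face model id works; the engine is built on first run.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```

The other scripts in this directory follow the same pattern while exercising one feature each (async/streaming generation, distributed inference, speculative decoding variants, LoRA, quantization, and so on).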