Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
- llm_auto_parallel.py
- llm_eagle_decoding.py
- llm_guided_decoding.py
- llm_inference_async_streaming.py
- llm_inference_async.py
- llm_inference_customize.py
- llm_inference_distributed.py
- llm_inference_kv_events.py
- llm_inference.py
- llm_logits_processor.py
- llm_lookahead_decoding.py
- llm_medusa_decoding.py
- llm_mgmn_llm_distributed.sh
- llm_mgmn_trtllm_bench.sh
- llm_mgmn_trtllm_serve.sh
- llm_multilora.py
- llm_quantization.py
- quickstart_example.py
- README.md
# LLM API Examples

Please refer to the official documentation, including the examples and customization guides, for detailed information and usage guidelines for the LLM API.
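As a starting point, the basic generate flow covered by these examples can be sketched as follows. This is a minimal sketch assuming the `tensorrt_llm.LLM` and `SamplingParams` interfaces used by quickstart_example.py; the model ID and sampling values here are illustrative placeholders, and running it requires a CUDA-capable GPU with TensorRT-LLM installed.

```python
# Minimal LLM API sketch (assumes tensorrt_llm is installed with GPU support;
# the model ID and sampling values are illustrative, not prescribed by this repo).
from tensorrt_llm import LLM, SamplingParams


def main():
    prompts = [
        "Hello, my name is",
        "The capital of France is",
    ]
    # Sampling controls for generation; tune these for your use case.
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # Loads a Hugging Face model ID; a local checkpoint path also works.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # generate() accepts a batch of prompts and returns one result per prompt.
    for output in llm.generate(prompts, sampling_params):
        print(f"Prompt: {output.prompt!r} -> Generated: {output.outputs[0].text!r}")


if __name__ == "__main__":
    main()
```

The async and streaming variants in this directory (llm_inference_async.py, llm_inference_async_streaming.py) follow the same shape but yield tokens incrementally instead of returning completed outputs.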