TensorRT-LLM/tensorrt_llm/executor
Pengyun Lin · 039f7e3118 · 2025-05-19 00:34:40 +08:00
[https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265)

* Deduce default max_tokens for trtllm-serve
* Improve executor_config.max_seq_len assignment in TRT workflow
* Enhance error message
* Add deduced max_tokens test

Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
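
The core idea of the fix, per the bullets above: when a request does not set max_tokens, trtllm-serve derives a default from the model's max_seq_len and the prompt length instead of rejecting the request. A minimal sketch of that deduction, with a hypothetical helper name and signature (the actual change lives in worker.py and the serve frontend):

```python
def deduce_default_max_tokens(prompt_token_ids, max_seq_len, max_tokens=None):
    """Hypothetical helper: fill in max_tokens from the remaining
    sequence budget when the request leaves it unset."""
    if max_tokens is not None:
        return max_tokens
    remaining = max_seq_len - len(prompt_token_ids)
    if remaining <= 0:
        # Mirrors the "Enhance error message" bullet: fail loudly when the
        # prompt already consumes the whole sequence.
        raise ValueError(
            f"Prompt has {len(prompt_token_ids)} tokens but max_seq_len is "
            f"{max_seq_len}; no budget remains for generated tokens.")
    return remaining
```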
File                 Date                          Last commit
__init__.py          2025-03-11 21:13:42 +08:00    Update TensorRT-LLM (#2873)
executor.py          2025-05-01 12:47:14 -04:00    feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388)
ipc.py               2025-04-17 00:40:13 +08:00    fix: Use hmac authentication for pickle encryption (#3384)
postproc_worker.py   2025-04-24 16:56:03 -07:00    feat: return logits in PyTorch flow (#3221)
proxy.py             2025-05-01 12:47:14 -04:00    feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388)
request.py           2025-05-01 12:23:30 +08:00    feat: Add multimodal embedding field in LlmRequest (#3855)
result.py            2025-05-01 12:47:14 -04:00    feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388)
utils.py             2025-05-01 12:47:14 -04:00    feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388)
worker.py            2025-05-19 00:34:40 +08:00    [https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265)
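
Several files in the table (executor.py, proxy.py, result.py, utils.py) last changed with #3388, which adds Top-K logprobs and prompt_logprobs to the LLM API. A minimal usage sketch, assuming the SamplingParams fields are named logprobs and prompt_logprobs as the commit title suggests, and assuming a local model path; neither is verified against this revision:

```python
from tensorrt_llm import LLM, SamplingParams

# Hypothetical model path; substitute a real checkpoint directory.
llm = LLM(model="/models/llama-3-8b")

# Ask for the top-5 logprobs per generated token and per prompt token;
# field names follow the #3388 commit title and may differ in the source.
params = SamplingParams(max_tokens=16, logprobs=5, prompt_logprobs=5)

for output in llm.generate(["The capital of France is"], params):
    completion = output.outputs[0]
    print(completion.text)
    print(completion.logprobs)  # per-token top-K logprobs, if supported
```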
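
The ipc.py entry references #3384, which guards pickled IPC payloads with HMAC authentication. The general pattern is to prepend a keyed digest to the pickled bytes and verify it before unpickling; the sketch below shows the technique, not the actual ipc.py code, and simplifies key handling:

```python
import hashlib
import hmac
import os
import pickle

# Simplified: a per-process random key. A real implementation must share
# the key between the communicating processes (e.g. passed at spawn time).
_KEY = os.urandom(32)

def dumps_signed(obj, key=_KEY):
    """Pickle obj and prepend an HMAC-SHA256 digest of the payload."""
    payload = pickle.dumps(obj)
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def loads_signed(blob, key=_KEY):
    """Verify the digest before unpickling; reject tampered messages."""
    digest, payload = blob[:32], blob[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("HMAC verification failed; refusing to unpickle")
    return pickle.loads(payload)
```

Unpickling runs arbitrary code by design, so authentication has to happen before pickle.loads ever sees attacker-controlled bytes; hmac.compare_digest is used to avoid timing side channels in the comparison.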