TensorRT-LLM/tensorrt_llm/serve
Pengyun Lin 039f7e3118
[https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265)
* Deduce default max_tokens for trtllm-serve
* Improve executor_config.max_seq_len assignment in TRT workflow
* Enhance error message
* Add deduced max_tokens test

Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2025-05-19 00:34:40 +08:00
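
The commit above describes deducing a default max_tokens when an OpenAI-style request omits it. As a rough illustration only (this is an assumption about the approach, not the actual trtllm-serve implementation, and deduce_max_tokens is a hypothetical helper), the default can be taken as the room left in the sequence after the prompt, with a clear error when no room remains:

    # Hypothetical sketch: deduce a default max_tokens from the engine's
    # max_seq_len and the tokenized prompt length when the request does not
    # specify one. Not the actual trtllm-serve code.
    from typing import Optional


    def deduce_max_tokens(prompt_token_count: int,
                          max_seq_len: int,
                          requested_max_tokens: Optional[int] = None) -> int:
        """Return the max_tokens to use for generation."""
        if requested_max_tokens is not None:
            # The client set max_tokens explicitly; keep it.
            return requested_max_tokens
        remaining = max_seq_len - prompt_token_count
        if remaining <= 0:
            # Fail with an explicit reason rather than an opaque error.
            raise ValueError(
                f"Prompt length {prompt_token_count} reaches or exceeds "
                f"max_seq_len {max_seq_len}; no tokens left to generate.")
        return remaining


    # Example: with max_seq_len=4096 and a 1000-token prompt, an unspecified
    # max_tokens would default to 3096.
    assert deduce_max_tokens(1000, 4096) == 3096
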
Name                       Last commit                                                                                           Last updated
scripts                    bench: TRTLLM-4936 Port benchmark_serving.py (#4011)                                                  2025-05-07 09:45:14 +08:00
__init__.py                Update TensorRT-LLM (#2820)                                                                           2025-02-25 21:21:49 +08:00
chat_utils.py              fix: Fix chat template kwargs bug. (#4387)                                                             2025-05-16 23:07:46 +08:00
openai_disagg_server.py    feat: add kv cache aware router (#3831)                                                               2025-05-12 07:23:57 -04:00
openai_protocol.py         [https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265)    2025-05-19 00:34:40 +08:00
openai_server.py           Removing the outdated argument (#4408)                                                                2025-05-18 15:52:15 +08:00
postprocess_handlers.py    feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354)                                         2025-05-06 08:13:04 +08:00
router.py                  feat: add kv cache aware router (#3831)                                                                2025-05-12 07:23:57 -04:00