TensorRT-LLM/tensorrt_llm/serve
Latest commit 721f84a0ac by Pengyun Lin:
fix: Align default setting & remove unnecessary check for chat and completion (#3888)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2025-05-07 14:42:53 +08:00
Name                       Last commit message                                                                    Last commit date
scripts                    bench: TRTLLM-4936 Port benchmark_serving.py (#4011)                                  2025-05-07 09:45:14 +08:00
__init__.py                Update TensorRT-LLM (#2820)                                                           2025-02-25 21:21:49 +08:00
chat_utils.py              feat: trtllm-serve multimodal support (#3590)                                         2025-04-19 05:01:28 +08:00
openai_disagg_server.py    feat: Disaggregated router class (#3584)                                              2025-04-19 00:34:12 +08:00
openai_protocol.py         fix: Align default setting & remove unnecessary check for chat and completion (#3888) 2025-05-07 14:42:53 +08:00
openai_server.py           feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354)                        2025-05-06 08:13:04 +08:00
postprocess_handlers.py    feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354)                        2025-05-06 08:13:04 +08:00
router.py                  feat: Disaggregated router class (#3584)                                              2025-04-19 00:34:12 +08:00