TensorRT-LLM/tensorrt_llm/serve
Latest commit: bac22ff7b5
Author: Pengyun Lin
Date: 2025-05-30 17:27:53 +08:00

    [feat] support sharegpt downloading in benchmark_serving (#4578)

    Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
Name                    | Last commit                                                                          | Date
scripts                 | [feat] support sharegpt downloading in benchmark_serving (#4578)                     | 2025-05-30 17:27:53 +08:00
__init__.py             | Update TensorRT-LLM (#2820)                                                          | 2025-02-25 21:21:49 +08:00
chat_utils.py           | [TRTLLM-1658][feat] Enable multiple response in trtllm-serve for TRT backend (#4623) | 2025-05-28 11:36:44 +08:00
openai_disagg_server.py | fix: Remove duplicate tokenization in generation server (#4492)                      | 2025-05-26 16:43:07 +08:00
openai_protocol.py      | [TRTLLM-1658][feat] Enable multiple response in trtllm-serve for TRT backend (#4623) | 2025-05-28 11:36:44 +08:00
openai_server.py        | [TRTLLM-1658][feat] Enable multiple response in trtllm-serve for TRT backend (#4623) | 2025-05-28 11:36:44 +08:00
postprocess_handlers.py | feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354)                       | 2025-05-06 08:13:04 +08:00
router.py               | feat: conditional disaggregation in disagg server (#3974)                            | 2025-05-21 09:57:46 +08:00