TensorRT-LLM/tensorrt_llm/serve
Latest commit: 953f4fd69e by Zero Zeng, 2025-08-19 17:29:36 +08:00
[None][fix] acceptance rate calculation fix in benchmark_serving (#6746)
Signed-off-by: Zero Zeng <38289304+zerollzeng@users.noreply.github.com>
Name | Last commit | Last updated
scripts/ | [None][fix] acceptance rate calculation fix in benchmark_serving (#6746) | 2025-08-19 17:29:36 +08:00
__init__.py | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00
chat_utils.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00
metadata_server.py | feat: Add integration of etcd (#3738) | 2025-06-03 20:01:44 +08:00
openai_disagg_server.py | [Disaggregated] Add retry knobs and handling (#5808) | 2025-07-19 07:27:59 +08:00
openai_protocol.py | [None][fix] acceptance rate calculation fix in benchmark_serving (#6746) | 2025-08-19 17:29:36 +08:00
openai_server.py | [None][feat] Core Metrics Implementation (#5785) | 2025-08-09 02:48:53 -04:00
postprocess_handlers.py | [None][fix] acceptance rate calculation fix in benchmark_serving (#6746) | 2025-08-19 17:29:36 +08:00
router.py | feat: Dynamically remove servers in PD (#5270) | 2025-06-25 09:50:04 +08:00
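The modules listed above form TensorRT-LLM's OpenAI-compatible serving layer: openai_server.py exposes the HTTP endpoints, openai_protocol.py defines the request/response schemas, and router.py / openai_disagg_server.py handle routing for disaggregated serving. A minimal client sketch is shown below; it assumes a server has already been launched with `trtllm-serve` and is listening on the default http://localhost:8000, and the model id and prompt are placeholders.

```python
# Minimal sketch of querying the OpenAI-compatible chat endpoint served by
# openai_server.py. Assumes `trtllm-serve <model>` is already running and
# listening on localhost:8000 (default address assumed here).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local trtllm-serve address
    api_key="not-used",                   # the local server does not validate the key
)

response = client.chat.completions.create(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder model id
    messages=[{"role": "user", "content": "Hello, what can you do?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

Because the endpoint follows the OpenAI schema, any stock OpenAI client (Python SDK, curl, or LangChain-style wrappers) can be pointed at it by overriding the base URL, which is why the protocol definitions live in openai_protocol.py rather than a custom wire format.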