TensorRT-LLM/tensorrt_llm/serve
Latest commit: c9b8b6180f by Zero Zeng, 2025-07-28 14:00:58 +08:00
Add Acceptance Rate calculation to benchmark_serving (#6240)
Signed-off-by: Zero Zeng <38289304+zerollzeng@users.noreply.github.com>
Name                     Last commit message                                                              Last commit date
scripts                  Add Acceptance Rate calculation to benchmark_serving (#6240)                    2025-07-28 14:00:58 +08:00
__init__.py              Update TensorRT-LLM (#2820)                                                      2025-02-25 21:21:49 +08:00
chat_utils.py            feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644)          2025-07-17 06:30:58 +08:00
metadata_server.py       feat: Add integration of etcd (#3738)                                            2025-06-03 20:01:44 +08:00
openai_disagg_server.py  [Disaggregated] Add retry knobs and handling (#5808)                            2025-07-19 07:27:59 +08:00
openai_protocol.py       feat: Support JSON Schema in OpenAI-Compatible API (#6321)                      2025-07-25 12:55:56 -04:00
openai_server.py         [nvbug 5004744][fix] rewrite completion API to avoid repetitive tokens (#5201)  2025-07-14 17:17:30 +08:00
postprocess_handlers.py  [feat] Detokenize option in /v1/completions request (#5382)                     2025-07-08 19:36:04 +08:00
router.py                feat: Dynamically remove servers in PD (#5270)                                   2025-06-25 09:50:04 +08:00
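The files above implement TensorRT-LLM's OpenAI-compatible serving layer (openai_server.py, openai_protocol.py, and the disaggregated/router variants). As a rough illustration of how a client talks to this layer, below is a minimal sketch of a chat-completion request against a locally running trtllm-serve instance. The host, port, and model name are assumptions and not taken from this listing; adjust them to your deployment.

```python
# Minimal client sketch for the OpenAI-compatible endpoint served by
# tensorrt_llm/serve. Assumes trtllm-serve is running locally on port 8000;
# the base URL and model name below are assumptions, not values from the repo.
import requests

BASE_URL = "http://localhost:8000/v1"  # assumed local trtllm-serve address

payload = {
    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed model name
    "messages": [
        {"role": "user", "content": "Summarize what speculative decoding is."}
    ],
    "max_tokens": 128,
    "temperature": 0.0,
}

# POST to the OpenAI-style chat completions route and print the reply text.
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Because the request and response follow the OpenAI API shape, the same call can also be made with any OpenAI-compatible client pointed at the server's base URL.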