TensorRT-LLM/tensorrt_llm/serve

Latest commit: 96af324ff1 by zhanghaotong, "[None][fix] Add try-catch in stream generator (#7467)", 2025-09-08 16:09:26 -04:00
Signed-off-by: Zhang Haotong <zhanghaotong.zht@antgroup.com>
Co-authored-by: Zhang Haotong <zhanghaotong.zht@antgroup.com>
| Name | Last commit | Date |
| --- | --- | --- |
| scripts | [https://nvbugs/5369366] [fix] Report failing requests (#7060) | 2025-09-04 12:56:23 -07:00 |
| __init__.py | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| chat_utils.py | [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) | 2025-08-21 08:54:12 +08:00 |
| harmony_adapter.py | [TRTLLM-7779][feat] Support multiple postprocess workers for chat completions API (#7508) | 2025-09-08 11:11:35 +08:00 |
| metadata_server.py | feat: Add integration of etcd (#3738) | 2025-06-03 20:01:44 +08:00 |
| openai_disagg_server.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| openai_protocol.py | [TRTLLM-7398][feat] Support KV cache salting for secure KV cache reuse (#7106) | 2025-09-06 17:58:32 -04:00 |
| openai_server.py | [None][fix] Add try-catch in stream generator (#7467) | 2025-09-08 16:09:26 -04:00 |
| postprocess_handlers.py | [TRTLLM-7779][feat] Support multiple postprocess workers for chat completions API (#7508) | 2025-09-08 11:11:35 +08:00 |
| responses_utils.py | [TRTLLM-7208][feat] Implement basic functionalities for Responses API (#7341) | 2025-09-02 07:08:22 -04:00 |
| router.py | feat: Dynamically remove servers in PD (#5270) | 2025-06-25 09:50:04 +08:00 |