TensorRT-LLM/tensorrt_llm/serve
Latest commit: 2d2b8bae32 — feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644)
Author: Wanli Jiang — 2025-07-17 06:30:58 +08:00
| Name | Last commit message | Date |
| --- | --- | --- |
| scripts | fix: Make the bench serving script compatible with different usages (#5905) | 2025-07-10 19:36:26 +08:00 |
| __init__.py | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| chat_utils.py | feat: TRTLLM-5574 Add phi-4-multimodal pytorch-backend support (#5644) | 2025-07-17 06:30:58 +08:00 |
| metadata_server.py | feat: Add integration of etcd (#3738) | 2025-06-03 20:01:44 +08:00 |
| openai_disagg_server.py | Fix lost requests for disaggregated serving (#5815) | 2025-07-09 08:42:45 +09:00 |
| openai_protocol.py | [nvbug 5004744][fix] rewrite completion API to avoid repetitive tokens (#5201) | 2025-07-14 17:17:30 +08:00 |
| openai_server.py | [nvbug 5004744][fix] rewrite completion API to avoid repetitive tokens (#5201) | 2025-07-14 17:17:30 +08:00 |
| postprocess_handlers.py | [feat] Detokenize option in /v1/completions request (#5382) | 2025-07-08 19:36:04 +08:00 |
| router.py | feat: Dynamically remove servers in PD (#5270) | 2025-06-25 09:50:04 +08:00 |
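The `openai_protocol.py` and `openai_server.py` modules above implement an OpenAI-compatible HTTP API, and the commit log mentions a `detokenize` option for `/v1/completions` requests (#5382). As a minimal sketch of what a request body to such a server might look like — the model name, prompt, and parameter values here are illustrative assumptions, not taken from the repository:

```python
import json

# Hypothetical /v1/completions request body for an OpenAI-compatible
# server such as the one in openai_server.py. All field values below
# are assumptions chosen for illustration.
payload = {
    "model": "TinyLlama-1.1B",        # assumed model name
    "prompt": "Hello, TensorRT-LLM!",  # assumed prompt
    "max_tokens": 16,
    "temperature": 0.0,
    # "detokenize": False,  # option referenced in the commit log (#5382)
}

# Serialize to JSON, as it would be sent in an HTTP POST body.
body = json.dumps(payload)
print(body)
```

A client would POST this JSON to the server's `/v1/completions` endpoint; the exact set of accepted fields is defined by the request models in `openai_protocol.py`.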