TensorRT-LLM/tensorrt_llm/serve
Latest commit: 0893afae3d by Yechan Kim, 2025-08-21 08:54:12 +08:00
[TRTLLM-6771][feat] Support MMMU for multimodal models (#6828)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| `scripts` | [None][fix] acceptance rate calculation fix in benchmark_serving (#6746) | 2025-08-19 17:29:36 +08:00 |
| `__init__.py` | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| `chat_utils.py` | [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) | 2025-08-21 08:54:12 +08:00 |
| `metadata_server.py` | feat: Add integration of etcd (#3738) | 2025-06-03 20:01:44 +08:00 |
| `openai_disagg_server.py` | [Disaggregated] Add retry knobs and handling (#5808) | 2025-07-19 07:27:59 +08:00 |
| `openai_protocol.py` | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00 |
| `openai_server.py` | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00 |
| `postprocess_handlers.py` | [None][fix] acceptance rate calculation fix in benchmark_serving (#6746) | 2025-08-19 17:29:36 +08:00 |
| `router.py` | feat: Dynamically remove servers in PD (#5270) | 2025-06-25 09:50:04 +08:00 |