TensorRT-LLM/tensorrt_llm/serve
Yechan Kim c6e2111f4e
feat: enhance trtllm serve multimodal (#3757)
* feat: enhance trtllm serve multimodal

1. Made load_image and load_video asynchronous.
2. Added image_encoded input support for compatibility with genai-perf (a request sketch follows the commit log below).
3. Added text-only support on multimodal models (currently Qwen2-VL & Qwen2.5-VL).

* add test

* fix bandit

* trimming utils

* trimming for test

* genai-perf command fix

* command fix

* refactor chat_utils

* stress test genai-perf command

---------

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
2025-05-15 16:16:31 -07:00
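
The second and third bullets above concern the OpenAI-compatible chat-completions endpoint that trtllm-serve exposes (see openai_server.py and openai_protocol.py in the listing below). The following is a minimal client-side sketch, not code from the PR: it sends one request carrying a base64-encoded image, which is the payload shape genai-perf generates, and one text-only request to the same multimodal model. The server address and model name are assumptions.

# Minimal sketch, not code from the PR: the endpoint, port, and model name below are
# assumptions; the payload shapes follow the standard OpenAI chat-completions format.
import base64

import requests

BASE_URL = "http://localhost:8000/v1/chat/completions"  # assumed trtllm-serve address
MODEL = "Qwen/Qwen2.5-VL-7B-Instruct"                    # assumed served model name

# Base64-encoded image content -- the kind of input genai-perf generates.
with open("example.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

multimodal_request = {
    "model": MODEL,
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    "max_tokens": 64,
}

# Text-only request to the same multimodal model (bullet 3 above).
text_only_request = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Hello, what can you do?"}],
    "max_tokens": 64,
}

for payload in (multimodal_request, text_only_request):
    resp = requests.post(BASE_URL, json=payload, timeout=120)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Both payloads go to the same endpoint; the third bullet in the commit description is what allows the text-only request to succeed against a multimodal model.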
File                      Last commit                                                      Last updated
scripts                   bench: TRTLLM-4936 Port benchmark_serving.py (#4011)             2025-05-07 09:45:14 +08:00
__init__.py               Update TensorRT-LLM (#2820)                                      2025-02-25 21:21:49 +08:00
chat_utils.py             feat: enhance trtllm serve multimodal (#3757)                    2025-05-15 16:16:31 -07:00
openai_disagg_server.py   feat: add kv cache aware router (#3831)                          2025-05-12 07:23:57 -04:00
openai_protocol.py        feat: Support the Structural Tag in guided decoding (#4066)      2025-05-12 17:24:50 +08:00
openai_server.py          feat: enhance trtllm serve multimodal (#3757)                    2025-05-15 16:16:31 -07:00
postprocess_handlers.py   feat: add deepseek-r1 reasoning parser to trtllm-serve (#3354)   2025-05-06 08:13:04 +08:00
router.py                 feat: add kv cache aware router (#3831)                          2025-05-12 07:23:57 -04:00