Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-02-16 15:55:08 +08:00)
Latest commit:

* Why? Prior to this commit, only a single multimodal input was supported for E/P/D disaggregated serving.
* What? This commit does a minor refactor of the multimodal embedding handles that cross process boundaries so that multiple multimodal inputs are supported. Existing unit tests are updated accordingly. The `RequestOutput` has its `mm_embedding_handle` replaced in favor of `disaggregated_params`, addressing a previous TODO.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
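For context, a minimal consumer-side sketch of what this refactor implies. Only `RequestOutput`, `mm_embedding_handle`, and `disaggregated_params` are named by the commit message above; the `multimodal_embedding_handles` field and the helper function are assumptions for illustration, not the actual TensorRT-LLM API.

```python
# Hypothetical sketch of code that consumes a RequestOutput in disaggregated
# serving. Field names inside disaggregated_params are assumed; only
# RequestOutput, mm_embedding_handle, and disaggregated_params come from the
# commit message itself.

def collect_mm_handles(request_output):
    """Gather multimodal embedding handles to forward across the process boundary."""
    # Before this commit: a single handle hung directly off the output,
    # which limited disaggregated serving to one multimodal input.
    # handle = request_output.mm_embedding_handle

    # After this commit: the handles travel with the rest of the
    # disaggregated-serving metadata, so several multimodal inputs can
    # cross the E/P/D process boundary for one request.
    params = request_output.disaggregated_params
    handles = getattr(params, "multimodal_embedding_handles", None)  # assumed field name
    return list(handles) if handles else []
```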
Directory contents:

- scripts
- tool_parser
- __init__.py
- chat_utils.py
- cluster_storage.py
- disagg_auto_scaling.py
- harmony_adapter.py
- metadata_server.py
- openai_client.py
- openai_disagg_server.py
- openai_disagg_service.py
- openai_protocol.py
- openai_server.py
- openai_service.py
- perf_metrics.py
- postprocess_handlers.py
- responses_utils.py
- router.py