TensorRT-LLM/tests/unittest/_torch
William Zhang ca9537e17c
[TRTLLM-10858][feat] Multi-image support for EPD disagg (#11264)
* Why?

Prior to this commit, we only supported a single multimodal input for
E/P/D disaggregated serving.

* What?

This commit does a minor refactor of the multimodal embedding handles
that cross process boundaries, so that multiple inputs can be carried per request.
Existing unit tests are updated accordingly.

The `RequestOutput` has its `mm_embedding_handle` replaced in favor of
`disaggregated_params`, addressing a previous TODO.
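The refactor described above can be sketched as follows. This is a hypothetical illustration of the idea, not the actual TensorRT-LLM API: the class fields and the `collect_handles` helper are assumptions made for the sketch. The point is that a list of embedding handles travels inside `disaggregated_params` rather than as a single `mm_embedding_handle` on `RequestOutput`, which is what enables multi-image requests across the E/P/D boundary.

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

# Hypothetical sketch: instead of a single `mm_embedding_handle` on
# RequestOutput, a list of handles is carried inside
# `disaggregated_params`, one per multimodal (e.g. image) input.

@dataclass
class DisaggregatedParams:
    # One handle per multimodal embedding crossing the process boundary.
    multimodal_embedding_handles: List[Any] = field(default_factory=list)

@dataclass
class RequestOutput:
    request_id: int
    disaggregated_params: Optional[DisaggregatedParams] = None

def collect_handles(output: RequestOutput) -> List[Any]:
    """Gather all embedding handles carried by a request output."""
    if output.disaggregated_params is None:
        return []
    return list(output.disaggregated_params.multimodal_embedding_handles)

out = RequestOutput(
    request_id=0,
    disaggregated_params=DisaggregatedParams(
        multimodal_embedding_handles=["img0_handle", "img1_handle"],
    ),
)
print(collect_handles(out))  # one handle per image
```

Folding the handles into `disaggregated_params` keeps all cross-process state in one place, which is presumably why the commit notes this addresses a previous TODO.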

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2026-02-11 20:50:00 -08:00
attention [None][chore] Move test_trtllm_flashinfer_symbol_collision.py to tests/unittest/_torch (#11168) 2026-02-09 13:57:57 +08:00
auto_deploy [#11032][feat] MLA revisited and GLM 4.7 Flash support (#11324) 2026-02-09 23:26:51 -05:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
distributed [TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312) 2026-01-01 13:42:53 -05:00
executor [None][chore] Introducing an abstract WaitingQueue interface to decouple the request scheduling logic from specific queue implementations (#11330) 2026-02-12 09:18:24 +08:00
flashinfer [None][chore] Move test_trtllm_flashinfer_symbol_collision.py to tests/unittest/_torch (#11168) 2026-02-09 13:57:57 +08:00
misc [TRTLLM-10308][feat] AutoTuner Cache: reorganize cache file for distributed tuning (#10956) 2026-01-27 16:39:40 +08:00
modeling [https://nvbugs/5761391][fix] Include triton-kernels as a packaged dependency (#10471) 2026-01-28 19:56:32 -08:00
models/checkpoints/hf [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (#9583) 2025-12-05 16:07:20 +01:00
modules [TRTLLM-9111][feat] provide the uniform test framework to test all MoE backends (#11128) 2026-02-04 15:57:56 +08:00
multi_gpu [https://nvbugs/5800646][fix] Fix hang issue by avoid exposing UB buf… (#10842) 2026-02-09 23:53:40 +08:00
multi_gpu_modeling [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) 2025-11-20 12:43:13 -05:00
multimodal [TRTLLM-10858][feat] Multi-image support for EPD disagg (#11264) 2026-02-11 20:50:00 -08:00
ray_orchestrator [TRTLLM-9771][feat] Make update_weights compatible with CUDA Graph (#11267) 2026-02-10 01:12:49 -05:00
sampler [https://nvbugs/5769815][fix] Fix offset calculation in _are_stop_words when using speculative decoding (#10854) 2026-02-09 23:53:40 +08:00
speculative [TRTLLM-10321][feat] Support different KV cache layout for one-model spec dec (#10502) 2026-02-10 05:16:02 +08:00
thop [TRTLLM-9457][feat] Add cute dsl fp8 gemm for Blackwell (#10130) 2026-02-06 09:49:30 +08:00
helpers.py [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) 2025-12-04 08:03:33 +02:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00
test_model_config.py [TRTLLM-10171][fix] Correct attention handling in ModelConfig and KVCacheManager (#10330) 2026-01-04 06:07:30 -05:00