TensorRT-LLM/tests/unittest/_torch
William Zhang bc2487bc2c [https://nvbugs/5826962][fix] Fix PD disaggregation for VLMs that use mrope (#10865)
* Why?

Commit a6a8898 enabled EPD disaggregation for VLMs that use mrope (e.g.
Qwen). However, this broke PD disaggregation for these same models.

* What?

This commit fixes PD disaggregation for these models and adds a unit test that guards against future regressions.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-02-02 16:26:46 +08:00
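For orientation only, below is a minimal, self-contained sketch of the kind of regression test the commit message describes: it checks that mrope metadata attached to a request survives the prefill-to-decode handoff in a disaggregated setup. All names here (`DisaggPayload`, `pack_for_decode`, `unpack_on_decode`) are hypothetical stand-ins introduced for illustration, not TensorRT-LLM APIs.

```python
# Hypothetical sketch -- DisaggPayload, pack_for_decode and unpack_on_decode
# are illustrative stand-ins, NOT TensorRT-LLM APIs.
from dataclasses import dataclass, asdict
from typing import List, Optional


@dataclass
class DisaggPayload:
    """Stand-in for request metadata handed from the prefill to the decode worker."""
    token_ids: List[int]
    mrope_position_deltas: Optional[List[int]] = None


def pack_for_decode(payload: DisaggPayload) -> dict:
    # The bug class being guarded against: mrope fields silently dropped
    # when the prefill side serializes the request for the decode side.
    return asdict(payload)


def unpack_on_decode(blob: dict) -> DisaggPayload:
    return DisaggPayload(**blob)


def test_mrope_metadata_survives_pd_handoff():
    payload = DisaggPayload(token_ids=[1, 2, 3], mrope_position_deltas=[0, 0, 4])
    restored = unpack_on_decode(pack_for_decode(payload))
    assert restored.mrope_position_deltas == payload.mrope_position_deltas
```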
attention [TRTLLM-9766][feat] Integration of the KVCacheManager V2 to TRTLLM Runtime (#10659) 2026-02-02 14:29:02 +08:00
auto_deploy [#8242][feat] Add int4 GPTQ support for AutoDeploy (#8248) 2026-01-30 23:07:24 -08:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
distributed [TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312) 2026-01-01 13:42:53 -05:00
executor [TRTLLM-10666][chore] Refactor request fetching logic for better separation of concerns (#10988) 2026-02-02 10:36:08 +08:00
misc [TRTLLM-10308][feat] AutoTuner Cache: reorganize cache file for distributed tuning (#10956) 2026-01-27 16:39:40 +08:00
modeling [https://nvbugs/5761391][fix] Include triton-kernels as a packaged dependency (#10471) 2026-01-28 19:56:32 -08:00
models/checkpoints/hf [TRTLLM-7136][feat] Update load_weights method to include mapping parameter in checkpoint loaders (#9583) 2025-12-05 16:07:20 +01:00
modules [https://nvbugs/5761391][fix] Include triton-kernels as a packaged dependency (#10471) 2026-01-28 19:56:32 -08:00
multi_gpu [TRTLLM-10048][feat] Fuse the AllGather for expert statistics required by the EPLB. (#10885) 2026-01-26 17:59:03 +08:00
multi_gpu_modeling [https://nvbugs/5515753][ci] Add NCCL_DEBUG=INFO flag to collect more info with CI failure. (#8440) 2025-11-20 12:43:13 -05:00
multimodal [https://nvbugs/5826962][fix] Fix PD disaggregation for VLMs that use mrope (#10865) 2026-02-02 16:26:46 +08:00
ray_orchestrator [TRTLLM-9771][feat] Allow overriding quantization configs (#11062) 2026-01-31 10:48:51 -05:00
sampler [TRTLLM-10312][perf] Improve performance of _write_finish_reasons in TorchSampler (#10459) 2026-01-29 11:06:09 -05:00
speculative [TRTC-122][feat] Eagle3 Specdec UX improvements (#10124) 2026-01-22 07:24:11 -08:00
thop [TRTLLM-10398][feat] Enable TRTLLM moe backend for Nemotron Super (#10791) 2026-01-31 13:48:25 +08:00
helpers.py [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) 2025-12-04 08:03:33 +02:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00
test_model_config.py [TRTLLM-10171][fix] Correct attention handling in ModelConfig and KVCacheManager (#10330) 2026-01-04 06:07:30 -05:00