TensorRT-LLM/tests/unittest/_torch/multi_gpu
Latest commit: e405468230 by Bo Li, 2026-01-26 17:59:03 +08:00
[TRTLLM-10048][feat] Fuse the AllGather for expert statistics required by the EPLB. (#10885)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
| File | Last commit | Date |
| --- | --- | --- |
| NIAH_simple_data.jsonl | [None][chore] Waive star attention unittests (#10439) | 2026-01-16 10:12:32 +08:00 |
| test_allreduce.py | [None][refactor] Unify the usage of MPIDist and TorchDist. (#10380) | 2026-01-14 14:05:47 +08:00 |
| test_alltoall.py | [TRTLLM-5966][feat] Helix: add alltoall op (#6815) | 2025-09-25 07:18:29 -07:00 |
| test_ar_residual_norm.py | [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838) | 2025-12-04 13:32:11 +08:00 |
| test_embedding.py | [ci] small multigpu speedups (#5643) | 2025-07-03 08:06:10 -04:00 |
| test_linear.py | [None][fix] default disable gemm+allreduce fusion (#10656) | 2026-01-20 12:31:17 +08:00 |
| test_lowprecision_allreduce.py | [None][ci] add DGX_H100-2_GPUs-PyTorch-Others-1 pipeline (#7629) | 2025-09-09 11:06:32 -04:00 |
| test_mnnvl_allreduce.py | [https://nvbugs/5729697][fix] MNNVL Allreduce: use CUDA runtime instead of Macro to get SM version. (#10062) | 2025-12-23 16:07:07 +08:00 |
| test_mnnvl_memory.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00 |
| test_moe_a2a.py | [TRTLLM-10048][feat] Fuse the AllGather for expert statistics required by the EPLB. (#10885) | 2026-01-26 17:59:03 +08:00 |
| test_user_buffers.py | [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) | 2026-01-05 15:44:37 +08:00 |