TensorRT-LLM/tests/unittest/_torch/multi_gpu
Latest commit: d272f1a9bc by Yukun He — [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) — 2026-01-05 15:44:37 +08:00
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
| File | Last commit | Date |
|------|-------------|------|
| test_allreduce.py | [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) | 2026-01-05 15:44:37 +08:00 |
| test_alltoall.py | [TRTLLM-5966][feat] Helix: add alltoall op (#6815) | 2025-09-25 07:18:29 -07:00 |
| test_ar_residual_norm.py | [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838) | 2025-12-04 13:32:11 +08:00 |
| test_embedding.py | [ci] small multigpu speedups (#5643) | 2025-07-03 08:06:10 -04:00 |
| test_linear.py | [https://nvbugs/5501820][fix] Add requirements for numba-cuda version to WAR mem corruption (#7992) | 2025-10-10 10:18:27 +08:00 |
| test_lowprecision_allreduce.py | [None][ci] add DGX_H100-2_GPUs-PyTorch-Others-1 pipeline (#7629) | 2025-09-09 11:06:32 -04:00 |
| test_mnnvl_allreduce.py | [https://nvbugs/5729697][fix] MNNVL Allreduce: use CUDA runtime instead of Macro to get SM version. (#10062) | 2025-12-23 16:07:07 +08:00 |
| test_mnnvl_memory.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00 |
| test_moe_a2a.py | [TRTLLM-10126][feat] Increase topk upper limit to 22 for NVLinkOneSid… (#10229) | 2025-12-27 22:48:10 +08:00 |
| test_star_attention_input.jsonl | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_star_attention.py | [TRTLLM-5966][feat] Helix: extend mapping to support different CP types (#6816) | 2025-08-14 09:00:02 -07:00 |
| test_user_buffers.py | [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) | 2026-01-05 15:44:37 +08:00 |