TensorRT-LLM/tests/unittest/_torch/multi_gpu
Yukun He [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870)
We have encountered performance regressions on A100/H100 caused by using the one-shot kernel instead of NCCL, so it is beneficial to have solid benchmarking of the allreduce op and to analyze the data collected from it.

Implemented a new AllreduceOp heuristic:
- Added a linear-programming-based heuristic implementation.
- Added a LUT-based heuristic implementation and the corresponding code-generation script.
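To illustrate the LUT-based approach, here is a minimal conceptual sketch of what a lookup-table heuristic for allreduce strategy selection can look like. All names (`Strategy`, `pick_strategy`), the size buckets, and the table entries are illustrative assumptions, not the actual TensorRT-LLM implementation or its tuned values.

```python
# Hypothetical sketch of a LUT-based allreduce strategy heuristic.
# The table would be produced offline by a benchmark/code-generation
# script; thresholds and entries here are made up for illustration.
from enum import Enum


class Strategy(Enum):
    ONESHOT = "oneshot"
    TWOSHOT = "twoshot"
    NCCL = "nccl"


# LUT keyed by (world_size, message-size bucket) -> fastest strategy.
_LUT = {
    (2, "small"): Strategy.ONESHOT,
    (2, "large"): Strategy.NCCL,
    (8, "small"): Strategy.ONESHOT,
    (8, "medium"): Strategy.TWOSHOT,
    (8, "large"): Strategy.NCCL,
}


def _bucket(num_bytes: int) -> str:
    # Illustrative message-size buckets.
    if num_bytes <= 64 * 1024:
        return "small"
    if num_bytes <= 8 * 1024 * 1024:
        return "medium"
    return "large"


def pick_strategy(world_size: int, num_bytes: int) -> Strategy:
    # Fall back to NCCL for configurations the LUT does not cover.
    return _LUT.get((world_size, _bucket(num_bytes)), Strategy.NCCL)
```

The key design point is that the expensive tuning happens offline, so the runtime decision reduces to a constant-time table lookup keyed by world size and message size.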

Minor AllreduceOp fixes:
- Fixed an issue in AllreduceOp where the strategy could not be overridden when ONESHOT or TWOSHOT is set.
- Fixed a minor TWOSHOT kernel performance issue.
- Cleaned up the dispatching code in AllReduceOp.
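The override fix above can be sketched as follows. This is a hypothetical illustration of the intended behavior, not the actual AllReduceOp code: when the user forces ONESHOT or TWOSHOT, that choice must take precedence over the heuristic; only an AUTO-style setting defers to it.

```python
# Hypothetical sketch of the strategy-override behavior: a forced
# ONESHOT/TWOSHOT must not be silently replaced by the heuristic.
# The names `forced` and `heuristic_choice` are illustrative.
def resolve_strategy(forced: str, heuristic_choice: str) -> str:
    # A user-specified ONESHOT/TWOSHOT override always wins.
    if forced in ("ONESHOT", "TWOSHOT"):
        return forced
    # Otherwise (e.g. AUTO), defer to the heuristic's choice.
    return heuristic_choice
```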

This PR will fix the perf gaps reported in:
https://nvbugspro.nvidia.com/bug/5517023

For DeepSeek-R1, this shows a performance gain of about 3-4% at concurrency levels of 256 and 512.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-11-04 16:42:31 +08:00
test_allreduce.py [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870) 2025-11-04 16:42:31 +08:00
test_alltoall.py [TRTLLM-5966][feat] Helix: add alltoall op (#6815) 2025-09-25 07:18:29 -07:00
test_ar_residual_norm.py refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) 2025-04-11 15:34:20 -07:00
test_embedding.py [ci] small multigpu speedups (#5643) 2025-07-03 08:06:10 -04:00
test_linear.py [https://nvbugs/5501820][fix] Add requirements for numba-cuda version to WAR mem corruption (#7992) 2025-10-10 10:18:27 +08:00
test_lowprecision_allreduce.py [None][ci] add DGX_H100-2_GPUs-PyTorch-Others-1 pipeline (#7629) 2025-09-09 11:06:32 -04:00
test_mnnvl_allreduce.py [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500) 2025-08-07 17:28:14 -07:00
test_mnnvl_memory.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
test_moe_a2a.py [TRTLLM-7318][feat] MnnvlThroughput AlltoAll implementation. (#7499) 2025-10-27 13:23:06 -04:00
test_star_attention_input.jsonl Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
test_star_attention.py [TRTLLM-5966][feat] Helix: extend mapping to support different CP types (#6816) 2025-08-14 09:00:02 -07:00
test_user_buffers.py [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500) 2025-08-07 17:28:14 -07:00