Directory: TensorRT-LLM/tests/unittest/_torch/multi_gpu
Latest commit: 69574ad730 by Matthias Jouanneaux, 2025-08-14 09:00:02 -07:00
  [TRTLLM-5966][feat] Helix: extend mapping to support different CP types (#6816)
  Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com>
File                            | Last commit                                                                              | Date
--------------------------------|------------------------------------------------------------------------------------------|--------------------------
test_allreduce.py               | [feat][test] reuse MPI pool executor across tests (#5566)                                | 2025-06-29 17:23:12 +03:00
test_ar_residual_norm.py        | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)        | 2025-04-11 15:34:20 -07:00
test_embedding.py               | [ci] small multigpu speedups (#5643)                                                     | 2025-07-03 08:06:10 -04:00
test_linear.py                  | [ci] small multigpu speedups (#5643)                                                     | 2025-07-03 08:06:10 -04:00
test_lowprecision_allreduce.py  | [None][fix] Migrate to new cuda binding package name (#6700)                             | 2025-08-07 16:29:55 -04:00
test_mnnvl_allreduce.py         | [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500)                       | 2025-08-07 17:28:14 -07:00
test_star_attention_input.jsonl | Update TensorRT-LLM (#2936)                                                              | 2025-03-18 21:25:19 +08:00
test_star_attention.py          | [TRTLLM-5966][feat] Helix: extend mapping to support different CP types (#6816)          | 2025-08-14 09:00:02 -07:00
test_user_buffers.py            | [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500)                       | 2025-08-07 17:28:14 -07:00