TensorRT-LLM/tests/unittest/_torch/multi_gpu
Latest commit: b0d287c9b7 [TRTLLM-4647][fix] Fix the no fusion allreduce hanging (#4594), Shiyu Li <shili@nvidia.com>, 2025-06-04 18:26:13 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| test_allreduce.py | Fallback to NCCL for various patterns when input size is large. (#4009) | 2025-05-01 15:17:16 -07:00 |
| test_ar_residual_norm.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| test_embedding.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| test_linear.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| test_lowprecision_allreduce.py | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| test_mnnvl_allreduce.py | [TRTLLM-4647][fix] Fix the no fusion allreduce hanging (#4594) | 2025-06-04 18:26:13 -07:00 |
| test_star_attention_input.jsonl | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_star_attention.py | chore [BREAKING CHANGE]: Flatten PyTorchConfig knobs into TorchLlmArgs (#4603) | 2025-05-28 18:43:04 +08:00 |
| test_user_buffers.py | fix: Remove ParallelConfig. (#3678) | 2025-04-21 14:14:08 +08:00 |
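The tests above exercise multi-GPU collectives (allreduce variants, embedding/linear sharding, user buffers) by launching one process per GPU and checking results on every rank. Below is a minimal, self-contained sketch of that general pattern, not the repository's actual test code: it uses plain `torch.distributed` with the NCCL backend, and the world size, port, and tensor shape are illustrative assumptions.

```python
# Minimal sketch of a multi-GPU allreduce unit test: spawn one process per
# GPU, run a SUM-allreduce, and verify the result on every rank.
# This is an illustrative example, not TensorRT-LLM's test harness.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def _worker(rank: int, world_size: int) -> None:
    # Rendezvous settings; the address and port here are arbitrary choices.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    # NCCL is the standard backend for CUDA multi-GPU collectives.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Each rank contributes a tensor filled with its own rank id.
    x = torch.full((1024,), float(rank), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)

    # After SUM-allreduce, every rank holds 0 + 1 + ... + (world_size - 1).
    expected = float(sum(range(world_size)))
    assert torch.allclose(x, torch.full_like(x, expected))

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = min(2, torch.cuda.device_count())
    if world_size >= 2:
        mp.spawn(_worker, args=(world_size,), nprocs=world_size)
```

A hang in this kind of test (as in #4594) typically means one rank skipped or reordered a collective, so all other ranks block waiting for it; asserting on every rank helps surface such mismatches.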