TensorRT-LLM/tests/unittest/_torch/multi_gpu
Latest commit: 94dc97ab10 by Omer Ullman Argov, [feat][test] reuse MPI pool executor across tests (#5566), 2025-06-29 17:23:12 +03:00
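The pattern named in the latest commit is worth a quick illustration: instead of spawning a fresh MPI worker pool inside every test, a session-scoped pytest fixture can create one mpi4py.futures.MPIPoolExecutor and hand the same pool to each test. The following is a minimal sketch of that idea, not TensorRT-LLM's actual conftest code; the fixture name, worker count, and test body are hypothetical.

```python
# Minimal sketch of reusing one MPI pool executor across tests (the idea
# behind #5566). Fixture name and worker count are hypothetical.
import pytest
from mpi4py.futures import MPIPoolExecutor


def _rank_payload(rank: int) -> int:
    # Module-level function so it can be pickled and shipped to MPI workers.
    return rank * 2


@pytest.fixture(scope="session")
def mpi_pool_executor():
    # Spawn the worker pool once per session; tearing down and re-spawning
    # MPI workers for every test can dominate runtime in multi-GPU suites.
    with MPIPoolExecutor(max_workers=2) as executor:
        yield executor


def test_uses_shared_pool(mpi_pool_executor):
    # Each test submits its distributed body to the shared pool instead of
    # creating (and destroying) its own executor.
    results = list(mpi_pool_executor.map(_rank_payload, range(2)))
    assert results == [0, 2]
```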
File                             Last commit                                                                         Date
test_allreduce.py                [feat][test] reuse MPI pool executor across tests (#5566)                           2025-06-29 17:23:12 +03:00
test_ar_residual_norm.py         refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)   2025-04-11 15:34:20 -07:00
test_embedding.py                refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)   2025-04-11 15:34:20 -07:00
test_linear.py                   refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)   2025-04-11 15:34:20 -07:00
test_lowprecision_allreduce.py   [feat][test] reuse MPI pool executor across tests (#5566)                           2025-06-29 17:23:12 +03:00
test_mnnvl_allreduce.py          Use backend to replace macro to control enablement of MNNVL all reduce (#4635)      2025-06-12 11:22:49 +08:00
test_star_attention_input.jsonl  Update TensorRT-LLM (#2936)                                                         2025-03-18 21:25:19 +08:00
test_star_attention.py           [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312)          2025-06-20 03:01:10 +08:00
test_user_buffers.py             [feat][test] reuse MPI pool executor across tests (#5566)                           2025-06-29 17:23:12 +03:00
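Several files in this directory (test_allreduce.py, test_lowprecision_allreduce.py, test_mnnvl_allreduce.py) exercise allreduce variants across ranks. The general shape of such a check, sketched here with plain mpi4py rather than TensorRT-LLM's CUDA kernels, is that every rank contributes a value and all ranks must observe the same reduced result:

```python
# A generic allreduce correctness check, sketched with mpi4py only; the real
# tests exercise TensorRT-LLM's allreduce kernels rather than MPI.SUM.
from mpi4py import MPI


def test_allreduce_sum_matches_expected():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    world = comm.Get_size()

    # Every rank contributes its own rank id; the reduction must equal the
    # closed-form sum 0 + 1 + ... + (world - 1) on every rank.
    total = comm.allreduce(rank, op=MPI.SUM)
    assert total == world * (world - 1) // 2
```

A test like this only does something interesting under an MPI launcher (for example, mpirun -n 2 python -m pytest), so that MPI.COMM_WORLD actually spans multiple ranks.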