TensorRT-LLM/tests/unittest/_torch/multi_gpu

Latest commit: 9cc5922a0b by Yukun He, 2025-05-01 07:56:36 +08:00
Clean up allreduce op in Deepseek V3 model. (#3829)
* Replace the deepseek_allreduce op with the new unified allreduce op and the moe_allreduce op.
* Minor revision of moe_allreduce op argument names.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
File                            Last commit                                                                     Date
test_allreduce.py               Clean up allreduce op in Deepseek V3 model. (#3829)                             2025-05-01 07:56:36 +08:00
test_ar_residual_norm.py        refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)  2025-04-11 15:34:20 -07:00
test_deepseek_allreduce.py      Clean up allreduce op in Deepseek V3 model. (#3829)                             2025-05-01 07:56:36 +08:00
test_embedding.py               refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)  2025-04-11 15:34:20 -07:00
test_linear.py                  refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)  2025-04-11 15:34:20 -07:00
test_star_attention_input.jsonl Update TensorRT-LLM (#2936)                                                     2025-03-18 21:25:19 +08:00
test_star_attention.py          feat: Support cos_sin_cache in all cases. (#3517)                               2025-04-16 13:48:44 +08:00
test_user_buffers.py            fix: Remove ParallelConfig. (#3678)                                             2025-04-21 14:14:08 +08:00