TensorRT-LLM/tensorrt_llm/_torch/distributed
Latest commit: f8dd494536 by Matthias Jouanneaux (2025-11-28 07:24:55 -08:00)
[None][perf] Helix: improve all-to-all perf for large CP size (#9494)
Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com>
Signed-off-by: Zheyu Fu <zheyuf@NVIDIA.com>
Co-authored-by: Zheyu Fu <zheyuf@nvidia.com>
Name             Last commit message                                                    Last commit date
__init__.py      [None][chore] Reduce nested nvtx ranges. (#9347)                       2025-11-25 09:58:41 +08:00
communicator.py  [None][chore] Reduce nested nvtx ranges. (#9347)                       2025-11-25 09:58:41 +08:00
moe_alltoall.py  [None][feat] Integrate MnnvlThroughput into TRTLLM MoE. (#8728)        2025-11-04 21:36:29 +08:00
ops.py           [None][perf] Helix: improve all-to-all perf for large CP size (#9494)  2025-11-28 07:24:55 -08:00
pg_utils.py      [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520)        2025-10-04 08:12:24 +08:00
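
For readers unfamiliar with the collective referenced by the Helix and MoE commits above, the following is a minimal, generic sketch of an all-to-all exchange using only the public torch.distributed API. It is not TensorRT-LLM code and does not reflect how moe_alltoall.py or ops.py implement the operation; the script name, the gloo backend choice, and the tensor contents are illustrative assumptions.

```python
# Generic all-to-all demo (hypothetical, not TensorRT-LLM's implementation).
# Each rank sends one equal-sized slice of its input tensor to every rank
# and receives one slice from every rank; this is the communication pattern
# behind MoE token routing and context-parallel (CP) exchanges.
# Run with: torchrun --nproc_per_node=2 all_to_all_demo.py
import torch
import torch.distributed as dist


def main() -> None:
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU nodes
    rank = dist.get_rank()
    world = dist.get_world_size()

    # world * 4 elements: 4 elements destined for each peer rank.
    send = torch.arange(world * 4, dtype=torch.float32) + rank * 100
    recv = torch.empty_like(send)

    # With no split sizes given, all_to_all_single divides `send` into
    # `world` equal chunks along dim 0, scatters chunk i to rank i, and
    # gathers the incoming chunks into `recv` in rank order.
    dist.all_to_all_single(recv, send)

    print(f"rank {rank}: sent {send.tolist()} received {recv.tolist()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

At large CP sizes the number of exchanged slices grows with the world size, so the cost of each pairwise transfer and the scheduling of the exchange dominate, which is presumably the regime the perf commit above targets.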