TensorRT-LLM/tensorrt_llm/_torch/distributed
Balaram Buddharaju ccdfa43a6e
[https://nvbugs/5791900][fix] Fix HelixCpMnnvlMemory init with PP (#10533)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2026-01-13 15:48:42 -05:00
__init__.py [TRTLLM-9493][feat] Custom AllToAll for helix parallelism (#9986) 2025-12-23 18:14:30 -08:00
communicator.py [https://nvbugs/5791900][fix] Fix HelixCpMnnvlMemory init with PP (#10533) 2026-01-13 15:48:42 -05:00
moe_alltoall.py [TRTLLM-9391][chore] Automatically estimate required workspace. (#9535) 2025-12-03 12:49:38 +08:00
ops.py [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) 2026-01-05 15:44:37 +08:00
pg_utils.py [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) 2025-10-04 08:12:24 +08:00
symm_mem_allreduce.py [#8921][feat] Added symmetric memory AllReduce strategy (#8919) 2025-12-08 13:12:56 -08:00