TensorRT-LLM/tensorrt_llm/_torch/distributed
File                     Latest commit                                                                                                          Date
__init__.py              [TRTLLM-9493][feat] Custom AllToAll for helix parallelism (#9986)                                                      2025-12-23 18:14:30 -08:00
communicator.py          [None][refactor] Unify the usage of MPIDist and TorchDist. (#10380)                                                    2026-01-14 14:05:47 +08:00
moe_alltoall.py          [TRTLLM-10296][fix] Fix the potential misaligned access due to vectorized ld/st instructions in NVLinkOneSided A2A. (#10539)  2026-01-20 11:08:04 +08:00
ops.py                   [https://nvbugs/5782112][fix] Fix hanging issue for MNNVL Allreduce under PP (#10633)                                  2026-01-16 13:03:36 +08:00
pg_utils.py              [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520)                                                        2025-10-04 08:12:24 +08:00
symm_mem_allreduce.py    [#8921][feat] Added symetric memory AllReduce strategy (#8919)                                                         2025-12-08 13:12:56 -08:00