TensorRT-LLMs/tensorrt_llm/_torch/distributed
Latest commit: 23cf72b0f8 by Eran Geva — [#8921][feat] Added symetric memory AllReduce strategy (#8919) — 2025-12-08 13:12:56 -08:00
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [None][chore] Reduce nested nvtx ranges. (#9347) | 2025-11-25 09:58:41 +08:00 |
| communicator.py | [TRTLLM-7073][feat] Support torch compile for PP for Llama and DeepSeekV3 (#7838) | 2025-12-04 13:32:11 +08:00 |
| moe_alltoall.py | [TRTLLM-9391][chore] Automatically estimate required workspace. (#9535) | 2025-12-03 12:49:38 +08:00 |
| ops.py | [#8921][feat] Added symetric memory AllReduce strategy (#8919) | 2025-12-08 13:12:56 -08:00 |
| pg_utils.py | [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) | 2025-10-04 08:12:24 +08:00 |
| symm_mem_allreduce.py | [#8921][feat] Added symetric memory AllReduce strategy (#8919) | 2025-12-08 13:12:56 -08:00 |