TensorRT-LLM/tensorrt_llm/_torch/distributed
Chang Liu 26901e4aa0
[TRTLLM-10612][feat] Initial support of AIGV models in TRTLLM (#11462)
Signed-off-by: Chang Liu (Enterprise Products) <liuc@nvidia.com>
Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Signed-off-by: Zhenhua Wang <zhenhuaw@nvidia.com>
Co-authored-by: Freddy Qi <junq@nvidia.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Zhenhua Wang <zhenhuaw@nvidia.com>
2026-02-14 06:11:11 +08:00
__init__.py [TRTLLM-10612][feat] Initial support of AIGV models in TRTLLM (#11462) 2026-02-14 06:11:11 +08:00
communicator.py [TRTLLM-10264][feat] Support attention DP + Helix CP (#10477) 2026-01-29 02:57:13 -05:00
moe_alltoall.py [TRTLLM-10048][feat] Fuse the AllGather for expert statistics required by the EPLB. (#10885) 2026-01-26 17:59:03 +08:00
ops.py [TRTLLM-10612][feat] Initial support of AIGV models in TRTLLM (#11462) 2026-02-14 06:11:11 +08:00
pg_utils.py [TRTLLM-7349][feat] Adding new orchestrator type -- ray (#7520) 2025-10-04 08:12:24 +08:00
symm_mem_allreduce.py [#8921][feat] Added symmetric memory AllReduce strategy (#8919) 2025-12-08 13:12:56 -08:00