TensorRT-LLM/tensorrt_llm/_torch/distributed
Yukun He aa38e28cfa
fix: [nvbug/5241627] Fix AllReduce kernel hang issue when both tp and pp are enabled. (#3988)
* Fix AllReduce kernel hang issue when both TP and PP are enabled.
Allocate one workspace for each PP rank to avoid a potential race (see the sketch after the commit log).

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* update waive list

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

---------

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-05-05 11:33:25 +08:00
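A minimal sketch of the idea behind the fix: with a single shared workspace, AllReduce kernels launched from different pipeline stages can race on the same buffer and hang, so each PP rank gets its own buffer. All names below (get_workspace, _workspaces, pp_rank) are illustrative assumptions, not the actual API in ops.py.

```python
# Sketch only: per-PP-rank AllReduce workspaces, assuming hypothetical names.
# The real change lives in ops.py (commit aa38e28cfa, PR #3988).
import torch

_workspaces: dict[int, torch.Tensor] = {}  # one workspace per PP rank


def get_workspace(pp_rank: int, size: int, device: torch.device) -> torch.Tensor:
    """Return the AllReduce workspace owned by this PP rank.

    Sharing one workspace across PP ranks lets two in-flight AllReduce
    kernels (one per pipeline stage) touch the same buffer concurrently,
    which can deadlock the kernel; a per-rank buffer removes the overlap.
    """
    ws = _workspaces.get(pp_rank)
    if ws is None or ws.numel() < size:
        # Allocate (or grow) this rank's private buffer on first use.
        ws = torch.empty(size, dtype=torch.uint8, device=device)
        _workspaces[pp_rank] = ws
    return ws
```

Keying the cache by PP rank guarantees that AllReduce calls issued from different pipeline stages never share a buffer, while ranks within a single stage can still coordinate on their own workspace as before.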
__init__.py Clean up allreduce op in Deepseek V3 model. (#3829) 2025-05-01 07:56:36 +08:00
communicator.py chore: bump version to 0.19.0 (#3598) (#3841) 2025-04-29 16:57:22 +08:00
ops.py fix: [nvbug/5241627] Fix AllReduce kernel hang issue when both tp and pp are enabled. (#3988) 2025-05-05 11:33:25 +08:00