TensorRT-LLM/cpp/tensorrt_llm/kernels/communicationKernels

Latest commit b99c5ce8c1 by yunruis (2025-06-14 17:36:22 +08:00):
Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560)
Signed-off-by: yunruis <yunruis@nvidia.com>
Signed-off-by: Kefeng-Duan <176893526+Kefeng-Duan@users.noreply.github.com>
Co-authored-by: kduan <176893526+Kefeng-Duan@users.noreply.github.com>
File                                     Last commit                                                                      Date
allReduceFusionKernels.cu                Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560)   2025-06-14 17:36:22 +08:00
allReduceFusionKernels.h                 Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560)   2025-06-14 17:36:22 +08:00
allReduceWorkspace.cu                    chore: bump version to 0.19.0 (#3598) (#3841)                                    2025-04-29 16:57:22 +08:00
allReduceWorkspace.h                     feat: fix and improve allreduce and fusion kernels (#3064)                       2025-04-08 19:33:52 +08:00
customLowPrecisionAllReduceKernels.cu    feat: Low Precision Allreduce for PCIe based GPU (#4344)                         2025-05-20 06:53:46 +08:00
customLowPrecisionAllReduceKernels.h     feat: Low Precision Allreduce for PCIe based GPU (#4344)                         2025-05-20 06:53:46 +08:00
mnnvlTwoShotAllreduceKernels.cu          [TRTLLM-4647][fix] Fix the no fusion allreduce hanging (#4594)                   2025-06-04 18:26:13 -07:00
mnnvlTwoShotAllreduceKernels.h           Adding two-shot allreduce kernel and mnnvl multicasting buffer (#4216)           2025-05-22 03:42:36 +08:00
moeAllReduceFusionKernels.cu             Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560)    2025-06-14 17:36:22 +08:00
moeAllReduceFusionKernels.h              [TRTLLM-3927] [feat] Finalize + Allreduce + add + rmsnorm fusion (#4756)         2025-06-10 19:55:16 +08:00