TensorRT-LLM/cpp
Latest commit 6f3922f318 by kanghui0204
feat: Low Precision Allreduce for PCIe based GPU (#4344)
This PR adds a customized allreduce to TensorRT-LLM. The new allreduce communicates between PCIe-based GPUs in a low-precision quantized format, which reduces the data moved over PCIe and accelerates the allreduce (a conceptual sketch of the pattern follows below).

Signed-off-by: Hui Kang <hkang@nvidia.com>
Co-authored-by: Hui Kang <hkang@nvidia.com>
2025-05-20 06:53:46 +08:00
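The general idea behind a low-precision allreduce is to quantize each rank's tensor before it crosses the interconnect, exchange the compact buffers, and dequantize while reducing. The sketch below is a hypothetical, CPU-only illustration of that quantize-reduce-dequantize pattern using symmetric int8 with a per-buffer scale; it is not the kernel added by this PR, all names in it are made up, and the real transport (PCIe copies, CUDA kernels) is replaced by in-process vectors.

```cpp
// Conceptual sketch only: each "rank" quantizes its float tensor to int8 plus a
// scale, exchanges the small int8 payloads instead of full-precision data, and
// every rank dequantizes and sums the contributions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

struct QuantizedBuffer {
    std::vector<int8_t> data;  // low-precision payload that would go over PCIe
    float scale;               // per-buffer dequantization scale
};

// Symmetric int8 quantization: the largest |value| maps to 127.
QuantizedBuffer quantize(const std::vector<float>& x) {
    float amax = 1e-8f;  // avoid division by zero for an all-zero tensor
    for (float v : x) amax = std::max(amax, std::fabs(v));
    QuantizedBuffer q{std::vector<int8_t>(x.size()), amax / 127.0f};
    for (size_t i = 0; i < x.size(); ++i)
        q.data[i] = static_cast<int8_t>(std::lround(x[i] / q.scale));
    return q;
}

// "Allreduce": every rank receives all quantized buffers, dequantizes, and sums.
std::vector<float> lowPrecisionAllreduce(const std::vector<std::vector<float>>& perRank) {
    std::vector<QuantizedBuffer> sent;
    for (const auto& local : perRank) sent.push_back(quantize(local));  // comm payload

    std::vector<float> out(perRank[0].size(), 0.0f);
    for (const auto& q : sent)
        for (size_t i = 0; i < out.size(); ++i)
            out[i] += static_cast<float>(q.data[i]) * q.scale;  // dequantize + reduce
    return out;
}

int main() {
    // Two simulated ranks with a tiny tensor each; in practice each rank is a GPU.
    std::vector<std::vector<float>> ranks = {{0.5f, -1.25f, 2.0f, 3.75f},
                                             {1.0f, 0.25f, -2.0f, 0.5f}};
    for (float v : lowPrecisionAllreduce(ranks)) std::printf("%.4f ", v);
    std::printf("\n");
    return 0;
}
```

In this pattern the bytes moved per element shrink (for example 4x versus FP32 with int8) at the cost of some quantization error, which is why the approach targets bandwidth-limited PCIe links.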
Name | Latest commit | Date
cmake | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00
include/tensorrt_llm | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00
kernels | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00
micro_benchmarks | feat: support add internal cutlass kernels as subproject (#3658) | 2025-05-06 11:35:07 +08:00
tensorrt_llm | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00
tests | fix: [nvbugs/5287097] Align PP layer distribution between pytorch and TRT flow. (#4399) | 2025-05-19 14:25:36 -07:00
CMakeLists.txt | fix: better method to help torch find nvtx3 (#4110) | 2025-05-15 16:42:30 +08:00
conandata.yml | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00
conanfile.py | infra: add conan (#3744) | 2025-04-30 11:53:14 -07:00