Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
This PR adds a customized allreduce to TensorRT-LLM. The new allreduce targets communication between PCIe-based GPUs and applies low-precision quantization to the data exchanged during the allreduce, which reduces PCIe traffic and accelerates the operation. Signed-off-by: Hui Kang <hkang@nvidia.com> Co-authored-by: Hui Kang <hkang@nvidia.com>
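As a rough illustration of the idea behind the feature (not the PR's actual implementation), the sketch below quantizes each rank's contribution to int8 before the exchange and dequantizes it afterwards, so roughly a quarter of the bytes cross the PCIe link. The helper names (`quantize_int8`, `dequantize_int8`) and the NumPy simulation of four ranks are illustrative assumptions, not TensorRT-LLM APIs.

```python
# Conceptual sketch of low-precision allreduce: quantize before the exchange,
# dequantize before the reduction. Hypothetical helpers, not TensorRT-LLM code.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: returns (payload, scale)."""
    scale = float(np.abs(x).max()) / 127.0
    scale = scale if scale > 0.0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Four simulated ranks, each holding a 1M-element fp32 gradient shard.
rank_tensors = [np.random.randn(1 << 20).astype(np.float32) for _ in range(4)]

# Baseline: full-precision allreduce (4 bytes per element on the wire).
ref = np.sum(rank_tensors, axis=0)

# Low-precision path: each rank sends an int8 payload plus one scale
# (~1 byte per element); values are dequantized before the reduction.
low = np.sum([dequantize_int8(*quantize_int8(t)) for t in rank_tensors], axis=0)

print("max abs error vs fp32 allreduce:", float(np.abs(ref - low).max()))
print("payload size ratio (int8 / fp32): ~0.25")
```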
| Name |
|---|
| images |
| disaggregated-service.md |
| executor.md |
| expert-parallelism.md |
| gpt-attention.md |
| gpt-runtime.md |
| graph-rewriting.md |
| kv-cache-reuse.md |
| lora.md |
| lowprecision-pcie-allreduce.md |
| speculative-decoding.md |
| weight-streaming.md |