TensorRT-LLM/cpp/tensorrt_llm
Yukun He [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870)
Because we have encountered perf regressions from using a one-shot kernel instead of NCCL on A100/H100, it is beneficial to have a solid benchmark of the allreduce op and to analyze the data collected from it.

Implemented new AllreduceOp heuristic:
- Added a linear-programming-based heuristic implementation.
- Added a LUT-based heuristic implementation and the corresponding code-generation script.
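As a rough illustration of the LUT-based approach, the sketch below picks an allreduce strategy from a lookup table keyed by world size, with message-size thresholds deciding between the one-shot kernel, the two-shot kernel, and the NCCL fallback. The enum names mirror the strategies named in this PR, but the table values and the `pick_strategy` helper are illustrative assumptions, not the thresholds the generated LUT actually contains.

```python
from enum import Enum


class AllReduceStrategy(Enum):
    NCCL = 0
    ONESHOT = 1
    TWOSHOT = 2


# Hypothetical LUT: world size -> largest message size (bytes) for which each
# custom kernel tends to beat NCCL. The numbers are placeholders for
# illustration only; the real table would be generated from benchmark data.
_ONESHOT_MAX_BYTES = {2: 1 << 20, 4: 1 << 19, 8: 1 << 18}
_TWOSHOT_MAX_BYTES = {2: 1 << 22, 4: 1 << 21, 8: 1 << 20}


def pick_strategy(message_bytes: int, world_size: int) -> AllReduceStrategy:
    """Choose an allreduce strategy from the LUT; fall back to NCCL."""
    if message_bytes <= _ONESHOT_MAX_BYTES.get(world_size, 0):
        return AllReduceStrategy.ONESHOT
    if message_bytes <= _TWOSHOT_MAX_BYTES.get(world_size, 0):
        return AllReduceStrategy.TWOSHOT
    return AllReduceStrategy.NCCL
```

A table like this keeps the hot-path decision to two dictionary lookups, which is why a code-generated LUT is attractive compared to solving the heuristic model at runtime.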

AllreduceOp minor fixing:
- Fixed a minor issue in AllreduceOp where the strategy could not be overridden when ONESHOT or TWOSHOT is set.
- Fixed a minor TWOSHOT kernel perf issue.
- Cleaned up dispatching code in AllreduceOp.

This PR will fix the perf gaps reported in:
https://nvbugspro.nvidia.com/bug/5517023

For DeepSeek-R1, it shows a performance gain of about 3-4% at concurrency levels of 256 and 512.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-11-04 16:42:31 +08:00
batch_manager [TRTLLM-7731][feat] Avoid over-allocation of KV cache for transmission in disagg with CP (#8145) 2025-10-31 17:32:39 -07:00
common [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870) 2025-11-04 16:42:31 +08:00
cutlass_extensions/include/cutlass_extensions [None][feat] GPT-OSS Sm120/Sm121 Support (#7937) 2025-10-06 16:59:06 -04:00
deep_ep [TRTLLM-6589][feat] Support CUDA graph for DeepEP (#7514) 2025-10-02 10:13:24 -07:00
deep_gemm [https://nvbugs/5433581][fix] DeepGEMM installation on SBSA (#6588) 2025-08-06 16:44:21 +08:00
executor [TRTLLM-7731][feat] Avoid over-allocation of KV cache for transmission in disagg with CP (#8145) 2025-10-31 17:32:39 -07:00
executor_worker Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
flash_mla [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) 2025-10-24 13:40:41 -04:00
kernels [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870) 2025-11-04 16:42:31 +08:00
layers [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
nanobind [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692) 2025-10-31 14:38:31 -07:00
plugins [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) 2025-10-27 10:18:19 +08:00
pybind [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692) 2025-10-31 14:38:31 -07:00
runtime [None][feat] add flag for EPLB to force using GDRCopy (#8650) 2025-10-29 13:33:26 +08:00
testing fix: Improve chunking test and skip empty kernel calls (#5710) 2025-07-04 09:08:15 +02:00
thop [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870) 2025-11-04 16:42:31 +08:00
CMakeLists.txt [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) 2025-10-24 13:40:41 -04:00