TensorRT-LLM/tests/unittest/_torch
Yukun He fd4311e6a3
[TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870)
We encountered performance regressions on A100/H100 caused by using a one-shot kernel instead of NCCL, so it is beneficial to have a solid benchmark of the allreduce op and to analyze the data collected from it.

Implemented a new AllreduceOp heuristic:
- Added a linear-programming-based heuristic implementation.
- Added a LUT-based heuristic implementation and the corresponding code-generation script.
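As a minimal sketch of what a LUT-based heuristic looks like (the table contents, bucket boundaries, and function names here are illustrative assumptions, not the actual generated code): for each world size, a table of message-size thresholds maps to the preferred kernel, with NCCL as the fallback for large messages or unlisted world sizes.

```python
from enum import Enum


class AllReduceStrategy(Enum):
    NCCL = 0
    ONESHOT = 1
    TWOSHOT = 2


# Hypothetical lookup table: per world size, a list of
# (max_message_bytes, strategy) entries sorted by size.
# Real tables would be generated from benchmark data per GPU arch.
_LUT = {
    2: [(1 << 16, AllReduceStrategy.ONESHOT),
        (1 << 20, AllReduceStrategy.TWOSHOT)],
    4: [(1 << 15, AllReduceStrategy.ONESHOT),
        (1 << 19, AllReduceStrategy.TWOSHOT)],
    8: [(1 << 14, AllReduceStrategy.ONESHOT),
        (1 << 18, AllReduceStrategy.TWOSHOT)],
}


def select_strategy(world_size: int, msg_bytes: int) -> AllReduceStrategy:
    """Pick the fastest strategy from the table; fall back to NCCL."""
    for max_bytes, strategy in _LUT.get(world_size, []):
        if msg_bytes <= max_bytes:
            return strategy
    return AllReduceStrategy.NCCL
```

Small messages on small world sizes favor the latency-optimized one-shot kernel, mid-sized messages the two-shot kernel, and everything else falls back to NCCL, which matches the regression described above.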

Minor AllreduceOp fixes:
- Fixed an issue in AllreduceOp where the strategy could not be overridden when ONESHOT or TWOSHOT was set.
- Fixed a minor TWOSHOT kernel performance issue.
- Cleaned up the dispatching code in AllReduceOp.
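The override fix in the list above can be sketched as follows (names and the AUTO sentinel are assumptions for illustration, not the actual AllReduceOp API): an explicitly configured ONESHOT or TWOSHOT must be honored, and the heuristic should run only when no strategy is forced.

```python
from enum import Enum, auto


class AllReduceStrategy(Enum):
    AUTO = auto()     # let the heuristic decide
    NCCL = auto()
    ONESHOT = auto()
    TWOSHOT = auto()


def resolve_strategy(configured: AllReduceStrategy,
                     heuristic_choice: AllReduceStrategy) -> AllReduceStrategy:
    # Before the fix, an explicitly configured ONESHOT/TWOSHOT could be
    # silently replaced by the heuristic's pick; now the heuristic result
    # is used only when the strategy is left on AUTO.
    if configured is AllReduceStrategy.AUTO:
        return heuristic_choice
    return configured
```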

This PR fixes the performance gaps reported in:
https://nvbugspro.nvidia.com/bug/5517023

For DeepSeek-R1, it shows a performance gain of about 3-4% at concurrency levels of 256 and 512.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-10-16 14:15:25 +08:00
attention [https://nvbugs/5453806][unwaive] Unwaive fp8 kvcache attention test (#7243) 2025-09-05 12:13:57 -04:00
auto_deploy [#7675][feat] CapturedGraph to support max_batch_size > max(cuda_graph_batch_sizes) (#7888) 2025-09-24 10:11:44 -04:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
executor [None][chore] extract weights loading related logic to model loader (#7579) 2025-09-25 10:19:22 -07:00
misc [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) 2025-09-16 09:56:18 +08:00
modeling [https://nvbugs/5550722][fix] Fix image load (#8093) 2025-10-13 14:12:39 +08:00
models/checkpoints/hf [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) 2025-08-25 23:56:21 -04:00
modules [None][infra] Update and waive failed tests for release branch (#8291) 2025-10-12 21:51:54 +08:00
multi_gpu [TRTLLM-8129][feat] Allreduce tuning and benchmark script revising (#7870) 2025-10-16 14:15:25 +08:00
multi_gpu_modeling [https://nvbugs/5536131][fix] Fix illegal access issue when scale is not provided in Llama3/4. (#7960) 2025-10-07 23:47:00 -07:00
multimodal [None][ci] Waive test_mm_encoder_standalone.py::test_multi_request_batch_chat[llava-v1.6-mistral-7b-hf] (#8010) 2025-09-26 11:07:54 +08:00
sampler [TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867) 2025-08-22 08:09:30 +02:00
speculative [https://nvbugs/5537878][fix] Reserve an extra slot for padded batch … (#8231) 2025-10-13 23:34:22 -07:00
thop [None][chore] Waive test failing on pre-merge CI (#8295) 2025-10-12 16:54:56 -07:00
helpers.py [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) 2025-09-26 11:28:05 +08:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00