TensorRT-LLM/cpp/tensorrt_llm
commit 426f6fd2bc
Author: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Date:   2025-05-21 10:16:46 +08:00

Feat: add chunked-attention kernels on Blackwell (#4394)

* update cubins
* add chunked-attention kernels on blackwell
* fix

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
| Name | Last commit | Last updated |
| --- | --- | --- |
| batch_manager | refactor: Unify request order in TRT and PyTorch workflow (#4096) | 2025-05-20 18:49:27 +02:00 |
| common | Feat: add chunked-attention kernels on Blackwell (#4394) | 2025-05-21 10:16:46 +08:00 |
| cutlass_extensions/include/cutlass_extensions | [TRTLLM-3330][feat] Support DeepSeek-R1 W4A8 on Hopper (#4123) | 2025-05-14 15:48:07 +08:00 |
| executor | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00 |
| executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| kernels | Feat: add chunked-attention kernels on Blackwell (#4394) | 2025-05-21 10:16:46 +08:00 |
| layers | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| plugins | Fix bias shape in weightOnlyGroupwiseQuantMatmulPlugin for TRT workflow (#4348) | 2025-05-16 10:02:30 +08:00 |
| pybind | feat: large-scale EP(part 2: MoE Load Balancer - core utilities) (#4384) | 2025-05-20 17:53:48 +08:00 |
| runtime | feat: large-scale EP(part 2: MoE Load Balancer - core utilities) (#4384) | 2025-05-20 17:53:48 +08:00 |
| testing | refactor: Move ModelSpec to core library (#3980) | 2025-05-04 01:39:09 +08:00 |
| thop | perf: Fuse gemm setup function for SM90/SM100 MOE plugin path (#4146) | 2025-05-21 10:00:36 +08:00 |
| CMakeLists.txt | feat: NIXL interface integration (#3934) | 2025-05-19 18:18:22 +08:00 |