TensorRT-LLM/cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention
Latest commit: 4d711be8f4 by Perkz Zheng
Feat: add sliding-window-attention generation-phase kernels on Blackwell (#4564)
* move cubins to LFS

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

* update cubins

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

* add sliding-window-attention generation-phase kernels on Blackwell

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

* address comments

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>

---------

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
2025-05-26 09:06:33 +08:00
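
The headline change in the commit above is a set of generation-phase (decode) kernels for sliding-window attention on Blackwell: each newly generated token attends only to the most recent `windowSize` entries of the KV cache rather than the full history. Below is a rough, host-side sketch of that computation for a single query token; the function name and scalar loops are hypothetical, it assumes one head with an unquantized fp32 cache, and the real kernels are fused device code with TMA loads and on-chip softmax.

```cpp
// Hypothetical reference helper: generation-phase sliding-window attention for one
// query token over a single head. Not the TensorRT-LLM kernel, just the math it fuses.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> slidingWindowAttendOneToken(
    std::vector<float> const& q,                    // [headDim], the new token's query
    std::vector<std::vector<float>> const& kCache,  // [seqLen][headDim]
    std::vector<std::vector<float>> const& vCache,  // [seqLen][headDim]
    int windowSize)
{
    int const seqLen = static_cast<int>(kCache.size());
    int const headDim = static_cast<int>(q.size());
    // Sliding window: only the last `windowSize` cached tokens are visible.
    int const start = std::max(0, seqLen - windowSize);

    // Scaled dot-product scores, kept numerically stable for the softmax.
    std::vector<float> scores(seqLen - start);
    float maxScore = -INFINITY;
    for (int t = start; t < seqLen; ++t)
    {
        float dot = 0.f;
        for (int d = 0; d < headDim; ++d)
            dot += q[d] * kCache[t][d];
        scores[t - start] = dot / std::sqrt(static_cast<float>(headDim));
        maxScore = std::max(maxScore, scores[t - start]);
    }
    float sum = 0.f;
    for (float& s : scores)
    {
        s = std::exp(s - maxScore);
        sum += s;
    }
    // Output is the softmax-weighted sum of the values inside the window.
    std::vector<float> out(headDim, 0.f);
    for (int t = start; t < seqLen; ++t)
        for (int d = 0; d < headDim; ++d)
            out[d] += (scores[t - start] / sum) * vCache[t][d];
    return out;
}
```

Capping the loop at `start` is what bounds both the compute and the KV-cache reads per decode step, which is the point of applying the sliding window in the generation phase.
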
File                                  Last commit (date)
cubin                                 Feat: add sliding-window-attention generation-phase kernels on Blackwell (#4564)  2025-05-26 09:06:33 +08:00
CMakeLists.txt                        infra: open source fmha v2 kernels (#4185)  2025-05-15 10:56:34 +08:00
fmhaPackedMask.cu                     [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291)  2025-05-19 09:57:10 -07:00
fmhaPackedMask.h                      Update TensorRT-LLM (#2783)  2025-02-13 18:40:22 +08:00
fmhaRunner.cpp                        [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291)  2025-05-19 09:57:10 -07:00
fmhaRunner.h                          Update TensorRT-LLM (#2363)  2024-10-22 20:27:35 +08:00
fused_multihead_attention_common.h    [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291)  2025-05-19 09:57:10 -07:00
fused_multihead_attention_v2.cpp      feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190)  2025-04-07 15:14:13 +08:00
fused_multihead_attention_v2.h        feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190)  2025-04-07 15:14:13 +08:00
tmaDescriptor.h                       Update TensorRT-LLM (#1274)  2024-03-12 18:15:52 +08:00
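
Among the listed files, fmhaPackedMask.cu/.h deal with the packed attention masks the fused kernels consume, where each (query, key) pair is reduced to a single mask bit. The sketch below only illustrates the visibility predicate a causal sliding window implies, packed naively row-major into 32-bit words; the helper name and layout are assumptions for illustration, since the actual packing is tiled to match the kernels' internal fragment layout.

```cpp
// Illustrative only: build a causal sliding-window mask and pack it one bit per
// (query, key) pair, row-major into 32-bit words. The real layout is kernel-specific.
#include <cstdint>
#include <vector>

std::vector<uint32_t> buildPackedSlidingWindowMask(int seqLen, int windowSize)
{
    std::vector<uint32_t> packed((static_cast<size_t>(seqLen) * seqLen + 31) / 32, 0u);
    for (int row = 0; row < seqLen; ++row)      // query position
    {
        for (int col = 0; col < seqLen; ++col)  // key position
        {
            // Causal + sliding window: attend only to keys at or before the query,
            // and no farther back than windowSize tokens.
            bool const visible = (col <= row) && (row - col < windowSize);
            if (visible)
            {
                size_t const bit = static_cast<size_t>(row) * seqLen + col;
                packed[bit / 32] |= (1u << (bit % 32));
            }
        }
    }
    return packed;
}
```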