TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention
Feat: add sliding-window-attention generation-phase kernels on Blackwell (#4564)

Author: Perkz Zheng
Commit: 4d711be8f4
Date:   2025-05-26 09:06:33 +08:00

* move cubins to LFS
* update cubins
* add sliding-window-attention generation-phase kernels on Blackwell
* address comments

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
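The headline change adds generation-phase kernels for sliding-window attention, in which each newly generated token attends only to the most recent `window` entries of the KV cache instead of the full sequence. A minimal pure-Python sketch of that behavior for a single decode step (all names, shapes, and the `window` parameter here are illustrative, not the kernel's actual API):

```python
import math

def sliding_window_attention(q, keys, values, window):
    """One generation-phase step: the single query vector `q` attends only
    to the last `window` cached key/value pairs (a sketch, not the kernel)."""
    # Restrict attention to the most recent `window` entries of the KV cache.
    keys = keys[-window:]
    values = values[-window:]
    d = len(q)
    # Scaled dot-product scores against each visible key.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    # Numerically stable softmax over the window.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of the visible values.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

Keys outside the window contribute nothing, so the per-step cost and KV-cache footprint are bounded by `window` rather than by the sequence length; the kernels above implement this masking directly against the (possibly circular) KV cache on Blackwell GPUs.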
cubin                                      Feat: add sliding-window-attention generation-phase kernels on Blackwell (#4564)  2025-05-26 09:06:33 +08:00
decoderXQAImplJIT                          fix: rename some terms (#4534)  2025-05-23 23:23:49 +08:00
instantiation                              Update TensorRT-LLM (#2502)  2024-11-26 16:51:34 +08:00
CMakeLists.txt                             infra: open source XQA kernels (#3762)  2025-04-30 18:05:15 +08:00
copy_cu.py                                 Update TensorRT-LLM (#787)  2024-01-02 17:54:32 +08:00
decoderMaskedMultiheadAttentionLaunch.h    Update TensorRT-LLM (#2502)  2024-11-26 16:51:34 +08:00
decoderMaskedMultiheadAttentionTemplate.h  fix: [https://nvbugspro.nvidia.com/bug/5238626] illegal memory address when running llama 4 with cuda graph enabled (#4101)  2025-05-13 14:58:54 +08:00
decoderXQAConstants.h                      Update TensorRT-LLM (#2094)  2024-08-07 16:44:43 +08:00
decoderXQAImpl.cpp                         Update TensorRT-LLM (#2783)  2025-02-13 18:40:22 +08:00
decoderXQAImpl.h                           chore: remove usernames from comments (#3291)  2025-04-05 13:44:28 +08:00
decoderXQAImplCommon.cpp                   Support speculative decoding with Hopper XQA (#3269)  2025-04-07 17:14:34 +08:00
decoderXQAImplCommon.h                     Support speculative decoding with Hopper XQA (#3269)  2025-04-07 17:14:34 +08:00
decoderXQAImplPrecompiled.cpp              feat: add CGA reduction fmha kernels on Blackwell. (#3763)  2025-04-29 10:43:54 +08:00
decoderXQAImplPrecompiled.h                Update TensorRT-LLM (#2783)  2025-02-13 18:40:22 +08:00
decoderXQARunner.cpp                       Support speculative decoding with Hopper XQA (#3269)  2025-04-07 17:14:34 +08:00
decoderXQARunner.h                         Update TensorRT-LLM (#2873)  2025-03-11 21:13:42 +08:00
mmha_notes.md                              Initial commit  2023-09-20 00:29:41 -07:00
tensorMapUtils.cpp                         Update TensorRT-LLM (#2783)  2025-02-13 18:40:22 +08:00
tensorMapUtils.h                           Update TensorRT-LLM (#1688)  2024-05-28 20:07:49 +08:00
xqaParams.h                                Update TensorRT-LLM (#2820)  2025-02-25 21:21:49 +08:00