TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention
Latest commit: e257cb3533 by Tian Zheng, [None][feat] Support NVFP4 KV Cache (#6244), 2025-09-01 09:24:52 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| cubin/ | Feat: add sliding-window-attention generation-phase kernels on Blackwell (#4564) | 2025-05-26 09:06:33 +08:00 |
| decoderXQAImplJIT/ | [None][feat] Support NVFP4 KV Cache (#6244) | 2025-09-01 09:24:52 +08:00 |
| instantiation/ | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00 |
| CMakeLists.txt | infra: open source XQA kernels (#3762) | 2025-04-30 18:05:15 +08:00 |
| copy_cu.py | Update TensorRT-LLM (#787) | 2024-01-02 17:54:32 +08:00 |
| decoderMaskedMultiheadAttentionLaunch.h | [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) | 2025-06-03 19:02:57 -04:00 |
| decoderMaskedMultiheadAttentionTemplate.h | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| decoderXQAConstants.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| decoderXQAImpl.cpp | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| decoderXQAImpl.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| decoderXQAImplCommon.cpp | [fix] Fix missing fields in xqa kernel cache key (#6282) | 2025-08-01 10:41:26 +08:00 |
| decoderXQAImplCommon.h | [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) | 2025-08-05 07:47:41 +00:00 |
| decoderXQAImplPrecompiled.cpp | [None][feat] Support NVFP4 KV Cache (#6244) | 2025-09-01 09:24:52 +08:00 |
| decoderXQAImplPrecompiled.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| decoderXQARunner.cpp | [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) | 2025-08-05 07:47:41 +00:00 |
| decoderXQARunner.h | [nvbug 5333996][fix] Unload XQA cubins early to avoid static lifetime (#5133) | 2025-06-13 15:53:29 +08:00 |
| mmha_notes.md | Initial commit | 2023-09-20 00:29:41 -07:00 |
| tensorMapUtils.cpp | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| tensorMapUtils.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| xqaParams.h | [TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035) | 2025-08-20 10:11:25 -04:00 |