TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention
Latest commit 8731f5f14f: chore: Mass integration of release/0.20 (#4898) (Omer Ullman Argov, 2025-06-08 23:26:26 +08:00)
| Name | Last commit | Date |
|---|---|---|
| cubin | Feat: add sliding-window-attention generation-phase kernels on Blackwell (#4564) | 2025-05-26 09:06:33 +08:00 |
| decoderXQAImplJIT | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00 |
| instantiation | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00 |
| CMakeLists.txt | infra: open source XQA kernels (#3762) | 2025-04-30 18:05:15 +08:00 |
| copy_cu.py | Update TensorRT-LLM (#787) | 2024-01-02 17:54:32 +08:00 |
| decoderMaskedMultiheadAttentionLaunch.h | [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) | 2025-06-03 19:02:57 -04:00 |
| decoderMaskedMultiheadAttentionTemplate.h | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00 |
| decoderXQAConstants.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| decoderXQAImpl.cpp | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| decoderXQAImpl.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| decoderXQAImplCommon.cpp | Support speculative decoding with Hopper XQA (#3269) | 2025-04-07 17:14:34 +08:00 |
| decoderXQAImplCommon.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| decoderXQAImplPrecompiled.cpp | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| decoderXQAImplPrecompiled.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| decoderXQARunner.cpp | chore: Mass integration of release/0.20 (#4898) | 2025-06-08 23:26:26 +08:00 |
| decoderXQARunner.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| mmha_notes.md | Initial commit | 2023-09-20 00:29:41 -07:00 |
| tensorMapUtils.cpp | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| tensorMapUtils.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| xqaParams.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |