TensorRT-LLM/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention
Latest commit 0d20a8fd61 by Fanrong Li (2025-10-14 08:23:16 -07:00):
[TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
Co-authored-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
| Name | Last commit | Last commit date |
| --- | --- | --- |
| cubin/ | | |
| decoderXQAImplJIT/ | [None][feat] support JIT mha.cu for SPEC_DEC in runtime (#6078) | 2025-09-23 14:56:17 -07:00 |
| instantiation/ | | |
| CMakeLists.txt | | |
| copy_cu.py | | |
| decoderMaskedMultiheadAttentionLaunch.h | | |
| decoderMaskedMultiheadAttentionTemplate.h | [https://nvbugs/5522462][fix] Fix FP8 scout illegal memory access (#7845) | 2025-09-19 10:30:37 -04:00 |
| decoderXQAConstants.h | | |
| decoderXQAImpl.cpp | | |
| decoderXQAImpl.h | | |
| decoderXQAImplCommon.cpp | [None][feat] support JIT mha.cu for SPEC_DEC in runtime (#6078) | 2025-09-23 14:56:17 -07:00 |
| decoderXQAImplCommon.h | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00 |
| decoderXQAImplPrecompiled.cpp | [None][feat] support JIT mha.cu for SPEC_DEC in runtime (#6078) | 2025-09-23 14:56:17 -07:00 |
| decoderXQAImplPrecompiled.h | | |
| decoderXQARunner.cpp | [None][feat] support JIT mha.cu for SPEC_DEC in runtime (#6078) | 2025-09-23 14:56:17 -07:00 |
| decoderXQARunner.h | [nvbug 5333996][fix] Unload XQA cubins early to avoid static lifetime (#5133) | 2025-06-13 15:53:29 +08:00 |
| mmha_notes.md | | |
| tensorMapUtils.cpp | | |
| tensorMapUtils.h | | |
| xqaParams.h | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00 |