TensorRT-LLM/cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention
Latest commit f0dc746738 by Fanrong Li, 2025-10-31 14:38:31 -07:00:
[TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Co-authored-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Tracin <10434017+Tracin@users.noreply.github.com>
cubin [https://nvbugs/5542862][fix] Upgrade fmha_v2. (#8364) 2025-10-20 10:20:23 +08:00
CMakeLists.txt
fmhaPackedMask.cu
fmhaPackedMask.h
fmhaRunner.cpp [None][feat] GPT-OSS Sm120/Sm121 Support (#7937) 2025-10-06 16:59:06 -04:00
fmhaRunner.h hopper-style context MLA (#5713) 2025-07-23 14:37:20 +08:00
fused_multihead_attention_common.h [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692) 2025-10-31 14:38:31 -07:00
fused_multihead_attention_v2.cpp
fused_multihead_attention_v2.h
tmaDescriptor.h
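
For background on the per-Tensor FP8 KV cache named in the latest commit above, here is a minimal C++ sketch of per-tensor FP8 (e4m3) quantization: a single scale, derived from the tensor-wide absolute maximum, maps all KV values into the finite e4m3 range (|x| <= 448). This is only an illustration of the scheme; the function names are hypothetical and this is not TensorRT-LLM's implementation, which runs as fused CUDA kernels.

```cpp
// Minimal sketch of per-tensor FP8 (e4m3) quantization. All names here are
// hypothetical illustrations, not TensorRT-LLM APIs. One scale is shared by
// the whole KV tensor.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

constexpr float kFp8E4m3Max = 448.0f; // largest finite float8_e4m3 magnitude

// Compute one scale for the entire tensor from its absolute maximum.
float perTensorScale(const std::vector<float>& kv)
{
    float amax = 0.0f;
    for (float v : kv)
        amax = std::max(amax, std::fabs(v));
    return amax > 0.0f ? amax / kFp8E4m3Max : 1.0f;
}

// Quantize: divide by the scale and clamp. A real kernel would also round to
// the nearest representable e4m3 value; this sketch keeps float for clarity.
std::vector<float> quantizePerTensor(const std::vector<float>& kv, float scale)
{
    std::vector<float> out(kv.size());
    for (size_t i = 0; i < kv.size(); ++i)
        out[i] = std::max(-kFp8E4m3Max, std::min(kFp8E4m3Max, kv[i] / scale));
    return out;
}

int main()
{
    std::vector<float> kv = {0.5f, -3.2f, 700.0f, -0.01f};
    float scale = perTensorScale(kv); // 700 / 448 = 1.5625
    auto q = quantizePerTensor(kv, scale);
    // Dequantization multiplies back by the same per-tensor scale.
    for (size_t i = 0; i < kv.size(); ++i)
        std::printf("%g -> %g (dequant %g)\n", kv[i], q[i], q[i] * scale);
    return 0;
}
```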