TensorRT-LLM/cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention
Latest commit 4a0b7a0cf1 by Perkz Zheng (2025-07-14 17:17:30 +08:00):
[https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779)
Signed-off-by: Qidi Sang <200703406+qsang-nv@users.noreply.github.com>
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: qsang-nv <200703406+qsang-nv@users.noreply.github.com>
Name | Last commit | Date
cubin/ | [https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779) | 2025-07-14 17:17:30 +08:00
CMakeLists.txt | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00
fmhaPackedMask.cu | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00
fmhaPackedMask.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00
fmhaRunner.cpp | [TRTLLM-5366][feat]Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00
fmhaRunner.h | Update TensorRT-LLM (#2363) | 2024-10-22 20:27:35 +08:00
fused_multihead_attention_common.h | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00
fused_multihead_attention_v2.cpp | [TRTLLM-5366][feat]Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00
fused_multihead_attention_v2.h | feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190) | 2025-04-07 15:14:13 +08:00
tmaDescriptor.h | Update TensorRT-LLM (#1274) | 2024-03-12 18:15:52 +08:00
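
The latest commit's message implies a runtime choice between compiled FP8 FMHA kernels and precompiled cubins on Ada (compute capability 8.9, i.e. SM 89). The listing does not show that dispatch code, so the following is a minimal, hypothetical sketch of the general pattern only: query the device's compute capability with the CUDA runtime and branch on SM 89. getSmVersion() and useCubinsForFp8Fmha() are invented names, not TensorRT-LLM APIs.

```cpp
// Minimal sketch of SM-based kernel-source selection, assuming a policy
// like the one the commit message implies ("fallback to cubins for fp8
// fmha kernels on Ada", i.e. SM 89). This is NOT TensorRT-LLM's actual
// dispatch code; getSmVersion() and useCubinsForFp8Fmha() are invented.
#include <cuda_runtime.h>
#include <cstdio>

static int getSmVersion()
{
    int device = 0;
    cudaGetDevice(&device);
    int major = 0, minor = 0;
    cudaDeviceGetAttribute(&major, cudaDevAttrComputeCapabilityMajor, device);
    cudaDeviceGetAttribute(&minor, cudaDevAttrComputeCapabilityMinor, device);
    return major * 10 + minor; // e.g. Ada (8.9) -> 89, Hopper (9.0) -> 90
}

// Hypothetical policy: prefer precompiled cubins for FP8 FMHA on Ada.
static bool useCubinsForFp8Fmha(bool isFp8)
{
    return isFp8 && getSmVersion() == 89;
}

int main()
{
    std::printf("SM %d -> %s\n", getSmVersion(),
        useCubinsForFp8Fmha(/*isFp8=*/true) ? "precompiled cubins"
                                            : "compiled kernels");
    return 0;
}
```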
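Similarly, fmhaPackedMask.cu/.h suggest the attention mask is stored in a packed bit format, but the actual layout is not shown in this listing. Below is a generic, hypothetical bit-packing sketch (one bit per mask element, 32 elements per uint32_t word), not the library's real format; packMask() is an invented helper.

```cpp
// Generic bit-packing of a boolean attention mask, one bit per element,
// 32 elements per uint32_t word. A hypothetical illustration of the
// "packed mask" idea suggested by fmhaPackedMask.cu/.h, not the
// library's real layout; packMask() is an invented helper.
#include <cstdint>
#include <vector>

std::vector<uint32_t> packMask(const std::vector<uint8_t>& mask, int rows, int cols)
{
    int wordsPerRow = (cols + 31) / 32; // round up to whole 32-bit words
    std::vector<uint32_t> packed(static_cast<size_t>(rows) * wordsPerRow, 0u);
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            if (mask[static_cast<size_t>(r) * cols + c])
                packed[static_cast<size_t>(r) * wordsPerRow + c / 32] |= 1u << (c % 32);
    return packed;
}

int main()
{
    // Example: 4x4 causal mask (each query attends to itself and earlier keys).
    int rows = 4, cols = 4;
    std::vector<uint8_t> mask(static_cast<size_t>(rows) * cols);
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            mask[static_cast<size_t>(r) * cols + c] = (c <= r);
    auto packed = packMask(mask, rows, cols);
    // Rows pack to 0b0001, 0b0011, 0b0111, 0b1111.
    return (packed[3] == 0xFu) ? 0 : 1;
}
```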