TensorRT-LLM/cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention
Latest commit c9eebcb454 (Haohang Huang, 2025-08-05): [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379)
File                                  Date        Last commit
cubin/                                2025-08-05  [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379)
CMakeLists.txt                        2025-07-04  feat: reduce unnecessary kernel generation (#5476)
fmhaPackedMask.cu                     2025-05-19  [feat] Add chunked-attention kernels on Hopper (for Llama 4) (#4291)
fmhaPackedMask.h                      2025-02-13  Update TensorRT-LLM (#2783)
fmhaRunner.cpp                        2025-08-05  [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379)
fmhaRunner.h                          2025-07-23  Hopper-style context MLA (#5713)
fused_multihead_attention_common.h    2025-07-23  Hopper-style context MLA (#5713)
fused_multihead_attention_v2.cpp      2025-07-08  [TRTLLM-5366][feat] Add support for sm121 (#5524)
fused_multihead_attention_v2.h        2025-04-07  feat: Add support for FP8 MLA on Hopper and Blackwell (#3190)
tmaDescriptor.h                       2024-03-12  Update TensorRT-LLM (#1274)
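
fmhaPackedMask.cu and fmhaPackedMask.h handle the packed attention masks consumed by the custom-mask FMHA kernels. The core packing idea is one bit per (query, key) pair, 32 pairs per 32-bit word. Below is a minimal host-side sketch of that idea for a causal mask; the helper name and the flat row-major word order are illustrative only, not the kernels' actual tile-friendly layout.

#include <cstddef>
#include <cstdint>
#include <vector>

// Pack a causal mask at 1 bit per (query, key) pair, 32 pairs per word.
// A set bit means the key position is attended to. Flat row-major word
// order for clarity; the real kernels use their own tiled ordering.
std::vector<std::uint32_t> packCausalMask(int seqLen)
{
    int const wordsPerRow = (seqLen + 31) / 32;
    std::vector<std::uint32_t> packed(
        static_cast<std::size_t>(seqLen) * wordsPerRow, 0u);
    for (int q = 0; q < seqLen; ++q)
        for (int k = 0; k <= q; ++k) // causal: keys up to the query position
            packed[static_cast<std::size_t>(q) * wordsPerRow + k / 32]
                |= 1u << (k % 32);
    return packed;
}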
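
fmhaRunner.cpp selects and launches one of the precompiled kernels shipped under cubin/, with fused_multihead_attention_v2.cpp/.h carrying the kernel metadata. A sketch of that dispatch pattern under assumed names: KernelMeta, KernelKey, and the lookup table are hypothetical stand-ins, while cuModuleLoadData and cuModuleGetFunction are real CUDA driver calls.

#include <cuda.h>
#include <cstddef>
#include <map>
#include <tuple>

// Hypothetical metadata record for one embedded cubin; field names are
// illustrative, not TensorRT-LLM's actual structures.
struct KernelMeta
{
    unsigned char const* cubin; // embedded cubin image
    std::size_t cubinSize;
    char const* funcName;       // kernel entry point inside the cubin
};

// Select a kernel by problem shape, load its cubin, resolve the entry point.
using KernelKey = std::tuple<int /*smArch*/, int /*headDim*/, bool /*causal*/>;

CUfunction loadKernel(std::map<KernelKey, KernelMeta> const& table, KernelKey const& key)
{
    KernelMeta const& meta = table.at(key);
    CUmodule mod{};
    CUfunction fn{};
    cuModuleLoadData(&mod, meta.cubin);           // load precompiled SASS, no JIT
    cuModuleGetFunction(&fn, mod, meta.funcName); // resolve the kernel symbol
    return fn;                                    // launch later via cuLaunchKernel
}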
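
tmaDescriptor.h deals with the Tensor Memory Accelerator (TMA) descriptors that Hopper kernels use for bulk global-to-shared copies. A minimal sketch of encoding a 2-D descriptor with the CUDA 12 driver entry point cuTensorMapEncodeTiled follows; the fp16 dtype, 64x64 tile, and 128-byte swizzle are assumed values for illustration, not what these kernels actually program.

#include <cuda.h>
#include <cstdint>
#include <cstdio>

// Encode a rows x cols row-major fp16 tensor as a 2-D TMA descriptor.
// Assumes a current CUDA context, gmemBase aligned to at least 128 bytes,
// and cols a multiple of 8 so the row stride is 16-byte aligned.
CUtensorMap makeTileDescriptor(void* gmemBase, cuuint64_t rows, cuuint64_t cols)
{
    CUtensorMap desc{};
    cuuint64_t globalDim[2]    = {cols, rows};                   // fastest dim first
    cuuint64_t globalStride[1] = {cols * sizeof(std::uint16_t)}; // byte strides, rank-1 entries
    cuuint32_t boxDim[2]        = {64, 64}; // 64 fp16 per row = 128 B, matches swizzle
    cuuint32_t elementStride[2] = {1, 1};

    CUresult rc = cuTensorMapEncodeTiled(&desc, CU_TENSOR_MAP_DATA_TYPE_FLOAT16,
        /*tensorRank=*/2, gmemBase, globalDim, globalStride, boxDim, elementStride,
        CU_TENSOR_MAP_INTERLEAVE_NONE, CU_TENSOR_MAP_SWIZZLE_128B,
        CU_TENSOR_MAP_L2_PROMOTION_L2_128B, CU_TENSOR_MAP_FLOAT_OOB_FILL_NONE);
    if (rc != CUDA_SUCCESS)
        std::fprintf(stderr, "cuTensorMapEncodeTiled failed: %d\n", static_cast<int>(rc));
    return desc; // typically passed to the kernel as a __grid_constant__ parameter
}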