TensorRT-LLM/cpp/tensorrt_llm/kernels/contextFusedMultiHeadAttention
Latest commit: 32b244af38 feat: reduce unnecessary kernel generation (#5476)
Author: Yuan Tong
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Date: 2025-07-04 14:37:49 +08:00
Name | Last commit | Date
cubin/ | keep sm90 headsize 128 cubins (#5320) | 2025-06-26 12:14:01 +08:00
CMakeLists.txt | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00
fmhaPackedMask.cu | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00
fmhaPackedMask.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00
fmhaRunner.cpp | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00
fmhaRunner.h | Update TensorRT-LLM (#2363) | 2024-10-22 20:27:35 +08:00
fused_multihead_attention_common.h | [Feat] add chunked-attention kernels on Hopper (for llama4) (#4291) | 2025-05-19 09:57:10 -07:00
fused_multihead_attention_v2.cpp | use cu for fmha_v2 (#4694) | 2025-06-15 18:40:44 +08:00
fused_multihead_attention_v2.h | feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190) | 2025-04-07 15:14:13 +08:00
tmaDescriptor.h | Update TensorRT-LLM (#1274) | 2024-03-12 18:15:52 +08:00
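For orientation, the fmhaRunner.h/fmhaRunner.cpp and fused_multihead_attention_common.h split suggests the usual runner pattern: a runner object is configured once with fixed parameters (e.g. head count, head size) and then launched repeatedly with per-call parameters (e.g. batch size, sequence length). The sketch below illustrates that generic pattern only; every name in it (DemoFmhaRunner, FixedParams, LaunchParams) is hypothetical and is not the actual TensorRT-LLM API.

```cpp
// A minimal, self-contained sketch of the runner pattern, assuming a
// once-per-layer configuration step and a per-forward-call launch step.
// All names here are hypothetical illustrations, not TensorRT-LLM symbols.
#include <cstdio>
#include <stdexcept>

namespace demo
{

// Chosen once per attention layer.
struct FixedParams
{
    int numHeads;
    int headSize;
};

// Changes on every forward call.
struct LaunchParams
{
    int batchSize;
    int seqLen;
    void const* qkv; // packed QKV input
    void* out;       // attention output
};

class DemoFmhaRunner
{
public:
    explicit DemoFmhaRunner(FixedParams const& fixed)
        : mFixed(fixed)
    {
        // Kernel selection would normally happen here, keyed on the fixed
        // params (e.g. only certain head sizes have prebuilt cubins).
        if (mFixed.headSize != 64 && mFixed.headSize != 128)
        {
            throw std::invalid_argument("unsupported head size in this sketch");
        }
    }

    void run(LaunchParams const& launch) const
    {
        // A real runner would launch the selected fused kernel; this sketch
        // just reports what would be launched.
        std::printf("launch fmha: b=%d s=%d h=%d d=%d\n", launch.batchSize,
            launch.seqLen, mFixed.numHeads, mFixed.headSize);
    }

private:
    FixedParams mFixed;
};

} // namespace demo

int main()
{
    demo::DemoFmhaRunner runner({/*numHeads=*/32, /*headSize=*/128});
    runner.run({/*batchSize=*/2, /*seqLen=*/1024, nullptr, nullptr});
    return 0;
}
```

The design point of such a split is that expensive work (validating the configuration, picking a kernel variant) happens once in the constructor, while the per-call path stays cheap.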