TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha

Latest commit: f0dc746738 by Fanrong Li, 2025-10-31 14:38:31 -07:00
[TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Co-authored-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: Tracin <10434017+Tracin@users.noreply.github.com>
Name                Last commit                                                                                       Date
cubin               [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692)  2025-10-31 14:38:31 -07:00
CMakeLists.txt      Update TensorRT-LLM (#2755)                                                                       2025-02-11 03:01:00 +00:00
fmhaKernels.h       [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692)  2025-10-31 14:38:31 -07:00
fmhaReduction.cu    [None][feat] support gpt-oss with fp8 kv cache (#7612)                                            2025-09-15 02:17:37 +08:00
fmhaReduction.h     [None][feat] Optimize MLA kernels with separate reduction kernels (#7597)                         2025-09-09 16:58:44 +08:00
fmhaRunner.cpp      [TRTLLM-4629] [feat] trtllm-gen kernels support sm103 (#7570)                                     2025-09-07 10:04:10 +08:00
fmhaRunner.h        optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)                                             2025-04-29 14:17:07 +08:00
fmhaRunnerParams.h  [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692)  2025-10-31 14:38:31 -07:00
kernelParams.h      [TRTLLM-8541][feat] Add trtllm-gen sparse MLA kernels to support per-Tensor FP8 KV Cache (#8692)  2025-10-31 14:38:31 -07:00
kernelUtils.h       [None][feat] Optimize MLA kernels with separate reduction kernels (#7597)                         2025-09-09 16:58:44 +08:00