TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha
Latest commit: 6037fe3716 by Perkz Zheng, 2025-08-15 23:29:36 +08:00
[https://nvbugs/5394685][fix] proper fix for the accuracy issue in 2CTA MLA kernels (#6941)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
File                 Last commit                                                                                                      Last commit date
cubin                [https://nvbugs/5394685][fix] proper fix for the accuracy issue in 2CTA MLA kernels (#6941)  2025-08-15 23:29:36 +08:00
CMakeLists.txt       Update TensorRT-LLM (#2755)  2025-02-11 03:01:00 +00:00
fmhaKernels.h        [https://nvbugs/5394685][fix] proper fix for the accuracy issue in 2CTA MLA kernels (#6941)  2025-08-15 23:29:36 +08:00
fmhaRunner.cpp       optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)  2025-04-29 14:17:07 +08:00
fmhaRunner.h         optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)  2025-04-29 14:17:07 +08:00
fmhaRunnerParams.h   [None] [feat] Add model gpt-oss (#6645)  2025-08-07 03:04:18 -04:00
kernelParams.h       [https://nvbugs/5394685][fix] the bug with spec-decoding + SWA && an accuracy issue related to 2CTA MLA (#6834)  2025-08-13 13:55:56 -07:00