TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha
Latest commit: 20f7df25ac by Perkz Zheng, 2025-08-19 03:10:29 -04:00
[https://nvbugs/5394685][fix] proper fix for the accuracy issue in 2CTA MLA kernels (release 1.0) (#6946)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
cubin [https://nvbugs/5394685][fix] proper fix for the accuracy issue in 2CTA MLA kernels (release 1.0) (#6946) 2025-08-19 03:10:29 -04:00
CMakeLists.txt Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
fmhaKernels.h [Fix] the bug in the trtllm-gen heuristic for MLA kernels. (#6284) 2025-07-24 23:40:27 +08:00
fmhaRunner.cpp optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907) 2025-04-29 14:17:07 +08:00
fmhaRunner.h optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907) 2025-04-29 14:17:07 +08:00
fmhaRunnerParams.h [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) 2025-06-03 19:02:57 -04:00
kernelParams.h [https://nvbugspro.nvidia.com/bug/5295470] support headDim 256 for blackwell fmha kernels (#5164) 2025-06-13 23:01:01 +08:00