TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha
Latest commit: 1f292ff2a0 by Perkz Zheng, 2025-06-25 16:31:10 +08:00
[https://jirasw.nvidia.com/browse/TRTLLM-4645] support multiCtasKvMode for high-throughput MLA kernels (#5426)
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
cubin/              [https://jirasw.nvidia.com/browse/TRTLLM-4645] support multiCtasKvMode for high-throughput MLA kernels (#5426)  2025-06-25 16:31:10 +08:00
CMakeLists.txt      Update TensorRT-LLM (#2755)  2025-02-11 03:01:00 +00:00
fmhaKernels.h       [https://jirasw.nvidia.com/browse/TRTLLM-4645] support multiCtasKvMode for high-throughput MLA kernels (#5426)  2025-06-25 16:31:10 +08:00
fmhaRunner.cpp      optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)  2025-04-29 14:17:07 +08:00
fmhaRunner.h        optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)  2025-04-29 14:17:07 +08:00
fmhaRunnerParams.h  [https://nvbugspro.nvidia.com/bug/5300080] Fix a bug in setting attention_chunk_size and enable chunked attention in the generation phase by default (#4693)  2025-06-03 19:02:57 -04:00
kernelParams.h      [https://nvbugspro.nvidia.com/bug/5295470] support headDim 256 for Blackwell fmha kernels (#5164)  2025-06-13 23:01:01 +08:00
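The fmhaRunner.cpp / fmhaRunner.h rows reference an optimization around cudaMemGetInfo (#3907). The commit message alone does not say what the optimization is; a plausible reading is that the runner caches the device-memory query instead of re-issuing it on every call, since cudaMemGetInfo is a real but comparatively slow CUDA runtime call. The sketch below illustrates that caching pattern only; the class and member names are hypothetical and are not the actual TllmGenFmhaRunner API.

```cpp
#include <cuda_runtime.h>
#include <cstddef>

// Hypothetical sketch: query free device memory once and cache the result,
// rather than calling cudaMemGetInfo on every kernel-selection path.
class FmhaRunnerMemInfo
{
public:
    // Returns the cached free-memory figure, querying the device only once.
    size_t getFreeDeviceMemory()
    {
        if (!mQueried)
        {
            size_t freeBytes = 0;
            size_t totalBytes = 0;
            // cudaMemGetInfo(size_t* free, size_t* total) is the standard
            // CUDA runtime call; error handling is elided for brevity.
            cudaMemGetInfo(&freeBytes, &totalBytes);
            mFreeBytes = freeBytes;
            mQueried = true;
        }
        return mFreeBytes;
    }

private:
    bool mQueried = false;
    size_t mFreeBytes = 0;
};
```

The trade-off of such caching is staleness: the cached figure does not reflect allocations made after the first query, which is acceptable when the value is only used as a coarse capacity check during runner setup.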