TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha
Latest commit: 3a94d80839 "Update SM100f cubins" by Tian Zheng, 2025-08-06 14:25:00 +08:00
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
| File | Last commit | Date |
| --- | --- | --- |
| cubin | Update SM100f cubins | 2025-08-06 14:25:00 +08:00 |
| CMakeLists.txt | Update SM100f cubins | 2025-08-06 14:25:00 +08:00 |
| fmhaKernels.h | Update SM100f cubins | 2025-08-06 14:25:00 +08:00 |
| fmhaRunner.cpp | Update SM100f cubins | 2025-08-06 14:25:00 +08:00 |
| fmhaRunner.h | optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907) | 2025-04-29 14:17:07 +08:00 |
| fmhaRunnerParams.h | [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) | 2025-06-03 19:02:57 -04:00 |
| kernelParams.h | [https://nvbugspro.nvidia.com/bug/5295470] support headDim 256 for blackwell fmha kernels (#5164) | 2025-06-13 23:01:01 +08:00 |
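The fmhaRunner.h entry above refers to optimizing cudaMemGetInfo for TllmGenFmhaRunner. The snippet below is only a minimal, hypothetical sketch of that general idea, not the repository's actual implementation: it caches the result of the real CUDA runtime call cudaMemGetInfo so the device free-memory query is not repeated on every invocation. The helper name cachedFreeDeviceMemory is an assumption made for illustration.

```cpp
// Hypothetical sketch: cache the result of cudaMemGetInfo() so the free-memory
// query runs once instead of on every call. This illustrates the CUDA runtime
// API only; it is not TensorRT-LLM's actual TllmGenFmhaRunner code.
#include <cuda_runtime.h>
#include <cstddef>
#include <cstdio>

// Hypothetical helper: returns free device memory in bytes, querying the
// driver only on the first call and reusing the cached value afterwards.
size_t cachedFreeDeviceMemory()
{
    static size_t cachedFree = 0;
    static bool initialized = false;
    if (!initialized)
    {
        size_t freeBytes = 0;
        size_t totalBytes = 0;
        cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
        if (err != cudaSuccess)
        {
            std::fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
            return 0;
        }
        cachedFree = freeBytes;
        initialized = true;
    }
    return cachedFree;
}

int main()
{
    std::printf("free device memory: %zu bytes\n", cachedFreeDeviceMemory());
    return 0;
}
```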