TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels/fmha
Commit 426f6fd2bc by Perkz Zheng
Feat: add chunked-attention kernels on Blackwell (#4394)
* update cubins
* add chunked-attention kernels on Blackwell
* fix

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Committed: 2025-05-21 10:16:46 +08:00
File                Last commit                                                Date
cubin/              Feat: add chunked-attention kernels on Blackwell (#4394)   2025-05-21 10:16:46 +08:00
CMakeLists.txt      Update TensorRT-LLM (#2755)                                2025-02-11 03:01:00 +00:00
fmhaKernels.h       Feat: add chunked-attention kernels on Blackwell (#4394)   2025-05-21 10:16:46 +08:00
fmhaRunner.cpp      optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)      2025-04-29 14:17:07 +08:00
fmhaRunner.h        optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)      2025-04-29 14:17:07 +08:00
fmhaRunnerParams.h  Feat: add chunked-attention kernels on Blackwell (#4394)   2025-05-21 10:16:46 +08:00