TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels
Perkz Zheng 426f6fd2bc
Feat: add chunked-attention kernels on Blackwell (#4394)
* update cubins

* add chunked-attention kernels on Blackwell

* fix

---------

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
2025-05-21 10:16:46 +08:00
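For context on the commit above: chunked attention limits each query token to keys inside its own fixed-size chunk, on top of the usual causal constraint. The C++ sketch below only illustrates that masking rule; the names (isAttended, chunkSize) and sizes are assumptions for the example, not the Blackwell kernel code this commit adds, which ships as prebuilt cubins under fmha.

```cpp
// Illustrative sketch only: the masking rule behind chunked causal attention.
// Names and sizes are hypothetical; this is not the TRT-LLM kernel itself.
#include <cstdio>

// A query at position q may attend to a key at position k iff both positions
// fall in the same chunk of size chunkSize and the pair is causal (k <= q).
bool isAttended(int q, int k, int chunkSize) {
    return (q / chunkSize == k / chunkSize) && (k <= q);
}

int main() {
    const int seqLen = 8, chunkSize = 4; // hypothetical sizes for display
    for (int q = 0; q < seqLen; ++q) {
        for (int k = 0; k < seqLen; ++k) {
            std::printf("%c", isAttended(q, k, chunkSize) ? '1' : '.');
        }
        std::printf("\n");
    }
    return 0;
}
```

Printed for seqLen = 8 and chunkSize = 4, the mask is two independent 4x4 causal triangles along the diagonal, which is what distinguishes chunked attention from a plain causal mask.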
Name              Last commit                                                          Date
batchedGemm       feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280)    2025-05-16 13:31:53 +02:00
blockscaleGemm    feat: trtllm-gen fp4 GEMM for pytorch workflow (#3423)               2025-04-11 02:28:07 +08:00
blockScaleMoe     feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280)    2025-05-16 13:31:53 +02:00
common            Cherry-pick trtllm-gen from feat/llama4 to main (#4086)              2025-05-08 14:13:01 -07:00
fmha              Feat: add chunked-attention kernels on Blackwell (#4394)             2025-05-21 10:16:46 +08:00
gemm              feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280)    2025-05-16 13:31:53 +02:00
gemmGatedAct      feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280)    2025-05-16 13:31:53 +02:00
CMakeLists.txt    Cherry-pick trtllm-gen from feat/llama4 to main (#4086)              2025-05-08 14:13:01 -07:00