TensorRT-LLM/cpp/tensorrt_llm/kernels/unfusedAttentionKernels
Latest commit: 35c5e4f1c5 by Perkz Zheng, 2025-04-29 10:43:54 +08:00
feat: add CGA reduction fmha kernels on Blackwell. (#3763)

* update cubins
* add trtllm-gen kernels for eagle3 and also kernels with cga-reduction
* address the comments

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
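The commit above ships prebuilt cubins, so the CGA-reduction FMHA kernels themselves are not visible as source in this directory. For orientation only, the following is a minimal sketch of what a thread-block-cluster ("CGA") reduction looks like in CUDA with cooperative groups. It assumes an sm_90+ device, and the kernel name, cluster shape, and the reduction itself (a plain sum rather than the FMHA softmax/output combine) are hypothetical illustrations, not the TensorRT-LLM implementation.

#include <cooperative_groups.h>

namespace cg = cooperative_groups;

// Hypothetical example: each block in a 4-block cluster accumulates a partial
// sum in its own shared memory; block rank 0 then reads every block's partial
// through distributed shared memory (cluster.map_shared_rank) and writes the
// cluster's total to global memory.
__global__ void __cluster_dims__(4, 1, 1) clusterReduceSum(float const* in, float* out, int n)
{
    __shared__ float blockPartial;
    cg::cluster_group cluster = cg::this_cluster();

    if (threadIdx.x == 0)
    {
        blockPartial = 0.f;
    }
    __syncthreads();

    // Per-thread accumulation over a grid-strided slice of the input.
    float acc = 0.f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += blockDim.x * gridDim.x)
    {
        acc += in[i];
    }
    atomicAdd(&blockPartial, acc);
    __syncthreads();

    // Make every block's partial visible to the whole cluster.
    cluster.sync();

    if (cluster.block_rank() == 0 && threadIdx.x == 0)
    {
        float total = 0.f;
        for (unsigned r = 0; r < cluster.num_blocks(); ++r)
        {
            // Pointer into block r's shared memory for the same variable.
            total += *cluster.map_shared_rank(&blockPartial, r);
        }
        atomicAdd(out, total);
    }

    // Keep each block's shared memory alive until rank 0 has read it.
    cluster.sync();
}

Note that a kernel compiled with __cluster_dims__(4, 1, 1) must be launched with a grid whose block count is a multiple of the cluster size (here, 4 blocks).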
Files in this directory (name, last commit message, last commit date):

unfusedAttentionKernels_2_bf16_bf16.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_fp4.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_bf16_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_float.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_fp4.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_half_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_half.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_template.h feat: add CGA reduction fmha kernels on Blackwell. (#3763) 2025-04-29 10:43:54 +08:00