TensorRT-LLM/cpp/tensorrt_llm/kernels/unfusedAttentionKernels
Latest commit 92daec1115 by Dom Brown, 2025-08-20 10:11:25 -04:00:
[TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035)
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
unfusedAttentionKernels_2_bf16_bf16.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_fp4.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_bf16_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_float.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_fp4.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_half_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_half.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_template.h [TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035) 2025-08-20 10:11:25 -04:00
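The naming pattern above suggests a common CUDA build layout: a single shared template header (unfusedAttentionKernels_2_template.h) plus one thin unfusedAttentionKernels_2_<T>_<TCache>.cu file per (activation type, KV-cache type) pair, each explicitly instantiating the templates so the combinations compile as separate translation units. Below is a minimal sketch of that pattern under stated assumptions: the kernel, launcher, and macro names here (convertToCacheKernel, invokeConvertToCache, INSTANTIATE_CONVERT_TO_CACHE) are hypothetical illustrations, not the actual TensorRT-LLM API.

// --- sketch of a shared template header, analogous to unfusedAttentionKernels_2_template.h ---
#include <cuda_bf16.h>
#include <cuda_fp16.h>
#include <cstdint>

// Illustrative stand-in for a templated attention helper kernel, parameterized
// on the activation type T and the KV-cache storage type TCache.
template <typename T, typename TCache>
__global__ void convertToCacheKernel(T const* src, TCache* dst, float scale, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        // Round-trip through float so any (T, TCache) pair shares one code path.
        dst[i] = static_cast<TCache>(static_cast<float>(src[i]) * scale);
    }
}

// Host-side launcher, templated the same way as the kernel.
template <typename T, typename TCache>
void invokeConvertToCache(T const* src, TCache* dst, float scale, int n, cudaStream_t stream)
{
    int const block = 256;
    int const grid = (n + block - 1) / block;
    convertToCacheKernel<T, TCache><<<grid, block, 0, stream>>>(src, dst, scale, n);
}

// Macro that stamps out one explicit instantiation per (T, TCache) pair.
#define INSTANTIATE_CONVERT_TO_CACHE(T, TCache)                                                    \
    template void invokeConvertToCache<T, TCache>(T const*, TCache*, float, int, cudaStream_t)

// --- a per-combination .cu file (e.g. unfusedAttentionKernels_2_half_int8.cu) would then be ---
// #include "unfusedAttentionKernels_2_template.h"
// INSTANTIATE_CONVERT_TO_CACHE(half, int8_t);

Splitting the instantiations across translation units this way keeps each nvcc invocation small and lets the type combinations build in parallel, which matters for heavily templated kernel code like this directory.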