TensorRT-LLM/cpp/tensorrt_llm/kernels/unfusedAttentionKernels
Latest commit 0d20a8fd61 by Fanrong Li (2025-10-14 08:23:16 -07:00):
[TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
Co-authored-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
File                                       Last commit                                                                                          Date
unfusedAttentionKernels_2_bf16_bf16.cu     Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_fp4.cu      Update TensorRT-LLM (#2755)                                                                          2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_bf16_fp8.cu      Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_int8.cu     Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_float.cu   Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_fp8.cu     Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_int8.cu    Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_fp4.cu      Update TensorRT-LLM (#2755)                                                                          2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_half_fp8.cu      Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_half.cu     Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_int8.cu     Update TensorRT-LLM (#2532)                                                                          2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_template.h       [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)    2025-10-14 08:23:16 -07:00