TensorRT-LLM/cpp/tensorrt_llm/kernels/unfusedAttentionKernels
qixiang-99 ecd621fb0a
feat: Add head size 72 support for QKV Preprocessing kernel (#3743)
* refactor: Fix head size 72 attention error for TRTLLM attn backend in PyTorch workflow

- Removed the head-size pre-check logic in AttentionOp, since head size 72 can be supported by the FMHA kernels.
- Added support for head size 72 in the unfused attention kernels (QKVPreprocessing).
- Enhanced the unit tests with a scenario-generation function for better coverage of attention configurations, including head size 72 (a sketch of such a generator follows this list).
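A minimal sketch of what such a scenario generator could look like. The `Scenario` fields, the `generate_scenarios` name, and the exact config matrix are assumptions for illustration, not the repository's actual test code:

```python
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class Scenario:
    """Hypothetical container for one attention test configuration."""
    dtype: str
    head_dim: int
    num_heads: int
    num_kv_heads: int


def generate_scenarios():
    """Yield the cross product of attention configs under test.

    head_dim=72 is included alongside the usual power-of-two sizes,
    exercising the newly supported QKVPreprocessing path.
    """
    dtypes = ["float16", "bfloat16"]
    head_dims = [64, 72, 128]  # 72 is the newly supported head size
    for dtype, head_dim in product(dtypes, head_dims):
        yield Scenario(dtype=dtype, head_dim=head_dim,
                       num_heads=8, num_kv_heads=8)
```

The generated scenarios would then typically be fed to `pytest.mark.parametrize` so each configuration runs as its own test case.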

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* update: Waive head_dim=72 test cases and enhance test representation

- Added a waiver for head_dim=72 cases on post-SM100 architectures in the test suite to address known issues.
- Introduced a custom __repr__ method in the Scenario class so pytest's substring matching (e.g. via -k) can select scenarios by name (see the sketch after this list).
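A hedged sketch of both items. The `maybe_waive` helper and the `Scenario` shape are hypothetical, and the waiver condition assumes `torch.cuda.get_device_capability()` reports a major version of 10 or higher on post-SM100 parts:

```python
import pytest
import torch


class Scenario:
    """Hypothetical test-configuration holder (names are illustrative)."""

    def __init__(self, dtype: str, head_dim: int):
        self.dtype = dtype
        self.head_dim = head_dim

    def __repr__(self) -> str:
        # A compact, stable string; when passed as the parametrize id
        # (e.g. ids=repr), `pytest -k "head_dim_72"` matches by substring.
        return f"{self.dtype}-head_dim_{self.head_dim}"


def maybe_waive(scenario: Scenario) -> None:
    # Skip head_dim=72 on SM100-and-newer GPUs until the known issue is fixed.
    major, _minor = torch.cuda.get_device_capability()
    if scenario.head_dim == 72 and major >= 10:
        pytest.skip("head_dim=72 waived on post-SM100 due to known issues")
```

With something like `@pytest.mark.parametrize("scenario", list(generate_scenarios()), ids=repr)`, the scenario's repr becomes the test ID, so substring selection picks out exactly the head_dim=72 cases.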

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

---------

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>
2025-04-25 11:07:40 -07:00
unfusedAttentionKernels_2_bf16_bf16.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_fp4.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_bf16_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_bf16_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_float.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_float_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_fp4.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
unfusedAttentionKernels_2_half_fp8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_half.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_half_int8.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
unfusedAttentionKernels_2_template.h feat: Add head size 72 support for QKV Preprocessing kernel (#3743) 2025-04-25 11:07:40 -07:00