Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-22 11:42:41 +08:00)
* refactor: Fix head size 72 attention error for the TRTLLM attention backend in the PyTorch workflow
  - Removed the head size pre-check logic in AttentionOp, because head size 72 can be supported with the fmha kernels.
  - Added support for head size 72 in the unfused attention kernels (QKVPreprocessing).
  - Enhanced unit tests by introducing a scenario generation function for better coverage of attention configurations (including head size 72).

  Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* update: Waive head_dim=72 test cases and enhance test representation
  - Added a waiver for head_dim=72 cases on post-SM100 GPUs in the test suite to address known issues.
  - Introduced a custom __repr__ method in the Scenario class for pytest substring matching.

  Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

---------

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>
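A minimal sketch of how the test-side changes described above could look, not the repository's actual test code: a Scenario record with a custom __repr__ so `pytest -k` substring filters match readable parameter ids, a scenario generator that covers head_dim=72, and a waiver that skips head_dim=72 on post-SM100 GPUs. All names (Scenario, generate_scenarios, is_post_sm100, test_attention) are illustrative assumptions.

```python
from dataclasses import dataclass
from itertools import product

import pytest
import torch


@dataclass(frozen=True)
class Scenario:
    dtype: torch.dtype
    num_heads: int
    head_dim: int

    def __repr__(self) -> str:
        # Readable id so e.g. `pytest -k "head_dim_72"` selects these cases.
        # An explicitly defined __repr__ is not overridden by @dataclass.
        return f"{self.dtype}_heads_{self.num_heads}_head_dim_{self.head_dim}"


def generate_scenarios():
    """Enumerate attention configurations, including head_dim=72."""
    dtypes = [torch.float16, torch.bfloat16]
    head_dims = [64, 72, 128]
    return [Scenario(dtype, 32, head_dim) for dtype, head_dim in product(dtypes, head_dims)]


def is_post_sm100() -> bool:
    # Compute capability major >= 10 corresponds to SM100 and newer.
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability()
    return major >= 10


@pytest.mark.parametrize("scenario", generate_scenarios(), ids=repr)
def test_attention(scenario: Scenario):
    if scenario.head_dim == 72 and is_post_sm100():
        pytest.skip("head_dim=72 waived on post-SM100 due to known issues")
    ...  # run the attention kernel under test and compare against a reference
```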
| File |
|---|
| unfusedAttentionKernels_2_bf16_bf16.cu |
| unfusedAttentionKernels_2_bf16_fp4.cu |
| unfusedAttentionKernels_2_bf16_fp8.cu |
| unfusedAttentionKernels_2_bf16_int8.cu |
| unfusedAttentionKernels_2_float_float.cu |
| unfusedAttentionKernels_2_float_fp8.cu |
| unfusedAttentionKernels_2_float_int8.cu |
| unfusedAttentionKernels_2_half_fp4.cu |
| unfusedAttentionKernels_2_half_fp8.cu |
| unfusedAttentionKernels_2_half_half.cu |
| unfusedAttentionKernels_2_half_int8.cu |
| unfusedAttentionKernels_2_template.h |