Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
* refactor: Fix head size 72 attention error for TRTLLM attn backend in PyTorch workflow

  - Removed the head size pre-check logic in AttentionOp, because head size 72 can be supported with the fmha kernels.
  - Added support for head size 72 in the unfused attention kernels (QKVPreprocessing).
  - Enhanced unit tests by introducing a scenario generation function for better test coverage of attention configurations (including head size 72).

  Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* update: Waive head_dim=72 test cases and enhance test representation

  - Added a waiver for head_dim=72 cases on post-SM100 GPUs in the test suite to address known issues.
  - Introduced a custom __repr__ method in the Scenario class for pytest substring matching.

  Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

---------

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>
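The commit notes reference two test-infrastructure patterns: a scenario generation function that enumerates attention configurations (now including head size 72), and a custom `__repr__` on the `Scenario` class so that `pytest -k` substring matching can select individual cases. Below is a minimal sketch of how those pieces might fit together. The field names, the parameter grid, and the helpers `generate_scenarios` and `_is_post_sm100` are illustrative assumptions, not the actual TensorRT-LLM test code.

```python
# Sketch of scenario-driven parametrization with a pytest-friendly repr.
# All names here are hypothetical; the real tests may be structured differently.
from dataclasses import dataclass
from itertools import product

import pytest


@dataclass(frozen=True)
class Scenario:
    head_dim: int
    num_heads: int
    dtype: str

    def __repr__(self) -> str:
        # A readable id so `pytest -k "head_dim72"` can select cases by substring.
        return f"head_dim{self.head_dim}_num_heads{self.num_heads}_{self.dtype}"


def generate_scenarios():
    """Enumerate attention configurations, including the newly supported head_dim=72."""
    return [
        Scenario(head_dim=head_dim, num_heads=num_heads, dtype=dtype)
        for head_dim, num_heads, dtype in product(
            (64, 72, 128), (8, 32), ("float16", "bfloat16")
        )
    ]


def _is_post_sm100() -> bool:
    # Hypothetical waiver check: treat compute capability >= 10.0 as post-SM100.
    import torch

    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability()
    return major >= 10


@pytest.mark.parametrize("scenario", generate_scenarios(), ids=repr)
def test_attention(scenario: Scenario):
    if scenario.head_dim == 72 and _is_post_sm100():
        pytest.skip("head_dim=72 waived on post-SM100 due to known issues")
    ...  # run the attention kernel under this configuration and check outputs
```

With `ids=repr`, each generated case appears in the pytest report under its `Scenario` repr, so a single configuration such as the waived head_dim=72 cases can be targeted with `pytest -k head_dim72`.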
| Name |
|---|
| auto_deploy |
| compilation |
| modeling |
| modules |
| multi_gpu |
| multi_gpu_modeling |
| speculative |
| thop |
| helpers.py |
| pattern_watcher.py |
| test_attention_no_cache.py |
| test_attention.py |
| test_autotuner.py |
| test_flashinfer_attention.py |
| test_flashinfer_star_attn.py |
| test_mnnvl_memory.py |
| test_overlap_scheduler_input.json |
| test_overlap_scheduler.py |
| test_pytorch_model_engine.py |
| test_resource_manager.py |
| test_return_logits.py |
| test_trtllm_decoder.py |
| test_vanilla_attention.py |