Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00.
* refactor: Fix head size 72 attention error for the TRTLLM attn backend in the PyTorch workflow

  - Removed the head size pre-check logic in AttentionOp, since head size 72 can be supported by the fmha kernels.
  - Added support for head size 72 in the unfused attention kernels (QKVPreprocessing).
  - Enhanced the unit tests by introducing a scenario generation function for better coverage of attention configurations, including head size 72.

  Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

* update: Waive head_dim=72 test cases and enhance test representation

  - Added a waiver for head_dim=72 cases on post-SM100 GPUs in the test suite to address known issues.
  - Introduced a custom __repr__ method in the Scenario class to enable pytest substring matching.

  Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>

---------

Signed-off-by: qixiang-99 <203170375+qixiang-99@users.noreply.github.com>
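The second commit combines three testing ideas: a scenario generation function, a custom `__repr__` so individual cases can be selected with `pytest -k` substring matching, and a waiver for head_dim=72 on post-SM100 GPUs. Below is a minimal sketch of how those pieces could fit together; the names (`Scenario`, `generate_scenarios`, the particular parameter grid) are illustrative assumptions, not the actual TensorRT-LLM test code.

```python
# Hypothetical sketch of scenario generation with a head_dim=72 waiver.
# Scenario fields and the parameter grid are assumptions for illustration.
import itertools
from dataclasses import dataclass

import pytest
import torch


@dataclass(frozen=True)
class Scenario:
    head_dim: int
    num_heads: int
    dtype: torch.dtype

    def __repr__(self) -> str:
        # Stable, human-readable id so `pytest -k "head_dim_72"` can
        # substring-match individual cases.
        return f"head_dim_{self.head_dim}_num_heads_{self.num_heads}_{self.dtype}"


def generate_scenarios():
    """Cross head sizes (including 72) with other attention knobs."""
    for head_dim, num_heads, dtype in itertools.product(
            [64, 72, 128], [8, 32], [torch.float16, torch.bfloat16]):
        marks = []
        # Waive head_dim=72 on SM100 and newer, as the commit message notes.
        if (head_dim == 72 and torch.cuda.is_available()
                and torch.cuda.get_device_capability()[0] >= 10):
            marks.append(pytest.mark.skip(
                reason="head_dim=72 waived on post-SM100"))
        yield pytest.param(Scenario(head_dim, num_heads, dtype), marks=marks)


@pytest.mark.parametrize("scenario", generate_scenarios(), ids=repr)
def test_attention(scenario: Scenario):
    ...  # exercise the attention backend with this configuration
```

Passing `ids=repr` to `parametrize` is what makes the custom `__repr__` pay off: test ids become readable names like `head_dim_72_num_heads_8_torch.float16` instead of autogenerated `scenario0`, `scenario1` indices, so `-k` filters work on meaningful substrings.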
| File |
|---|
| assert.cpp |
| attentionOp.cpp |
| attentionOp.h |
| CMakeLists.txt |
| cublasMMWrapper.cpp |
| cublasMMWrapper.h |
| cublasVersionCheck.h |
| cudaBf16Fallbacks.cuh |
| cudaBufferUtils.cuh |
| cudaDriverWrapper.cpp |
| cudaDriverWrapper.h |
| cudaFp8Utils.cu |
| cudaProfilerUtils.cpp |
| cudaTypeUtils.cuh |
| customAllReduceUtils.h |
| envUtils.cpp |
| envUtils.h |
| jsonSerializeOptional.h |
| logger.cpp |
| mathUtils.h |
| memoryUtils.cu |
| memoryUtils.h |
| nvtxUtils.h |
| opUtils.cpp |
| opUtils.h |
| quantTypeUtils.cuh |
| reduceKernelUtils.cuh |
| safetensors.cpp |
| safetensors.h |
| stlUtils.h |
| stringUtils.cpp |
| timestampUtils.cpp |
| timestampUtils.h |
| tllmException.cpp |
| workspace.h |