TensorRT-LLM/tests/unittest/_torch/attention
Latest commit: 79a6c9742b by Chang Liu, 2025-11-19 21:52:38 -08:00
[None][fix] Use fp32 for indexer weight_proj GEMM (#9243)
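The latest commit (#9243) moves the indexer's weight_proj GEMM to fp32. The commit message does not include the change itself, so here is a minimal, hypothetical PyTorch sketch of the general idea, assuming bf16 activations and an illustrative weight_proj tensor; the names and shapes are not TensorRT-LLM's API:

```python
# Hypothetical sketch: upcast a small projection GEMM to fp32 instead of
# computing it in bf16. Shapes and names are illustrative only.
import torch

torch.manual_seed(0)
hidden = torch.randn(4, 128, dtype=torch.bfloat16)       # bf16 activations
weight_proj = torch.randn(128, 8, dtype=torch.bfloat16)  # small weight projection

# bf16 GEMM: operands and result are rounded to bf16
out_bf16 = hidden @ weight_proj

# fp32 GEMM: upcast both operands, compute and keep the result in fp32
out_fp32 = hidden.float() @ weight_proj.float()

# The gap below is the rounding error that computing in fp32 avoids
print((out_bf16.float() - out_fp32).abs().max())
```

For a small GEMM like a per-head weight projection, the fp32 upcast costs little but removes bf16 rounding from a numerically sensitive step.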
Name                          Last commit message                                                                            Date
sparse/                       [None][fix] Use fp32 for indexer weight_proj GEMM (#9243)                                     2025-11-19 21:52:38 -08:00
test_attention_mla.py         [TRTLLM-8803][feat] Add rope and uk-bgemm overlap for mla generation (#8495)                  2025-11-06 17:39:57 +08:00
test_attention_no_cache.py    [None][ci] move unittests to sub-directories (#6635)                                          2025-08-20 05:42:22 -04:00
test_attention.py             [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)   2025-11-13 12:14:58 +08:00
test_flashinfer_attention.py  [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)   2025-11-13 12:14:58 +08:00
test_flashinfer_star_attn.py  [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)   2025-11-13 12:14:58 +08:00
test_vanilla_attention.py     [None][ci] move unittests to sub-directories (#6635)                                          2025-08-20 05:42:22 -04:00
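Three of the test files above were last touched by #6917, which fixes a precision issue caused by a KV layout mismatch between split and concat kernels. The diff is not shown here, so the following is only a self-contained, hypothetical sketch of how such a mismatch corrupts a cache, assuming illustrative "KV-major" versus "token-major" layouts rather than TensorRT-LLM's actual cache format:

```python
# Hypothetical illustration: a writer (concat) and a reader (split) must
# agree on the KV cache layout. Here the writer uses [2, tokens, heads, dim]
# ("KV-major") while a buggy reader assumes [tokens, 2, heads, dim]
# ("token-major"). Shapes and names are illustrative only.
import torch

tokens, heads, dim = 4, 2, 8
k = torch.arange(tokens * heads * dim, dtype=torch.float32).reshape(tokens, heads, dim)
v = -k  # make K and V easy to tell apart

# Writer stores the cache KV-major: [2, tokens, heads, dim]
cache = torch.stack([k, v], dim=0)

# Correct reader splits along the same axis and recovers K and V exactly
k_ok, v_ok = cache.unbind(dim=0)
assert torch.equal(k_ok, k) and torch.equal(v_ok, v)

# Buggy reader reinterprets the same memory as token-major,
# silently interleaving K and V values
wrong = cache.reshape(tokens, 2, heads, dim)
k_bad = wrong[:, 0]
print(torch.equal(k_bad, k))  # False: downstream attention sees mixed K/V
```

The point of the sketch is that a layout mismatch does not crash; it silently degrades accuracy, which is why the fix landed as a precision issue across the attention tests rather than as a hard failure.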