TensorRT-LLM/tests/unittest/_torch/attention

Latest commit: 8f144d9282 by Fanrong Li, 2025-12-15 12:42:25 +08:00
[TRTLLM-9416][feat] Skip DS-v3.2 indexer MQA and Top-K for short sequences. (#9524)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| sparse/ | [TRTLLM-9416][feat] Skip DS-v3.2 indexer MQA and Top-K for short sequences. (#9524) | 2025-12-15 12:42:25 +08:00 |
| test_attention_mla.py | [TRTLLM-8803][feat] Add rope and uk-bgemm overlap for mla generation (#8495) | 2025-11-06 17:39:57 +08:00 |
| test_attention_no_cache.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00 |
| test_attention.py | [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917) | 2025-11-13 12:14:58 +08:00 |
| test_flashinfer_attention.py | [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917) | 2025-11-13 12:14:58 +08:00 |
| test_flashinfer_star_attn.py | [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917) | 2025-11-13 12:14:58 +08:00 |
| test_trtllm_flashinfer_symbol_collision.py | [None][fix] Introduce inline namespace to avoid symbol collision (#9541) | 2025-12-12 23:32:15 +08:00 |
| test_vanilla_attention.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00 |