TensorRT-LLM/tests/unittest/_torch/attention
Latest commit: e3f27e06c7 by heyuhhh — [None][chore] Waive star attention unittests (#10439), 2026-01-16 10:12:32 +08:00
Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
Name                                        Last commit                                                                                   Date
sparse/                                     [None][chore] Waive star attention unittests (#10439)                                         2026-01-16 10:12:32 +08:00
test_attention_mla.py                       [None][feat] update trtllm-gen to support groupsTokensHeadsQ (#10261)                         2026-01-15 02:24:25 -05:00
test_attention_no_cache.py                  [None][ci] move unittests to sub-directories (#6635)                                          2025-08-20 05:42:22 -04:00
test_attention.py                           [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)   2025-11-13 12:14:58 +08:00
test_flashinfer_attention.py                [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)   2025-11-13 12:14:58 +08:00
test_trtllm_flashinfer_symbol_collision.py  [#9717][chore] Refactor MoE code to use enums (#9910)                                         2025-12-22 15:14:56 -05:00
test_vanilla_attention.py                   [None][ci] move unittests to sub-directories (#6635)                                          2025-08-20 05:42:22 -04:00