TensorRT-LLM/tests/unittest/_torch/attention/sparse
Latest commit: b10137fdd5 by Chang Liu, 2025-11-26 16:38:25 +08:00
[None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (#9376)
Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
File                      | Last commit                                                                                              | Date
test_dsa_indexer.py       | [None][feat] Support MLA chunked prefill for DeepSeek V3.2 model (#9376)                                 | 2025-11-26 16:38:25 +08:00
test_flash_mla.py         | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405)         | 2025-10-24 13:40:41 -04:00
test_rocketkv.py          | [None] [feat] Use triton kernels for RocketKV prediction module (#8682)                                  | 2025-11-13 18:51:09 -08:00
test_sparse_mla_forward.py| [None][fix] Use fp32 for indexer weight_proj GEMM (#9243)                                                | 2025-11-19 21:52:38 -08:00
test_triton_bmm.py        | [None] [feat] Use triton kernels for RocketKV prediction module (#8682)                                  | 2025-11-13 18:51:09 -08:00
test_triton_topk.py       | [None] [feat] Use triton kernels for RocketKV prediction module (#8682)                                  | 2025-11-13 18:51:09 -08:00
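
These files follow the repository's pytest-based unittest layout, so they can be collected and run with pytest from the repository root. The sketch below is a minimal, hedged example of invoking them programmatically; the directory path comes from this listing, while the "-k" filter expression is only an illustrative way to select the MLA/indexer tests, not a documented convention.

    # Minimal sketch: run the sparse-attention unit tests with pytest.
    # Assumes pytest and the TensorRT-LLM test dependencies are installed.
    import pytest

    if __name__ == "__main__":
        pytest.main([
            "tests/unittest/_torch/attention/sparse",  # directory from this listing
            "-k", "mla or indexer",                    # hypothetical filter; drop it to run every test here
            "-v",                                      # verbose per-test output
        ])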