TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit 39076410a8 (Yukun He, 2025-11-24 12:16:32 +08:00): [https://nvbugs/5676748][fix] Fix mismatched nvfp4 gemm sf shape. (#9336)
Name                Last commit date            Last commit message
sparse/             2025-11-19 21:52:38 -08:00  [None][fix] Use fp32 for indexer weight_proj GEMM (#9243)
__init__.py         2025-10-14 08:23:16 -07:00  [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)
flashinfer.py       2025-11-13 12:14:58 +08:00  [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)
interface.py        2025-11-17 09:01:53 +08:00  [TRTLLM-8778][feat] Add tree attention support for blackwell arch (#8975)
star_flashinfer.py  2025-11-13 12:14:58 +08:00  [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)
trtllm.py           2025-11-24 12:16:32 +08:00  [https://nvbugs/5676748][fix] Fix mismatched nvfp4 gemm sf shape. (#9336)
utils.py            2025-10-24 13:40:41 -04:00  [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405)
vanilla.py          2025-10-14 08:23:16 -07:00  [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)