TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 0b748d5bba by Chenghao Zhang — [None][chore] update flashinfer to 0.6.0 (#10522), 2026-01-16 16:22:06 -05:00
| Name | Latest commit | Date |
|---|---|---|
| sparse/ | [TRTLLM-10309] [feat] Optimize qk rope/nope concat for DSA (#10571) | 2026-01-09 09:50:57 -05:00 |
| __init__.py | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00 |
| flashinfer.py | [None][chore] update flashinfer to 0.6.0 (#10522) | 2026-01-16 16:22:06 -05:00 |
| interface.py | [None][feat] Support new Transformers RoPE configuration format (#10636) | 2026-01-14 19:41:27 +09:00 |
| star_flashinfer.py | [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917) | 2025-11-13 12:14:58 +08:00 |
| trtllm.py | [https://nvbugs/5779534][fix] fix buffer reuse for CUDA graph attention metadata (#10393) | 2026-01-05 09:43:44 +08:00 |
| utils.py | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| vanilla.py | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00 |