TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 4931c5eb3a by Fanrong Li, 2026-01-05 16:43:42 +08:00
[None][feat] update deepgemm to the DeepGEMM/nv_dev branch (#9898)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Name               | Last commit | Date
sparse/            | [None][feat] update deepgemm to the DeepGEMM/nv_dev branch (#9898) | 2026-01-05 16:43:42 +08:00
__init__.py        | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00
flashinfer.py      | [https://nvbugs/5779534][fix] fix buffer reuse for CUDA graph attention metadata (#10393) | 2026-01-05 09:43:44 +08:00
interface.py       | [TRTLLM-7735][feat] Attention NVFP4 out support for torch compile (#9740) | 2025-12-27 00:07:20 +08:00
star_flashinfer.py | [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917) | 2025-11-13 12:14:58 +08:00
trtllm.py          | [https://nvbugs/5779534][fix] fix buffer reuse for CUDA graph attention metadata (#10393) | 2026-01-05 09:43:44 +08:00
utils.py           | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00
vanilla.py         | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) | 2025-10-14 08:23:16 -07:00
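
For orientation, the layout above suggests a pluggable design: interface.py defines the shared attention interface, while trtllm.py, flashinfer.py, star_flashinfer.py, and vanilla.py each provide a concrete backend, with utils.py holding helpers and sparse/ holding the sparse-attention variants (e.g. RocketKV). Below is a minimal sketch of how a name-to-backend lookup over these modules might work; the class names and the get_attention_backend helper shown here are assumptions for illustration, not confirmed TensorRT-LLM API:

    # Illustrative sketch only: the module paths follow the files listed
    # above, but the class names and this helper are assumptions, not
    # TensorRT-LLM's confirmed public API.
    from importlib import import_module

    # Candidate backend registry: backend name -> (module path, assumed class).
    _BACKENDS = {
        "TRTLLM": ("tensorrt_llm._torch.attention_backend.trtllm", "TrtllmAttention"),
        "FLASHINFER": ("tensorrt_llm._torch.attention_backend.flashinfer", "FlashInferAttention"),
        "VANILLA": ("tensorrt_llm._torch.attention_backend.vanilla", "VanillaAttention"),
    }

    def get_attention_backend(name: str):
        """Resolve a backend name to its attention class (hypothetical helper)."""
        module_name, class_name = _BACKENDS[name.upper()]
        # Import lazily so unused backends (and their deps) are never loaded.
        return getattr(import_module(module_name), class_name)

    # Usage, assuming a TensorRT-LLM install that provides these modules:
    # backend_cls = get_attention_backend("TRTLLM")

Lazy importing is the natural choice here, since backends such as flashinfer.py pull in optional third-party dependencies that may not be installed alongside every build.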