TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 7c4344b92e by Ziyi Xiong (2025-11-18 15:41:56 -05:00)
[https://nvbugs/5590408][fix] Exclude num of draft tokens from mMaxSeqLenKv (#9210)
Signed-off-by: ziyixiong-nv <219238287+ziyixiong-nv@users.noreply.github.com>
sparse/              [None][fix] DeepSeek V3.2 indexer RoPE fix (#9232)                                                 2025-11-18 20:35:27 +08:00
__init__.py          [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)  2025-10-14 08:23:16 -07:00
flashinfer.py        [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)        2025-11-13 12:14:58 +08:00
interface.py         [TRTLLM-8778][feat] Add tree attention support for blackwell arch (#8975)                          2025-11-17 09:01:53 +08:00
star_flashinfer.py   [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)        2025-11-13 12:14:58 +08:00
trtllm.py            [https://nvbugs/5590408][fix] Exclude num of draft tokens from mMaxSeqLenKv (#9210)                2025-11-18 15:41:56 -05:00
utils.py             [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405)   2025-10-24 13:40:41 -04:00
vanilla.py           [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)  2025-10-14 08:23:16 -07:00
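The files above make up the PyTorch attention-backend package: interface.py appears to define the common backend interface, trtllm.py, flashinfer.py, star_flashinfer.py, and vanilla.py provide concrete implementations, and sparse/ hosts the sparse-attention framework (RocketKV, the DeepSeek V3.2 indexer). For orientation, here is a minimal sketch of how a caller might dispatch to one of these modules. The load_attention_backend helper and the registry keys are assumptions made for illustration from the file layout, not the package's confirmed API; the real selection logic lives in utils.py and interface.py.

    # Hypothetical sketch: dispatch to one of the backend modules listed above.
    # The helper name and the backend-name strings are assumptions for
    # illustration; consult utils.py / interface.py for the real entry points.
    from importlib import import_module

    _PKG = "tensorrt_llm._torch.attention_backend"

    # Assumed mapping from a backend name to the module that implements it.
    _BACKEND_MODULES = {
        "TRTLLM": f"{_PKG}.trtllm",                    # default TensorRT-LLM kernels
        "FLASHINFER": f"{_PKG}.flashinfer",            # FlashInfer-based attention
        "STAR_FLASHINFER": f"{_PKG}.star_flashinfer",  # Star Attention variant
        "VANILLA": f"{_PKG}.vanilla",                  # reference PyTorch implementation
    }

    def load_attention_backend(name: str = "TRTLLM"):
        """Import and return the module implementing the named backend."""
        try:
            return import_module(_BACKEND_MODULES[name.upper()])
        except KeyError:
            raise ValueError(f"Unknown attention backend: {name!r}") from None

A registry of this shape keeps backend selection a one-line string lookup, so heavyweight kernel modules are only imported when the corresponding backend is actually requested.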