TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 6e470aab72 by heyuhhh, [None] [feat] Optimize the algorithm part of RocketKV (#9333), 2025-12-01 09:04:09 +08:00
File                 | Last commit                                                                                         | Date
sparse/              | [None] [feat] Optimize the algorithm part of RocketKV (#9333)                                      | 2025-12-01 09:04:09 +08:00
__init__.py          | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)  | 2025-10-14 08:23:16 -07:00
flashinfer.py        | [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)        | 2025-11-13 12:14:58 +08:00
interface.py         | [TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586)                                    | 2025-11-25 09:40:55 -05:00
star_flashinfer.py   | [#6507][fix] Fix precision issue due to KV layout mismatch for split/concat kernels (#6917)        | 2025-11-13 12:14:58 +08:00
trtllm.py            | [TRTLLM-5971][feat] Integrate helix parallelism (#9342)                                            | 2025-11-29 15:17:30 -08:00
utils.py             | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405)   | 2025-10-24 13:40:41 -04:00
vanilla.py           | [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086)  | 2025-10-14 08:23:16 -07:00