TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 9c0de251db by Mike Iovine, 2025-05-22 17:10:57 -04:00
[feat] Integrate Hopper chunked attention kernels (#4330)

* Integrate chunked attention kernels
* Fix cache key
* Fix lint

Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
__init__.py         feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438)  2025-05-02 13:25:30 +08:00
flashinfer.py       [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952)  2025-05-19 22:12:25 +08:00
interface.py        [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952)  2025-05-19 22:12:25 +08:00
star_flashinfer.py  Remove dummy forward path (#3669)  2025-04-18 16:17:50 +08:00
trtllm.py           [feat] Integrate Hopper chunked attention kernels (#4330)  2025-05-22 17:10:57 -04:00
utils.py            [feat] Integrate Hopper chunked attention kernels (#4330)  2025-05-22 17:10:57 -04:00
vanilla.py          Remove dummy forward path (#3669)  2025-04-18 16:17:50 +08:00
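
The layout suggests a pluggable-backend design: interface.py defines the common contract, trtllm.py, flashinfer.py, star_flashinfer.py, and vanilla.py provide concrete implementations, and utils.py resolves a backend by name. Below is a minimal, self-contained sketch of that pattern; the class names, the registry keys, and get_attention_backend are illustrative assumptions, not the module's actual API.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a pluggable attention-backend registry.
# All names here (AttentionBackend, get_attention_backend, the
# string keys) are assumptions for illustration only.

class AttentionBackend(ABC):
    """Common contract every backend module implements (cf. interface.py)."""

    @abstractmethod
    def forward(self, q, k, v):
        ...

class TrtllmAttention(AttentionBackend):  # cf. trtllm.py
    def forward(self, q, k, v):
        raise NotImplementedError("would call fused TRT-LLM kernels")

class VanillaAttention(AttentionBackend):  # cf. vanilla.py
    def forward(self, q, k, v):
        raise NotImplementedError("would run a pure-PyTorch reference path")

# Name-to-class registry; a resolver like this could live in utils.py.
_BACKENDS: dict[str, type[AttentionBackend]] = {
    "TRTLLM": TrtllmAttention,
    "VANILLA": VanillaAttention,
}

def get_attention_backend(name: str) -> type[AttentionBackend]:
    """Resolve a backend class by name, falling back to the default."""
    return _BACKENDS.get(name.upper(), TrtllmAttention)
```

Under this pattern, call sites depend only on the interface, so adding a new kernel integration (such as the Hopper chunked attention kernels from #4330) stays local to one backend module.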