TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 030598a497 (peaceh-nv, 2025-08-25): [https://nvbugs/5448426][fix] Fix illegal memory access in cuda graph (#7127)
Name                 Last commit                                                                                         Date
__init__.py          feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438)          2025-05-02
flashinfer.py        [https://nvbugs/5410391][bug] Support to share device buffers in attention meta (#6557)             2025-08-22
interface.py         [https://nvbugs/5410391][bug] Support to share device buffers in attention meta (#6557)             2025-08-22
star_flashinfer.py   Remove dummy forward path (#3669)                                                                   2025-04-18
trtllm.py            [https://nvbugs/5448426][fix] Fix illegal memory access in cuda graph (#7127)                       2025-08-25
utils.py             [feat] Integrate Hopper chunked attention kernels (#4330)                                           2025-05-22
vanilla.py           [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379)   2025-08-05
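The file layout suggests the usual interface-plus-implementations pattern: interface.py appears to define a shared attention-backend contract, with flashinfer.py, star_flashinfer.py, trtllm.py, and vanilla.py as concrete backends selected at runtime. The sketch below illustrates that pattern only; AttentionBackend, AttentionMetadata, and get_backend are hypothetical names, not the repository's actual API.

```python
# Minimal sketch of an interface-plus-implementations backend layout.
# All names here (AttentionBackend, AttentionMetadata, get_backend) are
# illustrative assumptions, not TensorRT-LLM's real classes or signatures.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class AttentionMetadata:
    """Per-step metadata a runner would share across layers (hypothetical)."""
    seq_lens: list[int] = field(default_factory=list)
    max_seq_len: int = 0


class AttentionBackend(ABC):
    """Contract each backend module (vanilla, flashinfer, trtllm) would meet."""

    @abstractmethod
    def forward(self, q, k, v, metadata: AttentionMetadata) -> str:
        """Run attention for one batch described by `metadata`."""


class VanillaAttention(AttentionBackend):
    """Reference backend, analogous in spirit to vanilla.py."""

    def forward(self, q, k, v, metadata: AttentionMetadata) -> str:
        # A real backend would dispatch to a fused kernel; this stub only
        # reports what it was asked to compute.
        return f"vanilla attention over {len(metadata.seq_lens)} sequences"


# Registry mapping backend names to implementations, mirroring how a
# runner might pick flashinfer vs. trtllm vs. vanilla from config.
BACKENDS: dict[str, type[AttentionBackend]] = {"vanilla": VanillaAttention}


def get_backend(name: str) -> AttentionBackend:
    """Instantiate a registered backend by name (hypothetical helper)."""
    return BACKENDS[name]()


if __name__ == "__main__":
    meta = AttentionMetadata(seq_lens=[8, 16], max_seq_len=16)
    print(get_backend("vanilla").forward(None, None, None, meta))
```

The shared-metadata dataclass echoes the "share device buffers in attention meta" fix noted in the commit log above, though the real metadata structure is certainly richer than this stub.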