TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit 6adccd758d by JunyiXu-nv (2025-10-29 09:43:30 +01:00):
[https://nvbugs/5606268][fix] Separate cuda graph workspace to prevent IMA (#8685)
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
__init__.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
flashinfer.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
interface.py [TRTLLM-7385][feat] Optimize Qwen2/2.5-VL performance (#7250) 2025-09-22 03:40:02 -07:00
star_flashinfer.py Remove dummy forward path (#3669) 2025-04-18 16:17:50 +08:00
trtllm.py [https://nvbugs/5606268][fix] Separate cuda graph workspace to prevent IMA (#8685) 2025-10-29 09:43:30 +01:00
utils.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
vanilla.py [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) 2025-08-05 07:47:41 +00:00