TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: a4c3359513 by yuxianq, 2025-05-12 23:25:54 +08:00
fix: Reset planned states to avoid memory leak in TrtllmAttentionWrapper (#4227)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
__init__.py          feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438)   2025-05-02 13:25:30 +08:00
flashinfer.py        Fix fp8 kvcache (#3877)                                                                      2025-04-29 10:31:10 +08:00
interface.py         fix: change the seq_lens sync copy to an async one (#3786)                                   2025-04-29 23:56:49 +08:00
star_flashinfer.py   Remove dummy forward path (#3669)                                                            2025-04-18 16:17:50 +08:00
trtllm.py            fix: Reset planned states to avoid memory leak in TrtllmAttentionWrapper (#4227)             2025-05-12 23:25:54 +08:00
utils.py             feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438)   2025-05-02 13:25:30 +08:00
vanilla.py           Remove dummy forward path (#3669)                                                            2025-04-18 16:17:50 +08:00