TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit 1daa8c3232 by liji-nv, 2025-08-01 07:38:06 -04:00:
[https://nvbugs/5340941][https://nvbugs/5375785] - fix: Wrap attentio… (#6355)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
__init__.py feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) 2025-05-02 13:25:30 +08:00
flashinfer.py [https://nvbugs/5340941][https://nvbugs/5375785] - fix: Wrap attentio… (#6355) 2025-08-01 07:38:06 -04:00
interface.py feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353) 2025-07-30 09:20:16 -07:00
star_flashinfer.py Remove dummy forward path (#3669) 2025-04-18 16:17:50 +08:00
trtllm.py [https://nvbugs/5340941][https://nvbugs/5375785] - fix: Wrap attentio… (#6355) 2025-08-01 07:38:06 -04:00
utils.py [feat] Integrate Hopper chunked attention kernels (#4330) 2025-05-22 17:10:57 -04:00
vanilla.py fix: Investigate Gemma3 1B decoder output discrepancy (#5564) 2025-07-04 13:14:13 +08:00