TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit 9106b5d9a5 by brb-nv (2025-07-07 13:36:23 +08:00):
fix: Skip rope scaling for local layers in Gemma3 VLM (#5773)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
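The fix in this latest commit concerns Gemma3's interleaved attention: global layers apply RoPE with linear position scaling for long context, while local sliding-window layers should keep plain, unscaled RoPE. Below is a minimal sketch of the intended behavior, assuming the publicly documented Gemma3 configuration (base frequency 10k for local layers, 1M with a linear scaling factor of 8 for global layers); the function names and the head_dim default are illustrative, not the actual code in flashinfer.py.

```python
import torch

def rope_inv_freq(head_dim: int, theta: float, scaling_factor: float = 1.0) -> torch.Tensor:
    """Inverse RoPE frequencies, optionally linearly scaled for long context."""
    inv_freq = 1.0 / (theta ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    return inv_freq / scaling_factor

def gemma3_layer_inv_freq(is_local: bool, head_dim: int = 256) -> torch.Tensor:
    """Per-layer RoPE frequencies for Gemma3 (illustrative sketch)."""
    if is_local:
        # Local sliding-window layers: short base frequency, NO rope scaling.
        # Applying the global scaling here is the kind of bug #5773 fixes.
        return rope_inv_freq(head_dim, theta=10_000.0)
    # Global layers: long base frequency with linear scaling (factor 8).
    return rope_inv_freq(head_dim, theta=1_000_000.0, scaling_factor=8.0)
```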
File                Last commit                                                                                   Date
__init__.py         feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438)   2025-05-02 13:25:30 +08:00
flashinfer.py       fix: Skip rope scaling for local layers in Gemma3 VLM (#5773)                                 2025-07-07 13:36:23 +08:00
interface.py        ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951)                                       2025-06-09 19:04:11 +08:00
star_flashinfer.py  Remove dummy forward path (#3669)                                                             2025-04-18 16:17:50 +08:00
trtllm.py           [feat] Piecewise cuda graph support for MLA (#4467)                                           2025-06-17 18:58:38 +08:00
utils.py            [feat] Integrate Hopper chunked attention kernels (#4330)                                     2025-05-22 17:10:57 -04:00
vanilla.py          fix: Investigate Gemma3 1B decoder output discrepancy (#5564)                                 2025-07-03 09:55:25 +08:00
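For orientation, the file split follows a common backend-interface pattern: interface.py declares the shared abstraction, while flashinfer.py, star_flashinfer.py, trtllm.py, and vanilla.py provide concrete backends behind it. The sketch below shows the general shape of such an interface; the class and method names are assumptions for illustration, not the actual TensorRT-LLM API.

```python
from abc import ABC, abstractmethod
import torch

class AttentionBackendSketch(ABC):
    """Common interface each concrete backend would implement (illustrative)."""

    @abstractmethod
    def forward(self, q: torch.Tensor, k: torch.Tensor,
                v: torch.Tensor) -> torch.Tensor:
        """Return the attention output for query/key/value projections."""

class VanillaBackendSketch(AttentionBackendSketch):
    """Reference backend: plain scaled dot-product attention."""

    def forward(self, q, k, v):
        # Shapes: (batch, num_heads, seq_len, head_dim)
        return torch.nn.functional.scaled_dot_product_attention(q, k, v)

# Usage: backends are interchangeable behind the shared interface.
backend: AttentionBackendSketch = VanillaBackendSketch()
q = k = v = torch.randn(1, 8, 16, 64)
out = backend.forward(q, k, v)  # (1, 8, 16, 64)
```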