TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 3209b31665 by brb-nv
feat: Custom masking utils for Gemma3 VLM (#5853)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-07-10 06:18:04 +09:00
File               | Last commit                                                                                 | Date
__init__.py        | feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) | 2025-05-02 13:25:30 +08:00
flashinfer.py      | feat: Custom masking utils for Gemma3 VLM (#5853)                                           | 2025-07-10 06:18:04 +09:00
interface.py       | feat: Custom masking utils for Gemma3 VLM (#5853)                                           | 2025-07-10 06:18:04 +09:00
star_flashinfer.py | Remove dummy forward path (#3669)                                                           | 2025-04-18 16:17:50 +08:00
trtllm.py          | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615)                   | 2025-07-07 18:04:57 +08:00
utils.py           | [feat] Integrate Hopper chunked attention kernels (#4330)                                   | 2025-05-22 17:10:57 -04:00
vanilla.py         | fix: Investigate Gemma3 1B decoder output discrepancy (#5564)                               | 2025-07-04 13:14:13 +08:00