TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit 3f7cedec7c by Wanli Jiang, 2025-07-09 09:32:24 -07:00: Update transformers to 4.53.0 (#5747)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
File                 Last commit message                                                                           Date
__init__.py          feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438)   2025-05-02 13:25:30 +08:00
flashinfer.py        fix: Skip rope scaling for local layers in Gemma3 VLM (#5857)                                 2025-07-09 10:10:33 +08:00
interface.py         Update transformers to 4.53.0 (#5747)                                                         2025-07-09 09:32:24 -07:00
star_flashinfer.py   Remove dummy forward path (#3669)                                                             2025-04-18 16:17:50 +08:00
trtllm.py            [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615)                     2025-07-07 18:04:57 +08:00
utils.py             [feat] Integrate Hopper chunked attention kernels (#4330)                                     2025-05-22 17:10:57 -04:00
vanilla.py           fix: Investigate Gemma3 1B decoder output discrepancy (#5564)                                 2025-07-04 13:14:13 +08:00
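
The layout above suggests a pluggable attention-backend design: interface.py defines the shared contract, while flashinfer.py, star_flashinfer.py, trtllm.py, and vanilla.py supply concrete backends behind it. Below is a minimal sketch of that registry pattern under those assumptions; every name in it (AttentionBackend, register_backend, get_backend, VanillaAttention) is a hypothetical illustration, not the actual TensorRT-LLM API.

    # Hypothetical sketch of a pluggable attention-backend registry.
    # None of these names come from TensorRT-LLM itself.
    from abc import ABC, abstractmethod
    from typing import Dict, Type

    import torch


    class AttentionBackend(ABC):
        """Shared contract each backend module would implement (hypothetical)."""

        @abstractmethod
        def forward(self, q: torch.Tensor, k: torch.Tensor,
                    v: torch.Tensor) -> torch.Tensor:
            ...


    _BACKENDS: Dict[str, Type[AttentionBackend]] = {}


    def register_backend(name: str):
        """Decorator that records a backend class under a string key."""
        def wrap(cls: Type[AttentionBackend]) -> Type[AttentionBackend]:
            _BACKENDS[name] = cls
            return cls
        return wrap


    @register_backend("vanilla")
    class VanillaAttention(AttentionBackend):
        """Reference backend using plain PyTorch ops, mirroring vanilla.py's role."""

        def forward(self, q, k, v):
            scale = q.shape[-1] ** -0.5
            attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
            return attn @ v


    def get_backend(name: str) -> AttentionBackend:
        """Instantiate a registered backend by name."""
        return _BACKENDS[name]()


    # Usage: callers pick a backend by name rather than importing it directly.
    backend = get_backend("vanilla")
    q = k = v = torch.randn(1, 8, 16, 64)  # (batch, heads, seq, head_dim)
    out = backend.forward(q, k, v)

The point of the registry is that adding a new backend (for example an optimized FlashInfer path) only requires a new module plus one decorator, with no changes to call sites.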