TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 9632dba02e
feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353)
Wanli Jiang, 2025-07-30 09:20:16 -07:00
File                 Last commit                                                                                  Date
__init__.py          feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator (#3438)    2025-05-02
flashinfer.py        fix: Flush stale PlanParams with custom attention mask (#6163)                                2025-07-21
interface.py         feat: TRTLLM-6450 update long rope for phi3.5/phi4-mini/phi4-mm (#6353)                      2025-07-30
star_flashinfer.py   Remove dummy forward path (#3669)                                                             2025-04-18
trtllm.py            [TRTLLM-6650][feat] Enhance beam search support with CUDA graph integration (#6217)          2025-07-24
utils.py             [feat] Integrate Hopper chunked attention kernels (#4330)                                     2025-05-22
vanilla.py           fix: Investigate Gemma3 1B decoder output discrepancy (#5564)                                 2025-07-04

Sketches of the techniques behind several of these entries follow below.
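The __init__.py entry points at a fused group_rms_norm kernel that normalizes several inputs in a single operator launch instead of one kernel per tensor. A minimal PyTorch sketch of the semantics, assuming the usual RMSNorm definition; the function name and signature here are illustrative, not the kernel's actual API:

```python
from typing import Sequence
import torch

def group_rms_norm(inputs: Sequence[torch.Tensor],
                   weights: Sequence[torch.Tensor],
                   eps: float = 1e-6) -> list[torch.Tensor]:
    """RMS-normalize each tensor over its last dim with its own weight.

    A fused kernel would do all of these in one launch; this reference
    loop only shows the math each input undergoes.
    """
    outputs = []
    for x, w in zip(inputs, weights):
        # y = x / sqrt(mean(x^2) + eps) * w
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
        outputs.append(x * rms * w)
    return outputs

# Example: normalize query and key projections in one call.
q, k = torch.randn(4, 128), torch.randn(4, 128)
q_n, k_n = group_rms_norm([q, k], [torch.ones(128), torch.ones(128)])
```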
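The interface.py entry (and the latest commit) updates the "longrope" scaling used by the Phi-3.5/Phi-4-mini family. A hedged sketch of that scheme as it is described for the Hugging Face Phi-3 configs: inverse RoPE frequencies are divided by per-dimension rescale factors (a "short" set inside the original context window, a "long" set beyond it), and cos/sin are multiplied by a magnitude correction. Names follow the HF config fields, not TensorRT-LLM's code:

```python
import math
import torch

def longrope_cos_sin(position_ids, head_dim, short_factor, long_factor,
                     base=10000.0, original_max_pos=4096, max_pos=131072):
    seq_len = int(position_ids.max()) + 1
    # Pick the rescale factors based on whether we exceed the
    # original training context; each list has head_dim // 2 entries.
    factors = long_factor if seq_len > original_max_pos else short_factor
    factors = torch.tensor(factors, dtype=torch.float32)
    inv_freq = 1.0 / (factors * base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    freqs = torch.outer(position_ids.float(), inv_freq)
    # Magnitude correction keeps attention scale stable when extended.
    scale = max_pos / original_max_pos
    mscale = 1.0 if scale <= 1.0 else math.sqrt(1 + math.log(scale) / math.log(original_max_pos))
    emb = torch.cat((freqs, freqs), dim=-1)
    return emb.cos() * mscale, emb.sin() * mscale

# Dummy factors; real models ship tuned per-dimension values.
cos, sin = longrope_cos_sin(torch.arange(8192), head_dim=96,
                            short_factor=[1.0] * 48, long_factor=[4.0] * 48)
```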
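For the trtllm.py beam-search/CUDA-graph entry, the CUDA-graph half follows the standard PyTorch capture-and-replay pattern: capture one fixed-shape decode step, then replay it each iteration with inputs refreshed in place. The toy `step` model and beam width below are stand-ins, not TensorRT-LLM's runner:

```python
import torch

@torch.no_grad()
def decode_with_graph():
    step = torch.nn.Linear(256, 256).cuda()        # stand-in decode step
    static_in = torch.zeros(8, 256, device="cuda") # beam width 8, shape fixed

    # Warm up on a side stream (required before capture).
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_out = step(static_in)
    torch.cuda.current_stream().wait_stream(s)

    # Capture one step into a graph; tensors become the graph's buffers.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_out = step(static_in)

    # Each decode iteration: refill the static input in place, replay.
    for _ in range(4):
        static_in.copy_(torch.randn(8, 256, device="cuda"))
        g.replay()   # static_out now holds this step's result
    return static_out

if torch.cuda.is_available():
    decode_with_graph()
```

The constraint this pattern imposes is why beam-search integration is nontrivial: every shape in the captured region, including the beam dimension, must stay constant across replays.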
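The utils.py entry integrates chunked attention kernels, which stream over the KV cache in fixed-size chunks using the online-softmax recurrence. A single-head PyTorch reference of that recurrence (chunk size and shapes are illustrative; the real Hopper kernels are fused CUDA):

```python
import torch

def chunked_attention(q, k, v, chunk=256):
    # q: [Lq, d]; k, v: [Lk, d]. Processes K/V `chunk` rows at a time,
    # keeping a running row-max `m` and softmax normalizer `s` per query.
    scale = q.shape[-1] ** -0.5
    m = torch.full((q.shape[0], 1), float("-inf"))
    s = torch.zeros(q.shape[0], 1)
    acc = torch.zeros_like(q)
    for i in range(0, k.shape[0], chunk):
        logits = (q @ k[i:i + chunk].T) * scale               # [Lq, c]
        m_new = torch.maximum(m, logits.max(-1, keepdim=True).values)
        p = torch.exp(logits - m_new)                         # safe exp
        corr = torch.exp(m - m_new)                           # rescale old state
        s = s * corr + p.sum(-1, keepdim=True)
        acc = acc * corr + p @ v[i:i + chunk]
        m = m_new
    return acc / s

q, k, v = torch.randn(4, 64), torch.randn(1024, 64), torch.randn(1024, 64)
out = chunked_attention(q, k, v)
# Matches the unchunked reference within float tolerance.
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(out, ref, atol=1e-4)
```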