TensorRT-LLM/tensorrt_llm/_torch/attention_backend
Latest commit: 9d64b6b890 by yuxianq (Yuxian Qiu), 2025-04-14 11:19:09 +08:00
"Cache sin cos in model instead of global LRU cache." (#3378)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
(A hedged sketch of what this change amounts to follows the file listing below.)
File                  Last commit                                                                          Date
__init__.py           Update (#2978)                                                                       2025-03-23 16:39:35 +08:00
flashinfer.py         Cache sin cos in model instead of global LRU cache. (#3378)                          2025-04-14 11:19:09 +08:00
interface.py          Cache sin cos in model instead of global LRU cache. (#3378)                          2025-04-14 11:19:09 +08:00
star_flashinfer.py    refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370)   2025-04-11 15:34:20 -07:00
trtllm.py             Cache sin cos in model instead of global LRU cache. (#3378)                          2025-04-14 11:19:09 +08:00
utils.py              chore: Refine attention backend interface. (#3271)                                   2025-04-09 02:34:53 +08:00
vanilla.py            chore: Refine attention backend interface. (#3271)                                   2025-04-09 02:34:53 +08:00
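Most of the files above were last touched by #3378, whose title says the rotary sin/cos tables are now cached on the model instead of in a global LRU cache. The sketch below only illustrates that general pattern and is not TensorRT-LLM's actual code: the class `RotaryEmbedding`, the helper `_global_rope_cache`, and all parameter names are hypothetical, and the real attention backends may compute and store these tables differently.

```python
import torch
from functools import lru_cache


# Before (hypothetical): a process-wide LRU cache keyed on shape/base.
# Entries are shared across all model instances and outlive any one model.
@lru_cache(maxsize=16)
def _global_rope_cache(max_seq_len: int, head_dim: int, base: float = 10000.0):
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    t = torch.arange(max_seq_len).float()
    freqs = torch.outer(t, inv_freq)
    return freqs.cos(), freqs.sin()


# After (hypothetical): the cos/sin tables are registered as buffers on the
# module, so they are owned by the model and follow .to(device/dtype) calls.
class RotaryEmbedding(torch.nn.Module):
    def __init__(self, max_seq_len: int, head_dim: int, base: float = 10000.0):
        super().__init__()
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        t = torch.arange(max_seq_len).float()
        freqs = torch.outer(t, inv_freq)
        self.register_buffer("cos_cached", freqs.cos(), persistent=False)
        self.register_buffer("sin_cached", freqs.sin(), persistent=False)

    def forward(self, positions: torch.Tensor):
        # Look up the precomputed rows for the requested token positions.
        return self.cos_cached[positions], self.sin_cached[positions]
```

The usual motivation for such a move is lifetime and placement: buffers registered on the module are released together with the model and move with it across devices and dtypes, whereas tensors held in a process-wide `functools.lru_cache` can stay alive on a device after the model that created them is gone.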