TensorRT-LLM/tensorrt_llm/_torch
Latest commit: 9d64b6b890 by yuxianq
Cache sin cos in model instead of global LRU cache. (#3378)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-14 11:19:09 +08:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| attention_backend | Cache sin cos in model instead of global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00 |
| auto_deploy | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| compilation | feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371) | 2025-04-11 21:25:29 +08:00 |
| custom_ops | feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371) | 2025-04-11 21:25:29 +08:00 |
| models | Cache sin cos in model instead of global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00 |
| modules | refactor: Remove _pp_forward. (#3496) | 2025-04-14 09:49:44 +08:00 |
| peft | lora_tests (#3201) | 2025-04-09 18:06:52 +03:00 |
| pyexecutor | Cache sin cos in model instead of global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00 |
| speculative | fix the py_decoding_iter update in decoder. (#3297) | 2025-04-07 11:18:33 +08:00 |
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| autotuner.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| distributed.py | refactor: remove ParallelConfig in tensorrt_llm._torch.distributed module (#3370) | 2025-04-11 15:34:20 -07:00 |
| llm.py | test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069) | 2025-03-26 18:14:35 +08:00 |
| metadata.py | feat: no-cache attention in PyTorch workflow (#3085) | 2025-04-05 01:54:32 +08:00 |
| model_config.py | Add Llama 4 (#3302) | 2025-04-09 03:35:21 +08:00 |
| pipeline_interface.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| utils.py | Cache sin cos in model instead of global LRU cache. (#3378) | 2025-04-14 11:19:09 +08:00 |
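The headline commit (#3378) moves the rotary sin/cos cache from a global `functools.lru_cache` into state owned by the model itself. Below is a minimal sketch of that pattern, assuming the standard RoPE table construction; the class and parameter names (`RotaryEmbedding`, `max_positions`) are hypothetical, not TensorRT-LLM's actual API:

```python
import torch
from torch import nn


class RotaryEmbedding(nn.Module):
    """Sketch: sin/cos tables cached as module buffers rather than
    through a process-wide functools.lru_cache."""

    def __init__(self, head_dim: int, max_positions: int = 4096,
                 base: float = 10000.0):
        super().__init__()
        # Standard RoPE frequencies: one per pair of head dimensions.
        inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
        freqs = torch.outer(torch.arange(max_positions).float(), inv_freq)
        # Non-persistent buffers move with the model across devices and
        # are freed when the model is released, unlike a global cache.
        self.register_buffer("cos_cached", freqs.cos(), persistent=False)
        self.register_buffer("sin_cached", freqs.sin(), persistent=False)

    def forward(self, position_ids: torch.Tensor):
        return self.cos_cached[position_ids], self.sin_cached[position_ids]
```

Tying the cache's lifetime to the module also keeps per-model settings (dtype, device, max positions) from colliding in a shared cache key, which is one common motivation for moving off a global LRU cache.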