TensorRT-LLM/tensorrt_llm/_torch/auto_deploy

Latest commit: 8e4320ede5 by Lucas Liebenwein, 2025-05-16 10:07:09 -04:00
[AutoDeploy] configurable cache resize (#4372)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
| Name | Last commit | Date |
| --- | --- | --- |
| `compile` | [AutoDeploy]feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240) | 2025-05-16 08:38:15 +08:00 |
| `custom_ops` | feat: [AutoDeploy] update rope matcher with minor variants (Deepseek) (#3638) | 2025-05-16 09:55:32 -04:00 |
| `distributed` | [AutoDeploy] Make all ranks agree on kv-cache size (#4007) | 2025-05-02 04:07:28 +08:00 |
| `models` | feat:[AutoDeploy] Update MoE pattern matcher to drop expert selection logic (#3283) | 2025-05-15 13:53:09 +08:00 |
| `shim` | [AutoDeploy] configurable cache resize (#4372) | 2025-05-16 10:07:09 -04:00 |
| `transformations` | [AutoDeploy] configurable cache resize (#4372) | 2025-05-16 10:07:09 -04:00 |
| `utils` | feat: [AutoDeploy] update rope matcher with minor variants (Deepseek) (#3638) | 2025-05-16 09:55:32 -04:00 |
| `__init__.py` | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |