TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: b618e1f55b by Jinyang Yuan (2025-05-17 13:30:55 +08:00)
perf: Eliminate the need for attention DP padding when possible (#3439)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Co-authored-by: raccoonliukai <raccoonliu@tencent.com>
Name            | Latest commit                                                                              | Date
compile         | [AutoDeploy]feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240) | 2025-05-16 08:38:15 +08:00
custom_ops      | perf: Eliminate the need for attention DP padding when possible (#3439)                    | 2025-05-17 13:30:55 +08:00
distributed     | perf: Eliminate the need for attention DP padding when possible (#3439)                    | 2025-05-17 13:30:55 +08:00
models          | feat:[AutoDeploy] Update MoE pattern matcher to drop expert selection logic (#3283)        | 2025-05-15 13:53:09 +08:00
shim            | [AutoDeploy] fix: proper process group clean up (#4373)                                    | 2025-05-16 12:35:25 -04:00
transformations | [AutoDeploy] eager pattern matcher new pattern (#4370)                                     | 2025-05-16 12:35:44 -04:00
utils           | feat: [AutoDeploy] update rope matcher with minor variants (Deepseek) (#3638)              | 2025-05-16 09:55:32 -04:00
__init__.py     | Update TensorRT-LLM (#2820)                                                                | 2025-02-25 21:21:49 +08:00