TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: 4b82b8b4c7 by Enwei Zhu, 2025-06-17 15:23:24 +08:00
[TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
| Name | Last commit | Date |
|------|-------------|------|
| compile | [AutoDeploy]feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240) | 2025-05-16 08:38:15 +08:00 |
| custom_ops | [TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215) | 2025-06-17 15:23:24 +08:00 |
| distributed | Use backend to replace macro to control enablement of MNNVL all reduce (#4635) | 2025-06-12 11:22:49 +08:00 |
| models | chore: Change the type annotations of input_ids and position_ids to int32. (#4632) | 2025-06-07 16:10:47 +08:00 |
| shim | [AutoDeploy] _AutoDeployLlmArgs as primary config object (#4891) | 2025-06-05 17:20:55 +08:00 |
| transformations | fix: build_config in TorchLlmArgs and avoid arbitrary args (#4972) | 2025-06-15 17:51:56 -07:00 |
| utils | [nvbugs/5331013] fix AutoDeploy for PyTorch 25.05 dependency upgrade (#5106) | 2025-06-12 13:07:27 +08:00 |
| __init__.py | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |