TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit 1fbbb1f3cd by Taylor Yeonbok Lee, 2026-01-23 15:22:54 -08:00:
[None][feat] AutoDeploy: Enhance memory consumption for MoE fusion transform (#10772)
Signed-off-by: Taylor Yeonbok Lee <249374542+taylor-yb-lee@users.noreply.github.com>
Entry | Last commit | Date
compile/ | [None][feat] AutoDeploy: prepare_metadata revisited (#9764) | 2025-12-12 20:14:14 +08:00
config/ | [None][fix] AutoDeploy: Fix the nvfp4 fused_moe (#10727) | 2026-01-16 12:04:40 -08:00
custom_ops/ | [None][chore] NVFP4 MoE - Move weights transformation to fusion phase… (#10803) | 2026-01-22 13:08:05 +02:00
distributed/ | [None][refactor] Unify the usage of MPIDist and TorchDist. (#10380) | 2026-01-14 14:05:47 +08:00
export/ | [https://nvbugs/5732942][fix] AutoDeploy: handle transformers 4.57.1 upgrade fixes (#10466) | 2026-01-06 19:55:49 -05:00
models/ | [#8241][feat] Support model_kwargs for pytorch backend (#10351) | 2026-01-21 20:51:38 -08:00
shim/ | [None][feat] Auto download speculative models from HF for pytorch backend, add speculative_model field alias (#10099) | 2026-01-14 21:06:07 -08:00
transform/ | [None][feat] AutoDeploy: Enhance memory consumption for MoE fusion transform (#10772) | 2026-01-23 15:22:54 -08:00
utils/ | [https://nvbugs/5819002][fix] fix sharding tests (#10775) | 2026-01-22 20:02:48 +01:00
__init__.py | [AutoDeploy] merge feat/ad-2025-07-07 (#6196) | 2025-07-23 05:11:04 +08:00
llm_args.py | [#9306][refactor] Refactor AutoDeployConfig into LlmArgs (#10613) | 2026-01-22 16:02:49 -05:00
llm.py | [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) | 2025-11-06 22:37:03 -08:00
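For orientation, the sketch below shows one way the entry point in this package might be driven from Python. It is a minimal, hypothetical example, assuming __init__.py re-exports the LLM wrapper defined in llm.py and that the constructor accepts a Hugging Face model id; the exact keyword arguments are assumptions and may differ, especially after the AutoDeployConfig-to-LlmArgs refactor (#10613).

```python
# Hypothetical sketch of driving the AutoDeploy backend via the LLM wrapper
# in llm.py. The import path and keyword arguments are assumptions, not a
# documented contract; consult the package's own examples for the real API.
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch.auto_deploy import LLM  # assumes __init__.py re-exports LLM

# Build the high-level LLM object backed by AutoDeploy's export/transform/compile pipeline.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # any HF model card or local checkpoint
    world_size=2,  # assumed keyword for the tensor-parallel world size
)

# Standard tensorrt_llm generate() call with sampling parameters.
outputs = llm.generate(
    ["What is the capital of France?"],
    sampling_params=SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```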