TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Neta Zmora 028fc877a5
[#9096][feature] Auto Deploy: configurable fused MoE backend (#9194)
Allow configuring Auto Deploy's MoE/FP8-MoE backend from an external YAML config file.

Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
2025-11-19 21:50:22 -08:00
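
A minimal sketch of what such an external YAML override could look like and how it might be parsed before being merged into Auto Deploy's llm_args. The `transforms.fuse_moe.backend` key path and the backend names shown are illustrative assumptions, not the confirmed schema; the actual field names live in the config/ and transform/ directories touched by #9194.

```python
# Sketch only: loading a hypothetical Auto Deploy YAML override that selects
# the fused-MoE backend. Key names and backend values below are assumptions.
import yaml

AD_OVERRIDES = """
transforms:
  fuse_moe:            # hypothetical transform name
    backend: triton    # hypothetical choice, e.g. torch / triton / trtllm
"""

def load_ad_overrides(text: str) -> dict:
    """Parse YAML overrides that would be merged into Auto Deploy's llm_args."""
    return yaml.safe_load(text)

if __name__ == "__main__":
    print(load_ad_overrides(AD_OVERRIDES))
```

If the real schema differs, the same idea applies: the override file is parsed and merged wherever Auto Deploy accepts llm_args overrides (see llm_args.py below).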
compile [None][feat] Autodeploy add triton configs and optimize mamba prefill (#9083) 2025-11-13 19:15:43 -08:00
config [#9096][feature] Auto Deploy: configurable fused MoE backend (#9194) 2025-11-19 21:50:22 -08:00
custom_ops [#9096][feature] Auto Deploy: configurable fused MoE backend (#9194) 2025-11-19 21:50:22 -08:00
distributed [#9152][fix] AutoDeploy fused_allreduce_residual_rmsnorm to support demollm mode (#9197) 2025-11-18 22:15:29 +02:00
export [#8924][fix] Fix AutoDeploy pattern matcher for torch 2.9 (#8920) 2025-11-05 13:29:20 -08:00
models [#9098][feat] Simple sharding latent experts (#9099) 2025-11-18 21:14:22 -05:00
shim [#9237][feat] enable iter stats in autodeploy (#9278) 2025-11-19 19:29:29 +01:00
transform [#9096][feature] Auto Deploy: configurable fused MoE backend (#9194) 2025-11-19 21:50:22 -08:00
utils [None][autodeploy] fix weight extraction for graph based quantized checkpoints (#9109) 2025-11-13 13:14:24 -08:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [#9237][feat] enable iter stats in autodeploy (#9278) 2025-11-19 19:29:29 +01:00
llm.py [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) 2025-11-06 22:37:03 -08:00