TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
William Zhang 2dd3ebf037
[#9150][feat] Add code for nano v3 to custom implementation in AD (#9465)
* Why?

We would like to show an alternative to monkey-patching in AutoDeploy.

* What?

This commit builds on the existing custom model implementation for
NemotronH and adds the pieces needed for MoE layers (see the sketch below).

Part of #9150.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-12-02 08:56:44 -08:00
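
For readers unfamiliar with what "the pieces needed for MoE layers" usually amount to, here is a minimal sketch of a top-k routed mixture-of-experts block in plain PyTorch. It is an illustration only, not the NemotronH/nano v3 code added in #9465; the class name `SimpleMoE` and every parameter name in it are hypothetical, and the per-expert Python loop is for clarity, not performance.

```python
# Minimal sketch of a top-k routed MoE block in plain PyTorch.
# Hypothetical illustration only; names and shapes are assumptions,
# not the NemotronH / nano v3 implementation from this commit.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleMoE(nn.Module):
    def __init__(self, hidden_size: int, ffn_size: int, num_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        # Router producing one logit per expert for every token.
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        # Each expert is a small two-layer MLP (up / down projection).
        self.up = nn.ModuleList([nn.Linear(hidden_size, ffn_size, bias=False) for _ in range(num_experts)])
        self.down = nn.ModuleList([nn.Linear(ffn_size, hidden_size, bias=False) for _ in range(num_experts)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_tokens, hidden_size]
        logits = self.router(x)
        # Keep only the top_k experts per token and renormalize their weights.
        weights, experts = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in range(len(self.up)):
                mask = experts[:, k] == e
                if mask.any():
                    # Route the selected tokens through expert e and scale by its weight.
                    h = F.relu(self.up[e](x[mask]))
                    out[mask] += weights[mask, k : k + 1] * self.down[e](h)
        return out


if __name__ == "__main__":
    moe = SimpleMoE(hidden_size=64, ffn_size=128, num_experts=4, top_k=2)
    tokens = torch.randn(10, 64)
    print(moe(tokens).shape)  # torch.Size([10, 64])
```
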
Name    Last commit    Last updated
compile [None][feat] Autodeploy add triton configs and optimize mamba prefill (#9083) 2025-11-13 19:15:43 -08:00
config [#9198][feat] Refactor dist ops in AutoDeploy (#9301) 2025-12-02 02:36:32 +08:00
custom_ops [#9150][feat] Add code for nano v3 to custom implementation in AD (#9465) 2025-12-02 08:56:44 -08:00
distributed [#9198][feat] Refactor dist ops in AutoDeploy (#9301) 2025-12-02 02:36:32 +08:00
export [#9230][feat] Slimmed down implementation of nemotron H (#9235) 2025-11-23 03:13:32 -08:00
models [#9150][feat] Add code for nano v3 to custom implementation in AD (#9465) 2025-12-02 08:56:44 -08:00
shim [TRTLLM-6756][feat] Add Beam Search to TorchSampler (#8509) 2025-12-01 18:48:04 +01:00
transform [#9198][feat] Refactor dist ops in AutoDeploy (#9301) 2025-12-02 02:36:32 +08:00
utils [#9198][feat] Refactor dist ops in AutoDeploy (#9301) 2025-12-02 02:36:32 +08:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [#9237][feat] enable iter stats in autodeploy (#9278) 2025-11-19 19:29:29 +01:00
llm.py [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) 2025-11-06 22:37:03 -08:00
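
The listing above shows llm.py and llm_args.py as the user-facing entry points of the package. Below is a minimal usage sketch, assuming the LLM class defined in llm.py is importable from `tensorrt_llm._torch.auto_deploy` and accepts a Hugging Face model identifier; the exact import path, the `world_size` keyword, and the placeholder model id are assumptions, not verified against this revision.

```python
# Minimal usage sketch for the AutoDeploy LLM entry point.
# Assumptions: the import path below and the world_size keyword;
# check llm.py / llm_args.py in this directory for the actual API.
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch.auto_deploy import LLM  # assumed export; see llm.py


def main():
    # Build the AutoDeploy-backed LLM from a Hugging Face checkpoint (placeholder id).
    llm = LLM(model="<HF_MODEL_ID_OR_LOCAL_DIR>", world_size=1)
    params = SamplingParams(max_tokens=32)
    for out in llm.generate(["Hello, my name is"], params):
        print(out.outputs[0].text)


if __name__ == "__main__":
    main()
```
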