TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
William Zhang a4049fc557
[#9413][fix] Minor fixes to nemotron H and custom models in AD (#9416)
* Why?

There were a couple of issues with the recently merged custom model
injection for AutoDeploy and with the reference implementation of nemotron
H:
- `d_mlp` was left in despite always being null mathematically, which
  could lead to runtime issues during sharding.
- the custom model mapping was inherited by child factories (see the
  sketch after this list).
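
The leak comes from ordinary Python attribute inheritance; the snippet
below is a minimal sketch of the pitfall with hypothetical names
(`ModelFactory`, `_custom_model_mapping`), not the actual AutoDeploy code:

```python
# A mutable dict stored as a class attribute is shared by the whole class
# hierarchy, so registering a custom model on one factory leaks into every
# child factory.
class ModelFactory:
    _custom_model_mapping: dict = {}  # shared across all subclasses

    @classmethod
    def register_custom_model(cls, key: str, impl: type) -> None:
        cls._custom_model_mapping[key] = impl


class ChildFactory(ModelFactory):
    pass


ModelFactory.register_custom_model("nemotron_h", object)
print("nemotron_h" in ChildFactory._custom_model_mapping)  # True: unintended leak


# One common fix: give every child class its own registry at definition time.
class FixedModelFactory:
    _custom_model_mapping: dict = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._custom_model_mapping = {}  # fresh, per-subclass mapping
```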

* What?

This commit fixes both issues and refactors the key used for custom
implementations so that it is also based on the name of the configuration
class.
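
As a hedged illustration of the new keying scheme (the registry and helper
names below are made up for this sketch, not the real AutoDeploy API), the
lookup key now pairs the model identifier with the configuration class's
name, so models that share an identifier but use different config classes
resolve to different custom implementations:

```python
# Hypothetical registry keyed by (model identifier, config class name).
_CUSTOM_IMPLS: dict = {}


def register_custom_impl(model_name: str, config_cls: type, impl: type) -> None:
    _CUSTOM_IMPLS[(model_name, config_cls.__name__)] = impl


def lookup_custom_impl(model_name: str, config: object):
    # e.g. the key ("nemotron_h", "NemotronHConfig")
    return _CUSTOM_IMPLS.get((model_name, type(config).__name__))
```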

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-11-24 20:17:33 -08:00
compile [None][feat] Autodeploy add triton configs and optimize mamba prefill (#9083) 2025-11-13 19:15:43 -08:00
config [#9096][feature] Auto Deploy: configurable fused MoE backend (#9194) 2025-11-19 21:50:22 -08:00
custom_ops [#9271][perf] Enable multi-stream MOE optimization in AutoDeploy (#9322) 2025-11-24 19:50:10 -08:00
distributed [#9152][fix] AutoDeploy fused_allreduce_residual_rmsnorm to support demollm mode (#9197) 2025-11-18 22:15:29 +02:00
export [#9230][feat] Slimmed down implementation of nemotron H (#9235) 2025-11-23 03:13:32 -08:00
models [#9413][fix] Minor fixes to nemotron H and custom models in AD (#9416) 2025-11-24 20:17:33 -08:00
shim [#9237][feat] enable iter stats in autodeploy (#9278) 2025-11-19 19:29:29 +01:00
transform [#9271][perf] Enable multi-stream MOE optimization in AutoDeploy (#9322) 2025-11-24 19:50:10 -08:00
utils [None][autodeploy] fix weight extraction for graph based quantized checkpoints (#9109) 2025-11-13 13:14:24 -08:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [#9237][feat] enable iter stats in autodeploy (#9278) 2025-11-19 19:29:29 +01:00
llm.py [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) 2025-11-06 22:37:03 -08:00