TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit 2956978da3: [None][feat] Enable rms norm fusion for Nemotron MOE (#8563)
Author: Suyog Gupta; Co-authored-by: Chenghao Zhang, QI JUN, Lucas Liebenwein
Date: 2025-10-23 00:09:42 -04:00
Name         | Last commit message                                                                                  | Last commit date
------------ | ---------------------------------------------------------------------------------------------------- | --------------------------
compile/     | [None][feat] AutoDeploy: compiler backends based on nn.Module (#8126)                                 | 2025-10-03 12:14:21 -04:00
config/      | [None][feat] Enable rms norm fusion for Nemotron MOE (#8563)                                          | 2025-10-23 00:09:42 -04:00
custom_ops/  | [None][feat] Enable rms norm fusion for Nemotron MOE (#8563)                                          | 2025-10-23 00:09:42 -04:00
distributed/ | [#8272][feat] Enable chunked prefill for SSMs in AutoDeploy (#8477)                                   | 2025-10-20 15:31:52 -07:00
export/      | [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039)                             | 2025-10-17 15:55:57 -04:00
models/      | [None][feat] AutoDeploy: Add Nemotron MOE support for AutoDeploy (#8469)                              | 2025-10-21 15:32:01 -07:00
shim/        | [TRTLLM-8483][chore] Refine scheduler_config and peft_cache_config in create_py_executor (#8451)      | 2025-10-22 08:33:48 +08:00
transform/   | [None][feat] Enable rms norm fusion for Nemotron MOE (#8563)                                          | 2025-10-23 00:09:42 -04:00
utils/       | [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039)                             | 2025-10-17 15:55:57 -04:00
__init__.py  | [AutoDeploy] merge feat/ad-2025-07-07 (#6196)                                                         | 2025-07-23 05:11:04 +08:00
llm_args.py  | [TRTLLM-8682][chore] Remove auto_parallel module (#8329)                                              | 2025-10-22 20:53:08 -04:00
llm.py       | [#4593][feat] AutoDeploy: Linear Attention Support (SSM + causal_conv + Bamba + Nemotron-H) (#8068)   | 2025-09-29 22:41:06 -04:00