TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit f9aa86dbdd by tcherckez-nvidia, 2025-12-04 08:03:33 +02:00:
[#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556)
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
Signed-off-by: tcherckez-nvidia <127761168+tcherckez-nvidia@users.noreply.github.com>
Co-authored-by: Neta Zmora <nzmora@nvidia.com>
compile [None][feat] AutoDeploy: add triton configs and optimize mamba prefill (#9083) 2025-11-13 19:15:43 -08:00
config [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) 2025-12-04 08:03:33 +02:00
custom_ops [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) 2025-12-04 08:03:33 +02:00
distributed [#9198][feat] Refactor dist ops in AutoDeploy (#9301) 2025-12-02 02:36:32 +08:00
export [#9230][feat] Slimmed down implementation of nemotron H (#9235) 2025-11-23 03:13:32 -08:00
models [#9643][fix] AutoDeploy: fix nano sharding config (#9668) 2025-12-04 03:10:25 +08:00
shim [#9147][feat] AutoDeploy: Draft Target Speculative Decoding (#9275) 2025-12-04 05:13:49 +08:00
transform [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) 2025-12-04 08:03:33 +02:00
utils [#8733][feat] Add Llama4 MoE handling to AutoDeploy (#9556) 2025-12-04 08:03:33 +02:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [#9147][feat] AutoDeploy: Draft Target Speculative Decoding (#9275) 2025-12-04 05:13:49 +08:00
llm.py [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) 2025-11-06 22:37:03 -08:00