TensorRT-LLM/tensorrt_llm/_torch/auto_deploy

Latest commit: 18fbda5cdb by Chenghao Zhang (2025-11-26 14:39:20 -08:00)
[None][feat] AutoDeploy: Add A_log fusion for Mamba layers (#9422)
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Name           Last commit                                                                                                          Date
compile/       [None][feat] Autodeploy add triton configs and optimize mamba prefill (#9083)                                        2025-11-13 19:15:43 -08:00
config/        [None][chore] AutoDeploy add multi stream moe pass to default.yaml (#9430)                                           2025-11-25 14:16:13 -08:00
custom_ops/    [None][feat] AutoDeploy: Remove redundant copies in mamba layers (#9461)                                             2025-11-26 14:38:33 -08:00
distributed/   [https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (#9145)  2025-11-25 10:56:07 -08:00
export/        [#9230][feat] Slimmed down implementation of nemotron H (#9235)                                                      2025-11-23 03:13:32 -08:00
models/        [#9413][fix] Minor fixes to nemotron H and custom models in AD (#9416)                                               2025-11-24 20:17:33 -08:00
shim/          [#9237][feat] enable iter stats in autodeploy (#9278)                                                                2025-11-19 19:29:29 +01:00
transform/     [None][feat] AutoDeploy: Add A_log fusion for Mamba layers (#9422)                                                   2025-11-26 14:39:20 -08:00
utils/         [https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (#9145)  2025-11-25 10:56:07 -08:00
__init__.py    [AutoDeploy] merge feat/ad-2025-07-07 (#6196)                                                                        2025-07-23 05:11:04 +08:00
llm_args.py    [#9237][feat] enable iter stats in autodeploy (#9278)                                                                2025-11-19 19:29:29 +01:00
llm.py         [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)                                                         2025-11-06 22:37:03 -08:00