TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: ccc64da287 by Grzegorz Kwasniewski, 2025-12-23 00:03:32 +01:00
[TRTLLM-9847][fix] WAR fix hanging fused allreduce. (#10087)
Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
compile/      [None][feat] AutoDeploy: prepare_metadata revisited (#9764)                              2025-12-12 20:14:14 +08:00
config/       [TRTLLM-9847][fix] WAR fix hanging fused allreduce. (#10087)                             2025-12-23 00:03:32 +01:00
custom_ops/   [#9717][chore] Refactor MoE code to use enums (#9910)                                    2025-12-22 15:14:56 -05:00
distributed/  [#9198][feat] Refactor dist ops in AutoDeploy (#9301)                                    2025-12-02 02:36:32 +08:00
export/       [#9230][feat] Slimmed down implementation of nemotron H (#9235)                          2025-11-23 03:13:32 -08:00
models/       [#9717][chore] Refactor MoE code to use enums (#9910)                                    2025-12-22 15:14:56 -05:00
shim/         [TRTLLM-7736][feat] Incrementally update the inputs of target and draft models (#9708)   2025-12-19 15:11:25 +08:00
transform/    [TRTLLM-9847][fix] WAR fix hanging fused allreduce. (#10087)                             2025-12-23 00:03:32 +01:00
utils/        [TRTLLM-9136][feat] 2D parallel EP TP support (#9459)                                    2025-12-15 09:52:29 +01:00
__init__.py   [AutoDeploy] merge feat/ad-2025-07-07 (#6196)                                            2025-07-23 05:11:04 +08:00
llm_args.py   [None][feat] AutoDeploy: prepare_metadata revisited (#9764)                              2025-12-12 20:14:14 +08:00
llm.py        [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)                             2025-11-06 22:37:03 -08:00