TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
gramnarayan a9eb5afc9f
[#9241][feat] AutoDeploy: Support Eagle3 Speculative Decoding (#9869)
Supports the two-model flow without the overlap scheduler or chain drafter. The draft model runs on the PyTorch backend.

Signed-off-by: Govind Ramnarayan <105831528+govind-ramnarayan@users.noreply.github.com>
2025-12-24 23:30:42 -05:00
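The commit above describes a two-model Eagle3 flow (separate target and draft models, draft on the PyTorch backend). Below is a minimal, hedged sketch of how such a setup might be driven through the AutoDeploy LLM API; the `LLM` import path is inferred from the `__init__.py`/`llm.py` entries in this directory, and the use of `EagleDecodingConfig` as the `speculative_config` value (and its field names) is an assumption based on the general TensorRT-LLM llmapi, not confirmed by this listing.

```python
# Hedged sketch, not the authoritative API: exercising a two-model Eagle3
# speculative-decoding setup via AutoDeploy. Paths and config field usage
# with AutoDeploy are assumptions.
from tensorrt_llm import SamplingParams
from tensorrt_llm._torch.auto_deploy import LLM       # assumed export from this directory's __init__.py
from tensorrt_llm.llmapi import EagleDecodingConfig   # llmapi speculative-decoding config

# Separate Eagle3 draft model => two-model flow (no chain drafter), per the commit description.
spec_config = EagleDecodingConfig(
    max_draft_len=3,                                        # illustrative draft length
    speculative_model_dir="<path-to-eagle3-draft-model>",   # draft model runs on the PyTorch backend
    eagle3_one_model=False,                                 # two-model flow
)

llm = LLM(
    model="<path-to-target-model>",
    speculative_config=spec_config,  # assumption: AutoDeploy's llm_args accepts this after the Eagle3 change
)

out = llm.generate(["The capital of France is"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```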
compile [None][feat] AutoDeploy: prepare_metadata revisited (#9764) 2025-12-12 20:14:14 +08:00
config [#9241][feat] AutoDeploy: Support Eagle3 Speculative Decoding (#9869) 2025-12-24 23:30:42 -05:00
custom_ops [#10137][feat] AutoDeploy FP8 MoE refactor (#10138) 2025-12-24 18:58:10 +02:00
distributed [#9198][feat] Refactor dist ops in AutoDeploy (#9301) 2025-12-02 02:36:32 +08:00
export [#9230][feat] Slimmed down implementation of nemotron H (#9235) 2025-11-23 03:13:32 -08:00
models [TRTLLM-9565][fix] Fix deepseek sharding (#9984) 2025-12-23 10:28:14 -05:00
shim [#9241][feat] AutoDeploy: Support Eagle3 Speculative Decoding (#9869) 2025-12-24 23:30:42 -05:00
transform [#9241][feat] AutoDeploy: Support Eagle3 Speculative Decoding (#9869) 2025-12-24 23:30:42 -05:00
utils [#9241][feat] AutoDeploy: Support Eagle3 Speculative Decoding (#9869) 2025-12-24 23:30:42 -05:00
__init__.py [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
llm_args.py [#9241][feat] AutoDeploy: Support Eagle3 Speculative Decoding (#9869) 2025-12-24 23:30:42 -05:00
llm.py [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) 2025-11-06 22:37:03 -08:00