TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: 2e3cf42e03 (wili, 2025-07-10 11:37:30 -04:00)
[refactor] Simplification of Speculative decoding configs (#5639)
Signed-off-by: wili-65535 <wili-65535@users.noreply.github.com>
Co-authored-by: wili-65535 <wili-65535@users.noreply.github.com>
compile/          | [AutoDeploy] feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240) | 2025-05-16 08:38:15 +08:00
custom_ops/       | feat: AutoDeploy fp8 quantization support for bmm (#3849) | 2025-06-30 12:36:34 -04:00
distributed/      | Use backend to replace macro to control enablement of MNNVL all reduce (#4635) | 2025-06-12 11:22:49 +08:00
models/           | [AutoDeploy] merge feat/ad-2025-06-29 (#5737) | 2025-07-04 10:21:18 +09:00
shim/             | [refactor] Simplification of Speculative decoding configs (#5639) | 2025-07-10 11:37:30 -04:00
transformations/  | [AutoDeploy] merge feat/ad-2025-06-29 (#5737) | 2025-07-04 10:21:18 +09:00
utils/            | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00
__init__.py       | [AutoDeploy] merge feat/ad-2025-06-13 (#5556) | 2025-06-29 03:52:14 +08:00
llm_args.py       | [TRTLLM-5530][BREAKING CHANGE] refactor: LLM arglist rename mixed_sampler to enable_mixed_sampler (#5751) | 2025-07-07 17:05:14 +08:00
llm.py            | [AutoDeploy] merge feat/ad-2025-06-13 (#5556) | 2025-06-29 03:52:14 +08:00