TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Netanel Haber 6ee94c7ac8
Reintroduce with perf fixes: feature: unify new_tokens format sample state to trtllm sampler tokens format (#5513)
58a8a8f - These changes were previously merged to main here.
6aef149 - The changes were temporarily reverted in main due to a significant perf regression in models using the TorchSampler (observed by @byshiue).
This PR re-merges those changes along with a fix that prevents the regression.

The first commit of this PR is just the reverted revert - filter it out of the diff to see the previously unmerged changes.

Signed-off-by: Netanel Haber <nhaber@nvidia.com>
2025-06-30 11:58:59 -07:00
compile [AutoDeploy]feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240) 2025-05-16 08:38:15 +08:00
custom_ops feat: AutoDeploy fp8 quantization support for bmm (#3849) 2025-06-30 12:36:34 -04:00
distributed Use backend to replace macro to control enablement of MNNVL all reduce (#4635) 2025-06-12 11:22:49 +08:00
models [AutoDeploy] merge feat/ad-2025-06-13 (#5556) 2025-06-29 03:52:14 +08:00
shim Reintroduce with perf fixes: feature: unify new_tokens format sample state to trtllm sampler tokens format (#5513) 2025-06-30 11:58:59 -07:00
transformations feat: AutoDeploy fp8 quantization support for bmm (#3849) 2025-06-30 12:36:34 -04:00
utils feat: AutoDeploy fp8 quantization support for bmm (#3849) 2025-06-30 12:36:34 -04:00
__init__.py [AutoDeploy] merge feat/ad-2025-06-13 (#5556) 2025-06-29 03:52:14 +08:00
llm_args.py [AutoDeploy] merge feat/ad-2025-06-13 (#5556) 2025-06-29 03:52:14 +08:00
llm.py [AutoDeploy] merge feat/ad-2025-06-13 (#5556) 2025-06-29 03:52:14 +08:00