Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-14 06:27:45 +08:00)
Optimize the prepare_inputs routine in AutoDeploy, as part of the effort to reduce the performance gap relative to the default backend. This PR includes two major fixes and some other minor tweaks:

1. Avoid back-and-forth data copies.
2. Optimize the position ids update by separating the implementations for generation mode and context mode.

Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
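The second fix above can be illustrated with a minimal sketch. This is a hypothetical, list-based illustration of the idea only (the function name `update_position_ids` and its signature are assumptions, not the actual TensorRT-LLM API, which operates on device tensors): in generation (decode) mode each sequence produces exactly one new token, so its position id can be incremented in place rather than recomputed and copied back and forth, while context (prefill) mode rebuilds positions from the prompt lengths.

```python
def update_position_ids(position_ids, seq_lens, is_generation):
    """Hypothetical sketch of a mode-split position-ids update.

    position_ids: list of per-sequence position-id lists.
    seq_lens: per-sequence prompt lengths (used in context mode).
    is_generation: True for decode mode, False for prefill mode.
    """
    if is_generation:
        # Decode: one new token per sequence, so just increment the
        # last position in place -- no full recompute, no extra copies.
        for i in range(len(position_ids)):
            position_ids[i] = [position_ids[i][-1] + 1]
    else:
        # Prefill: positions are simply 0..seq_len-1 for each prompt.
        for i, n in enumerate(seq_lens):
            position_ids[i] = list(range(n))
    return position_ids
```

Splitting the two modes avoids the branch-per-element and full recompute that a unified code path would pay on every decode step, which is where most iterations are spent.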
Directory listing:

- compile
- config
- custom_ops
- distributed
- export
- models
- shim
- transform
- transformations
- utils
- __init__.py
- llm_args.py
- llm.py