TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Latest commit: 98428f330e by amitz-nv (2025-07-20 08:00:14 +03:00)
[TRTLLM-5826][feat] Support pytorch LoRA adapter eviction (#5616)
Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>
Name            | Last commit                                                                                              | Last updated
compile/        | [AutoDeploy]feat: Add an AutoDeploy compile backend that only calls torch.compile (#4240)                | 2025-05-16 08:38:15 +08:00
custom_ops/     | feat: AutoDeploy fp8 quantization support for bmm (#3849)                                                | 2025-06-30 12:36:34 -04:00
distributed/    | Use backend to replace macro to control enablement of MNNVL all reduce (#4635)                           | 2025-06-12 11:22:49 +08:00
models/         | [AutoDeploy] merge feat/ad-2025-06-29 (#5737)                                                            | 2025-07-04 10:21:18 +09:00
shim/           | [TRTLLM-5826][feat] Support pytorch LoRA adapter eviction (#5616)                                        | 2025-07-20 08:00:14 +03:00
transformations/| [AutoDeploy] merge feat/ad-2025-06-29 (#5737)                                                            | 2025-07-04 10:21:18 +09:00
utils/          | [feat] Support torch compile for attention dp (#5086)                                                    | 2025-07-01 13:48:52 -04:00
__init__.py     | [AutoDeploy] merge feat/ad-2025-06-13 (#5556)                                                            | 2025-06-29 03:52:14 +08:00
llm_args.py     | [TRTLLM-5530][BREAKING CHANGE] refactor: LLM arglist rename mixed_sampler to enable_mixed_sampler (#5751)| 2025-07-07 17:05:14 +08:00
llm.py          | [AutoDeploy] merge feat/ad-2025-06-13 (#5556)                                                            | 2025-06-29 03:52:14 +08:00