TensorRT-LLM/tensorrt_llm/executor
shaharmor98 5fff8f0935
Add running E2E LoRA flow (#3648)
* add passing E2E LoRA flow

Signed-off-by: Shahar Mor <smor@nvidia.com>

* add experimental feature

Signed-off-by: Shahar Mor <smor@nvidia.com>

* fix llma_args definition

Signed-off-by: Shahar Mor <smor@nvidia.com>

* manually decreased max LoRA size to address OOM

Signed-off-by: Shahar Mor <smor@nvidia.com>

---------

Signed-off-by: Shahar Mor <smor@nvidia.com>
2025-04-23 11:19:41 +08:00
__init__.py        | Update TensorRT-LLM (#2873)                                | 2025-03-11 21:13:42 +08:00
executor.py        | Add running E2E LoRA flow (#3648)                          | 2025-04-23 11:19:41 +08:00
ipc.py             | fix: Use hmac authentication for pickle encryption (#3384) | 2025-04-17 00:40:13 +08:00
postproc_worker.py | fix: Use hmac authentication for pickle encryption (#3384) | 2025-04-17 00:40:13 +08:00
proxy.py           | fix: Proper error bubbling for PyExecutor (#3321)          | 2025-04-15 14:49:46 +08:00
request.py         | Add running E2E LoRA flow (#3648)                          | 2025-04-23 11:19:41 +08:00
result.py          | chore: Unify Python NVTX call (#3450)                      | 2025-04-15 23:25:36 +08:00
utils.py           | fix hmac in remote mpi session (#3649)                     | 2025-04-18 17:47:51 +08:00
worker.py          | Add running E2E LoRA flow (#3648)                          | 2025-04-23 11:19:41 +08:00