Anish Shanbhag | 15de45d782 | 2025-10-22 20:53:08 -04:00
[TRTLLM-8682][chore] Remove auto_parallel module (#8329)
Signed-off-by: Anish Shanbhag <ashanbhag@nvidia.com>

amitz-nv | fac47e2826 | 2025-10-12 12:29:52 -07:00
[https://nvbugs/5510879][fix] Fix pytorch & TRT-python flows fused LoRA adapter modules weight split with TP>1 (#8063)
Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>

Wanli Jiang | 07c711eb1f | 2025-08-21 22:00:04 -04:00
[TRTLLM-6825][fix] Update lora for phi4-mm (#6817)
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>

rakib-hasan | 7ab8112450 | 2025-08-11 18:00:42 -04:00
[None][fix] Refactoring to avoid circular import when importing torch models (#6720)
Signed-off-by: Rakib Hasan <rhasan@nvidia.com>