TensorRT-LLM/tests/unittest/_torch/auto_deploy
Gal Hubara-Agam b2095aa074
[#4674][bugfix] AutoDeploy Fix memory leak in fuse_moe (#7844)
Delete the unstacked weights immediately to save GPU memory. Cleanup occurs automatically after the transformation, but for large models we would run out of memory during the transformation itself.

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
2025-09-29 11:01:07 +03:00
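
The fix described above follows a common pattern: copy each per-expert weight into the stacked tensor and free the unstacked parameter right away, instead of relying on the cleanup that only runs after the whole transformation finishes. Below is a minimal sketch of that pattern in PyTorch; the names ExpertMLP and fuse_expert_weights are hypothetical and not taken from the repository.

import torch
import torch.nn as nn


class ExpertMLP(nn.Module):
    # Stand-in for a single MoE expert holding its own weight.
    def __init__(self, hidden: int, inter: int):
        super().__init__()
        self.w = nn.Parameter(torch.randn(inter, hidden))


def fuse_expert_weights(experts: list) -> torch.Tensor:
    # Stack per-expert weights into one fused tensor, freeing the originals eagerly.
    fused = torch.empty(
        (len(experts), *experts[0].w.shape),
        dtype=experts[0].w.dtype,
        device=experts[0].w.device,
    )
    for i, expert in enumerate(experts):
        fused[i].copy_(expert.w.detach())
        # Delete the unstacked weight immediately rather than waiting for the
        # automatic cleanup after the transformation; for large models the
        # temporary duplication would otherwise exhaust GPU memory.
        del expert.w
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return fused


if __name__ == "__main__":
    experts = [ExpertMLP(hidden=64, inter=128) for _ in range(8)]
    fused = fuse_expert_weights(experts)
    print(fused.shape)  # torch.Size([8, 128, 64])

The same idea applies whether the fused tensor is registered back as a parameter or kept as a plain buffer: freeing each source weight inside the copy loop keeps peak GPU memory bounded during the transformation itself rather than at its end.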
_utils_test [#7308] [feat] AutoDeploy: graph-less transformers mode for HF (#7635) 2025-09-18 10:44:24 +08:00
unit [#4674][bugfix] AutoDeploy Fix memory leak in fuse_moe (#7844) 2025-09-29 11:01:07 +03:00