TensorRT-LLM/tests/unittest/_torch
Gal Hubara-Agam b2095aa074
[#4674][bugfix] AutoDeploy Fix memory leak in fuse_moe (#7844)
Delete the unstacked weights immediately to save GPU memory. Cleanup occurs automatically after the transformation, but for large models we would run out of memory during the transformation itself.

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
2025-09-29 11:01:07 +03:00
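The fix in the latest commit frees each per-expert weight as soon as it has been stacked into the fused tensor, rather than waiting for the cleanup that runs after the whole transformation. A minimal sketch of that idea is below; `stack_and_free` and its signature are illustrative, not the actual `fuse_moe` code in AutoDeploy's graph passes:

```python
import torch

def stack_and_free(expert_weights):
    """Stack per-expert weight tensors into one fused tensor and drop
    the per-expert references immediately (hypothetical helper)."""
    fused = torch.stack(expert_weights, dim=0)
    # Release the unstacked tensors now instead of relying on the
    # automatic cleanup after the transformation; for large models the
    # peak memory during the transform itself would otherwise OOM.
    expert_weights.clear()
    if torch.cuda.is_available():
        # Return the freed blocks to the driver so the rest of the
        # transformation can reuse that GPU memory.
        torch.cuda.empty_cache()
    return fused
```

The key point is that clearing the list drops the last Python references to the unstacked tensors, so their storage can be reclaimed mid-transformation instead of at the end.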
attention [https://nvbugs/5453806][unwaive] Unwaive fp8 kvcache attention test (#7243) 2025-09-05 12:13:57 -04:00
auto_deploy [#4674][bugfix] AutoDeploy Fix memory leak in fuse_moe (#7844) 2025-09-29 11:01:07 +03:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
debugger Fix: fix nvbug 5356427 (#5464) 2025-06-25 22:24:26 +08:00
executor [None][chore] extract weights loading related logic to model loader (#7579) 2025-09-25 10:19:22 -07:00
misc [TRTLLM-4500][feat] Add serialization/deserialization options for AutoTuner profiling cache (#7738) 2025-09-29 07:40:51 +08:00
modeling [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) 2025-09-26 11:28:05 +08:00
models/checkpoints/hf [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) 2025-08-25 23:56:21 -04:00
modules [None][fix] fix a bug in wideEp use DeepEP with num_chunks > 1 (#7954) 2025-09-25 07:53:42 -07:00
multi_gpu [TRTLLM-5966][feat] Helix: add alltoall op (#6815) 2025-09-25 07:18:29 -07:00
multi_gpu_modeling [https://nvbugs/5516710][fix] fix Llama 3.3 TP PP case (#7717) 2025-09-25 21:02:35 +08:00
multimodal [None][ci] Waive test_mm_encoder_standalone.py::test_multi_request_batch_chat[llava-v1.6-mistral-7b-hf] (#8010) 2025-09-26 11:07:54 +08:00
sampler [TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867) 2025-08-22 08:09:30 +02:00
speculative [TRTLLM-6393][feat] add static tree sampling and verification (#7161) 2025-09-26 13:16:16 -04:00
thop [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) 2025-09-25 21:02:35 +08:00
helpers.py [TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference (#7363) 2025-09-26 11:28:05 +08:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_connector.py [None][feat] KV Cache Connector API (#7228) 2025-08-28 23:09:27 -04:00