TensorRT-LLM/tests/unittest/_torch/modeling
shaharmor98 5fff8f0935
Add running E2E LoRA flow (#3648)
* add passing E2E LoRA flow

Signed-off-by: Shahar Mor <smor@nvidia.com>

* add experimental feature

* fix llma_args definition

* manually decreased max loras size to address OOM

2025-04-23 11:19:41 +08:00
test_modeling_bert.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
test_modeling_llama.py Add running E2E LoRA flow (#3648) 2025-04-23 11:19:41 +08:00
test_modeling_mixtral.py feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
test_modeling_mllama.py test: reorganize tests folder hierarchy (#2996) 2025-03-27 12:07:53 +08:00
test_modeling_nemotron_h.py feat: Nemotron-H model support (#3430) 2025-04-16 14:05:56 -07:00
test_modeling_nemotron_nas.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
test_modeling_nemotron.py test: reorganize tests folder hierarchy (#2996) 2025-03-27 12:07:53 +08:00
test_modeling_out_of_tree.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_modeling_qwen_moe.py feat: add qwen2 moe to torch flow; fix wrong imported KvCacheConfig in gpqa… (#3369) 2025-04-10 22:45:57 +08:00
test_modeling_qwen.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
test_modeling_vila.py fix vila test (#3042) 2025-04-04 14:30:06 +08:00