Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
* Add passing E2E LoRA flow
* Add experimental feature
* Fix llma_args definition
* Manually decrease the max LoRAs size to address OOM

Signed-off-by: Shahar Mor <smor@nvidia.com>
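For context, the change above concerns running LoRA adapters end to end through the LLM API while keeping the LoRA cache small enough to avoid OOM. The following is a minimal sketch of that kind of usage, based on TensorRT-LLM's public LLM-API LoRA examples, not on this commit's exact code; the `max_loras` / `max_cpu_loras` fields and all paths are assumptions and may differ from what the commit actually touches.

```python
# Sketch: run a LoRA adapter through the TensorRT-LLM LLM API with a small
# adapter cache. Field names max_loras / max_cpu_loras and all paths are
# assumptions for illustration, not taken from this commit.
from tensorrt_llm import LLM
from tensorrt_llm.executor import LoRARequest
from tensorrt_llm.lora_manager import LoraConfig

# Keep the number of cached adapters low to reduce memory pressure.
lora_config = LoraConfig(
    lora_dir=["/path/to/lora-adapter"],  # hypothetical adapter directory
    max_lora_rank=8,
    max_loras=2,        # assumed knob: GPU-resident adapter slots
    max_cpu_loras=2,    # assumed knob: CPU-cached adapter slots
)

llm = LLM(model="/path/to/base-model", lora_config=lora_config)

# Attach the adapter to a request by (name, id, path).
lora_request = LoRARequest("my-lora", 1, "/path/to/lora-adapter")
outputs = llm.generate(["Hello, my name is"], lora_request=lora_request)
print(outputs[0].outputs[0].text)
```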
| Name |
|---|
| test_modeling_bert.py |
| test_modeling_llama.py |
| test_modeling_mixtral.py |
| test_modeling_mllama.py |
| test_modeling_nemotron_h.py |
| test_modeling_nemotron_nas.py |
| test_modeling_nemotron.py |
| test_modeling_out_of_tree.py |
| test_modeling_qwen_moe.py |
| test_modeling_qwen.py |
| test_modeling_vila.py |