TensorRT-LLM/tests/unittest/_torch
shaharmor98 5fff8f0935
Add running E2E LoRA flow (#3648)
* add passing E2E LoRA flow
* add experimental feature
* fix llma_args definition
* manually decreased the max loras size to address OOM

Signed-off-by: Shahar Mor <smor@nvidia.com>
2025-04-23 11:19:41 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| auto_deploy | feat: [AutoDeploy] generalizing cudagraph to multiple dynamic inputs (#3589) | 2025-04-23 03:38:51 +08:00 |
| compilation | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| modeling | Add running E2E LoRA flow (#3648) | 2025-04-23 11:19:41 +08:00 |
| modules/tests_lora_modules | added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455) | 2025-04-17 12:48:27 +08:00 |
| multi_gpu | Unify two versions of AllReduce custom op (#3032) | 2025-04-22 21:58:42 +08:00 |
| multi_gpu_modeling | test: Add llama 4 to ci (#3520) | 2025-04-18 11:25:52 +08:00 |
| speculative | Support CUDA graphs for EAGLE3 (#3176) | 2025-04-17 04:53:50 +08:00 |
| thop | test: fix cublas_scaled_mm with aligned workspace size (#3600) | 2025-04-21 14:51:42 +08:00 |
| deep_gemm_tests.py | feat: use NVRTC for DeepGEMM JIT compilation (#3239) | 2025-04-07 20:29:23 +08:00 |
| helpers.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| pattern_watcher.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_attention_no_cache.py | Remove dummy forward path (#3669) | 2025-04-18 16:17:50 +08:00 |
| test_attention.py | reduce num layers in attention test (#3509) | 2025-04-14 12:43:59 +08:00 |
| test_autotuner.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| test_flashinfer_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_flashinfer_star_attn.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_fp4_bmm_quantize.py | feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) | 2025-04-21 10:01:33 +08:00 |
| test_fp4_gemm_quantize.py | feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) | 2025-04-21 10:01:33 +08:00 |
| test_fp4_linear.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| test_fp8_batched_gemm.py | feat: Adding FP8 BMM from Codegen (#3541) | 2025-04-16 10:37:15 +02:00 |
| test_fp8_block_scale_gemm.py | feat: enable DeepGEMM by default (#3341) | 2025-04-08 13:58:57 +08:00 |
| test_fp8_linear.py | test: fix cublas_scaled_mm with aligned workspace size (#3600) | 2025-04-21 14:51:42 +08:00 |
| test_fp8_quantize.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_fused_moe.py | feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) | 2025-04-08 14:28:36 +08:00 |
| test_moe_routing.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_moe.py | feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) | 2025-04-21 10:01:33 +08:00 |
| test_overlap_scheduler_input.json | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| test_overlap_scheduler.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| test_pytorch_model_engine.py | chore: move all distributed related codes into _torch.distributed directory (#3511) | 2025-04-15 08:39:17 +08:00 |
| test_resource_manager.py | Feat/ Integrate peftCacheManager in PyExecutor creation (#3372) | 2025-04-15 15:14:43 +08:00 |
| test_vanilla_attention.py | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |