TensorRT-LLM/tests/unittest/_torch
Yukun He 0ae7017342
Unify two versions of AllReduce custom op (#3032)
* Rewrite the unit test for the unified allreduce op and remove the legacy unit test.
* Revise formats and fusion_op bindings; pass all tensors as optional inputs.
* Move the MoeAllreduceOp to a separate custom op.
* Move all the fusion patterns to the new version of the AllReduce fusion kernel. Remove the AllReduce strategy config. Revise the AllReduce strategies and fusion pattern definitions.
* Add more TODOs, fix minor bugs, and remove legacy code.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-04-22 21:58:42 +08:00
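To illustrate what the unified AllReduce fusion pattern computes, below is a minimal conceptual sketch in plain PyTorch, not the TensorRT-LLM custom-op API: the function name `allreduce_residual_rmsnorm` and its signature are hypothetical, and the fused kernel's semantics (allreduce, residual add, RMSNorm) are expressed here as separate unfused reference ops. It assumes an already-initialized torch.distributed process group.

```python
# Conceptual sketch only (hypothetical names, not the TensorRT-LLM custom op):
# reference semantics of an "AllReduce + residual add + RMSNorm" fusion pattern,
# computed with unfused torch ops. Requires dist.init_process_group() beforehand.
import torch
import torch.distributed as dist


def allreduce_residual_rmsnorm(hidden: torch.Tensor,
                               residual: torch.Tensor,
                               norm_weight: torch.Tensor,
                               eps: float = 1e-6) -> tuple[torch.Tensor, torch.Tensor]:
    # Sum the per-rank partial results (e.g. tensor-parallel shards).
    dist.all_reduce(hidden, op=dist.ReduceOp.SUM)
    # Residual add; this intermediate is typically carried to the next layer.
    inter = hidden + residual
    # RMSNorm over the last dimension.
    variance = inter.pow(2).mean(-1, keepdim=True)
    normed = inter * torch.rsqrt(variance + eps) * norm_weight
    return normed, inter
```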
auto_deploy feat:[AutoDeploy] Enhance RoPE support (#3115) 2025-04-11 23:51:24 +08:00
compilation Update (#2978) 2025-03-23 16:39:35 +08:00
modeling feat: Nemotron-H model support (#3430) 2025-04-16 14:05:56 -07:00
modules/tests_lora_modules added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455) 2025-04-17 12:48:27 +08:00
multi_gpu Unify two versions of AllReduce custom op (#3032) 2025-04-22 21:58:42 +08:00
multi_gpu_modeling test: Add llama 4 to ci (#3520) 2025-04-18 11:25:52 +08:00
speculative Support CUDA graphs for EAGLE3 (#3176) 2025-04-17 04:53:50 +08:00
thop test: fix cublas_scaled_mm with aligned workspace size (#3600) 2025-04-21 14:51:42 +08:00
deep_gemm_tests.py feat: use NVRTC for DeepGEMM JIT compilation (#3239) 2025-04-07 20:29:23 +08:00
helpers.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
pattern_watcher.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
test_attention_no_cache.py Remove dummy forward path (#3669) 2025-04-18 16:17:50 +08:00
test_attention.py reduce num layers in attention test (#3509) 2025-04-14 12:43:59 +08:00
test_autotuner.py feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) 2025-04-08 14:28:36 +08:00
test_flashinfer_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_flashinfer_star_attn.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_fp4_bmm_quantize.py feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) 2025-04-21 10:01:33 +08:00
test_fp4_gemm_quantize.py feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) 2025-04-21 10:01:33 +08:00
test_fp4_linear.py feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) 2025-04-08 14:28:36 +08:00
test_fp8_batched_gemm.py feat: Adding FP8 BMM from Codegen (#3541) 2025-04-16 10:37:15 +02:00
test_fp8_block_scale_gemm.py feat: enable DeepGEMM by default (#3341) 2025-04-08 13:58:57 +08:00
test_fp8_linear.py test: fix cublas_scaled_mm with aligned workspace size (#3600) 2025-04-21 14:51:42 +08:00
test_fp8_quantize.py test: reorganize tests folder hierarchy (#2996) 2025-03-27 12:07:53 +08:00
test_fused_moe.py feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) 2025-04-08 14:28:36 +08:00
test_moe_routing.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
test_moe.py feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) 2025-04-21 10:01:33 +08:00
test_overlap_scheduler_input.json Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
test_overlap_scheduler.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_pytorch_model_engine.py chore: move all distributed related codes into _torch.distributed directory (#3511) 2025-04-15 08:39:17 +08:00
test_resource_manager.py Feat/ Integrate peftCacheManager in PyExecutor creation (#3372) 2025-04-15 15:14:43 +08:00
test_vanilla_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00