TensorRT-LLM/tests/unittest/_torch/thop
Latest commit fa3879629e by Nikita Korobov (2025-05-16 13:31:53 +02:00):
feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280)
- Adds BatchedGemm cubins and the corresponding call interface from TRT-LLM Gen.
- Refactors the TRT-LLM Gen MoE runner to call the BMM interface.
- Accuracy is verified for DeepSeek R1 FP4.

Signed-off-by: Nikita Korobov <nkorobov@nvidia.com>
deep_gemm_tests.py chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
test_cublas_mm.py [fix] Remove stale cublas heuristics (#4326) 2025-05-14 17:35:51 -07:00
test_fp4_bmm_quantize.py chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
test_fp4_gemm_quantize.py TRTLLM-4624 feat: Add nvfp4 gemm and moe support for SM120 (#3770) 2025-04-29 11:19:11 -04:00
test_fp4_linear.py chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
test_fp8_block_scale_gemm.py chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
test_fp8_linear.py chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
test_fp8_quantize.py chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
test_logits_bitmask_op.py Update (#2978) 2025-03-23 16:39:35 +08:00
test_mamba_conv1d_op.py test: reorganize tests folder hierarchy (#2996) 2025-03-27 12:07:53 +08:00
test_moe_alltoall.py feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
test_moe.py feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280) 2025-05-16 13:31:53 +02:00
test_noaux_tc.py Clean up modeling_deepseek.py (#3640) 2025-04-18 17:54:33 -07:00
test_scaled_mm.py test: fix cublas_scaled_mm with aligned workspace size (#3600) 2025-04-21 14:51:42 +08:00
test_selective_scan_op.py test: reorganize tests folder hierarchy (#2996) 2025-03-27 12:07:53 +08:00
test_tllmg_bmm.py feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280) 2025-05-16 13:31:53 +02:00