| File | Last commit | Date |
|---|---|---|
| deep_gemm_tests.py | [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303) | 2025-05-30 09:03:52 +08:00 |
| test_causal_conv1d_op.py | [TRTLLM-4921][feat] Enable chunked prefill for Nemotron-H (#6334) | 2025-08-22 12:15:20 -04:00 |
| test_cublas_mm.py | [fix] Remove stale cublas heuristics (#4326) | 2025-05-14 17:35:51 -07:00 |
| test_custom_ops.py | [TRTLLM-6633][feat] Padding for piecewise cudagraph (#6750) | 2025-08-26 18:31:33 -04:00 |
| test_dsv3_fused_a_gemm.py | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| test_dsv3_router_gemm.py | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| test_finegrained_mixed_dtype_gemm.py | W4A8 GEMM (#6005) | 2025-07-20 17:34:57 +03:00 |
| test_fp4_bmm_quantize.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| test_fp4_calculate_global_scale.py | [None][perf] Accelerate global scale calculations for deepEP fp4 combine (#7126) | 2025-08-27 00:13:13 +08:00 |
| test_fp4_gemm_quantize.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| test_fp4_linear.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| test_fp4_swizzle.py | perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) | 2025-06-26 14:03:56 +08:00 |
| test_fp8_block_scale_gemm.py | [None][feat] Apply AutoTuner to fp8_block_scale_deep_gemm to trigger JIT ahead of time. (#7113) | 2025-08-25 10:48:31 +08:00 |
| test_fp8_linear.py | chore: reorganize some unit tests of PyTorch (#3780) | 2025-04-23 11:19:10 -07:00 |
| test_fp8_per_tensor_scale_tllmg_gemm.py | [None][ci] move unittests to sub-directories (#6635) | 2025-08-20 05:42:22 -04:00 |
| test_fp8_quantize.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| test_fp8_rowwise_linear.py | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00 |
| test_fused_qk_norm_rope.py | [None][feat] Support Yarn on Qwen3 (#6785) | 2025-08-17 07:21:29 +08:00 |
| test_logits_bitmask_op.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| test_mamba2_chunk_ss_update.py | [https://nvbugs/5477332][fix] Relax atol in test_mamba2_chunk_scan_combined_prefill_chunking (#7215) | 2025-08-26 10:48:58 +03:00 |
| test_mamba_conv1d_op.py | [ci] parallelize torch unittests (#5714) | 2025-07-09 11:05:57 +03:00 |
| test_moe_alltoall.py | [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) | 2025-08-24 08:15:29 -04:00 |
| test_moe.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| test_noaux_tc.py | Clean up modeling_deepseek.py (#3640) | 2025-04-18 17:54:33 -07:00 |
| test_scaled_mm.py | [None][infra] "[TRTLLM-6960][fix] enable scaled_mm tests (#6936)" (#7059) | 2025-08-20 01:45:09 -04:00 |
| test_selective_scan_op.py | ci: [nvbugs/5280806] Unwaive unittests/_torch. (#4951) | 2025-06-09 19:04:11 +08:00 |
| test_tllmg_bmm.py | [TRTLLM-5589] feat: Integrate TRT-LLM Gen FP8 Batched GEMM with Pytorch workflow kernel autotuner (#4872) | 2025-06-09 11:02:48 +01:00 |
| test_w4a8_linear.py | W4A8 GEMM (#6005) | 2025-07-20 17:34:57 +03:00 |
| test_w4a8_mxfp4_mxfp8_gemm.py | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| test_w4a16_linear.py | W4A8 GEMM (#6005) | 2025-07-20 17:34:57 +03:00 |
| test_weight_only_quant_gemm.py | [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) | 2025-08-15 17:15:49 -04:00 |
| test_weight_only_quant_linear.py | [TRTLLM-5863][feat] Support Weight-Only-Quantization in PyTorch Workflow (#5850) | 2025-07-21 15:17:35 +08:00 |