| File | Last commit | Date |
| --- | --- | --- |
| deep_gemm_tests.py | [perf] Reduce the workspace size of FP4 activation scales for MoE (#4303) | 2025-05-30 09:03:52 +08:00 |
| test_cublas_mm.py | [fix] Remove stale cublas heuristics (#4326) | 2025-05-14 17:35:51 -07:00 |
| test_fp4_bmm_quantize.py | chore: reorganize some unit tests of PyTorch (#3780) | 2025-04-23 11:19:10 -07:00 |
| test_fp4_gemm_quantize.py | TRTLLM-4624 feat: Add nvfp4 gemm and moe support for SM120 (#3770) | 2025-04-29 11:19:11 -04:00 |
| test_fp4_linear.py | chore: reorganize some unit tests of PyTorch (#3780) | 2025-04-23 11:19:10 -07:00 |
| test_fp8_block_scale_gemm.py | [feat] support fp8 blockscale gemm on sm89 (#4481) | 2025-05-23 10:39:10 +08:00 |
| test_fp8_linear.py | chore: reorganize some unit tests of PyTorch (#3780) | 2025-04-23 11:19:10 -07:00 |
| test_fp8_quantize.py | chore: reorganize some unit tests of PyTorch (#3780) | 2025-04-23 11:19:10 -07:00 |
| test_fused_qk_norm_rope.py | perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) | 2025-05-23 15:31:04 +08:00 |
| test_logits_bitmask_op.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| test_mamba_conv1d_op.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_moe_alltoall.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| test_moe.py | Qwen3 supports TRTLLM FP4 MoE backend (#4530) | 2025-05-23 18:31:08 +08:00 |
| test_noaux_tc.py | Clean up modeling_deepseek.py (#3640) | 2025-04-18 17:54:33 -07:00 |
| test_scaled_mm.py | test: fix cublas_scaled_mm with aligned workspace size (#3600) | 2025-04-21 14:51:42 +08:00 |
| test_selective_scan_op.py | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| test_tllmg_bmm.py | feat: TRT-LLM Gen integration for BMM and MoE refactoring (#4280) | 2025-05-16 13:31:53 +02:00 |