TensorRT-LLMs/tests/unittest/_torch/auto_deploy
Neta Zmora 966231d29c
[#9626][feat] Add an auto-deploy transform for using cutlass FP4 MoE kernels (#10304)
Add a transform to replace torch.ops.auto_deploy.torch_quant_nvfp4_moe
with the optimized torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused.

The fused kernel currently generates wrong results when the number of rows in the MoE FC1 weights is not divisible by 128,
so torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused is not set as the default FP4 MoE implementation (i.e., the transform is disabled by default).
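A minimal sketch of the alignment guard implied by the note above. The function name, the constant, and the guard itself are hypothetical illustrations (not from the TensorRT-LLM source); they only encode the stated constraint that the fused kernel is safe when the FC1 weight row count is a multiple of 128.

```python
# Hypothetical guard sketch: decide whether the fused cutlass FP4 MoE kernel
# may be used, based on the FC1 row-alignment constraint from the commit note.
FC1_ROW_ALIGNMENT = 128  # alignment assumed from the commit message


def can_use_fused_nvfp4_moe(fc1_rows: int, alignment: int = FC1_ROW_ALIGNMENT) -> bool:
    """Return True only when the FC1 weight row count meets the alignment.

    A transform could call this before rewriting torch_quant_nvfp4_moe into
    the fused op, falling back to the unfused path otherwise.
    """
    return fc1_rows > 0 and fc1_rows % alignment == 0


# Example: 256 rows are aligned, 200 rows are not.
print(can_use_fused_nvfp4_moe(256))  # True
print(can_use_fused_nvfp4_moe(200))  # False
```

Under this sketch, the transform being "disabled by default" corresponds to never taking the fused path until the misaligned case is fixed, regardless of the guard's result.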

Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
2025-12-29 23:18:15 +02:00
_utils_test [None][fix] Autodeploy: fix some legacy flashinfer attention test errors (#9928) 2025-12-17 12:27:22 -08:00
unit [#9626][feat] Add an auto-deploy transform for using cutlass FP4 MoE kernels (#10304) 2025-12-29 23:18:15 +02:00