mirror of
https://github.com/NVIDIA/TensorRT-LLM.git
synced 2026-01-14 06:27:45 +08:00
Add a transform to replace torch.ops.auto_deploy.torch_quant_nvfp4_moe with the optimized torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused. The fused op currently generates wrong results when the number of rows in the MoE FC1 weights is not divisible by 128, so torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused is not set as the default FP4 MoE implementation (i.e. the transform is disabled). Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Directory contents:

- _torch
- api_stability
- bindings
- disaggregated
- executor
- llmapi
- others
- scaffolding
- tools
- trt
- utils
- conftest.py
- dump_checkpoint_stats.py
- gc_utils.py
- profile_utils.py
- pytest.ini
- test_model_runner_cpp.py
- test_pip_install.py
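The commit message states that the fused NVFP4 MoE op is only correct when the FC1 weight row count is divisible by 128, which is why the transform is disabled by default. A minimal sketch of the kind of guard such a transform could apply before substituting the fused op (the function and constant names here are illustrative assumptions, not TensorRT-LLM API):

```python
# Hypothetical guard sketch -- names are assumptions, not actual
# TensorRT-LLM symbols. Per the commit message, the fused kernel
# (trtllm_quant_nvfp4_moe_fused) produces wrong results unless the
# number of rows in the MoE FC1 weights is a multiple of 128, so a
# transform should only rewrite the graph when that constraint holds.
FC1_ROW_ALIGNMENT = 128


def can_use_fused_nvfp4_moe(fc1_num_rows: int) -> bool:
    """Return True if the fused op's row-alignment constraint is met."""
    return fc1_num_rows % FC1_ROW_ALIGNMENT == 0
```

In such a scheme, the transform would fall back to the unfused torch_quant_nvfp4_moe op whenever the check fails, rather than silently producing incorrect outputs.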