Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-02-19 17:25:17 +08:00
Add a transform to replace torch.ops.auto_deploy.torch_quant_nvfp4_moe with the optimized torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused. The fused op currently produces wrong results when the number of rows in the MoE FC1 weights is not divisible by 128, so torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused is not set as the default FP4 MoE implementation (i.e. the transform is disabled by default). Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
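The shape of such an op-replacement transform can be sketched as follows. This is a minimal, hypothetical illustration, not the TensorRT-LLM implementation: the real transform rewrites a traced torch.fx graph, while here the graph is modeled as a plain list of (op_name, metadata) nodes. Only the two op target names come from the commit message; `fuse_nvfp4_moe`, `fc1_rows`, and the `enabled` flag are assumptions for illustration.

```python
# Hypothetical sketch of an op-replacement transform. The graph is modeled
# as a list of (op_name, metadata) nodes; the real transform operates on a
# torch.fx graph. Only the two op targets below come from the source.

SLOW_OP = "torch.ops.auto_deploy.torch_quant_nvfp4_moe"
FUSED_OP = "torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused"


def fuse_nvfp4_moe(graph, enabled=False):
    """Replace SLOW_OP call sites with FUSED_OP.

    Disabled by default (enabled=False) because the fused kernel currently
    produces wrong results when the FC1 weight row count is not divisible
    by 128; the guard below also skips non-128-aligned call sites.
    """
    if not enabled:
        return graph
    out = []
    for op, meta in graph:
        # Only fuse call sites whose FC1 weight rows are 128-aligned.
        if op == SLOW_OP and meta.get("fc1_rows", 0) % 128 == 0:
            out.append((FUSED_OP, meta))
        else:
            out.append((op, meta))
    return out
```

With the transform disabled (the default), the graph is returned unchanged; when explicitly enabled, only 128-aligned MoE call sites are rewritten to the fused op.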
Directory contents:

- compile/
- config/
- custom_ops/
- distributed/
- export/
- models/
- shim/
- transform/
- utils/
- __init__.py
- llm_args.py
- llm.py