TensorRT-LLM/tensorrt_llm/_torch/auto_deploy
Fridah-nv d008d6412f
feat:[AutoDeploy] Update MoE pattern matcher to drop expert selection logic (#3283)
* update the matcher to match the expert-compute subgraph first, then extract the remaining arguments via LCA (lowest common ancestor); see the LCA sketch below

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* support 3D and 2D inputs in torch.ops.moe.trtllm_fused_moe; see the reshape sketch below

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* update custom ops to support 3D and 2D inputs

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

* update the DeepSeek patch

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>

---------

Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
2025-05-15 13:53:09 +08:00
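
A minimal sketch of the LCA step from the first bullet, assuming the matcher walks a torch.fx-style DAG; the Node class below is a hypothetical stand-in for torch.fx.Node, not the actual matcher code. Once the expert-compute subgraph is matched, the shared input (e.g. the routed hidden states) can be recovered as the lowest common ancestor of the matched nodes.

from itertools import count

class Node:
    """Toy stand-in for a torch.fx.Node: a name plus input edges."""
    _ids = count()

    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = tuple(inputs)
        self.order = next(self._ids)  # creation order; topological in a traced graph

    def ancestors(self):
        """This node plus all of its transitive inputs."""
        seen, stack = set(), [self]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(node.inputs)
        return seen

def lowest_common_ancestor(nodes):
    """Latest-created node that feeds every matched expert-compute node."""
    common = set.intersection(*(n.ancestors() for n in nodes)) - set(nodes)
    return max(common, key=lambda n: n.order)

# Example: two expert branches share hidden_states as their LCA.
# The router gate is part of the toy graph but feeds neither expert here.
hidden = Node("hidden_states")
gate = Node("router_gate", [hidden])
expert0 = Node("expert0_mlp", [hidden])
expert1 = Node("expert1_mlp", [hidden])
assert lowest_common_ancestor([expert0, expert1]) is hidden

Picking the latest-created common ancestor works because fx graphs are built in topological order, so the maximum-order shared input is the one closest to the expert nodes.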
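Supporting both layouts in the fused-MoE op usually reduces to flattening the leading dimensions before the 2D kernel and restoring them afterward. The sketch below assumes that wrapper pattern; fused_moe_2d is a hypothetical placeholder, not the actual torch.ops.moe.trtllm_fused_moe kernel.

import torch

def fused_moe_2d(x2d: torch.Tensor) -> torch.Tensor:
    # Placeholder for the real 2D fused-MoE kernel (identity here).
    return x2d

def fused_moe(x: torch.Tensor) -> torch.Tensor:
    """Accept 2D [tokens, hidden] or 3D [batch, seq, hidden] input."""
    if x.dim() == 3:
        batch, seq, hidden = x.shape
        out = fused_moe_2d(x.reshape(batch * seq, hidden))
        return out.reshape(batch, seq, hidden)
    return fused_moe_2d(x)

# Both layouts see identical per-token results.
x3d = torch.randn(2, 4, 8)
out2d = fused_moe(x3d.reshape(8, 8))
assert torch.equal(fused_moe(x3d), out2d.reshape(2, 4, 8))
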
compile feat: [AutoDeploy] generalizing cudagraph to multiple dynamic inputs (#3589) 2025-04-23 03:38:51 +08:00
custom_ops feat:[AutoDeploy] Update MoE pattern matcher to drop expert selection logic (#3283) 2025-05-15 13:53:09 +08:00
distributed [AutoDeploy] Make all ranks agree on kv-cache size (#4007) 2025-05-02 04:07:28 +08:00
models feat:[AutoDeploy] Update MoE pattern matcher to drop expert selection logic (#3283) 2025-05-15 13:53:09 +08:00
shim [AutoDeploy][perf] Further optimize flashinfer backend in AutoDeploy (#4024) 2025-05-06 10:46:36 +08:00
transformations feat:[AutoDeploy] Update MoE pattern matcher to drop expert selection logic (#3283) 2025-05-15 13:53:09 +08:00
utils feat: [AutoDeploy] unfusing attention for native support (#3668) 2025-05-02 09:06:49 +08:00
__init__.py Update TensorRT-LLM (#2820) 2025-02-25 21:21:49 +08:00