Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-25 05:02:59 +08:00)
* **Why?** We would like to show an alternative to monkey-patching in AutoDeploy.
* **What?** This commit builds on the existing custom model implementation for NemotronH and adds the bits relevant for MoE layers.

Part of #9150.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
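The commit motivates a registry-based custom model implementation as an alternative to monkey-patching. A minimal sketch of that general pattern is below; all names here (`MODEL_REGISTRY`, `register_model`, `NemotronHModel`, `MoELayer`) are illustrative assumptions, not the actual TensorRT-LLM / AutoDeploy API.

```python
# Hypothetical sketch: a decorator-based model registry, so new model
# variants are added by registering a class rather than by patching
# attributes of an existing implementation at runtime.

MODEL_REGISTRY = {}  # illustrative registry; not the AutoDeploy internals

def register_model(name):
    """Register a model class under `name` (assumed helper)."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator

class MoELayer:
    """Stand-in for a mixture-of-experts layer."""
    def __init__(self, num_experts):
        self.num_experts = num_experts

@register_model("nemotron_h")
class NemotronHModel:
    """Stand-in custom model that owns its MoE layer directly."""
    def __init__(self, num_experts=8):
        self.moe = MoELayer(num_experts)

def build_model(name, **kwargs):
    """Look up and construct a registered model."""
    return MODEL_REGISTRY[name](**kwargs)

model = build_model("nemotron_h", num_experts=4)
print(model.moe.num_experts)  # 4
```

Compared to monkey-patching, the registry keeps the custom behavior in one explicitly registered class, so the MoE-specific bits live alongside the rest of the model definition instead of being spliced into someone else's code at import time.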
Directory contents:

- compile
- config
- custom_ops
- distributed
- export
- models
- shim
- transform
- utils
- __init__.py
- llm_args.py
- llm.py