Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
Latest commit:

C++, Python, and the Python MoE layer all share the definition of ActivationType. Currently this is done through redefinition, which is fragile and can break when adding new activation function types.

tensorrt_llm/_torch/utils.py, cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h
=> tensorrt_llm/layers/moe.py, cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
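To see why redefinition is fragile, here is a minimal sketch of the pattern the commit message describes: a Python enum that mirrors a C++ `enum class` by hand, so adding a member on one side without the other silently breaks the mapping. The member names, values, and the `check_in_sync` guard below are hypothetical illustrations, not the actual ActivationType definition or a TensorRT-LLM API; the real definitions live in the files listed above.

```python
from enum import IntEnum


# Hypothetical hand-maintained mirror of a C++ `enum class ActivationType`.
# The values must match the C++ header exactly; if the C++ side gains a new
# member and this copy is not updated, the integer passed across the
# Python/C++ boundary no longer means what either side thinks it means.
class ActivationType(IntEnum):
    Gelu = 0
    Relu = 1
    Silu = 2
    Swiglu = 3
    Identity = 4


def check_in_sync(cpp_member_count: int) -> None:
    """One cheap guard against drift: compare member counts at import time.

    `cpp_member_count` is assumed to come from a binding exported by the
    C++ extension (hypothetical); a mismatch then fails loudly instead of
    producing silently wrong kernel dispatch.
    """
    if len(ActivationType) != cpp_member_count:
        raise RuntimeError(
            f"ActivationType out of sync: Python defines "
            f"{len(ActivationType)} members, C++ reports {cpp_member_count}")
```

Sharing a single definition (e.g., generating one side from the other, or exposing the C++ enum through the bindings) removes the need for this kind of guard entirely, which is the direction the commit takes.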
| Name |
|---|
| batch_manager |
| common |
| cutlass_extensions/include/cutlass_extensions |
| deep_ep |
| deep_gemm |
| executor |
| executor_worker |
| flash_mla |
| kernels |
| layers |
| nanobind |
| plugins |
| pybind |
| runtime |
| testing |
| thop |
| CMakeLists.txt |