Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
C++, Python, and the Python MoE layer all share the definition of `ActivationType`. Currently this is done through redefinition, which is fragile and can break when new activation function types are added.

tensorrt_llm/_torch/utils.py
cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h
  => tensorrt_llm/layers/moe.py
     cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_kernels.cu

Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
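To make the fragility concrete, here is a minimal sketch of the duplicated-enum pattern; the enumerator names and values are illustrative assumptions, not the actual TensorRT-LLM list. Because only the raw integer crosses the Python/C++ boundary, inserting a new enumerator on one side silently renumbers the rest on the other, and every call site starts dispatching to the wrong kernel.

```python
from enum import IntEnum


# Hypothetical Python mirror of the C++ ActivationType enum declared in
# cpp/tensorrt_llm/kernels/cutlass_kernels/include/common.h.
# Names and ordering are illustrative only; the invariant is that both
# definitions keep identical ordering, since only the integer value is
# handed to the C++ kernels.
class ActivationType(IntEnum):
    Gelu = 0
    Relu = 1
    Silu = 2
    Swiglu = 3
    Identity = 4


def activation_to_kernel_arg(act: ActivationType) -> int:
    """Integer passed across the boundary (hypothetical helper)."""
    return int(act)


# Consumers import the single definition instead of redefining it,
# e.g. (hypothetical import path, based on the files touched here):
#   from tensorrt_llm._torch.utils import ActivationType
assert activation_to_kernel_arg(ActivationType.Silu) == 2
```

A single importable Python definition removes one of the duplicated copies; keeping the remaining Python and C++ enums in lockstep still requires either a binding that exports the C++ values or a test that compares the two lists, neither of which is shown in this sketch.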
Contents of `cpp/tensorrt_llm/kernels/cutlass_kernels/`:

- allreduce_gemm/
- fp4_gemm/
- fp8_blockscale_gemm/
- fp8_rowwise_gemm/
- fpA_intB_gemm/
- fused_gated_gemm/
- include/
- int8_gemm/
- low_latency_gemm/
- moe_gemm/
- python/
- CMakeLists.txt
- cutlass_heuristic.cpp
- cutlass_heuristic.h
- cutlass_preprocessors.cpp
- cutlass_preprocessors.h
- cutlass_type_conversion.h