TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels
Latest commit: 7d21b55b5a by Anthony Chang, 2025-07-10 14:06:50 +08:00
[feat] Add TRTLLM MoE nvfp4 cubins for mid-high concurrency; attention_dp for TRTLLM MoE (#5723)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
Name             Last commit                                                                                        Date
batchedGemm      [feat] Add TRTLLM MoE nvfp4 cubins for mid-high concurrency; attention_dp for TRTLLM MoE (#5723)   2025-07-10 14:06:50 +08:00
blockScaleMoe    [feat] Add TRTLLM MoE nvfp4 cubins for mid-high concurrency; attention_dp for TRTLLM MoE (#5723)   2025-07-10 14:06:50 +08:00
fmha
gemm             [feat] Add TRTLLM MoE nvfp4 cubins for mid-high concurrency; attention_dp for TRTLLM MoE (#5723)   2025-07-10 14:06:50 +08:00
gemmGatedAct     [feat] Adds optional module cache for TRT-LLM Gen Gemm interfaces (#5743)                          2025-07-07 13:34:55 -07:00
CMakeLists.txt
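The CMakeLists.txt at this level presumably wires the kernel subdirectories above into the build. As a minimal sketch only, assuming the conventional add_subdirectory pattern (the actual file contents are not shown in this listing and may differ):

```cmake
# Hypothetical sketch; the real cpp/tensorrt_llm/kernels/trtllmGenKernels/CMakeLists.txt may differ.
# Each add_subdirectory() pulls in one of the kernel directories listed above.
add_subdirectory(batchedGemm)
add_subdirectory(blockScaleMoe)
add_subdirectory(fmha)
add_subdirectory(gemm)
add_subdirectory(gemmGatedAct)
```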