TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels
davidclark-nv a1235ee978
[feat] Adds optional module cache for TRT-LLM Gen Gemm interfaces (#5743)
Signed-off-by: David Clark <215764518+davidclark-nv@users.noreply.github.com>
Co-authored-by: Nikita Korobov <14355239+nekorobov@users.noreply.github.com>
2025-07-07 13:34:55 -07:00
Name            Last commit message                                                                                                     Last commit date
batchedGemm     [feat] Adds optional module cache for TRT-LLM Gen Gemm interfaces (#5743)                                              2025-07-07 13:34:55 -07:00
blockScaleMoe   Refactor the topk parallelization part for the routing kernels (#5567)                                                 2025-07-07 15:53:25 +08:00
fmha            [https://jirasw.nvidia.com/browse/TRTLLM-4645] support mutliCtasKvMode for high-throughput MLA kernels (#5426)         2025-06-25 16:31:10 +08:00
gemm            [feat] Adds optional module cache for TRT-LLM Gen Gemm interfaces (#5743)                                              2025-07-07 13:34:55 -07:00
gemmGatedAct    [feat] Adds optional module cache for TRT-LLM Gen Gemm interfaces (#5743)                                              2025-07-07 13:34:55 -07:00
CMakeLists.txt  feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                                                    2025-06-03 14:07:54 -07:00