TensorRT-LLM/cpp/tensorrt_llm/kernels/trtllmGenKernels
Latest commit: a370643b26 by dongxuy04 (2025-11-14 08:37:20 +08:00)
[None][fix] support topk autotuner input for expert slot per group larger than 32 (#9087)
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
| Name           | Last commit                                                                                      | Date                      |
|----------------|--------------------------------------------------------------------------------------------------|---------------------------|
| batchedGemm    | [None][feat] Update TRTLLM MoE MxFP4 cubins; autotune tileN (#8156)                              | 2025-10-23 09:14:18 +08:00 |
| blockScaleMoe  | [None][fix] support topk autotuner input for expert slot per group larger than 32 (#9087)        | 2025-11-14 08:37:20 +08:00 |
| fmha           | [TRTLLM-8816][feat] add optimized trtllm-gen attention kernels on sm103 (#9081)                  | 2025-11-13 12:41:07 +08:00 |
| gemm           | [https://nvbugs/5503138] [fix] Remove compile warnings (#8167)                                   | 2025-10-13 13:24:23 +08:00 |
| gemmGatedAct   | [None][chore] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851)                       | 2025-09-25 21:02:35 +08:00 |
| CMakeLists.txt | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                             | 2025-06-03 14:07:54 -07:00 |