TensorRT-LLMs/cpp/tensorrt_llm/kernels/trtllmGenKernels
Latest commit: a5eff139f1 by Yan Chunwei, 2025-07-01 19:06:41 +08:00
[TRTLLM-5277] chore: refine llmapi examples for 1.0 (part1) (#5431)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Name           | Last commit                                                                                                     | Date
batchedGemm    | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207)   | 2025-06-17 21:01:56 +08:00
blockScaleMoe  | Fix mPtrExpertCounts allocation in MoE TRT-LLM backend (nvfp4) (#5519)                                          | 2025-06-27 20:17:40 +08:00
fmha           | [https://jirasw.nvidia.com/browse/TRTLLM-4645] support multiCtasKvMode for high-throughput MLA kernels (#5426)  | 2025-06-25 16:31:10 +08:00
gemm           | [TRTLLM-5277] chore: refine llmapi examples for 1.0 (part1) (#5431)                                             | 2025-07-01 19:06:41 +08:00
gemmGatedAct   | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                                            | 2025-06-03 14:07:54 -07:00
CMakeLists.txt | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643)                                                            | 2025-06-03 14:07:54 -07:00