TensorRT-LLM/cpp/tensorrt_llm/kernels
Latest commit 5a50e2b26b by Perkz Zheng, 2025-07-08 10:35:38 +08:00: [https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779)
beamSearchKernels Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
causalConv1d fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
communicationKernels Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) 2025-06-14 17:36:22 +08:00
contextFusedMultiHeadAttention [https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779) 2025-07-08 10:35:38 +08:00
cutlass_kernels [NVBUG:5355009] Modify check for fuse_fp4_quant on SM120 (#5651) 2025-07-03 22:08:15 +09:00
decoderMaskedMultiheadAttention [nvbug 5333996 ][fix] Unload XQA cubins early to avoid static lifetime (#5133) 2025-06-13 15:53:29 +08:00
dsv3MinLatencyKernels Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) 2025-06-14 17:36:22 +08:00
flashMLA fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
fusedLayernormKernels feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
groupRmsNormKernels feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
internal_cutlass_kernels Update internal cutlass commit. (#5228) 2025-06-17 10:47:45 +08:00
llama4MinLatencyKernels [fix] Fix Llama4 guardwords failures (#4844) 2025-06-02 13:43:42 -07:00
lora chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
moeLoadBalance feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) 2025-06-08 10:25:18 +08:00
selectiveScan fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
speculativeDecoding fix: Eagle decoding in TRT flow (#4229) 2025-05-14 16:10:49 +02:00
trtllmGenKernels [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) 2025-06-17 21:01:56 +08:00
unfusedAttentionKernels feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
userbuffers feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
weightOnlyBatchedGemv feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
attentionMask.cu
attentionMask.h
banBadWords.cu
banBadWords.h
banRepeatNgram.cu
banRepeatNgram.h
beamSearchKernels.cu Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
beamSearchKernels.h Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
buildRelativeAttentionBiasKernel.cu
buildRelativeAttentionBiasKernel.h
CMakeLists.txt Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) 2025-06-14 17:36:22 +08:00
cumsumLastDim.cu
cumsumLastDim.h
customAllReduceKernels.cu Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
customAllReduceKernels.h [TRTLLM-3927] [feat] Finalize + Allreduce + add + rmsnorm fusion (#4756) 2025-06-10 19:55:16 +08:00
decoderMaskedMultiheadAttention.cu
decoderMaskedMultiheadAttention.h [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) 2025-06-03 19:02:57 -04:00
decoderMaskedMultiheadAttentionUtils.h
decodingCommon.cu
decodingKernels.cu Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
decodingKernels.h refactor: Improve decoder finalize function (#3077) 2025-03-28 14:33:59 +08:00
delayStream.cu
delayStream.h
doraScaling.cu
doraScaling.h
fmhaDispatcher.cpp [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) 2025-06-03 19:02:57 -04:00
fmhaDispatcher.h
fusedQKNormRopeKernel.cu perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) 2025-05-23 15:31:04 +08:00
fusedQKNormRopeKernel.h perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) 2025-05-23 15:31:04 +08:00
gptKernels.cu
gptKernels.h feat: add CGA reduction fmha kernels on Blackwell. (#3763) 2025-04-29 10:43:54 +08:00
groupGemm.cu
groupGemm.h
kvCachePartialCopy.cu [fix] Fix illegal mem access and possible accuracy loss. Cherry-pick … (#5017) 2025-06-09 17:50:57 +08:00
kvCacheUtils.h
layernormKernels.cu feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
layernormKernels.h feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
logitsBitmask.cu bitmask v3 (#3009) 2025-03-26 15:21:29 +08:00
logitsBitmask.h
lookupKernels.cu
lookupKernels.h
lruKernel.cu
lruKernel.h
mambaConv1dKernels.cu feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
mambaConv1dKernels.h
mlaKernels.cu [feat] Optimize KV Cache Reuse for MLA (#4869) 2025-06-13 11:03:05 +08:00
mlaKernels.h [feat] Optimize KV Cache Reuse for MLA (#4869) 2025-06-13 11:03:05 +08:00
moeCommKernels.cu optimize memset before alltoall communication (#5188) 2025-06-14 10:49:47 +08:00
moeCommKernels.h feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
multiHeadAttentionCommon.h chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
noAuxTcKernels.cu
noAuxTcKernels.h
penaltyKernels.cu
penaltyKernels.h
penaltyTypes.h
preQuantScaleKernel.cu chore: Mass integration of release/0.20. (#4871) 2025-06-04 14:12:27 +08:00
preQuantScaleKernel.h chore: Mass integration of release/0.20. (#4871) 2025-06-04 14:12:27 +08:00
qserveGemm.h
qserveGemmPerChannel.cu
qserveGemmPerGroup.cu
quantization.cu feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
quantization.cuh feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
quantization.h feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
recoverFromRingAtten.cu Support RingAttention in the BertAttention plugin and the DiT model (#3661) 2025-05-09 08:06:54 +08:00
recoverFromRingAtten.h Support RingAttention in the BertAttention plugin and the DiT model (#3661) 2025-05-09 08:06:54 +08:00
renormMoeRoutingKernels.cu Add customized renormalized moe routing kernel for moe cutlass backend (#4955) 2025-06-09 17:38:50 +08:00
renormMoeRoutingKernels.h Add customized renormalized moe routing kernel for moe cutlass backend (#4955) 2025-06-09 17:38:50 +08:00
rmsnormKernels.cu
rmsnormKernels.h
sageAttentionKernels.cu
sageAttentionKernels.h
samplingAirTopPKernels.cu
samplingTopKKernels.cu
samplingTopKKernels.h
samplingTopPKernels.cu chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
samplingTopPKernels.h
splitkGroupGemm.cu
splitkGroupGemm.h
stopCriteriaKernels.cu
stopCriteriaKernels.h
topkLastDim.cu
topkLastDim.h
unfusedAttentionKernels.cu
unfusedAttentionKernels.h
xqaDispatcher.cpp [feat] Support XQA-based MLA on SM120 (#4858) 2025-06-06 22:32:49 +08:00
xqaDispatcher.h [feat] Support XQA-based MLA on SM120 (#4858) 2025-06-06 22:32:49 +08:00
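
The entries above are the kernel sources themselves, so as a rough illustration of what lives in this directory, here is a minimal RMSNorm kernel sketch in the spirit of rmsnormKernels.cu. This is a hypothetical simplification, not the repo's implementation (which presumably adds vectorized loads, multiple dtypes, and fused quantization paths): one thread block per token row, a shared-memory reduction for the sum of squares, then a scale by the learned per-channel weight.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// One block per token row; blockDim.x must be a power of two.
// Launch example: rmsnorm_rows<<<numTokens, 256, 256 * sizeof(float)>>>(...)
__global__ void rmsnorm_rows(float const* in, float const* weight, float* out,
                             int hidden, float eps)
{
    extern __shared__ float smem[];
    float const* row = in + (size_t) blockIdx.x * hidden;
    float* outRow = out + (size_t) blockIdx.x * hidden;

    // Each thread accumulates a partial sum of squares over the row.
    float partial = 0.f;
    for (int i = threadIdx.x; i < hidden; i += blockDim.x)
    {
        float v = row[i];
        partial += v * v;
    }
    smem[threadIdx.x] = partial;
    __syncthreads();

    // Tree reduction of the partial sums in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1)
    {
        if (threadIdx.x < stride)
        {
            smem[threadIdx.x] += smem[threadIdx.x + stride];
        }
        __syncthreads();
    }
    float const invRms = rsqrtf(smem[0] / hidden + eps);

    // Normalize and apply the per-channel weight.
    for (int i = threadIdx.x; i < hidden; i += blockDim.x)
    {
        outRow[i] = row[i] * invRms * weight[i];
    }
}
```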
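The logitsBitmask.cu entry ("bitmask v3") points at masked decoding, where disallowed vocabulary tokens have their logits forced to -inf before sampling so softmax assigns them zero probability. A minimal sketch of that idea, assuming a packed 32-bit layout in which bit (token % 32) of word (token / 32) marks a token as allowed; the layout is an assumption, not the repo's actual format.

```cuda
#include <cuda_runtime.h>
#include <cstdint>
#include <math.h>

// One thread per vocabulary entry.
__global__ void apply_logits_bitmask(float* logits, uint32_t const* mask, int vocabSize)
{
    int const token = blockIdx.x * blockDim.x + threadIdx.x;
    if (token >= vocabSize)
    {
        return;
    }
    uint32_t const word = mask[token / 32]; // 32 tokens packed per mask word
    bool const allowed = (word >> (token % 32)) & 1u;
    if (!allowed)
    {
        logits[token] = -INFINITY; // banned token: zero probability after softmax
    }
}
```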
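Similarly, penaltyKernels.cu and penaltyTypes.h point at sampling penalties. Below is a sketch of the standard CTRL-style repetition penalty, again hypothetical rather than the repo's code, and assuming the list of previously generated token ids has been deduplicated (duplicates would make two threads read-modify-write the same logit).

```cuda
#include <cuda_runtime.h>

// One thread per previously generated token id.
__global__ void apply_repetition_penalty(float* logits, int const* prevIds, int prevLen, float penalty)
{
    int const i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= prevLen)
    {
        return;
    }
    float const l = logits[prevIds[i]];
    // CTRL rule: divide positive logits, multiply negative ones, so the
    // token becomes less likely regardless of the logit's sign.
    logits[prevIds[i]] = l > 0.f ? l / penalty : l * penalty;
}
```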