TensorRT-LLM/cpp/tensorrt_llm/kernels
Latest commit: f277afdd93 by Daniel Stokes, 2025-07-14 14:04:15 -07:00
perf: Enable 128x256 tile shapes for FP4 MOE CUTLASS backend (#5986)
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>
beamSearchKernels/ Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
causalConv1d/ fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
communicationKernels/ perf: better heuristic for allreduce (#5432) 2025-07-01 22:56:06 -04:00
contextFusedMultiHeadAttention/ [https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779) 2025-07-14 17:17:30 +08:00
cutlass_kernels/ perf: Enable 128x256 tile shapes for FP4 MOE CUTLASS backend (#5986) 2025-07-14 14:04:15 -07:00
decoderMaskedMultiheadAttention/ Add is_fp8_output key to XQA kernel cubin hashing (solves Eagle3-one-engine Hopper fp8 bug) (#5813) 2025-07-09 09:26:27 +08:00
dsv3MinLatencyKernels/ Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) 2025-06-14 17:36:22 +08:00
flashMLA/ feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
fusedLayernormKernels/ feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
groupRmsNormKernels/ feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
internal_cutlass_kernels/ feat: Add support for per expert activation scaling factors (#5013) 2025-06-28 09:10:35 +12:00
llama4MinLatencyKernels/ feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
lora/ chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
moeLoadBalance/ feat: Misc Opt for large scale EP (#5374) 2025-06-20 13:11:31 +08:00
selectiveScan/ fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
speculativeDecoding/ refactor: Remove enforced sorted order of batch slots (#3502) 2025-07-14 17:23:02 +02:00
trtllmGenKernels/ fix: fast redux detection in trtllm gen routing kernel (#5941) 2025-07-13 16:35:07 +08:00
unfusedAttentionKernels/ feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
userbuffers/ feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
weightOnlyBatchedGemv/ [NVBUG-5304516/5319741] Qwen2.5VL FP8 support (#5029) 2025-07-09 23:16:42 +08:00
attentionMask.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
attentionMask.h
banBadWords.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
banBadWords.h
banRepeatNgram.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
banRepeatNgram.h
beamSearchKernels.cu Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
beamSearchKernels.h Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
buildRelativeAttentionBiasKernel.cu
buildRelativeAttentionBiasKernel.h
CMakeLists.txt feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
cumsumLastDim.cu
cumsumLastDim.h
customAllReduceKernels.cu Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
customAllReduceKernels.h [TRTLLM-3927] [feat] Finalize + Allreduce + add + rmsnorm fusion (#4756) 2025-06-10 19:55:16 +08:00
decoderMaskedMultiheadAttention.cu
decoderMaskedMultiheadAttention.h [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) 2025-06-03 19:02:57 -04:00
decoderMaskedMultiheadAttentionUtils.h
decodingCommon.cu
decodingKernels.cu Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
decodingKernels.h refactor: Improve decoder finalize function (#3077) 2025-03-28 14:33:59 +08:00
delayStream.cu Update (#2978) 2025-03-23 16:39:35 +08:00
delayStream.h Update (#2978) 2025-03-23 16:39:35 +08:00
doraScaling.cu
doraScaling.h
fmhaDispatcher.cpp feat: chunked prefill for MLA (Blackwell) (#4651) 2025-06-26 09:01:00 +08:00
fmhaDispatcher.h
fusedQKNormRopeKernel.cu perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) 2025-05-23 15:31:04 +08:00
fusedQKNormRopeKernel.h perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) 2025-05-23 15:31:04 +08:00
gptKernels.cu
gptKernels.h feat: add CGA reduction fmha kernels on Blackwell. (#3763) 2025-04-29 10:43:54 +08:00
groupGemm.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
groupGemm.h
kvCachePartialCopy.cu [fix] Fix illegal mem access and possible accuracy loss. Cherry-pick … (#5017) 2025-06-09 17:50:57 +08:00
kvCacheUtils.h chore: Improve documentation of Kv_block_array (#5765) 2025-07-05 22:25:27 +02:00
layernormKernels.cu feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
layernormKernels.h feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
logitsBitmask.cu bitmask v3 (#3009) 2025-03-26 15:21:29 +08:00
logitsBitmask.h
lookupKernels.cu
lookupKernels.h
lruKernel.cu
lruKernel.h
mambaConv1dKernels.cu feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
mambaConv1dKernels.h
mlaChunkedPrefill.cu [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) 2025-06-26 22:18:08 +08:00
mlaChunkedPrefill.cuh [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) 2025-06-26 22:18:08 +08:00
mlaKernels.cu [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) 2025-06-26 22:18:08 +08:00
mlaKernels.h feat: chunked prefill for MLA (Blackwell) (#4651) 2025-06-26 09:01:00 +08:00
moeCommKernels.cu [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) 2025-07-12 15:50:31 +09:00
moeCommKernels.h feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
moePrepareKernels.cu [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) 2025-07-12 15:50:31 +09:00
moePrepareKernels.h [TRTLLM-5331] perf: Replace allgather with AllToAllPrepare (#5570) 2025-06-30 13:06:09 +08:00
multiHeadAttentionCommon.h [TRTLLM-5366][feat] Add support for sm121 (#5524) 2025-07-08 14:27:00 -07:00
noAuxTcKernels.cu Update (#2978) 2025-03-23 16:39:35 +08:00
noAuxTcKernels.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
penaltyKernels.cu Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
penaltyKernels.h
penaltyTypes.h
preQuantScaleKernel.cu chore: Mass integration of release/0.20. (#4871) 2025-06-04 14:12:27 +08:00
preQuantScaleKernel.h chore: Mass integration of release/0.20. (#4871) 2025-06-04 14:12:27 +08:00
qserveGemm.h
qserveGemmPerChannel.cu
qserveGemmPerGroup.cu
quantization.cu perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
quantization.cuh feat: Add support for MXFP8xMXFP4 in pytorch (#5535) 2025-07-06 15:32:06 -07:00
quantization.h perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
recoverFromRingAtten.cu Support RingAttention in the BertAttention plugin and the DiT model (#3661) 2025-05-09 08:06:54 +08:00
recoverFromRingAtten.h Support RingAttention in the BertAttention plugin and the DiT model (#3661) 2025-05-09 08:06:54 +08:00
renormMoeRoutingKernels.cu feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
renormMoeRoutingKernels.h Add customized renormalized moe routing kernel for moe cutlass backend (#4955) 2025-06-09 17:38:50 +08:00
rmsnormKernels.cu
rmsnormKernels.h
sageAttentionKernels.cu Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
sageAttentionKernels.h Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
samplingAirTopPKernels.cu
samplingTopKKernels.cu Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
samplingTopKKernels.h
samplingTopPKernels.cu chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
samplingTopPKernels.h
splitkGroupGemm.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
splitkGroupGemm.h
stopCriteriaKernels.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
stopCriteriaKernels.h
topkLastDim.cu
topkLastDim.h
unfusedAttentionKernels.cu fix: fix for cp > kvHeadNum (#3002) 2025-03-26 12:39:02 +08:00
unfusedAttentionKernels.h fix: fix for cp > kvHeadNum (#3002) 2025-03-26 12:39:02 +08:00
xqaDispatcher.cpp [feat] Support XQA-based MLA on SM120 (#4858) 2025-06-06 22:32:49 +08:00
xqaDispatcher.h [feat] Support XQA-based MLA on SM120 (#4858) 2025-06-06 22:32:49 +08:00