TensorRT-LLM/cpp/tensorrt_llm/kernels
dongxuy04 b37a8a9a74
[None][fix] fix EPLB init hang (#8649)
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
2025-10-28 05:22:49 -04:00
beamSearchKernels
causalConv1d
communicationKernels [TRTLLM-7318][feat] MnnvlThroughput AlltoAll implementation. (#7499) 2025-10-27 13:23:06 -04:00
contextFusedMultiHeadAttention [https://nvbugs/5542862][fix] Upgrade fmha_v2. (#8364) 2025-10-20 10:20:23 +08:00
cutlass_kernels [None][feat] Add FP8 rowwise GEMMs for B200 (#8332) 2025-10-27 16:33:14 -04:00
decoderMaskedMultiheadAttention [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
dsv3MinLatencyKernels
flashMLA
fusedLayernormKernels
groupRmsNormKernels
internal_cutlass_kernels [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) 2025-10-27 10:18:19 +08:00
llama4MinLatencyKernels
lora
moeLoadBalance [None][fix] fix EPLB init hang (#8649) 2025-10-28 05:22:49 -04:00
selectiveScan
speculativeDecoding
tinygemm2 [TRTLLM-7775][feat] Integrate tinygemm2 for gpt-oss (#7916) 2025-10-02 10:47:04 -07:00
trtllmGenKernels [None][feat] Update TRTLLM MoE MxFP4 cubins; autotune tileN (#8156) 2025-10-23 09:14:18 +08:00
unfusedAttentionKernels [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
userbuffers
weightOnlyBatchedGemv
attentionMask.cu
attentionMask.h
banBadWords.cu
banBadWords.h
banRepeatNgram.cu
banRepeatNgram.h
beamSearchKernels.cu
beamSearchKernels.h
buildRelativeAttentionBiasKernel.cu
buildRelativeAttentionBiasKernel.h
CMakeLists.txt
cumsumLastDim.cu
cumsumLastDim.h
customAllReduceKernels.cu
customAllReduceKernels.h
customMoeRoutingKernels.cu
customMoeRoutingKernels.h
decoderMaskedMultiheadAttention.cu
decoderMaskedMultiheadAttention.h
decoderMaskedMultiheadAttentionUtils.h
decodingCommon.cu
decodingKernels.cu
decodingKernels.h
delayStream.cu
delayStream.h
doraScaling.cu
doraScaling.h
fmhaDispatcher.cpp [None][feat] Add fmha_v2 kernel for head_dim=80 and sm=100 to support VLM (#8392) 2025-10-17 19:42:47 +08:00
fmhaDispatcher.h
fusedMoeCommKernels.cu [TRTLLM-6748][feat] add PDL support for more kernels (#7977) 2025-10-11 08:32:05 +08:00
fusedMoeCommKernels.h [TRTLLM-6748][feat] add PDL support for more kernels (#7977) 2025-10-11 08:32:05 +08:00
fusedQKNormRopeKernel.cu [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
fusedQKNormRopeKernel.h
gptKernels.cu
gptKernels.h
groupGemm.cu
groupGemm.h
kvCachePartialCopy.cu
kvCacheUtils.h
layernormKernels.cu
layernormKernels.h
logitsBitmask.cu
logitsBitmask.h
lookupKernels.cu
lookupKernels.h
lruKernel.cu
lruKernel.h
mambaConv1dKernels.cu
mambaConv1dKernels.h
mlaChunkedPrefill.cu
mlaChunkedPrefill.cuh
mlaKernels.cu
mlaKernels.h
moe_utils.cuh [https://nvbugs/5378031] [feat] W4A8 AWQ MoE supports Per Expert Pre-quant Scale Factor for PyT backend (#7286) 2025-10-16 11:07:48 +08:00
moeCommKernelsCommon.h
moePrepareKernels.cu [TRTLLM-6748][feat] add PDL support for more kernels (#7977) 2025-10-11 08:32:05 +08:00
moePrepareKernels.h [TRTLLM-6748][feat] add PDL support for more kernels (#7977) 2025-10-11 08:32:05 +08:00
moeTopKFuncs.cuh [TRTLLM-8637][feat] Optimize the routing kernel for DeepseekV3 (MoE CUTLASS backend); Add support for KimiK2 and Qwen-next (MoE TRTLLM backend) (#7761) 2025-10-20 10:08:31 +08:00
multiHeadAttentionCommon.h
noAuxTcKernels.cu [TRTLLM-8637][feat] Optimize the routing kernel for DeepseekV3 (MoE CUTLASS backend); Add support for KimiK2 and Qwen-next (MoE TRTLLM backend) (#7761) 2025-10-20 10:08:31 +08:00
noAuxTcKernels.h [TRTLLM-8637][feat] Optimize the routing kernel for DeepseekV3 (MoE CUTLASS backend); Add support for KimiK2 and Qwen-next (MoE TRTLLM backend) (#7761) 2025-10-20 10:08:31 +08:00
penaltyKernels.cu [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
penaltyKernels.h [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
penaltyTypes.h [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) 2025-10-27 13:12:31 -04:00
preQuantScaleKernel.cu [https://nvbugs/5378031] [feat] W4A8 AWQ MoE supports Per Expert Pre-quant Scale Factor for PyT backend (#7286) 2025-10-16 11:07:48 +08:00
preQuantScaleKernel.h [https://nvbugs/5378031] [feat] W4A8 AWQ MoE supports Per Expert Pre-quant Scale Factor for PyT backend (#7286) 2025-10-16 11:07:48 +08:00
qserveGemm.h
qserveGemmPerChannel.cu
qserveGemmPerGroup.cu
quantization.cu
quantization.cuh
quantization.h
recoverFromRingAtten.cu [https://nvbugs/5503138] [fix] Remove compile warnings (#8167) 2025-10-13 13:24:23 +08:00
recoverFromRingAtten.h
rmsnormKernels.cu
rmsnormKernels.h
sageAttentionKernels.cu
sageAttentionKernels.h
samplingAirTopPKernels.cu
samplingTopKKernels.cu
samplingTopKKernels.h
samplingTopPKernels.cu
samplingTopPKernels.h
sparseAttentionKernels.cu [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
sparseAttentionKernels.h [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
splitkGroupGemm.cu
splitkGroupGemm.h
stopCriteriaKernels.cu
stopCriteriaKernels.h
topkLastDim.cu
topkLastDim.h
unfusedAttentionKernels.cu
unfusedAttentionKernels.h [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
xqaDispatcher.cpp [TRTLLM-8536][feat] Add the sparse attention framework and one use case--RocketKV support (#8086) 2025-10-14 08:23:16 -07:00
xqaDispatcher.h