Name | Last commit message | Last commit date
beamSearchKernels | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
causalConv1d | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00
communicationKernels | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
contextFusedMultiHeadAttention | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
cutlass_kernels | [None][perf] Make finalize fusion part of the tactic selection logic (#6915) | 2025-08-21 14:08:03 -07:00
decoderMaskedMultiheadAttention | [TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035) | 2025-08-20 10:11:25 -04:00
dsv3MinLatencyKernels | [https://nvbugs/5381276][fix] fix warning for fused_a_gemm (#6402) | 2025-08-01 09:37:21 -04:00
flashMLA | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00
fusedLayernormKernels | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
groupRmsNormKernels | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00
internal_cutlass_kernels | [None][perf] Make finalize fusion part of the tactic selection logic (#6915) | 2025-08-21 14:08:03 -07:00
llama4MinLatencyKernels | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00
lora | chore: Stabilize ABI boundary for internal kernel library (#3117) | 2025-04-11 15:07:50 +08:00
moeLoadBalance | feat: Misc Opt for large scale EP (#5374) | 2025-06-20 13:11:31 +08:00
selectiveScan | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00
speculativeDecoding | refactor: Remove enforced sorted order of batch slots (#3502) | 2025-07-14 17:23:02 +02:00
trtllmGenKernels | [https://nvbugs/5392414] [fix] Add customized default routing method (#6818) | 2025-08-21 16:58:41 +08:00
unfusedAttentionKernels | [TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035) | 2025-08-20 10:11:25 -04:00
userbuffers | [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500) | 2025-08-07 17:28:14 -07:00
weightOnlyBatchedGemv | [NVBUG-5304516/5319741]Qwen2.5VL FP8 support (#5029) | 2025-07-09 23:16:42 +08:00
attentionMask.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
attentionMask.h | Update TensorRT-LLM (#2363) | 2024-10-22 20:27:35 +08:00
banBadWords.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
banBadWords.h | |
banRepeatNgram.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
banRepeatNgram.h | |
beamSearchKernels.cu | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
beamSearchKernels.h | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
buildRelativeAttentionBiasKernel.cu | |
buildRelativeAttentionBiasKernel.h | |
CMakeLists.txt | feat: reduce unnecessary kernel generation (#5476) | 2025-07-04 14:37:49 +08:00
cumsumLastDim.cu | open source 7f370deb0090d885d7518c2b146399ba3933c004 (#2273) | 2024-09-30 13:51:19 +02:00
cumsumLastDim.h | |
customAllReduceKernels.cu | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00
customAllReduceKernels.h | [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500) | 2025-08-07 17:28:14 -07:00
customMoeRoutingKernels.cu | [https://nvbugs/5392414] [fix] Add customized default routing method (#6818) | 2025-08-21 16:58:41 +08:00
customMoeRoutingKernels.h | [https://nvbugs/5392414] [fix] Add customized default routing method (#6818) | 2025-08-21 16:58:41 +08:00
decoderMaskedMultiheadAttention.cu | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00
decoderMaskedMultiheadAttention.h | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
decoderMaskedMultiheadAttentionUtils.h | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
decodingCommon.cu | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
decodingKernels.cu | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00
decodingKernels.h | refactor: Improve decoder finalize function (#3077) | 2025-03-28 14:33:59 +08:00
delayStream.cu | Update (#2978) | 2025-03-23 16:39:35 +08:00
delayStream.h | Update (#2978) | 2025-03-23 16:39:35 +08:00
doraScaling.cu | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
doraScaling.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
fmhaDispatcher.cpp | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
fmhaDispatcher.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
fusedMoeCommKernels.cu | [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) | 2025-08-24 08:15:29 -04:00
fusedMoeCommKernels.h | [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) | 2025-08-24 08:15:29 -04:00
fusedQKNormRopeKernel.cu | [None][feat] Support Yarn on Qwen3 (#6785) | 2025-08-17 07:21:29 +08:00
fusedQKNormRopeKernel.h | [None][feat] Support Yarn on Qwen3 (#6785) | 2025-08-17 07:21:29 +08:00
gptKernels.cu | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
gptKernels.h | feat: add CGA reduction fmha kernels on Blackwell. (#3763) | 2025-04-29 10:43:54 +08:00
groupGemm.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
groupGemm.h | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00
kvCachePartialCopy.cu | [fix] Fix illegal mem access and possible accuracy lose. Cherry-pick … (#5017) | 2025-06-09 17:50:57 +08:00
kvCacheUtils.h | chore: Improve documentation of Kv_block_array (#5765) | 2025-07-05 22:25:27 +02:00
layernormKernels.cu | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00
layernormKernels.h | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00
logitsBitmask.cu | bitmask v3 (#3009) | 2025-03-26 15:21:29 +08:00
logitsBitmask.h | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00
lookupKernels.cu | |
lookupKernels.h | |
lruKernel.cu | |
lruKernel.h | |
mambaConv1dKernels.cu | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00
mambaConv1dKernels.h | |
mlaChunkedPrefill.cu | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
mlaChunkedPrefill.cuh | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
mlaKernels.cu | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
mlaKernels.h | [None][feat] Use Separate QKV Input Layout for Context MLA (#6538) | 2025-08-19 22:04:48 +08:00
moeCommKernelsCommon.h | [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) | 2025-08-24 08:15:29 -04:00
moePrepareKernels.cu | [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) | 2025-08-24 08:15:29 -04:00
moePrepareKernels.h | [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) | 2025-08-24 08:15:29 -04:00
moeTopKFuncs.cuh | [https://nvbugs/5392414] [fix] Add customized default routing method (#6818) | 2025-08-21 16:58:41 +08:00
multiHeadAttentionCommon.h | [TRTLLM-5366][feat]Add support for sm121 (#5524) | 2025-07-08 14:27:00 -07:00
noAuxTcKernels.cu | Update (#2978) | 2025-03-23 16:39:35 +08:00
noAuxTcKernels.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
penaltyKernels.cu | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00
penaltyKernels.h | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00
penaltyTypes.h | |
preQuantScaleKernel.cu | chore: Mass integration of release/0.20. (#4871) | 2025-06-04 14:12:27 +08:00
preQuantScaleKernel.h | chore: Mass integration of release/0.20. (#4871) | 2025-06-04 14:12:27 +08:00
qserveGemm.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00
qserveGemmPerChannel.cu | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00
qserveGemmPerGroup.cu | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00
quantization.cu | [TRTLLM-7093][fix] the perf regression to cvt_fp4 kernels (#6851) | 2025-08-13 19:13:40 +08:00
quantization.cuh | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
quantization.h | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00
recoverFromRingAtten.cu | [None][feat] Add support for Hopper MLA chunked prefill (#6655) | 2025-08-14 10:39:26 +08:00
recoverFromRingAtten.h | Support RingAttention in the BertAttention plugin and the DiT model (#3661) | 2025-05-09 08:06:54 +08:00
rmsnormKernels.cu | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00
rmsnormKernels.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00
sageAttentionKernels.cu | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00
sageAttentionKernels.h | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00
samplingAirTopPKernels.cu | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00
samplingTopKKernels.cu | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00
samplingTopKKernels.h | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00
samplingTopPKernels.cu | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00
samplingTopPKernels.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00
splitkGroupGemm.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
splitkGroupGemm.h | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
stopCriteriaKernels.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00
stopCriteriaKernels.h | open source 4dbf696ae9b74a26829d120b67ab8443d70c8e58 (#2297) | 2024-10-08 12:19:19 +02:00
topkLastDim.cu | [None][chore] Mass integration of release/1.0 (#6864) | 2025-08-22 09:25:15 +08:00
topkLastDim.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00
unfusedAttentionKernels.cu | fix: fix for cp > kvHeadNum (#3002) | 2025-03-26 12:39:02 +08:00
unfusedAttentionKernels.h | fix: fix for cp > kvHeadNum (#3002) | 2025-03-26 12:39:02 +08:00
xqaDispatcher.cpp | [TRTLLM-7348] [feat] Enable Cross-Attention to use XQA kernels for Whisper (#7035) | 2025-08-20 10:11:25 -04:00
xqaDispatcher.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00