| Name | Last commit | Last commit date |
| --- | --- | --- |
| beamSearchKernels | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| causalConv1d | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00 |
| communicationKernels | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| contextFusedMultiHeadAttention | use cu for fmha_v2 (#4694) | 2025-06-15 18:40:44 +08:00 |
| cutlass_kernels | [TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215) | 2025-06-17 15:23:24 +08:00 |
| decoderMaskedMultiheadAttention | [nvbug 5333996][fix] Unload XQA cubins early to avoid static lifetime (#5133) | 2025-06-13 15:53:29 +08:00 |
| dsv3MinLatencyKernels | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| flashMLA | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00 |
| fusedLayernormKernels | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00 |
| groupRmsNormKernels | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| internal_cutlass_kernels | Update internal cutlass commit. (#5228) | 2025-06-17 10:47:45 +08:00 |
| llama4MinLatencyKernels | [fix] Fix Llama4 guradwords failures (#4844) | 2025-06-02 13:43:42 -07:00 |
| lora | chore: Stabilize ABI boundary for internal kernel library (#3117) | 2025-04-11 15:07:50 +08:00 |
| moeLoadBalance | feat: large-scale EP (part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| selectiveScan | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00 |
| speculativeDecoding | fix: Eagle decoding in TRT flow (#4229) | 2025-05-14 16:10:49 +02:00 |
| trtllmGenKernels | feat: MoE trtllm backend kernel update (#5183) | 2025-06-16 14:46:13 +08:00 |
| unfusedAttentionKernels | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00 |
| userbuffers | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00 |
| weightOnlyBatchedGemv | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| attentionMask.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| attentionMask.h | Update TensorRT-LLM (#2363) | 2024-10-22 20:27:35 +08:00 |
| banBadWords.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| banBadWords.h | | |
| banRepeatNgram.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| banRepeatNgram.h | | |
| beamSearchKernels.cu | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| beamSearchKernels.h | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| buildRelativeAttentionBiasKernel.cu | | |
| buildRelativeAttentionBiasKernel.h | | |
| CMakeLists.txt | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| cumsumLastDim.cu | open source 7f370deb0090d885d7518c2b146399ba3933c004 (#2273) | 2024-09-30 13:51:19 +02:00 |
| cumsumLastDim.h | | |
| customAllReduceKernels.cu | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| customAllReduceKernels.h | [TRTLLM-3927] [feat] Finalize + Allreduce + add + rmsnorm fusion (#4756) | 2025-06-10 19:55:16 +08:00 |
| decoderMaskedMultiheadAttention.cu | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00 |
| decoderMaskedMultiheadAttention.h | [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) | 2025-06-03 19:02:57 -04:00 |
| decoderMaskedMultiheadAttentionUtils.h | Update TensorRT-LLM (#2363) | 2024-10-22 20:27:35 +08:00 |
| decodingCommon.cu | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| decodingKernels.cu | Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) | 2025-05-12 22:32:29 +02:00 |
| decodingKernels.h | refactor: Improve decoder finalize function (#3077) | 2025-03-28 14:33:59 +08:00 |
| delayStream.cu | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| delayStream.h | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| doraScaling.cu | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| doraScaling.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| fmhaDispatcher.cpp | [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) | 2025-06-03 19:02:57 -04:00 |
| fmhaDispatcher.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| fusedQKNormRopeKernel.cu | perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) | 2025-05-23 15:31:04 +08:00 |
| fusedQKNormRopeKernel.h | perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) | 2025-05-23 15:31:04 +08:00 |
| gptKernels.cu | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| gptKernels.h | feat: add CGA reduction fmha kernels on Blackwell. (#3763) | 2025-04-29 10:43:54 +08:00 |
| groupGemm.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| groupGemm.h | Update TensorRT-LLM (#2562) | 2024-12-11 00:31:05 -08:00 |
| kvCachePartialCopy.cu | [fix] Fix illegal mem access and possible accuracy lose. Cherry-pick … (#5017) | 2025-06-09 17:50:57 +08:00 |
| kvCacheUtils.h | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| layernormKernels.cu | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| layernormKernels.h | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| logitsBitmask.cu | bitmask v3 (#3009) | 2025-03-26 15:21:29 +08:00 |
| logitsBitmask.h | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00 |
| lookupKernels.cu | | |
| lookupKernels.h | | |
| lruKernel.cu | | |
| lruKernel.h | | |
| mambaConv1dKernels.cu | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| mambaConv1dKernels.h | | |
| mlaKernels.cu | [feat] Optimize KV Cache Reuse for MLA (#4869) | 2025-06-13 11:03:05 +08:00 |
| mlaKernels.h | [feat] Optimize KV Cache Reuse for MLA (#4869) | 2025-06-13 11:03:05 +08:00 |
| moeCommKernels.cu | optimize memset before alltoall communication (#5188) | 2025-06-14 10:49:47 +08:00 |
| moeCommKernels.h | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| multiHeadAttentionCommon.h | chore: Stabilize ABI boundary for internal kernel library (#3117) | 2025-04-11 15:07:50 +08:00 |
| noAuxTcKernels.cu | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| noAuxTcKernels.h | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| penaltyKernels.cu | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| penaltyKernels.h | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00 |
| penaltyTypes.h | | |
| preQuantScaleKernel.cu | chore: Mass integration of release/0.20. (#4871) | 2025-06-04 14:12:27 +08:00 |
| preQuantScaleKernel.h | chore: Mass integration of release/0.20. (#4871) | 2025-06-04 14:12:27 +08:00 |
| qserveGemm.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| qserveGemmPerChannel.cu | Update TensorRT-LLM (#2532) | 2024-12-04 21:16:56 +08:00 |
| qserveGemmPerGroup.cu | Update TensorRT-LLM (#2502) | 2024-11-26 16:51:34 +08:00 |
| quantization.cu | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00 |
| quantization.cuh | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00 |
| quantization.h | feat: Add Mixture of Experts FP8xMXFP4 support (#4750) | 2025-06-09 13:25:04 +08:00 |
| recoverFromRingAtten.cu | Support RingAttention in the BertAttention plugin and the DiT model (#3661) | 2025-05-09 08:06:54 +08:00 |
| recoverFromRingAtten.h | Support RingAttention in the BertAttention plugin and the DiT model (#3661) | 2025-05-09 08:06:54 +08:00 |
| renormMoeRoutingKernels.cu | Add customized renormalized moe routing kernel for moe cutlass backend (#4955) | 2025-06-09 17:38:50 +08:00 |
| renormMoeRoutingKernels.h | Add customized renormalized moe routing kernel for moe cutlass backend (#4955) | 2025-06-09 17:38:50 +08:00 |
| rmsnormKernels.cu | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| rmsnormKernels.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| sageAttentionKernels.cu | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| sageAttentionKernels.h | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| samplingAirTopPKernels.cu | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| samplingTopKKernels.cu | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| samplingTopKKernels.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| samplingTopPKernels.cu | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| samplingTopPKernels.h | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| splitkGroupGemm.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| splitkGroupGemm.h | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| stopCriteriaKernels.cu | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| stopCriteriaKernels.h | open source 4dbf696ae9b74a26829d120b67ab8443d70c8e58 (#2297) | 2024-10-08 12:19:19 +02:00 |
| topkLastDim.cu | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| topkLastDim.h | Update TensorRT-LLM (#2436) | 2024-11-12 15:27:49 +08:00 |
| unfusedAttentionKernels.cu | fix: fix for cp > kvHeadNum (#3002) | 2025-03-26 12:39:02 +08:00 |
| unfusedAttentionKernels.h | fix: fix for cp > kvHeadNum (#3002) | 2025-03-26 12:39:02 +08:00 |
| xqaDispatcher.cpp | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |
| xqaDispatcher.h | [feat] Support XQA-based MLA on SM120 (#4858) | 2025-06-06 22:32:49 +08:00 |