| File | Last commit | Last updated |
| --- | --- | --- |
| allgatherOp.cpp | [fix] Move NCCL group in all-gather and reduce-scatter OPs outside the outer loop (#6053) | 2025-07-16 00:25:32 +09:00 |
| allreduceOp.cpp | [None][feat] Add NCCL Symmetric Integration for All Reduce (#4500) | 2025-08-07 17:28:14 -07:00 |
| attentionOp.cpp | [TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels (#6904) | 2025-09-19 20:55:32 +08:00 |
| attentionOp.h | [TRTLLM-5966][feat] Helix: add custom position ids to MLA kernels (#6904) | 2025-09-19 20:55:32 +08:00 |
| causalConv1dOp.cpp | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00 |
| CMakeLists.txt | [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) | 2025-09-04 09:03:38 -07:00 |
| convertSpecDecodingMaskToPackedMaskOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| cublasScaledMM.cpp | [TRTLLM-4279] fix: Add a protection test for checking trtllm custom ops (#6515) | 2025-08-01 15:59:09 +08:00 |
| cublasScaledMM.h | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| cudaScaledMM.cpp | [NVBUG-5304516/5319741]Qwen2.5VL FP8 support (#5029) | 2025-07-09 23:16:42 +08:00 |
| customMoeRoutingOp.cpp | [None] [feat] Enable run_post_quant_allgather for MoE TRTLLM backend (#6794) | 2025-09-23 08:24:21 +08:00 |
| cutlassScaledMM.cpp | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| dsv3FusedAGemmOp.cpp | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| dsv3RouterGemmOp.cpp | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| dynamicDecodeOp.cpp | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| dynamicDecodeOp.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| finegrained_mixed_dtype_gemm_thop.cpp | W4A8 GEMM (#6005) | 2025-07-20 17:34:57 +03:00 |
| finegrained_mixed_dtype_gemm_thop.h | W4A8 GEMM (#6005) | 2025-07-20 17:34:57 +03:00 |
| fmhaPackMaskOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| fp4BatchedQuantize.cpp | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| fp4BlockScaleMoe.cpp | [OMNIML-2336][feat] Add NVFP4 x FP8 moe kernels (#7821) | 2025-09-24 12:14:35 -07:00 |
| fp4Gemm.cpp | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| fp4GemmTrtllmGen.cpp | [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) | 2025-09-04 09:03:38 -07:00 |
| fp4Op.cpp | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| fp4Quantize.cpp | [None][perf] Accelerate global scale calculations for deepEP fp4 combine (#7126) | 2025-08-27 00:13:13 +08:00 |
| fp4Quantize.h | [None][perf] Accelerate global scale calculations for deepEP fp4 combine (#7126) | 2025-08-27 00:13:13 +08:00 |
| fp4xFp8GemmTrtllmGen.cpp | [OMNIML-2336][feat] Add NVFP4 x FP8 (#6809) | 2025-09-04 09:03:38 -07:00 |
| fp8BatchedGemmTrtllmGen.cpp | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| fp8BlockScaleMoe.cpp | [None] [feat] Enable run_post_quant_allgather for MoE TRTLLM backend (#6794) | 2025-09-23 08:24:21 +08:00 |
| fp8BlockScalingGemm.cpp | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| fp8Op.cpp | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| fp8Op.h | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| fp8PerTensorScaleMoe.cpp | [None] [feat] Enable run_post_quant_allgather for MoE TRTLLM backend (#6794) | 2025-09-23 08:24:21 +08:00 |
| fp8PerTensorScalingTrtllmGenGemm.cpp | [TRTLLM-4629] [feat] trtllm-gen kernels support sm103 (#7570) | 2025-09-07 10:04:10 +08:00 |
| fp8Quantize.cpp | [TRTLLM-4629] [feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| fp8RowwiseGemm.cpp | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00 |
| fusedQKNormRopeOp.cpp | [None][feat] Support Yarn on Qwen3 (#6785) | 2025-08-17 07:21:29 +08:00 |
| fusedTopkSoftmax.cpp | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| gatherTreeOp.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| groupRmsNormOp.cpp | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| llama4MinLatency.cpp | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| logitsBitmaskOp.cpp | [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481) | 2025-09-04 23:30:14 +08:00 |
| loraOp.cpp | [TRTLLM-7263][fix] Prevent recreation of cublas handles in lora_grouped_gemm every call (#6968) | 2025-08-19 15:39:56 +03:00 |
| mambaConv1dOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| mlaPreprocessOp.cpp | [TRTLLM-7192][feat] optimize MLA chunked prefill && support fp8 mla chunked prefill (#7477) | 2025-09-15 21:43:49 +08:00 |
| moeCommOp.cpp | [TRTLLM-6876][feat] Add low precision all2all for mnnvl (#7155) | 2025-08-28 18:26:16 +08:00 |
| moeLoadBalanceOp.cpp | [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) | 2025-08-24 08:15:29 -04:00 |
| moeOp.cpp | [https://nvbugs/5532248][fix] Fix fused_moe OOM (#7931) | 2025-09-24 02:22:38 -07:00 |
| moeUtilOp.cpp | [TRTLLM-7319][perf] Fuse slicing into MoE. (#6728) | 2025-08-25 16:52:30 -04:00 |
| mtpOp.cpp | fix: refactor and fix mtp vanilla (#4762) | 2025-06-20 05:23:39 +08:00 |
| mxFp4BlockScaleMoe.cpp | [None] [feat] Enable run_post_quant_allgather for MoE TRTLLM backend (#6794) | 2025-09-23 08:24:21 +08:00 |
| mxFp8Quantize.cpp | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |
| ncclCommunicatorOp.cpp | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| ncclCommunicatorOp.h | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| noAuxTcOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| parallelDecodeKVCacheUpdateOp.cpp | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| redrafterCurandOp.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| reducescatterOp.cpp | [fix] Move NCCL group in all-gather and reduce-scatter OPs outside the outer loop (#6053) | 2025-07-16 00:25:32 +09:00 |
| relativeAttentionBiasOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| selectiveScanOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| thUtils.cpp | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| thUtils.h | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00 |
| userbuffersFinalizeOp.cpp | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| userbuffersTensor.cpp | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| userbuffersTensor.h | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| virtualMemoryAllocator.cpp | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |
| weightOnlyQuantGemm.cpp | [TRTLLM-5863][feat] Support Weight-Only-Quantization in PyTorch Workflow (#5850) | 2025-07-21 15:17:35 +08:00 |
| weightOnlyQuantGemm.h | [TRTLLM-5863][feat] Support Weight-Only-Quantization in PyTorch Workflow (#5850) | 2025-07-21 15:17:35 +08:00 |
| weightOnlyQuantOp.cpp | [None] [feat] Add model gpt-oss (#6645) | 2025-08-07 03:04:18 -04:00 |