| File | Last commit | Last updated |
| --- | --- | --- |
| allgatherOp.cpp | feat: forward exceptions to Python and catch OOMs (#4497) | 2025-05-28 11:58:10 +02:00 |
| allreduceOp.cpp | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| attentionOp.cpp | [feat] Piecewise cuda graph support for MLA (#4467) | 2025-06-17 18:58:38 +08:00 |
| causalConv1dOp.cpp | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00 |
| CMakeLists.txt | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| convertSpecDecodingMaskToPackedMaskOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| cublasScaledMM.cpp | [fix] Remove stale cublas heuristics (#4326) | 2025-05-14 17:35:51 -07:00 |
| cublasScaledMM.h | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| cutlassScaledMM.cpp | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| dsv3FusedAGemmOp.cpp | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| dsv3RouterGemmOp.cpp | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| dynamicDecodeOp.cpp | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| dynamicDecodeOp.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| fmhaPackMaskOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| fp4BatchedQuantize.cpp | update FP4 quantize layout (#3045) | 2025-04-03 13:13:54 -04:00 |
| fp4BlockScaleMoe.cpp | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| fp4Gemm.cpp | feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867) | 2025-06-16 11:30:57 +08:00 |
| fp4GemmTrtllmGen.cpp | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| fp4Op.cpp | feat: [Deepseek] Add trtllm-gen MOE FP4 MOE backend (#3387) | 2025-04-21 10:01:33 +08:00 |
| fp4Quantize.cpp | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp4Quantize.h | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp8BatchedGemmTrtllmGen.cpp | [TRTLLM-5589] feat: Integrate TRT-LLM Gen FP8 Batched GEMM with Pytorch workflow kernel autotuner (#4872) | 2025-06-09 11:02:48 +01:00 |
| fp8BlockScaleMoe.cpp | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| fp8BlockScalingGemm.cpp | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| fp8Op.cpp | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp8Op.h | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp8PerTensorScaleMoe.cpp | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| fp8PerTensorScalingTrtllmGenGemm.cpp | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| fp8Quantize.cpp | [feat] open source fp8_blockscale_gemm (#3071) | 2025-04-02 12:12:52 +08:00 |
| fusedQKNormRopeOp.cpp | perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) | 2025-05-23 15:31:04 +08:00 |
| fusedTopkSoftmax.cpp | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| gatherTreeOp.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| groupRmsNormOp.cpp | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| llama4MinLatency.cpp | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| logitsBitmaskOp.cpp | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| loraOp.cpp | added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455) | 2025-04-17 12:48:27 +08:00 |
| mambaConv1dOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| mlaPreprocessOp.cpp | [feat] Optimize KV Cache Reuse for MLA (#4869) | 2025-06-13 11:03:05 +08:00 |
| moeCommOp.cpp | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| moeLoadBalanceOp.cpp | feat: large-scale EP(part 6: Online EP load balancer integration for GB200 nvfp4) (#4818) | 2025-06-08 10:25:18 +08:00 |
| moeOp.cpp | [TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215) | 2025-06-17 15:23:24 +08:00 |
| mtpOp.cpp | [https://nvbugs/5123103][fix] Fix torch compile for DeepSeekV3 (#3952) | 2025-05-19 22:12:25 +08:00 |
| ncclCommunicatorOp.cpp | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| ncclCommunicatorOp.h | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| noAuxTcOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| parallelDecodeKVCacheUpdateOp.cpp | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| redrafterCurandOp.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| reducescatterOp.cpp | feat: forward exceptions to Python and catch OOMs (#4497) | 2025-05-28 11:58:10 +02:00 |
| relativeAttentionBiasOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| renormMoeRoutingOp.cpp | Add customized renormalized moe routing kernel for moe cutlass backend (#4955) | 2025-06-09 17:38:50 +08:00 |
| selectiveScanOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| thUtils.cpp | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| thUtils.h | feat: Low Precision Allreduce for PCIe based GPU (#4344) | 2025-05-20 06:53:46 +08:00 |
| userbuffersFinalizeOp.cpp | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| userbuffersTensor.cpp | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| userbuffersTensor.h | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| weightOnlyQuantOp.cpp | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
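These files are TensorRT-LLM's PyTorch custom-op ("thop") bindings: each `*Op.cpp` registers one or more C++/CUDA kernels under the `trtllm` op namespace so the PyTorch workflow can invoke them as `torch.ops.trtllm.<name>`. Below is a minimal sketch of the standard `torch/library.h` registration pattern such files follow; `scale_op` is a hypothetical placeholder, not an op taken from this directory.

```cpp
// Minimal sketch of a torch custom-op binding. scale_op is a hypothetical
// placeholder; real bindings here validate arguments and launch
// TensorRT-LLM CUDA kernels instead.
#include <ATen/ATen.h>
#include <torch/library.h>

namespace
{

// Placeholder implementation: multiply a tensor by a scalar.
at::Tensor scaleOp(at::Tensor const& input, double scale)
{
    return input * scale;
}

} // namespace

// Declare the op schema in the `trtllm` namespace. The FRAGMENT variant lets
// many .cpp files contribute ops to the same namespace.
TORCH_LIBRARY_FRAGMENT(trtllm, m)
{
    m.def("scale_op(Tensor input, float scale) -> Tensor");
}

// Register the implementation. CompositeExplicitAutograd serves all backends
// from one kernel; GPU-only ops would register under the CUDA key instead.
TORCH_LIBRARY_IMPL(trtllm, CompositeExplicitAutograd, m)
{
    m.impl("scale_op", &scaleOp);
}
```

Once the compiled extension library is loaded, such an op is callable from Python as `torch.ops.trtllm.scale_op(x, 2.0)`.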