| Name | Last commit message | Last commit date |
| --- | --- | --- |
| allgatherOp.cpp | [fix] Move NCCL group in all-gather and reduce-scatter OPs outside the outer loop (#6053) | 2025-07-16 00:25:32 +09:00 |
| allreduceOp.cpp | perf: better heuristic for allreduce (#5432) | 2025-07-01 22:56:06 -04:00 |
| attentionOp.cpp | [TRTLLM-6674][feat] (Breaking Change) Hopper SWA non-cyclic kernels + KV reuse + Spec Dec (#6379) | 2025-08-05 07:47:41 +00:00 |
| causalConv1dOp.cpp | fix: fix license bug (#5200) | 2025-06-13 18:58:15 +08:00 |
| CMakeLists.txt | [https://nvbugs/5392414] [fix] For release 1.0 cherry pick. Add customized default routing method (#7068) | 2025-08-21 20:06:50 +08:00 |
| convertSpecDecodingMaskToPackedMaskOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| cublasScaledMM.cpp | [TRTLLM-4279] fix: Add a protection test for checking trtllm custom ops (#6515) | 2025-08-01 15:59:09 +08:00 |
| cublasScaledMM.h | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| cudaScaledMM.cpp | [NVBUG-5304516/5319741] Qwen2.5VL FP8 support (#5029) | 2025-07-09 23:16:42 +08:00 |
| customMoeRoutingOp.cpp | [https://nvbugs/5392414] [fix] For release 1.0 cherry pick. Add customized default routing method (#7068) | 2025-08-21 20:06:50 +08:00 |
| cutlassScaledMM.cpp | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| dsv3FusedAGemmOp.cpp | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| dsv3RouterGemmOp.cpp | Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) | 2025-06-14 17:36:22 +08:00 |
| dynamicDecodeOp.cpp | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| dynamicDecodeOp.h | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| finegrained_mixed_dtype_gemm_thop.cpp | W4A8 GEMM (#6005) | 2025-07-20 17:34:57 +03:00 |
| finegrained_mixed_dtype_gemm_thop.h | W4A8 GEMM (#6005) | 2025-07-20 17:34:57 +03:00 |
| fmhaPackMaskOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| fp4BatchedQuantize.cpp | update FP4 quantize layout (#3045) | 2025-04-03 13:13:54 -04:00 |
| fp4BlockScaleMoe.cpp | [nvbugs/5336321][fix] Enable attention dp = False test case, Fix TRTLLM Gen Moe workspace allocation (#5463) | 2025-07-14 17:17:30 +08:00 |
| fp4Gemm.cpp | feat: Add w4a8_mxfp4_fp8 quantization recipe. (#4867) | 2025-06-16 11:30:57 +08:00 |
| fp4GemmTrtllmGen.cpp | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| fp4Op.cpp | [feat] Support torch compile for attention dp (#5086) | 2025-07-01 13:48:52 -04:00 |
| fp4Quantize.cpp | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp4Quantize.h | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp8BatchedGemmTrtllmGen.cpp | [TRTLLM-5589] feat: Minor optimizations for tunable FP8 batched GEMM op. (#5139) | 2025-06-18 14:33:46 +08:00 |
| fp8BlockScaleMoe.cpp | [TRTLLM-6100] fix: Nvbug 5356427: autotuned TRTLLM Gen fp8 block scale MoE illegal memory access (#5676) | 2025-07-14 17:17:30 +08:00 |
| fp8BlockScalingGemm.cpp | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| fp8Op.cpp | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp8Op.h | feat: Fallback to NCCL for various patterns when input size is large. (#4080) | 2025-05-08 11:13:13 -07:00 |
| fp8PerTensorScaleMoe.cpp | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| fp8PerTensorScalingTrtllmGenGemm.cpp | feat: update DeepSeek FP8 TRT-LLM Gen cubins (#4643) | 2025-06-03 14:07:54 -07:00 |
| fp8Quantize.cpp | [feat] open source fp8_blockscale_gemm (#3071) | 2025-04-02 12:12:52 +08:00 |
| fp8RowwiseGemm.cpp | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00 |
| fusedQKNormRopeOp.cpp | [https://nvbugs/5340941] - fix: Correct custom ops used by Qwen3 Moe … (#6285) | 2025-07-25 14:49:45 +08:00 |
| fusedTopkSoftmax.cpp | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| gatherTreeOp.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| groupRmsNormOp.cpp | feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) | 2025-05-13 08:52:53 +08:00 |
| llama4MinLatency.cpp | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| logitsBitmaskOp.cpp | [TRTLLM-6406] feat: Enable guided decoding with overlap scheduler (#6000) | 2025-07-17 17:46:10 +08:00 |
| loraOp.cpp | [TRTLLM-7263][fix] Prevent recreation of cublas handles in lora_grouped_gemm every call (#7053) | 2025-08-20 06:41:20 -04:00 |
| mambaConv1dOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| mlaPreprocessOp.cpp | [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) | 2025-06-26 22:18:08 +08:00 |
| moeCommOp.cpp | [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) | 2025-07-12 15:50:31 +09:00 |
| moeLoadBalanceOp.cpp | feat: large-scale EP (part 8: Online EP load balancer integration for PCIe fp8) (#5226) | 2025-06-25 22:25:13 -07:00 |
| moeOp.cpp | [https://nvbugs/5412562][feat] Allocate MoE workspace only when necessary (release/1.0 retargeted) (#6955) | 2025-08-18 08:50:35 +08:00 |
| moeUtilOp.cpp | feat: Add support for MXFP8xMXFP4 in pytorch (#5535) | 2025-07-06 15:32:06 -07:00 |
| mtpOp.cpp | fix: refactor and fix mtp vanilla (#4762) | 2025-06-20 05:23:39 +08:00 |
| ncclCommunicatorOp.cpp | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| ncclCommunicatorOp.h | feat: [nvbugs/5261055][nvbugs/5170160] non-invasive pipeline parallelism (#4034) | 2025-05-16 04:16:53 +08:00 |
| noAuxTcOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| parallelDecodeKVCacheUpdateOp.cpp | Update TensorRT-LLM (#2582) | 2024-12-16 21:50:47 -08:00 |
| redrafterCurandOp.cpp | [TRTLLM-5171] chore: Remove GptSession/V1 from TRT workflow (#4092) | 2025-05-14 23:10:04 +02:00 |
| reducescatterOp.cpp | [fix] Move NCCL group in all-gather and reduce-scatter OPs outside the outer loop (#6053) | 2025-07-16 00:25:32 +09:00 |
| relativeAttentionBiasOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| selectiveScanOp.cpp | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| thUtils.cpp | Update TensorRT-LLM (#2849) | 2025-03-04 18:44:00 +08:00 |
| thUtils.h | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00 |
| userbuffersFinalizeOp.cpp | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| userbuffersTensor.cpp | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| userbuffersTensor.h | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |
| virtualMemoryAllocator.cpp | [TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory (#5034) | 2025-08-04 13:51:01 +08:00 |
| weightOnlyQuantGemm.cpp | [TRTLLM-5863][feat] Support Weight-Only-Quantization in PyTorch Workflow (#5850) | 2025-07-21 15:17:35 +08:00 |
| weightOnlyQuantGemm.h | [TRTLLM-5863][feat] Support Weight-Only-Quantization in PyTorch Workflow (#5850) | 2025-07-21 15:17:35 +08:00 |
| weightOnlyQuantOp.cpp | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |