TensorRT-LLM/cpp/tensorrt_llm/plugins
Latest commit: [None][feat] CUTLASS MoE FC2+Finalize fusion (#3294) (Sergey Klevtsov, 27fc35175e, 2025-08-12 15:56:48 +08:00)
api Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
bertAttentionPlugin Support RingAttention in the BertAttention plugin and the DiT model (#3661) 2025-05-09 08:06:54 +08:00
common refactoring: port customized kernels with public cutlass version (#5027) 2025-06-13 16:19:31 +08:00
cpSplitPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
cudaStreamPlugin Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
cumsumLastDimPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
doraPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
eaglePlugin fix: Eagle decoding in TRT flow (#4229) 2025-05-14 16:10:49 +02:00
fp4GemmPlugin refactoring: port customized kernels with public cutlass version (#5027) 2025-06-13 16:19:31 +08:00
fp8RowwiseGemmPlugin Mxfp8xmxfp4 quant mode (#4978) 2025-06-10 22:01:37 +08:00
fusedLayernormPlugin [nvbugs/5321981] Cherrypick fix: Fix the Llama3.1 405B hanging issue. (#5698) (#5925) 2025-07-11 07:51:43 +08:00
gemmAllReducePlugin fix TMA error with GEMM+AR on TP=2 (#6075) 2025-07-18 10:26:08 +08:00
gemmPlugin [NVBUG-5304516/5319741] Qwen2.5VL FP8 support (#5029) 2025-07-09 23:16:42 +08:00
gemmSwigluPlugin Mxfp8xmxfp4 quant mode (#4978) 2025-06-10 22:01:37 +08:00
gptAttentionCommon [None][feat] Multi-block mode for Hopper spec dec XQA kernel (#4416) 2025-08-03 14:31:33 -07:00
gptAttentionPlugin [None][feat] Multi-block mode for Hopper spec dec XQA kernel (#4416) 2025-08-03 14:31:33 -07:00
identityPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
layernormQuantizationPlugin feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
lookupPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
loraPlugin Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
lowLatencyGemmPlugin feat: support add internal cutlass kernels as subproject (#3658) 2025-05-06 11:35:07 +08:00
lowLatencyGemmSwigluPlugin feat: support add internal cutlass kernels as subproject (#3658) 2025-05-06 11:35:07 +08:00
lruPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
mambaConv1dPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
mixtureOfExperts [None][feat] CUTLASS MoE FC2+Finalize fusion (#3294) 2025-08-12 15:56:48 +08:00
ncclPlugin refactoring: port customized kernels with public cutlass version (#5027) 2025-06-13 16:19:31 +08:00
qserveGemmPlugin Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
quantizePerTokenPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
quantizeTensorPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
quantizeToFP4Plugin [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
rmsnormQuantizationPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
selectiveScanPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
smoothQuantGemmPlugin [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
topkLastDimPlugin Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
weightOnlyGroupwiseQuantMatmulPlugin chore: Mass integration of release/0.20. (#4871) 2025-06-04 14:12:27 +08:00
weightOnlyQuantMatmulPlugin Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
CMakeLists.txt feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
exports.def Update 2023-10-10 23:22:17 -07:00
exports.map Update TensorRT-LLM (#1530) 2024-04-30 17:19:10 +08:00
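Each subdirectory above implements one TensorRT plugin, and exports.def / exports.map control which registration symbols the plugin shared library exposes. As a minimal sketch of how a consumer might look one of these plugins up through the standard TensorRT plugin registry (after the library has been loaded and its creators registered via the entry point in the api/ subdirectory): the plugin name "GPTAttention", version "1", and namespace "tensorrt_llm" are illustrative assumptions, not verified against this directory; check the individual *Plugin.cpp sources for the exact strings.

```cpp
// Illustrative sketch only, not code from this repo. Assumes the
// TensorRT-LLM plugin shared library has already been loaded, so its
// creators are registered with the process-global TensorRT registry.
#include <NvInfer.h>
#include <iostream>

int main()
{
    // getPluginRegistry() returns the process-global plugin registry.
    nvinfer1::IPluginRegistry* registry = getPluginRegistry();

    // Plugin name, version, and namespace below are assumptions for
    // illustration; the real strings live in the *Plugin.cpp sources.
    nvinfer1::IPluginCreator* creator
        = registry->getPluginCreator("GPTAttention", "1", "tensorrt_llm");
    if (creator == nullptr)
    {
        std::cerr << "Creator not found; was the plugin library loaded?\n";
        return 1;
    }

    std::cout << "Found plugin " << creator->getPluginName() << " v"
              << creator->getPluginVersion() << '\n';
    return 0;
}
```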