TensorRT-LLM/cpp/tensorrt_llm/plugins
Latest commit: 4b82b8b4c7 — [TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Date: 2025-06-17 15:23:24 +08:00
| Name | Last commit | Date |
| --- | --- | --- |
| api | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| bertAttentionPlugin | Support RingAttention in the BertAttention plugin and the DiT model (#3661) | 2025-05-09 08:06:54 +08:00 |
| common | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| cpSplitPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| cudaStreamPlugin | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| cumsumLastDimPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| doraPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| eaglePlugin | fix: Eagle decoding in TRT flow (#4229) | 2025-05-14 16:10:49 +02:00 |
| fp4GemmPlugin | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| fp8RowwiseGemmPlugin | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00 |
| fusedLayernormPlugin | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| gemmAllReducePlugin | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| gemmPlugin | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| gemmSwigluPlugin | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00 |
| gptAttentionCommon | [nvbug 5333996][fix] Unload XQA cubins early to avoid static lifetime (#5133) | 2025-06-13 15:53:29 +08:00 |
| gptAttentionPlugin | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| identityPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| layernormQuantizationPlugin | feat: Add support for fp8 rowwise quantization (#4876) | 2025-06-14 06:37:48 -07:00 |
| lookupPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| loraPlugin | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| lowLatencyGemmPlugin | feat: support add internal cutlass kernels as subproject (#3658) | 2025-05-06 11:35:07 +08:00 |
| lowLatencyGemmSwigluPlugin | feat: support add internal cutlass kernels as subproject (#3658) | 2025-05-06 11:35:07 +08:00 |
| lruPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| mambaConv1dPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| mixtureOfExperts | [TRTLLM-5330] perf: Optimize MoE supplementary kernels for large-scale EP (#5215) | 2025-06-17 15:23:24 +08:00 |
| ncclPlugin | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| qserveGemmPlugin | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| quantizePerTokenPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| quantizeTensorPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| quantizeToFP4Plugin | update FP4 quantize layout (#3045) | 2025-04-03 13:13:54 -04:00 |
| rmsnormQuantizationPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| selectiveScanPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| smoothQuantGemmPlugin | Mxfp8xmxfp4 quant mode (#4978) | 2025-06-10 22:01:37 +08:00 |
| topkLastDimPlugin | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| weightOnlyGroupwiseQuantMatmulPlugin | chore: Mass integration of release/0.20. (#4871) | 2025-06-04 14:12:27 +08:00 |
| weightOnlyQuantMatmulPlugin | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00 |
| CMakeLists.txt | refactoring: port customized kernels with public cutlass version (#5027) | 2025-06-13 16:19:31 +08:00 |
| exports.def | Update | 2023-10-10 23:22:17 -07:00 |
| exports.map | Update TensorRT-LLM (#1530) | 2024-04-30 17:19:10 +08:00 |