TensorRT-LLM/cpp/tensorrt_llm/plugins
NVJiangShao 2f2f5cc72c
[TRTLLM-6744][feat] Remove input_sf swizzle for module WideEPMoE (#6231)
Signed-off-by: Jiang Shao <91270701+StudyingShao@users.noreply.github.com>
2025-08-08 11:13:42 +08:00
api
bertAttentionPlugin
common
cpSplitPlugin
cudaStreamPlugin
cumsumLastDimPlugin
doraPlugin
eaglePlugin
fp4GemmPlugin
fp8RowwiseGemmPlugin
fusedLayernormPlugin [nvbugs/5321981] Cherrypick fix: Fix the Llama3.1 405B hanging issue. (#5698) (#5925) 2025-07-11 07:51:43 +08:00
gemmAllReducePlugin fix TMA error with GEMM+AR on TP=2 (#6075) 2025-07-18 10:26:08 +08:00
gemmPlugin [NVBUG-5304516/5319741] Qwen2.5VL FP8 support (#5029) 2025-07-09 23:16:42 +08:00
gemmSwigluPlugin
gptAttentionCommon [None][feat] Multi-block mode for Hopper spec dec XQA kernel (#4416) 2025-08-03 14:31:33 -07:00
gptAttentionPlugin [None][feat] Multi-block mode for Hopper spec dec XQA kernel (#4416) 2025-08-03 14:31:33 -07:00
identityPlugin
layernormQuantizationPlugin
lookupPlugin
loraPlugin
lowLatencyGemmPlugin
lowLatencyGemmSwigluPlugin
lruPlugin
mambaConv1dPlugin
mixtureOfExperts [TRTLLM-6744][feat] Remove input_sf swizzle for module WideEPMoE (#6231) 2025-08-08 11:13:42 +08:00
ncclPlugin
qserveGemmPlugin
quantizePerTokenPlugin
quantizeTensorPlugin
quantizeToFP4Plugin [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
rmsnormQuantizationPlugin
selectiveScanPlugin
smoothQuantGemmPlugin [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
topkLastDimPlugin
weightOnlyGroupwiseQuantMatmulPlugin
weightOnlyQuantMatmulPlugin
CMakeLists.txt feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
exports.def
exports.map