TensorRT-LLM/tensorrt_llm/_torch/modules
Simeng Liu 286a789549
feat: Add heuristic for GroupRMSNorm kernel selection. (#4047)
* feat: Add heuristic for GroupRMSNorm kernel selection.

Implements a logistic regression model to dynamically select between:
- GroupRMSNormBaseKernel: allocates warps proportional to the sum of input
  dimensions (better SM occupancy in most cases)
- GroupRMSNormLargeBatch: allocates warps proportional to the max input
  dimension (better block scheduling in large-batch scenarios)

The selection heuristic considers batch size, allocated warps, and scheduling
efficiency on the current GPU architecture. Models for Compute Capability
9.x and 10.x are trained based on nsys kernel runtime data.
The base kernel is the default selection.

The Python operator group_rms_norm uses the heuristic by default;
users can also explicitly select the base or large-batch kernel.
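
A selector of this shape can be sketched as follows. This is a minimal illustration, not the actual trained model: the feature set, coefficient values, and function names below are assumptions, and the real per-architecture models are fit offline on nsys kernel runtime data.

```python
import math

# Hypothetical per-architecture logistic regression weights (illustrative
# values only; real models are trained offline on nsys profiling data).
_HEURISTIC_WEIGHTS = {
    (9, 0): {"bias": -1.2, "batch": 0.004, "warps_base": -0.01, "warps_large": 0.008},
    (10, 0): {"bias": -0.9, "batch": 0.003, "warps_base": -0.012, "warps_large": 0.009},
}


def _sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def select_group_rms_norm_kernel(batch_size, input_dims, sm_version=(9, 0)):
    """Return 'large_batch' if the logistic model predicts the large-batch
    kernel is faster; otherwise default to the base kernel."""
    weights = _HEURISTIC_WEIGHTS.get(sm_version)
    if weights is None:
        # Unknown architecture: fall back to the default (base) kernel.
        return "base"
    # Base kernel allocates warps proportional to the sum of input dims;
    # large-batch kernel proportional to the max dim (32 lanes per warp).
    warps_base = sum(input_dims) // 32
    warps_large = max(input_dims) // 32
    score = (weights["bias"]
             + weights["batch"] * batch_size
             + weights["warps_base"] * warps_base
             + weights["warps_large"] * warps_large)
    return "large_batch" if _sigmoid(score) > 0.5 else "base"
```

With these toy weights, small batches fall back to the base kernel while very large batches tip the model toward the large-batch kernel, mirroring the trade-off between SM occupancy and block scheduling described above.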

Signed-off-by: Simeng Liu <simengl@nvidia.com>

* Address review comments.

Signed-off-by: Simeng Liu <simengl@nvidia.com>

---------

Signed-off-by: Simeng Liu <simengl@nvidia.com>
2025-05-13 08:52:53 +08:00
mamba Fix create_weights in attention (#3692) 2025-04-24 07:30:00 +08:00
__init__.py Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
attention.py refactor: Allow models to override apply_qk_norm. (#4078) 2025-05-12 19:38:24 +08:00
decoder_layer.py chore: Use ellipsis as default value to detect whether residual argument is provided (#3626) 2025-04-17 12:31:58 +08:00
embedding.py feat: llama4 input processor (#3383) 2025-04-25 16:47:14 -07:00
fused_moe.py Cherry-pick: Use multi-threading to load MoE expert weights (#4137) 2025-05-09 17:29:24 +08:00
gated_mlp.py feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
linear.py feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
logits_processor.py feat: LogitsProcessor in PyTorch backend (#3145) 2025-05-01 14:15:30 -07:00
mlp.py feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
multi_stream_utils.py [chore] Make llama4 MoE use maybe_execute_in_parallel (#3779) 2025-04-28 10:58:03 -04:00
rms_norm.py feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
rotary_embedding.py feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) 2025-05-02 13:25:30 +08:00