TensorRT-LLM/tensorrt_llm/_torch/modules
Simeng Liu 873c7532fd
feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438)
* feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator.

Previously, the RMSNorm implementation only supported a single input tensor. With group_rms_norm, multiple tensors can be normalized together:
```python
input_a, input_b, ... = group_rms_norm([input_a, input_b, ...])
```
All input tensors must share the same batch dimension. The kernel partitions its work by dynamically assigning warp groups to each input in proportion to the size of its last dimension, which improves launch efficiency and avoids the overhead of launching one kernel per tensor.
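The per-tensor math is unchanged from standard RMSNorm; a minimal pure-PyTorch sketch of the grouped semantics (a reference loop, not the fused CUDA kernel — the function name and the eps default here are illustrative) might look like:

```python
import torch

def group_rms_norm_ref(inputs, weights, eps=1e-6):
    """Reference semantics only: RMS-normalize each tensor in `inputs`
    over its last dimension with the matching weight. The fused kernel
    does this for all tensors in a single launch; a loop stands in here."""
    outputs = []
    for x, w in zip(inputs, weights):
        # RMSNorm: x / sqrt(mean(x^2, last dim) + eps), then scale by weight
        variance = x.float().pow(2).mean(dim=-1, keepdim=True)
        outputs.append((x.float() * torch.rsqrt(variance + eps)).to(x.dtype) * w)
    return outputs

# Inputs share the batch dimension (8) but differ in the last dimension.
a = torch.randn(8, 4096, dtype=torch.float16)
b = torch.randn(8, 1024, dtype=torch.float16)
w_a = torch.ones(4096, dtype=torch.float16)
w_b = torch.ones(1024, dtype=torch.float16)
out_a, out_b = group_rms_norm_ref([a, b], [w_a, w_b])
```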

This PR provides two implementations:

- GroupRMSNormKernel: optimized for small-to-medium batch sizes
- GroupRMSNormKernelLargeBatch: contains additional optimizations for large batch sizes

Both kernels are currently exposed as custom PyTorch ops. A follow-up PR will add heuristic-based kernel selection and expose a unified interface.
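Until that lands, caller-side dispatch could look roughly like the sketch below; the op names and the batch-size threshold are hypothetical placeholders, not the actual bindings:

```python
import torch
from typing import List

LARGE_BATCH_THRESHOLD = 1024  # hypothetical cutoff; the real heuristic will be tuned

def group_rms_norm(inputs: List[torch.Tensor]) -> List[torch.Tensor]:
    # All inputs share the batch dimension, so inputs[0] is representative.
    if inputs[0].shape[0] >= LARGE_BATCH_THRESHOLD:
        # Hypothetical binding for GroupRMSNormKernelLargeBatch
        return torch.ops.trtllm.group_rms_norm_large_batch(inputs)
    # Hypothetical binding for GroupRMSNormKernel
    return torch.ops.trtllm.group_rms_norm(inputs)
```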

* Resolve review comments and fix a typo in IS_FLASHINFER_AVAILABLE

Signed-off-by: Simeng Liu <simengl@nvidia.com>
2025-05-02 13:25:30 +08:00
| File | Last commit | Date |
|------|-------------|------|
| mamba | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| __init__.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| attention.py | model: support Qwen3 (#4010) | 2025-05-01 23:12:41 +08:00 |
| decoder_layer.py | chore: Use ellipsis as default value to detect whether residual argument is provided (#3626) | 2025-04-17 12:31:58 +08:00 |
| embedding.py | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |
| fused_moe.py | refactor: (part1) Add contraints doc for fusedMoe module. (#3882) | 2025-04-29 22:23:02 +08:00 |
| gated_mlp.py | feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) | 2025-05-02 13:25:30 +08:00 |
| linear.py | model: support Qwen3 (#4010) | 2025-05-01 23:12:41 +08:00 |
| logits_processor.py | feat: LogitsProcessor in PyTorch backend (#3145) | 2025-05-01 14:15:30 -07:00 |
| mlp.py | Fix create_weights in attention (#3692) | 2025-04-24 07:30:00 +08:00 |
| multi_stream_utils.py | [chore] Make llama4 MoE use maybe_execute_in_parallel (#3779) | 2025-04-28 10:58:03 -04:00 |
| rms_norm.py | feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) | 2025-05-02 13:25:30 +08:00 |
| rotary_embedding.py | feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator. (#3438) | 2025-05-02 13:25:30 +08:00 |