Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
* feat: Add group_rms_norm kernel to normalize multiple inputs in a single operator.

  Previously, the RMSNorm implementation only supported a single input tensor. With group_rms_norm, multiple tensors can be normalized together:

  ```python
  input_a, input_b, ... = group_rms_norm([input_a, input_b, ...])
  ```

  All input tensors must share the same batch dimension. The kernel partitions work by dynamically assigning warp groups proportional to the last dimension of each input, improving launch efficiency and reducing overhead.

  This MR provides two implementations:

  - GroupRMSNormKernel: Optimized for small-to-medium batch sizes
  - GroupRMSNormKernelLargeBatch: Contains additional optimizations for large batch sizes

  Both kernels are currently exposed as custom PyTorch ops. A future MR will implement heuristic-based kernel selection and expose a unified interface.

  Signed-off-by: Simeng Liu <simengl@nvidia.com>

* Resolve comments and fix typo with IS_FLASHINFER_AVAILABLE

  Signed-off-by: Simeng Liu <simengl@nvidia.com>

---------

Signed-off-by: Simeng Liu <simengl@nvidia.com>
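For reference, here is a minimal pure-PyTorch sketch of the semantics the commit message describes: each input tensor is RMS-normalized independently over its last dimension, with all inputs sharing the same batch dimension. The function name `group_rms_norm_reference`, the `weights` argument, and the `eps` default are illustrative assumptions, not the actual custom-op signature; the fused kernel performs this in a single launch rather than a Python loop.

```python
import torch

def group_rms_norm_reference(inputs, weights=None, eps=1e-6):
    # Hypothetical reference implementation (not the actual custom op):
    # RMS-normalize each tensor over its last dimension. All tensors are
    # expected to share the same batch dimension, as the kernel requires.
    outputs = []
    for i, x in enumerate(inputs):
        # RMSNorm: x / sqrt(mean(x^2) + eps), optionally scaled per-input.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)
        y = x * rms
        if weights is not None:
            y = y * weights[i]
        outputs.append(y)
    return outputs

# Two inputs with a shared batch dimension (8) but different last dimensions;
# the fused kernel would assign warp groups proportional to 4096 vs. 1024.
a = torch.randn(8, 4096)
b = torch.randn(8, 1024)
a_norm, b_norm = group_rms_norm_reference([a, b])
```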
Directory contents:

- `_torch/`
- `auto_parallel/`
- `bench/`
- `commands/`
- `evaluate/`
- `executor/`
- `inputs/`
- `layers/`
- `llmapi/`
- `models/`
- `plugin/`
- `quantization/`
- `runtime/`
- `scaffolding/`
- `serve/`
- `tools/`
- `__init__.py`
- `_common.py`
- `_dlpack_utils.py`
- `_ipc_utils.py`
- `_mnnvl_utils.py`
- `_utils.py`
- `builder.py`
- `disaggregated_params.py`
- `functional.py`
- `graph_rewriting.py`
- `logger.py`
- `lora_manager.py`
- `mapping.py`
- `module.py`
- `network.py`
- `parameter.py`
- `profiler.py`
- `prompt_adapter_manager.py`
- `python_plugin.py`
- `sampling_params.py`
- `top_model_mixin.py`
- `version.py`