TensorRT-LLM/tests/unittest/_torch
Simeng Liu 286a789549
feat: Add heuristic for GroupRMSNorm kernel selection. (#4047)
* feat: Add heuristic for GroupRMSNorm kernel selection.

Implements a logistic regression model to dynamically select between
the two kernels (a toy sketch of both allocation strategies follows
this list):
- GroupRMSNormBaseKernel: Allocates warps proportional to sum of dimensions
  (better SM occupancy in most cases)
- GroupRMSNormLargeBatch: Allocates warps proportional to max dimension
  (better block scheduling in large batch scenarios)
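
A minimal sketch of the contrast, assuming a made-up per-warp element
count (not the kernels' actual tiling):

```python
from math import ceil

ELEMS_PER_WARP = 256  # hypothetical normalization width per warp

def warps_base(dims):
    # Base kernel: warps proportional to the SUM of group dimensions,
    # covering all groups at once (better SM occupancy in most cases).
    return ceil(sum(dims) / ELEMS_PER_WARP)

def warps_large_batch(dims):
    # Large-batch kernel: warps proportional to the MAX dimension,
    # which schedules more, smaller blocks at large batch sizes.
    return ceil(max(dims) / ELEMS_PER_WARP)

# Two groups with hidden sizes 2048 and 4096:
print(warps_base([2048, 4096]))         # 24
print(warps_large_batch([2048, 4096]))  # 16
```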

The selection heuristic considers batch size, allocated warps, and
scheduling efficiency on the current GPU architecture. Models for
Compute Capability 9.x and 10.x are trained on nsys kernel runtime
data. The default kernel selection is the base kernel.
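
As a rough illustration of the selection step only (feature set,
weights, and threshold below are invented for the sketch; the real
per-architecture models are fit to nsys runtime measurements):

```python
import math

def pick_kernel(batch_size, warps_base, warps_large, coefs, bias):
    # Toy logistic-regression selector; coefs/bias are illustrative,
    # not the trained CC 9.x / 10.x weights.
    z = bias + sum(c * f for c, f in
                   zip(coefs, (batch_size, warps_base, warps_large)))
    p_large = 1.0 / (1.0 + math.exp(-z))  # P(large-batch kernel wins)
    return "large_batch" if p_large > 0.5 else "base"

# Hypothetical weights that favor the large-batch kernel as the
# batch grows:
print(pick_kernel(4096, 24, 16, coefs=(0.002, -0.1, 0.1), bias=-4.0))
```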

The Python operator group_rms_norm uses the heuristic by default;
users can also explicitly select the base or large-batch kernel.
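
A hedged usage sketch; the import path, argument names, and the
kernel-selection keyword are assumptions about the API shape, not
its confirmed signature:

```python
import torch
# Hypothetical import path; the actual module layout may differ.
from tensorrt_llm._torch.custom_ops import group_rms_norm

inputs = [torch.randn(8, 2048, device="cuda", dtype=torch.bfloat16),
          torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)]
weights = [torch.ones(2048, device="cuda", dtype=torch.bfloat16),
           torch.ones(4096, device="cuda", dtype=torch.bfloat16)]

outs = group_rms_norm(inputs, weights, eps=1e-5)  # heuristic (default)
# Force a specific kernel ("kernel" keyword is assumed):
outs = group_rms_norm(inputs, weights, eps=1e-5, kernel="base")
```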

Signed-off-by: Simeng Liu <simengl@nvidia.com>

* Address review comments.

Signed-off-by: Simeng Liu <simengl@nvidia.com>

---------

Signed-off-by: Simeng Liu <simengl@nvidia.com>
2025-05-13 08:52:53 +08:00
auto_deploy [AutoDeploy][perf] Further optimize flashinfer backend in AutoDeploy (#4024) 2025-05-06 10:46:36 +08:00
compilation [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
modeling feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
modules chore: reorganize some unit tests of PyTorch (#3780) 2025-04-23 11:19:10 -07:00
multi_gpu chore: Clean up the legacy DeepseekAllreudceFusionOp. (#4081) 2025-05-09 10:20:41 +08:00
multi_gpu_modeling [infra] Improve llama4 parallelism test coverage (#3821) 2025-05-02 16:15:04 -04:00
speculative [fix] Fix relaxed acceptance to support enabling it in context phase (#4126) 2025-05-09 14:11:14 +08:00
thop Cherry-pick trtllm-gen from feat/llama4 to main (#4086) 2025-05-08 14:13:01 -07:00
helpers.py Update TensorRT-LLM (#2936) 2025-03-18 21:25:19 +08:00
pattern_watcher.py [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) 2025-05-09 11:04:01 +08:00
test_attention_mla.py [fix] Loosen the thresholds of test_attention_mla (#4074) 2025-05-06 11:31:09 +08:00
test_attention_no_cache.py refactor(test): remove random context sequence lengths and set seed for reproducibility in attention tests (#3919) 2025-04-29 10:08:04 +08:00
test_attention.py reduce num layers in attention test (#3509) 2025-04-14 12:43:59 +08:00
test_autotuner.py feat: Apply the new torch-flow compatible AutoTuner to both Fused MoE and NVFP4 Linear operators. (#3151) 2025-04-08 14:28:36 +08:00
test_flashinfer_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_flashinfer_star_attn.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00
test_fp8_per_tensor_scale_tllmg_gemm.py Cherry-pick trtllm-gen from feat/llama4 to main (#4086) 2025-05-08 14:13:01 -07:00
test_group_rms_norm.py feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
test_mnnvl_memory.py feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
test_overlap_scheduler_input.json fix: Fix C++ decoder synchronization in PyTorch (#3106) 2025-04-23 23:55:27 +08:00
test_overlap_scheduler.py chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732) 2025-05-07 13:20:25 +08:00
test_pytorch_model_engine.py chore: move all distributed related codes into _torch.distributed directory (#3511) 2025-04-15 08:39:17 +08:00
test_resource_manager.py feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
test_return_logits.py feat: adopt new logprob definition in PyTorch flow (#4057) 2025-05-08 20:16:40 +08:00
test_trtllm_decoder.py chore: Cleanup deprecated APIs from LLM-API (part 1/2) (#3732) 2025-05-07 13:20:25 +08:00
test_vanilla_attention.py Add thread leak check and fix thread/memory leak issues. (#3270) 2025-04-08 19:03:18 +08:00