tile_tokens_dim depends directly on num_tokens, which is a dynamic shape during both tuning and inference. When the AutoTuner prepares dummy tensors with different num_tokens, it does not update tile_tokens_dim accordingly, so the value stored in the AutoTuner cache becomes misaligned with the actual input shapes. This causes frequent cache misses during inference and significantly hurts performance.

To avoid this, we move the calculation of tile_tokens_dim to right before the kernel launch, so its value always matches the num_tokens of the input tensor currently passed to the kernel runner. In addition, tile_tokens_dim is now derived from the token count of the tuned bucket rather than the raw input token count: tuning only covers the buckets, not raw token counts, so computing the value from the bucket avoids unexpected misalignment between tile_tokens_dim and the tuned token number.

This PR also removes the warmup requests with extra input shapes that were triggered during the CUDA graph warmup phase.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
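To illustrate the fix described above, here is a minimal sketch of the idea, not the actual TensorRT-LLM implementation: tile_tokens_dim is recomputed immediately before the launch, from the bucketed token count. The helper names (next_power_of_2, round_up_to_bucket, calculate_tile_tokens_dim, launch_moe), the bucket list, and the 8..64 tile range are all illustrative assumptions.

```python
def next_power_of_2(x: int) -> int:
    """Smallest power of two >= x (assumed helper)."""
    return 1 if x <= 1 else 1 << (x - 1).bit_length()


def round_up_to_bucket(num_tokens: int, buckets: list[int]) -> int:
    """Map a raw token count to the tuned bucket that covers it."""
    for b in sorted(buckets):
        if num_tokens <= b:
            return b
    return max(buckets)  # fall back to the largest tuned bucket


def calculate_tile_tokens_dim(num_tokens: int, num_experts: int, top_k: int) -> int:
    """Derive the tile size from the expected tokens routed per expert."""
    tokens_per_expert = max(1, num_tokens * top_k // num_experts)
    # Clamp to an assumed supported tile range of 8..64.
    return min(max(next_power_of_2(tokens_per_expert), 8), 64)


def launch_moe(num_tokens: int, buckets: list[int],
               num_experts: int = 128, top_k: int = 8) -> int:
    # Key change: recompute tile_tokens_dim right before the kernel launch,
    # from the *bucketed* token count, so it always agrees with the shape
    # the AutoTuner actually tuned (and cached) for this bucket.
    bucketed = round_up_to_bucket(num_tokens, buckets)
    return calculate_tile_tokens_dim(bucketed, num_experts, top_k)


if __name__ == "__main__":
    buckets = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
    # A raw num_tokens of 100 falls into the 128 bucket, so tuning and
    # inference see the same tile_tokens_dim for this shape.
    print(launch_moe(100, buckets))
```

The point of the sketch is the ordering: because tile_tokens_dim is a pure function of the bucketed token count evaluated at launch time, it can never drift from the value the AutoTuner cached for that bucket.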
- fused_moe/
- mamba/
- __init__.py
- attention.py
- decoder_layer.py
- embedding.py
- gated_mlp.py
- layer_norm.py
- linear.py
- logits_processor.py
- mlp.py
- multi_stream_utils.py
- qk_norm_attention.py
- rms_norm.py
- rotary_embedding.py
- swiglu.py
- triton_linear.py