Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
tile_tokens_dim depends directly on num_tokens, which is a dynamic shape during both tuning and inference. When the AutoTuner prepares dummy tensors with different num_tokens, it does not update tile_tokens_dim accordingly, so the values stored in the AutoTuner cache become misaligned with the actual launch configuration. This leads to frequent cache misses during inference and significantly hurts performance.

To avoid this, the calculation of tile_tokens_dim is moved to just before kernel launch, so it always reflects the num_tokens of the input tensor currently passed to the kernel runner. In addition, tile_tokens_dim is now computed from the token count of the tuned bucket rather than the raw input token count: tuning is only performed per bucket, so deriving it from the raw token count could again misalign tile_tokens_dim with the cached tuning results.

This PR also removes the warmup requests with extra input shapes that were triggered during the CUDA graph warmup phase.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
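A minimal sketch of the idea, using hypothetical helper names (`calc_tile_tokens_dim`, `round_up_to_bucket`, and the heuristic inside them are illustrative, not the actual TensorRT-LLM code): tile_tokens_dim is derived right before the kernel launch from the token count of the tuned bucket, so it always matches the configuration the AutoTuner cached for that bucket instead of a value frozen from an earlier dummy input.

```python
# Hypothetical sketch: derive tile_tokens_dim at launch time from the tuned
# bucket, rather than caching a value computed from an earlier num_tokens.

def next_power_of_2(x: int) -> int:
    """Smallest power of two >= x (assumed helper)."""
    return 1 if x <= 1 else 1 << (x - 1).bit_length()

def calc_tile_tokens_dim(num_tokens: int, num_experts: int, top_k: int) -> int:
    """Hypothetical heuristic: average tokens routed per expert,
    clamped to a power of two in [8, 64]."""
    tokens_per_expert = num_tokens * top_k // num_experts
    return min(max(next_power_of_2(max(tokens_per_expert, 1)), 8), 64)

def round_up_to_bucket(num_tokens: int, buckets: list[int]) -> int:
    """Map the raw token count to the tuned bucket it falls into
    (buckets assumed sorted ascending)."""
    for b in buckets:
        if num_tokens <= b:
            return b
    return buckets[-1]

def launch_moe_kernel(x, buckets, num_experts, top_k, kernel_runner):
    # tile_tokens_dim is computed here, right before the launch, from the
    # bucketed token count -- so it agrees with what the AutoTuner cached
    # for that bucket instead of a stale value from a tuning-time dummy shape.
    num_tokens = x.shape[0]
    bucket_tokens = round_up_to_bucket(num_tokens, buckets)
    tile_tokens_dim = calc_tile_tokens_dim(bucket_tokens, num_experts, top_k)
    return kernel_runner(x, tile_tokens_dim=tile_tokens_dim)
```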
Directory contents:

- attention_backend
- auto_deploy
- compilation
- custom_ops
- cute_dsl_kernels
- debug
- distributed
- models
- modules
- peft
- pyexecutor
- shared_tensor
- speculative
- __init__.py
- autotuner.py
- expert_statistic.py
- flashinfer_utils.py
- hostfunc.py
- llm.py
- metadata.py
- model_config.py
- utils.py
- virtual_memory.py