Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-23 04:03:22 +08:00)
tile_tokens_dim depends directly on num_tokens, which is a dynamic shape during both tuning and inference. When the AutoTuner prepares dummy tensors with different num_tokens, it does not update tile_tokens_dim accordingly, so the values stored in the AutoTuner cache are misaligned with the actual inputs. This causes many cache misses during inference and significantly hurts performance.

To avoid this, the calculation of tile_tokens_dim is moved to right before kernel launch, so its value always matches the num_tokens of the current input tensor passed to the kernel runner. In addition, tile_tokens_dim is now computed from the token count of the tuned bucket rather than from the raw input token count: tuning is only performed per bucket, so deriving the value from the bucket avoids unexpected misalignment between tile_tokens_dim and the token count.

This PR also removes the warmup requests with extra input shapes that were triggered during the CUDA graph warmup phase.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
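Below is a minimal sketch of the idea described above, not the actual TensorRT-LLM implementation. It assumes hypothetical helpers (`calc_tile_tokens_dim`, `nearest_bucket`, `launch_moe_kernel`, `runner.run`) purely for illustration: the tile size is derived right before launch, and from the tuned bucket's token count rather than the raw num_tokens, so the value used at launch time matches the value the AutoTuner cached for that bucket.

```python
# Illustrative sketch only; all names below are assumptions, not TensorRT-LLM APIs.

def calc_tile_tokens_dim(num_tokens: int, max_tile: int = 64) -> int:
    """Pick a power-of-two tile size capped at max_tile (illustrative heuristic)."""
    tile = 8
    while tile < min(num_tokens, max_tile):
        tile *= 2
    return tile


def nearest_bucket(num_tokens: int, buckets: list[int]) -> int:
    """Round num_tokens up to the smallest tuned bucket that can hold it."""
    for b in sorted(buckets):
        if num_tokens <= b:
            return b
    return max(buckets)


def launch_moe_kernel(inputs, tuned_buckets: list[int], runner):
    """Compute tile_tokens_dim at launch time from the bucketed token count."""
    num_tokens = inputs.shape[0]
    # Recompute here instead of reusing a value cached for a different num_tokens
    # during tuning; use the bucket so it matches what was actually tuned.
    bucket = nearest_bucket(num_tokens, tuned_buckets)
    tile_tokens_dim = calc_tile_tokens_dim(bucket)
    return runner.run(inputs, tile_tokens_dim=tile_tokens_dim)
```

The key design point is that tile_tokens_dim is never stored alongside the tuned configuration; it is a pure function of the (bucketed) token count evaluated per launch, so it cannot go stale when the input shape changes.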
Files in this directory:

- __init__.py
- _util.py
- config_utils.py
- config.py
- cuda_graph_runner.py
- executor_request_queue.py
- finish_reason.py
- grammar_matcher.py
- guided_decoder.py
- handle_logits.py
- kv_cache_connector.py
- kv_cache_transceiver.py
- layerwise_nvtx_marker.py
- llm_request.py
- make_decoding_batch_input_output.py
- mamba_cache_manager.py
- model_engine.py
- py_executor_creator.py
- py_executor.py
- resource_manager.py
- sampler.py
- scheduler.py
- seq_slot_manager.py