Mirror of https://github.com/NVIDIA/TensorRT-LLM.git, synced 2026-01-14 06:27:45 +08:00
Several optimizations and fixes for the Autotuner:

* Apply the new Python-side Autotuner to the current linear op for the nvFP4 data type.
* Apply the new Python-side Autotuner to the MoE op.
* Remove routers from the cache key to improve inference performance.
* Prevent unnecessary code profiling: use the `do_preparation` keyword to select which part should be executed before evaluating any tactic.
* Remove the try-catch inside the MoE profiling process.
* Move the default tactic from -1 to 0 in the C++ runner.
* Revise the relevant tests.
* Predefine the bucketizing strategy for `fused_moe`.
* Add `specific_profile` support to the AutoTuner to bypass the standard cache search for performance optimization; apply it to the MoE and linear ops.
* Fix and revise according to the reviewers' suggestions.
* Use `lru_cache` for inference perf optimization.
* Revert the `gen_custom_cache_key` feature.
* Replace the runner with a runner id to achieve a serializable cache.
* Clean up code and apply minor fixes.
* Move all tunable runners and custom ops into `torch_custom_ops`.
* Treat `min_latency_mode` as an independent dynamic tensor, and modify `get_valid_tactics` to accommodate it.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
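The caching ideas in this commit (a serializable cache keyed by runner id rather than a runner object, predefined shape buckets so one profiling pass covers a range of token counts, and a default tactic of 0) can be sketched in plain Python. This is a minimal illustration, not the actual TensorRT-LLM implementation: the bucket boundaries, the `AutoTunerSketch` class, and the `profile_fn` callback are all hypothetical names introduced here for clarity.

```python
import bisect
from functools import lru_cache

# Hypothetical power-of-two bucket boundaries for the token dimension,
# standing in for the "predefined bucketizing strategy" mentioned above.
BUCKETS = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192]


@lru_cache(maxsize=None)
def bucketize(num_tokens: int) -> int:
    """Round num_tokens up to the nearest profiling bucket."""
    idx = bisect.bisect_left(BUCKETS, num_tokens)
    return BUCKETS[min(idx, len(BUCKETS) - 1)]


class AutoTunerSketch:
    """Caches the best tactic per (runner_id, bucketized shape) key.

    Keying by an integer runner id instead of the runner object keeps the
    cache serializable; bucketizing shapes means one profiling pass serves
    every token count that falls into the same bucket.
    """

    def __init__(self):
        self.cache = {}  # (runner_id, bucket, hidden) -> best tactic

    def choose(self, runner_id, num_tokens, hidden, tactics, profile_fn):
        key = (runner_id, bucketize(num_tokens), hidden)
        if key in self.cache:
            # Fast path: no profiling at inference time, just a dict lookup.
            return self.cache[key]
        best, best_time = None, float("inf")
        for tactic in tactics:
            # profile_fn(tactic) returns a measured latency for that tactic.
            elapsed = profile_fn(tactic)
            if elapsed < best_time:
                best, best_time = tactic, elapsed
        # Fall back to tactic 0 (the default, per this commit) if nothing won.
        self.cache[key] = best if best is not None else 0
        return self.cache[key]
```

A second call with a different token count that lands in the same bucket hits the cache and skips profiling entirely, which is the inference-perf win the commit describes.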
| Name |
|---|
| __init__.py |
| attention.py |
| decoder_layer.py |
| embedding.py |
| fused_moe.py |
| gated_mlp.py |
| linear.py |
| logits_procesor.py |
| mamba.py |
| mlp.py |
| rms_norm.py |
| rotary_embedding.py |