Mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-24 04:33:04 +08:00)
* Skip the shape-profile generation step if the profile is already in the cache under tuning mode. This is a prerequisite for nested autotuning, because host overhead might otherwise be included while profiling the high-level op.
* Enable profiling with CUDA graphs as the default profiling method.
* Apply a heuristic that caps the number of profiling repetitions based on a few-run time measurement.
| File |
|---|
| __init__.py |
| cpp_custom_ops.py |
| cute_dsl_custom_ops.py |
| flashinfer_custom_ops.py |
| torch_custom_ops.py |
| trtllm_gen_custom_ops.py |
| userbuffers_custom_ops.py |
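The cache-skip and repetition-cap behaviors described in the commit message above can be sketched as follows. This is a minimal illustration, not the TensorRT-LLM implementation: the function names (`choose_num_repeats`, `profile`), the time budget, and the cache structure are all assumptions made for the example.

```python
import time

def choose_num_repeats(run_once, few_runs=3, budget_s=0.05,
                       min_repeats=1, max_repeats=100):
    """Heuristic cap: time a few runs first, then pick a repeat count
    that fits the per-op profiling budget. All parameters are illustrative."""
    start = time.perf_counter()
    for _ in range(few_runs):
        run_once()
    per_run = (time.perf_counter() - start) / few_runs
    if per_run <= 0:
        return max_repeats
    return max(min_repeats, min(max_repeats, int(budget_s / per_run)))

def profile(run_once, cache, key):
    """Skip profiling entirely when a shape profile is already cached.
    For nested autotuning, this avoids re-measuring an inner op (and thus
    folding host overhead) while the outer op is being profiled."""
    if key in cache:
        return cache[key]
    n = choose_num_repeats(run_once)
    start = time.perf_counter()
    for _ in range(n):
        run_once()
    cache[key] = (time.perf_counter() - start) / n
    return cache[key]
```

The design point is that the few-run measurement bounds total tuning time for cheap ops (which would otherwise be repeated many times for no gain) while still allowing enough repetitions of expensive ops to get a stable average.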