TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Yukun He e580da4155
[TRTLLM-7963][feat] Cold L2 cache when doing autotune benchmarking. (#8779)
The measured performance of some kernels can be heavily skewed by whether the L2 cache is warm or cold. To obtain more precise profiling results during autotuning, the L2 cache is flushed before every execution using a circular-buffer method.
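As an illustrative sketch only (not the TensorRT-LLM implementation), the circular-buffer idea rotates writes through a pool of buffers, each at least as large as the L2 cache, so that every timed kernel launch starts from a cold L2. All names and sizes below are hypothetical; in practice the buffers would live in GPU memory and the L2 size would be queried from the device.

```python
# Hypothetical sketch of the circular-buffer L2 flush used before each
# benchmark iteration. Plain bytearrays stand in for device buffers.

L2_SIZE_BYTES = 50 * 1024 * 1024  # assumed L2 size; query the device in practice
NUM_BUFFERS = 3                   # rotate so no buffer is re-touched back-to-back


class L2Flusher:
    """Evicts L2-resident lines by streaming writes through a buffer pool."""

    def __init__(self, buffer_bytes=L2_SIZE_BYTES, num_buffers=NUM_BUFFERS):
        self._buffers = [bytearray(buffer_bytes) for _ in range(num_buffers)]
        self._idx = 0

    def flush(self):
        # Touch one byte per 64-byte cache line of the next buffer; since the
        # buffer exceeds L2 capacity, previously cached lines are evicted.
        buf = self._buffers[self._idx]
        for i in range(0, len(buf), 64):
            buf[i] = (buf[i] + 1) & 0xFF
        self._idx = (self._idx + 1) % len(self._buffers)
        return self._idx  # index of the buffer used on the *next* flush
```

A benchmarking loop would call `flush()` immediately before each timed launch; cycling through several oversized buffers guarantees the kernel's working set is no longer resident in L2.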

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-11-25 15:06:22 +08:00
__init__.py [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
cpp_custom_ops.py [None][feat] Update the indexer topK (#9255) 2025-11-19 11:49:00 +08:00
cute_dsl_custom_ops.py [TRTLLM-7963][feat] Cold L2 cache when doing autotune benchmarking. (#8779) 2025-11-25 15:06:22 +08:00
flashinfer_custom_ops.py [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
torch_custom_ops.py [None] [fix] Fix missing ActivationType issue (#9171) 2025-11-17 10:43:25 +08:00
trtllm_gen_custom_ops.py [None][feat] Update TRTLLM MoE cubins; reduce mxfp4 weight padding requirement; tighten TMA bound (#9025) 2025-11-17 10:04:29 +08:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00