TensorRT-LLMs/tensorrt_llm/_torch/custom_ops
Yukun He c3acf965a6
[TRTLLM-7963][fix] Several improvements of autotuning quality (#9348)
* Skip shape-profile generation when the profile is already found in the cache under tuning mode. This is a prerequisite for nested autotuning, since the host overhead of regenerating profiles would otherwise be included when profiling the higher-level op.
* Make profiling with CUDA graphs the default profiling method.
* Apply a heuristic that caps the number of profiling repetitions based on a few-run timing measurement.
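The repeat-count heuristic in the last bullet can be sketched as follows. All names and thresholds here (`choose_profile_repeats`, `budget_ms`, the probe/warmup counts) are illustrative assumptions for exposition, not TensorRT-LLM's actual autotuner API:

```python
import time

def choose_profile_repeats(run_once, warmup=3, probe_runs=5,
                           budget_ms=10.0, min_repeats=3, max_repeats=100):
    """Heuristic sketch (hypothetical, not TensorRT-LLM's implementation):
    estimate the per-run cost from a few probe runs, then cap the repeat
    count so the total profiling time stays within a fixed budget."""
    # Warm up so one-time setup costs do not skew the estimate.
    for _ in range(warmup):
        run_once()
    # Time a small number of probe runs to estimate per-run latency.
    start = time.perf_counter()
    for _ in range(probe_runs):
        run_once()
    per_run_ms = (time.perf_counter() - start) / probe_runs * 1e3
    # Fast kernels get many repeats (for stable timing); slow kernels
    # get fewer, so overall tuning time stays bounded.
    repeats = int(budget_ms / max(per_run_ms, 1e-6))
    return max(min_repeats, min(repeats, max_repeats))
```

The key design point is that the repeat count adapts per candidate: a cheap kernel is measured many times to reduce timing noise, while an expensive one is cut off early rather than consuming the whole tuning budget.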
2025-11-24 10:38:45 +08:00
__init__.py [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
cpp_custom_ops.py [None][feat] Update the indexer topK (#9255) 2025-11-19 11:49:00 +08:00
cute_dsl_custom_ops.py [TRTLLM-7963][fix] Several improvements of autotuning quality (#9348) 2025-11-24 10:38:45 +08:00
flashinfer_custom_ops.py [None][feat] Support Qwen3 next (#7892) 2025-09-29 21:16:07 +08:00
torch_custom_ops.py [None] [fix] Fix missing ActivationType issue (#9171) 2025-11-17 10:43:25 +08:00
trtllm_gen_custom_ops.py [None][feat] Update TRTLLM MoE cubins; reduce mxfp4 weight padding requirement; tighten TMA bound (#9025) 2025-11-17 10:04:29 +08:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00