TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: 225d3a9001 by Anthony Chang, 2026-01-05 17:16:12 +01:00
[None][perf] TRTLLM MoE maps to lower tuning buckets when ep>1 (#9998)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
File | Last commit | Date
__init__.py | [None][feat] Support Qwen3 next (#7892) | 2025-09-29 21:16:07 +08:00
cpp_custom_ops.py | [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) | 2026-01-05 15:44:37 +08:00
cute_dsl_custom_ops.py | [TRTLLM-9831][perf] Enable 2CTA with autotune for CuteDSL MoE and Grouped GEMM optimizations (#10201) | 2025-12-25 09:04:20 -05:00
flashinfer_custom_ops.py | [TRTC-1943][feat] Env vars override support in LLM API (#9104) | 2025-12-01 10:04:49 -08:00
torch_custom_ops.py | [TRTLLM-8821][feat] Apply AutoTuner to AllReduce Op for strategy tuning. (#8531) | 2026-01-05 15:44:37 +08:00
trtllm_gen_custom_ops.py | [None][perf] TRTLLM MoE maps to lower tuning buckets when ep>1 (#9998) | 2026-01-05 17:16:12 +01:00
userbuffers_custom_ops.py | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00