TensorRT-LLM/tensorrt_llm/_torch/custom_ops
File                       Last commit date            Last commit message
__init__.py                2026-02-02 19:44:27 +08:00  [None][feat] Integrate cuda.tile RMS norm kernels (#9725)
cpp_custom_ops.py          2026-01-27 15:55:07 +08:00  [TRTLLM-9390][chore] Add Fake OPs for One-Sided AlltoAll. (#11002)
cuda_tile_custom_ops.py    2026-02-02 19:44:27 +08:00  [None][feat] Integrate cuda.tile RMS norm kernels (#9725)
cute_dsl_custom_ops.py     2026-01-27 16:15:32 +08:00  [TRTLLM-9831][perf] Use TMA.RED to improve effective memory bandwidth (#10987)
flashinfer_custom_ops.py   2025-12-01 10:04:49 -08:00  [TRTC-1943][feat] Env vars override support in LLM API (#9104)
torch_custom_ops.py        2026-02-05 23:12:38 +08:00  [https://nvbugs/5820874][fix] Adjust deepgemm tuning buckets to cover larger num_tokens's scope (#11259)
trtllm_gen_custom_ops.py   2026-01-31 13:48:25 +08:00  [TRTLLM-10398][feat] Enable TRTLLM moe backend for Nemotron Super (#10791)
userbuffers_custom_ops.py  2025-04-08 18:39:49 +08:00  feat: Introduce UB allocator for pytorch flow (#3257)
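
For orientation only: the commit message for cpp_custom_ops.py mentions adding "Fake OPs", and files like these generally register PyTorch custom operators together with fake (meta) implementations so the compiler can trace shapes without running kernels. Below is a minimal, hypothetical sketch of that pattern using the standard torch.library API (PyTorch 2.4+); the op name example::rms_norm and its eager body are illustrative assumptions, not code taken from these files.

```python
import torch

# Hypothetical example of registering a custom op plus its fake (meta) variant.
# The real files in this directory bind to CUDA/C++ kernels instead of this
# pure-PyTorch reference implementation.
@torch.library.custom_op("example::rms_norm", mutates_args=())
def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float) -> torch.Tensor:
    # Eager reference: normalize by the root-mean-square of the last dimension.
    variance = x.pow(2).mean(-1, keepdim=True)
    return x * torch.rsqrt(variance + eps) * weight

@rms_norm.register_fake
def _(x: torch.Tensor, weight: torch.Tensor, eps: float) -> torch.Tensor:
    # Fake implementation: only propagates shape/dtype/device, no computation,
    # which is what torch.compile and export need for tracing.
    return torch.empty_like(x)

if __name__ == "__main__":
    x = torch.randn(2, 8)
    w = torch.ones(8)
    print(torch.ops.example.rms_norm(x, w, 1e-6).shape)
```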