TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: f1b85fea4c by Rundong Li — [None][feat] Integrate cuda.tile RMS norm kernels (#9725), 2026-02-02 19:44:27 +08:00
Signed-off-by: Rundong (David) Li <davidli@nvidia.com>
Co-authored-by: Jinman Xie <jinmanx@nvidia.com>
Co-authored-by: Alexey Bylinkin <abylinkin@nvidia.com>
Co-authored-by: Qiqi Xiao <qiqix@nvidia.com>
Co-authored-by: Biao Wang <biaow@nvidia.com>
Co-authored-by: Thomas Schmid <thschmid@nvidia.com>
__init__.py                 [None][feat] Integrate cuda.tile RMS norm kernels (#9725)                       2026-02-02 19:44:27 +08:00
cpp_custom_ops.py           [TRTLLM-9390][chore] Add Fake OPs for One-Sided AlltoAll. (#11002)              2026-01-27 15:55:07 +08:00
cuda_tile_custom_ops.py     [None][feat] Integrate cuda.tile RMS norm kernels (#9725)                       2026-02-02 19:44:27 +08:00
cute_dsl_custom_ops.py      [TRTLLM-9831][perf] Use TMA.RED to improve effective memory bandwidth (#10987)  2026-01-27 16:15:32 +08:00
flashinfer_custom_ops.py    [TRTC-1943][feat] Env vars override support in LLM API (#9104)                  2025-12-01 10:04:49 -08:00
torch_custom_ops.py         [None][fix] nccl symmetric with graceful fallbacks (#11042)                     2026-01-28 15:43:24 -08:00
trtllm_gen_custom_ops.py    [TRTLLM-10398][feat] Enable TRTLLM moe backend for Nemotron Super (#10791)      2026-01-31 13:48:25 +08:00
userbuffers_custom_ops.py   feat: Introduce UB allocator for pytorch flow (#3257)                           2025-04-08 18:39:49 +08:00