TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: 5130cbd73e — [None][fix] Pre-Allocation for Auto-Tuning NCCL_SYMMETRIC (#11326)
Signed-off-by: Ludwig Schneider <lschneider@nvidia.com>, 2026-02-12 14:31:51 -08:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | [None][feat] Integrate cuda.tile RMS norm kernels (#9725) | 2026-02-02 19:44:27 +08:00 |
| `cpp_custom_ops.py` | [None][feat] Optimize NemotronH model with elementwise and nvfp4 fusion (#11273) | 2026-02-12 09:25:31 -05:00 |
| `cuda_tile_custom_ops.py` | [None][feat] Integrate cuda.tile RMS norm kernels (#9725) | 2026-02-02 19:44:27 +08:00 |
| `cute_dsl_custom_ops.py` | [TRTLLM-9457][feat] Add cute dsl fp8 gemm for Blackwell (#10130) | 2026-02-06 09:49:30 +08:00 |
| `flashinfer_custom_ops.py` | [TRTC-1943][feat] Env vars override support in LLM API (#9104) | 2025-12-01 10:04:49 -08:00 |
| `torch_custom_ops.py` | [None][fix] Pre-Allocation for Auto-Tuning NCCL_SYMMETRIC (#11326) | 2026-02-12 14:31:51 -08:00 |
| `trtllm_gen_custom_ops.py` | [None][feat] Remove the hard code for activation type definition in T… (#11164) | 2026-02-11 21:50:45 +08:00 |
| `userbuffers_custom_ops.py` | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |