TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit 1745102e72 by Enwei Zhu, 2025-09-04 23:30:14 +08:00:
[TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
File                      | Last commit                                                                                        | Date
__init__.py               | [None][ci] move unittests to sub-directories (#6635)                                               | 2025-08-20 05:42:22 -04:00
cpp_custom_ops.py         | [TRTLLM-7027][feat] Fuse d2t to logitsBitmaskKernel and fix a race condition in one-model spec (#7481) | 2025-09-04 23:30:14 +08:00
flashinfer_custom_ops.py  | [None][ci] move unittests to sub-directories (#6635)                                               | 2025-08-20 05:42:22 -04:00
torch_custom_ops.py       | [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021)               | 2025-08-27 13:02:10 +08:00
trtllm_gen_custom_ops.py  | [None][perf] Make finalize fusion part of the tactic selection logic (#6915)                       | 2025-08-21 14:08:03 -07:00
userbuffers_custom_ops.py | feat: Introduce UB allocator for pytorch flow (#3257)                                              | 2025-04-08 18:39:49 +08:00