TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Yukun He bed5bc9f2e
[None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021)
A redundant device-to-device (D2D) copy is observed when torch.compile is enabled for the Llama model, caused by the swiglu Triton kernel, which adds performance overhead. Wrapping swiglu in a custom op avoids this extra copy (see the sketch after the commit details below).

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-08-27 13:02:10 +08:00
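The commit above relies on the general pattern of registering a Python function as a PyTorch custom op so that torch.compile treats it as a single opaque node instead of tracing into its internals. Below is a minimal, hedged sketch of that pattern using the public torch.library.custom_op API (PyTorch 2.4+); the op name "trtllm_example::swiglu_demo", the function swiglu_demo, and the reference swiglu body are illustrative assumptions, not the actual kernel or identifiers used in torch_custom_ops.py.

import torch
import torch.nn.functional as F

# Hypothetical op name and implementation; the real TensorRT-LLM op wraps a Triton kernel.
@torch.library.custom_op("trtllm_example::swiglu_demo", mutates_args=())
def swiglu_demo(x: torch.Tensor) -> torch.Tensor:
    # Reference swiglu: split the last dimension in half, gate one half with SiLU.
    gate, up = x.chunk(2, dim=-1)
    return F.silu(gate) * up

@swiglu_demo.register_fake
def _(x: torch.Tensor) -> torch.Tensor:
    # Fake (meta) kernel: gives torch.compile the output shape/dtype without running the op.
    gate, _up = x.chunk(2, dim=-1)
    return torch.empty_like(gate)

if __name__ == "__main__":
    x = torch.randn(4, 16)
    compiled = torch.compile(lambda t: swiglu_demo(t))
    print(compiled(x).shape)  # torch.Size([4, 8])

Because the compiler sees only the registered op boundary, it cannot insert extra copies around the kernel's internal buffers, which is the mechanism the commit uses to remove the redundant D2D copy.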
__init__.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
cpp_custom_ops.py [None][perf] Accelerate global scale calculations for deepEP fp4 combine (#7126) 2025-08-27 00:13:13 +08:00
flashinfer_custom_ops.py [None][ci] move unittests to sub-directories (#6635) 2025-08-20 05:42:22 -04:00
torch_custom_ops.py [None][chore] Wrap the swiglu into custom op to avoid redundant device copy. (#7021) 2025-08-27 13:02:10 +08:00
trtllm_gen_custom_ops.py [None][perf] Make finalize fusion part of the tactic selection logic (#6915) 2025-08-21 14:08:03 -07:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00