TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: 0a0f93d4a8 by Jinyang Yuan, 2025-10-27 10:18:19 +08:00
[None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
__init__.py               | [None][feat] Support Qwen3 next (#7892)                                                          | 2025-09-29 21:16:07 +08:00
cpp_custom_ops.py         | [None][feat] Add torch compile support for cuda core GEMM OP (#8261)                             | 2025-10-12 20:57:17 -07:00
cute_dsl_custom_ops.py    | [TRTLLM-6898][feat] Add swapab, tileN64, cga sync support for cute dsl nvfp4 gemm (#7764)        | 2025-09-18 21:20:04 +08:00
flashinfer_custom_ops.py  | [None][feat] Support Qwen3 next (#7892)                                                          | 2025-09-29 21:16:07 +08:00
torch_custom_ops.py       | [None][fix] Fix the performance issue of FP8 blockwise grouped GEMM when using attention DP (#8501) | 2025-10-27 10:18:19 +08:00
trtllm_gen_custom_ops.py  | [None][feat] Update TRTLLM MoE MxFP4 cubins; autotune tileN (#8156)                              | 2025-10-23 09:14:18 +08:00
userbuffers_custom_ops.py | feat: Introduce UB allocator for pytorch flow (#3257)                                            | 2025-04-08 18:39:49 +08:00