TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: Make moe permute and final as custom op (#5412), Li Min, 6021a439ab, 2025-06-27 15:48:33 -07:00
__init__.py [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) 2025-06-17 21:01:56 +08:00
cpp_custom_ops.py Make moe permute and final as custom op (#5412) 2025-06-27 15:48:33 -07:00
flashinfer_custom_ops.py Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
torch_custom_ops.py feat: Expose bias and FP8_MXFP4 MOE CUTLASS backend features to pytorch (#5410) 2025-06-27 12:29:34 +08:00
trtllm_gen_custom_ops.py [5356427] fix: Remove the seq_len of 4096 from FP8 block scale MoE tuning configs. (#5485) 2025-06-26 08:38:35 +08:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00
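The modules above register TRT-LLM's C++/CUDA and FlashInfer kernels as PyTorch custom ops so they are visible to the dispatcher and to torch.compile. As a minimal sketch of that registration pattern using the standard torch.library API: the op name trtllm_example::scale and its body below are hypothetical, for illustration only, and are not ops defined in this directory.

```python
import torch

# Hypothetical op for illustration; not an actual TRT-LLM op.
# torch.library.custom_op registers an eager implementation under a
# namespaced name, making the op visible to torch.compile and the dispatcher.
@torch.library.custom_op("trtllm_example::scale", mutates_args=())
def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    # A real custom op would dispatch to a C++/CUDA kernel here.
    return x * factor

# A "fake" (meta) implementation lets tracing infer output shapes and
# dtypes without executing the kernel.
@scale.register_fake
def _(x: torch.Tensor, factor: float) -> torch.Tensor:
    return torch.empty_like(x)

if __name__ == "__main__":
    out = scale(torch.randn(2, 3), 0.5)
    print(out.shape)  # torch.Size([2, 3])
```

Registering a fake implementation alongside the real one is what allows these ops to participate in graph capture, where shapes must be propagated without launching kernels.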