TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: bc1d4fb5da [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) by Enwei Zhu, 2025-07-12 15:50:31 +09:00
File | Last commit | Date
__init__.py | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00
cpp_custom_ops.py | [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) | 2025-07-12 15:50:31 +09:00
flashinfer_custom_ops.py | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00
torch_custom_ops.py | [TRTLLM-5812][feat] support FP8 row-wise dense GEMM in torch flow (#5615) | 2025-07-07 18:04:57 +08:00
trtllm_gen_custom_ops.py | [TRTLLM-5881] feat: Integrate TRT-LLM Gen FP4 block scale MoE with Pytorch workflow kernel autotuner (#5764) | 2025-07-09 08:21:58 +01:00
userbuffers_custom_ops.py | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00
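The modules listed above register Python-level custom operators for the PyTorch workflow. As a rough illustration of the general mechanism only (a minimal sketch, not TensorRT-LLM's actual code; the op name trtllm_example::scaled_add, its arguments, and its body are hypothetical), a custom op can be declared with torch.library so it is visible to the PyTorch dispatcher and to torch.compile tracing:

```python
# Minimal sketch of registering a PyTorch custom op (requires PyTorch 2.4+).
# The namespace "trtllm_example" and the op "scaled_add" are made up for
# illustration; they are not part of TensorRT-LLM.
import torch


@torch.library.custom_op("trtllm_example::scaled_add", mutates_args=())
def scaled_add(x: torch.Tensor, y: torch.Tensor, scale: float) -> torch.Tensor:
    # Eager implementation; a production op would typically call into a
    # C++/CUDA kernel instead of plain tensor math.
    return x + scale * y


@scaled_add.register_fake
def _(x: torch.Tensor, y: torch.Tensor, scale: float) -> torch.Tensor:
    # Fake (meta) implementation: returns only shape/dtype information,
    # which is what graph capture and torch.compile need.
    return torch.empty_like(x)


if __name__ == "__main__":
    a = torch.randn(4)
    b = torch.randn(4)
    # The op is reachable through the dispatcher under torch.ops.
    out = torch.ops.trtllm_example.scaled_add(a, b, 0.5)
    print(out)
```

In the real modules, the eager implementations presumably forward to compiled kernels (e.g. those bound in cpp_custom_ops.py), while fake/meta registrations supply the shape and dtype metadata needed during tracing.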