TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Latest commit: 578dbc8d9a by jmydurant, 2025-06-26 09:01:00 +08:00
feat: chunked prefill for MLA (Blackwell) (#4651)
Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
File | Last commit | Date
__init__.py | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00
cpp_custom_ops.py | feat: Misc Opt for large scale EP (#5374) | 2025-06-20 13:11:31 +08:00
flashinfer_custom_ops.py | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00
torch_custom_ops.py | feat: chunked prefill for MLA (Blackwell) (#4651) | 2025-06-26 09:01:00 +08:00
trtllm_gen_custom_ops.py | [5356427] fix: Remove the seq_len of 4096 from FP8 block scale MoE tuning configs. (#5485) | 2025-06-26 08:38:35 +08:00
userbuffers_custom_ops.py | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00