TensorRT-LLM / tensorrt_llm / _torch / custom_ops

Latest commit: 97f7e12588 by Jinyang Yuan — 2025-07-28 01:37:11 -04:00
[fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
| File | Last commit | Date |
|------|-------------|------|
| __init__.py | [TRTLLM-5770] feat: Integrate TRT-LLM Gen FP8 block scale MoE with Pytorch workflow kernel autotuner (#5207) | 2025-06-17 21:01:56 +08:00 |
| cpp_custom_ops.py | [https://nvbugs/5340941] - fix: Correct custom ops used by Qwen3 Moe … (#6285) | 2025-07-25 14:49:45 +08:00 |
| flashinfer_custom_ops.py | Cherry pick feat/llama4 to main (#4739) | 2025-05-30 05:28:40 +08:00 |
| torch_custom_ops.py | [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) | 2025-07-28 01:37:11 -04:00 |
| trtllm_gen_custom_ops.py | [TRTLLM-4279] feat: Multistream initial support for torch compile flow (#5847) | 2025-07-21 19:10:22 +08:00 |
| userbuffers_custom_ops.py | feat: Introduce UB allocator for pytorch flow (#3257) | 2025-04-08 18:39:49 +08:00 |