TensorRT-LLM/tensorrt_llm/_torch/custom_ops
Yukun He 83b36ebecd
Fix fused_moe fallback issue. (#3652)
min_latency_mode is only set to False during the warmup phase. Thus, when it becomes True during inference, all tactics fall back to the default one, causing a perf regression.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-04-17 23:17:04 +08:00
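The fallback described above can be sketched with a toy tactic cache. This is a hypothetical illustration, not the actual TensorRT-LLM autotuner API: if warmup only ever profiles `min_latency_mode=False`, then a lookup with `min_latency_mode=True` at inference time misses the cache and falls back to the unprofiled default tactic.

```python
class TacticCache:
    """Toy cache mapping (shape, min_latency_mode) to the best profiled tactic.

    Illustrative only; the real autotuner keys and tactic IDs differ.
    """

    DEFAULT_TACTIC = -1  # fallback when a configuration was never profiled

    def __init__(self):
        self._best = {}

    def profile(self, shape, min_latency_mode):
        # Pretend profiling selected tactic 7 for this configuration.
        self._best[(shape, min_latency_mode)] = 7

    def lookup(self, shape, min_latency_mode):
        return self._best.get((shape, min_latency_mode), self.DEFAULT_TACTIC)


cache = TacticCache()

# Buggy warmup: only min_latency_mode=False is profiled.
cache.profile(shape=(8, 4096), min_latency_mode=False)

# Inference with min_latency_mode=True misses the cache and falls back
# to the default tactic -- the perf regression the commit describes.
assert cache.lookup((8, 4096), min_latency_mode=True) == TacticCache.DEFAULT_TACTIC

# Fixed warmup covers both modes, so inference lookups always hit.
cache.profile(shape=(8, 4096), min_latency_mode=True)
assert cache.lookup((8, 4096), min_latency_mode=True) == 7
```

The fix amounts to making warmup cover every `min_latency_mode` value that can occur at inference time, so no configuration key is missing from the profiled set.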
__init__.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
cpp_custom_ops.py feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371) 2025-04-11 21:25:29 +08:00
flashinfer_custom_ops.py feat: Support cos_sin_cache in all cases. (#3517) 2025-04-16 13:48:44 +08:00
torch_custom_ops.py Fix fused_moe fallback issue. (#3652) 2025-04-17 23:17:04 +08:00
userbuffers_custom_ops.py feat: Introduce UB allocator for pytorch flow (#3257) 2025-04-08 18:39:49 +08:00