TensorRT-LLM/cpp/tensorrt_llm
Yukun He ab2f663101
fix: Reduce memory usage in fused moe op associated with AutoTuning and fix moe fallback issue. (#3793)
* Reduce memory usage in the fused MoE op associated with AutoTuning.
* Replace the pre-defined bucket-size strategy with a generating function based on tune_max_num_tokens.
* Add free-memory logic for the workspace in the min_latency_mode fused MoE path.
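The bucket-generation change above can be sketched roughly as follows. This is a hypothetical illustration, not the actual TensorRT-LLM implementation: `generate_cache_buckets` is an assumed name, and the power-of-two growth capped at `tune_max_num_tokens` is an assumption about how a fixed bucket list could be replaced by a generating function so the tuning workspace only needs to cover buckets that can actually occur.

```python
def generate_cache_buckets(tune_max_num_tokens: int) -> list:
    """Hypothetical sketch: derive token buckets from tune_max_num_tokens
    instead of a fixed pre-defined list, so no workspace is allocated for
    bucket sizes larger than the tuning limit."""
    buckets = []
    n = 1
    while n < tune_max_num_tokens:
        buckets.append(n)
        n *= 2  # power-of-two growth (assumed strategy)
    buckets.append(tune_max_num_tokens)  # always cover the limit itself
    return buckets
```

With `tune_max_num_tokens = 8192` this yields buckets 1, 2, 4, ..., 8192, while a smaller limit shrinks the list and the peak workspace accordingly.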

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>

* Fix fused_moe fallback issue. (#3652)

min_latency_mode is only set to False during the warmup phase. Thus, when it becomes True during inference, all tactics fall back to the default one, causing a perf regression.
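The failure mode described above can be illustrated with a minimal sketch. This is not the real TensorRT-LLM profiler: `TacticCache`, its methods, and the tuple cache key are all hypothetical, standing in for a tactic cache keyed by (num_tokens, min_latency_mode) where lookups for an un-profiled key fall back to the default tactic.

```python
class TacticCache:
    """Hypothetical stand-in for a per-shape kernel-tactic cache."""

    def __init__(self):
        self._best = {}           # (num_tokens, min_latency_mode) -> tactic id
        self.default_tactic = -1  # fallback when no profile exists

    def profile(self, num_tokens: int, min_latency_mode: bool) -> None:
        # stand-in for real kernel timing: record *some* tuned tactic
        self._best[(num_tokens, min_latency_mode)] = 0

    def lookup(self, num_tokens: int, min_latency_mode: bool) -> int:
        return self._best.get((num_tokens, min_latency_mode),
                              self.default_tactic)


cache = TacticCache()

# Buggy warmup: only the min_latency_mode=False path is ever profiled,
# so every min_latency_mode=True lookup at inference time misses.
cache.profile(128, False)
missed = cache.lookup(128, True) == cache.default_tactic  # True: fell back

# Fix: also warm up the min_latency_mode=True path so inference-time
# lookups hit tuned tactics instead of the default one.
cache.profile(128, True)
hit = cache.lookup(128, True) != cache.default_tactic  # True: tuned tactic
```

The sketch shows why the regression only appears at inference: the cache itself is correct, but the warmup never populated the keys that inference queries.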

2025-04-24 10:14:26 +08:00
batch_manager fix: disable KV cache reuse if using attention sink (#3021) 2025-04-16 03:07:32 +08:00
common fix: FP8 kv accuracy (#3675) 2025-04-21 15:59:31 +08:00
cutlass_extensions/include/cutlass_extensions feat: Update cutlass (#2981) 2025-03-26 22:36:27 +08:00
executor chore: Clean up cpp runtime (#3537) 2025-04-15 16:06:14 +08:00
executor_worker Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
kernels fix: nvbugs/5187237: fix deterministic mode crash (#3448) 2025-04-17 12:01:57 +08:00
layers fix: Eagle decoding (#3456) 2025-04-11 22:06:38 +08:00
plugins feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
pybind Feat/ Integrate peftCacheManager in PyExecutor creation (#3372) 2025-04-15 15:14:43 +08:00
runtime chore: Clean up cpp runtime (#3537) 2025-04-15 16:06:14 +08:00
thop fix: Reduce memory usage in fused moe op associated with AutoTuning and fix moe fallback issue. (#3793) 2025-04-24 10:14:26 +08:00
CMakeLists.txt Revert "infra: move nvrtc_wrapper to conan (#3282)" (#3573) 2025-04-15 22:45:13 +08:00