Mirror of https://github.com/NVIDIA/TensorRT-LLM.git
Several optimizations and fixes for the Autotuner:

* Apply the new Python-side Autotuner to the current linear op for the nvFP4 data type.
* Apply the new Python-side Autotuner to the MoE op.
* Remove routers from the cache key to improve inference perf.
* Prevent unnecessary code profiling: use the do_preparation keyword to select which part should be executed before any tactic is evaluated.
* Remove the try-catch inside the MoE profiling process.
* Move the default-tactic -1 to 0 transform into the cpp runner.
* Revise relevant tests.
* Predefine the bucketing strategy for fused_moe.
* Add specific_profile support to the AutoTuner to bypass the standard cache-search process for perf optimization.
* Add specific_profile for MoE.
* Add specific_profile for linear.
* Fix and revise according to reviewer suggestions.
* Use lru_cache for inference perf optimization.
* Revert the gen_custom_cache_key feature.
* Replace the runner with a runner id to achieve a serializable cache.
* Clean up code and apply minor fixes.
* Move all tunable runners and custom ops into torch_custom_ops.
* Treat min_latency_mode as an independent dynamic tensor; modify get_valid_tactics to accommodate it.

---------
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
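For readers unfamiliar with these mechanisms, the sketch below shows how the pieces mentioned above (a do_preparation pass excluded from timing, an lru_cache-memoized cache key, a serializable runner id, and min_latency_mode feeding get_valid_tactics) could fit together in a tuning loop. It is a minimal illustration only; every name in it (TunableRunner, autotune, make_cache_key, _tuning_cache) is hypothetical and does not reflect the actual TensorRT-LLM AutoTuner API.

```python
# Minimal sketch of the autotuning flow described above. All names here
# (TunableRunner, autotune, make_cache_key, ...) are hypothetical
# illustrations, NOT the actual TensorRT-LLM AutoTuner API.
import time
from functools import lru_cache


class TunableRunner:
    """A runner exposing several interchangeable tactics (kernels)."""

    # A plain string id stands in for the runner object in the cache,
    # keeping the profiling cache serializable ("runner id" above).
    runner_id = "linear_nvfp4"

    def get_valid_tactics(self, shape_bucket, min_latency_mode):
        # min_latency_mode participates in tactic selection, mirroring
        # its treatment above as an independent dynamic input.
        # Tactic 0 is the default (cf. the -1 -> 0 transform above).
        return [0, 1, 2]

    def forward(self, inputs, tactic=0, do_preparation=False):
        if do_preparation:
            # One-time setup (e.g. workspace allocation) runs here so
            # it is excluded from the per-tactic timing below.
            return
        # ... dispatch to the kernel selected by `tactic` ...


@lru_cache(maxsize=None)  # memoized lookup on the hot inference path
def make_cache_key(runner_id, shape_bucket, min_latency_mode):
    # Shapes are bucketed up front (cf. the predefined bucketing
    # strategy for fused_moe) so the key space stays small; the bucket
    # must be hashable, e.g. a tuple.
    return (runner_id, shape_bucket, min_latency_mode)


_tuning_cache = {}  # cache key -> best tactic; serializable by design


def autotune(runner, inputs, shape_bucket, min_latency_mode=False):
    key = make_cache_key(runner.runner_id, shape_bucket, min_latency_mode)
    if key in _tuning_cache:  # on a hit, skip profiling entirely
        return _tuning_cache[key]

    runner.forward(inputs, do_preparation=True)  # not timed
    best_tactic, best_time = 0, float("inf")
    for tactic in runner.get_valid_tactics(shape_bucket, min_latency_mode):
        start = time.perf_counter()
        runner.forward(inputs, tactic=tactic)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_tactic, best_time = tactic, elapsed

    _tuning_cache[key] = best_tactic
    return best_tactic
```

In this shape, a specific_profile-style fast path would amount to seeding _tuning_cache ahead of time so the search loop never runs during inference.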
| File |
|---|
| allgatherOp.cpp |
| allreduceOp.cpp |
| attentionOp.cpp |
| CMakeLists.txt |
| convertSpecDecodingMaskToPackedMaskOp.cpp |
| cublasScaledMM.cpp |
| cutlassScaledMM.cpp |
| deepseekAllreduceFusionOp.cpp |
| dynamicDecodeOp.cpp |
| dynamicDecodeOp.h |
| fmhaPackMaskOp.cpp |
| fp4BatchedQuantize.cpp |
| fp4Gemm.cpp |
| fp4Op.cpp |
| fp4Quantize.cpp |
| fp8BlockScaleMoe.cpp |
| fp8BlockScalingGemm.cpp |
| fp8Op.cpp |
| fp8Quantize.cpp |
| gatherTreeOp.cpp |
| logitsBitmaskOp.cpp |
| mambaConv1dOp.cpp |
| moeOp.cpp |
| mtpOp.cpp |
| ncclCommunicatorOp.cpp |
| ncclCommunicatorOp.h |
| noAuxTcOp.cpp |
| parallelDecodeKVCacheUpdateOp.cpp |
| redrafterCurandOp.cpp |
| reducescatterOp.cpp |
| relativeAttentionBiasOp.cpp |
| selectiveScanOp.cpp |
| thUtils.cpp |
| thUtils.h |
| userbuffersFinalizeOp.cpp |
| weightOnlyQuantOp.cpp |