TensorRT-LLM/tensorrt_llm/_torch
Yukun He 00059de380
chore: Improve the AutoTuner log information. (#6368)
* Change the fallback alert from DEBUG to WARNING level and emit it only once.
* Add debug information for the profiling cache right after the warmup phase.
* Lower the level of exception messages during tactic profiling from ERROR to WARNING; all exception details are pushed to the DEBUG level.
* Other trivial refinements and cleanups.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-08-01 09:19:52 +08:00
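
As a rough illustration of the logging policy this commit describes, here is a minimal sketch using only Python's standard `logging` module. The names `warn_fallback_once`, `profile_tactic`, and the `run` callback are hypothetical, chosen for illustration; they are not the actual AutoTuner API in autotuner.py:

```python
import logging
import traceback

logger = logging.getLogger("autotuner")

_fallback_warned = False  # ensures the fallback alert fires only once


def warn_fallback_once(op_name: str) -> None:
    """Emit the fallback alert at WARNING (not DEBUG), and only the first time."""
    global _fallback_warned
    if not _fallback_warned:
        logger.warning("AutoTuner is falling back to the default tactic for %s.", op_name)
        _fallback_warned = True


def profile_tactic(tactic, run):
    """Profile one tactic; failures are reported at WARNING,
    with the full traceback kept at DEBUG."""
    try:
        return run(tactic)
    except Exception as exc:
        logger.warning("Tactic %s failed during profiling: %s", tactic, exc)
        logger.debug("Full exception details:\n%s", traceback.format_exc())
        return None
```

Keeping the traceback at DEBUG means a noisy tactic failure during warmup no longer floods the default log output, while the one-shot WARNING still surfaces that a fallback occurred.
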
attention_backend fix: fix illegal memory access (#6437) 2025-07-31 10:01:34 +08:00
auto_deploy [AutoDeploy] merge feat/ad-2025-07-07 (#6196) 2025-07-23 05:11:04 +08:00
compilation [https://nvbugs/5340941] - fix: Correct custom ops used by Qwen3 Moe … (#6285) 2025-07-25 14:49:45 +08:00
custom_ops [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) 2025-07-28 01:37:11 -04:00
debug Add debug hook to support dump tensor data and add new debug functions easily (#5182) 2025-06-24 17:45:28 +08:00
distributed [fix][nvbugs/5399355] Fix Lamport buffer clear issue for MNNVL TwoShot Allreduce and add FP16 support. (#6237) 2025-07-25 08:01:40 +08:00
models fix: Fix poor generation with FP8 Gemma3 1B checkpoint (#6499) 2025-07-31 17:18:23 -07:00
modules [fix] Fix wide EP when using DeepEP with online EPLB (#6429) 2025-07-30 00:13:18 -04:00
peft feat: support multi lora adapters and TP (#3885) 2025-05-08 23:45:45 +08:00
pyexecutor chore: Improve the AutoTuner log information. (#6368) 2025-08-01 09:19:52 +08:00
shared_tensor [1/N][TRTLLM-5195][feat] Share PyTorch tensor between processes (#5396) 2025-07-10 05:12:53 +09:00
speculative [feat] Auto-enable ngram with concurrency <= 32. (#6232) 2025-07-31 18:45:51 -04:00
__init__.py [nvbugs/5401156][fix] Avoid importing all models when importing trtllm._common (#6266) 2025-07-27 23:29:21 -04:00
autotuner.py chore: Improve the AutoTuner log information. (#6368) 2025-08-01 09:19:52 +08:00
expert_statistic.py Add MTP support for Online EPLB (#5213) 2025-06-25 07:58:13 +08:00
llm.py [TRTLLM-5208][BREAKING CHANGE] chore: make PyTorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
metadata.py feat: no-cache attention in PyTorch workflow (#3085) 2025-04-05 01:54:32 +08:00
model_config.py Bugfix/fix nemotron nas lora support (#6380) 2025-07-31 13:39:35 -04:00
utils.py [fix] Fix perf regression caused by MoE autotuner when using DeepEPLowLatency (#6288) 2025-07-28 01:37:11 -04:00