TensorRT-LLM/tensorrt_llm
Yukun He 9c5b464fe0
[None][feat] Apply AutoTuner to fp8_block_scale_deep_gemm to trigger JIT ahead of time. (#7113)
Because deep_gemm.fp8_gemm_nt will trigger many JIT processes during the inference phase, we need to sweep these shapes ahead of time. Apply the AutoTuner framework to achieve this, and retain the potential capability to tune the swap_ab flag.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-08-25 10:48:31 +08:00
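The idea behind the commit above can be sketched as a shape sweep during warmup: compile (JIT) a kernel once per unique GEMM problem shape before serving, so inference never pays the compilation cost. This is a minimal, hypothetical illustration; `jit_compile_gemm` and `warmup` are stand-in names, not the actual TensorRT-LLM AutoTuner or DeepGEMM API.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def jit_compile_gemm(m: int, n: int, k: int) -> str:
    # Stand-in for an expensive per-shape JIT compilation step; the cache
    # plays the role of the runtime's compiled-kernel registry.
    return f"kernel_{m}x{n}x{k}"


def warmup(candidate_m, n, k):
    """Sweep the candidate M dimensions ahead of time (N and K are fixed by
    the model weights), so every shape's kernel exists before serving."""
    for m in candidate_m:
        jit_compile_gemm(m, n, k)


# Compile kernels for batch sizes 1..8 up front.
warmup(range(1, 9), n=4096, k=4096)
# An inference-time call with any swept shape is now a cache hit instead of
# a fresh JIT compilation.
```

The same structure is what an autotuner framework adds on top: instead of a single compile per shape, it can also benchmark kernel variants (e.g. a `swap_ab` layout flag) per shape during the sweep and cache the winner.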
_tensorrt_engine [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) 2025-06-20 03:01:10 +08:00
_torch [None][feat] Apply AutoTuner to fp8_block_scale_deep_gemm to trigger JIT ahead of time. (#7113) 2025-08-25 10:48:31 +08:00
auto_parallel [None][fix] Migrate to new cuda binding package name (#6700) 2025-08-07 16:29:55 -04:00
bench [None][fix] Correct KV cache percentage report out. (#7102) 2025-08-22 10:28:57 -07:00
commands [#7136][feat] trtllm-serve + autodeploy integration (#7141) 2025-08-22 08:30:53 -07:00
evaluate [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) 2025-08-21 08:54:12 +08:00
executor [None][chore] Mass integration of release/1.0 (#6864) 2025-08-22 09:25:15 +08:00
inputs [None][fix] Fix mm_placholder_counts extraction issue. (#7118) 2025-08-22 12:28:30 +08:00
layers [TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629) 2025-08-15 17:15:49 -04:00
llmapi [None][feat] Deepseek: Start Eagle work (#6210) 2025-08-22 12:57:17 -04:00
metrics [None][feat] Core Metrics Implementation (#5785) 2025-08-09 02:48:53 -04:00
models [None][fix] Refactoring to avoid circular import when importing torch models (#6720) 2025-08-11 18:00:42 -04:00
plugin feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
quantization [None][fix] Pre-allocate workspaces for DeepGEMM MoE to avoid frequent cudaFree/cudaMalloc (#6811) 2025-08-13 10:27:57 +08:00
runtime [None][fix] Migrate to new cuda binding package name (#6700) 2025-08-07 16:29:55 -04:00
scaffolding [https://nvbugs/5387375] fix(scaffolding): fix scaffolding aime test in test_e2e (#6140) 2025-07-18 10:34:37 +08:00
serve [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) 2025-08-21 08:54:12 +08:00
tools [None] [feat] nsys profile output kernel classifier (#7020) 2025-08-23 00:57:37 -04:00
__init__.py [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) 2025-08-19 21:42:50 -07:00
_common.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_dlpack_utils.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
_ipc_utils.py [None][fix] Migrate to new cuda binding package name (#6700) 2025-08-07 16:29:55 -04:00
_mnnvl_utils.py [TRTLLM-6743][feat] Optimize and refactor alltoall in WideEP (#6973) 2025-08-24 08:15:29 -04:00
_utils.py [None][feat] Core Metrics Implementation (#5785) 2025-08-09 02:48:53 -04:00
builder.py [None][fix] Refactoring to avoid circular import when importing torch models (#6720) 2025-08-11 18:00:42 -04:00
disaggregated_params.py [None][fix] Refactoring to avoid circular import when importing torch models (#6720) 2025-08-11 18:00:42 -04:00
functional.py [None] [feat] Add Tencent HunYuanMoEV1 model support (#5521) 2025-08-15 06:56:44 +08:00
graph_rewriting.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
logger.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
lora_helper.py [TRTLLM-6825][fix] Update lora for phi4-mm (#6817) 2025-08-21 22:00:04 -04:00
lora_manager.py [https://nvbugs/5467232][fix] Fix load_torch_hf_lora to override lora_config.trtllm_modules_to_hf_modules with default only when it has no value (#7132) 2025-08-24 15:00:24 +03:00
mapping.py [TRTLLM-5966][feat] Helix: extend mapping to support different CP types (#6816) 2025-08-14 09:00:02 -07:00
math_utils.py perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
module.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
network.py chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
parameter.py fix:https://nvbugs/5234033 enable starcoder trt-flow with transforme… (#3909) 2025-05-15 11:16:45 +08:00
profiler.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
prompt_adapter_manager.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
python_plugin.py linting(python): Enable ruff on more files (wave 1/N) (#5140) 2025-06-14 19:19:34 +08:00
sampling_params.py [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
scheduling_params.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
serialization.py [None] [feat] Add model gpt-oss (#6645) 2025-08-07 03:04:18 -04:00
top_model_mixin.py [None][fix] Refactoring to avoid circular import when importing torch models (#6720) 2025-08-11 18:00:42 -04:00
version.py [None][chore] Bump version to 1.1.0rc2 (#7167) 2025-08-22 22:02:28 +08:00