Commit Graph

3821 Commits

Author SHA1 Message Date
fredricz-20070104
6a64cb4c71
[TRTLLM-8936][test] Add disagg and wideep multi-node multi-gpu test cases (#9356)
Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
2025-11-26 10:34:49 +08:00
Yiqing Yan
1b9edf62c9
[None][chore] Bump version to 1.2.0rc5 (#9455)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
2025-11-26 08:37:53 +08:00
Chuang Zhu
0e9c7f8c07
[https://nvbugs/5685143][fix] avoid cudaFree overlap with cuda graph (#9438)
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-11-25 16:20:29 -08:00
Suyog Gupta
e484bec82f
[None][chore] AutoDeploy add multi stream moe pass to default.yaml (#9430)
Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
2025-11-25 14:16:13 -08:00
Robin Kobus
32f53910ef
[TRTLLM-909][feat] Overlap context chunks in pipeline parallel mode (#9308)
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-11-25 22:11:51 +01:00
Eran Geva
afc52d7b93
[https://nvbugs/5647400] [fix] Enlarged the AllReduce workspace size to 64MB. Added AllReduce strategy to AD config. (#9145)
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
2025-11-25 10:56:07 -08:00
mpikulski
899fda9e47
[TRTLLM-9490][feat] use FlashInfer's top_k_sampling_from_probs (#9457)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
2025-11-25 18:53:53 +01:00
mpikulski
c5f52ab304
[TRTLLM-8376][feat] top-p optimization (removes redundant softmax) (#9411)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
2025-11-25 18:46:48 +01:00
Fanrong Li
8da59103d6
[https://nvbugs/5680905][fix] Relax the MMLU accuracy requirement for DS-v3.2 (#9439)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
2025-11-26 00:32:20 +08:00
Yan Chunwei
1f43dc8174
[None][ci] waive a test (#9458)
Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
2025-11-25 07:04:20 -08:00
YueWeng
cc336c4abd
[TRTLLM-8160][feat] Add draft token tree runtime on CDL (#8586)
Signed-off-by: Yue Weng <25103990+yweng0828@users.noreply.github.com>
2025-11-25 09:40:55 -05:00
Pengyun Lin
fa61825c74
[None][feat] Support custom chat template for tool calling (#9297)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2025-11-25 22:07:04 +08:00
Tailing Yuan
51ef0379d2
[None][feat] Add a parser to layer-wise benchmarks (#9440)
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
2025-11-25 05:45:16 -08:00
Fanrong Li
c36f144591
[None][chore] Fix trtllm-eval for PyTorchLLM (#9427)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
2025-11-25 04:49:03 -08:00
Shi Xiaowei
60786574db
[None][fix] Mitigate test timeout issues (#9445)
Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
2025-11-25 20:17:54 +08:00
Chao Ni
a2d9e6250a
[https://nvbugs/5667922][fix] Update long context evaluation config (#9426)
Signed-off-by: mni <125171826+baize97@users.noreply.github.com>
2025-11-25 19:33:38 +08:00
Yueh-Ting (eop) Chen
a38d91aae2
[https://nvbugs/5537996][fix] Let KV cache manager block initialization be aware whether it is doing a dry run or not (#9093)
Before this commit, the KV cache manager behaved the same regardless, which caused a miscalculation of the free memory available for KV cache allocation and hence a crash.

This commit fixes that by making KV cache manager initialization aware of whether it is doing a dry run. If it is, it uses the max_tokens value that was already pre-calculated and filled into kv_cache_config.max_tokens.

Signed-off-by: eopXD <yuehtingc@nvidia.com>
2025-11-25 17:27:11 +08:00
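A minimal sketch of the dry-run-aware initialization this commit describes; the function and parameter names (init_kv_cache_blocks, bytes_per_token, etc.) are hypothetical, and only kv_cache_config.max_tokens comes from the commit itself:

```python
# Hypothetical sketch: on a dry run, trust the pre-calculated
# kv_cache_config.max_tokens instead of re-deriving capacity from
# free memory, which is what caused the miscalculation and crash.

def init_kv_cache_blocks(kv_cache_config, tokens_per_block,
                         free_mem_bytes, bytes_per_token, is_dry_run):
    if is_dry_run and kv_cache_config.max_tokens is not None:
        # Dry run: use the value already filled into the config.
        max_tokens = kv_cache_config.max_tokens
    else:
        # Real run: size the cache from the memory free right now.
        max_tokens = free_mem_bytes // bytes_per_token
    return max_tokens // tokens_per_block  # number of KV cache blocks
```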
Anthony Chang
4742c130db
[None][feat] Improve TRTLLM MoE in small hidden size throughput cases (#9377)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
2025-11-25 09:09:27 +01:00
Yanchao Lu
ff02e0f05c
[None][ci] Move more test stages to use OCI machines (#9395)
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Matt Lefebvre <matthewelefebvre@gmail.com>
2025-11-25 15:59:13 +08:00
Eran Geva
6af01dc664
[#8391][chore] test_perf.py to lock clocks read from gpu_configs.yml instead of max freq (#9409)
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
2025-11-25 09:20:33 +02:00
Emma Qiao
15616e3ee5
[None][infra] Waive failed cases for main branch on 11/25 (#9429)
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-11-24 23:18:15 -08:00
Yukun He
e580da4155
[TRTLLM-7963][feat] Cold L2 cache when doing autotune benchmarking. (#8779)
The performance results of some kernels are easily affected by whether the L2 cache is warm or cold. To obtain more precise profiling results during autotuning, the L2 cache is cleared before every execution using a circular-buffer method.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-11-25 15:06:22 +08:00
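The circular-buffer flush can be sketched as below; this is an illustrative PyTorch snippet, not the project's autotuner code, and the 50 MB L2 size and buffer count are assumptions:

```python
import torch

# Rotate through buffers larger than L2 so each timed launch starts
# from a cold cache: writing a buffer evicts previously cached lines.
L2_BYTES = 50 * 1024 * 1024   # assumed L2 size (e.g. ~50 MB on H100)
NUM_BUFFERS = 4
flush_bufs = [torch.empty(L2_BYTES, dtype=torch.uint8, device="cuda")
              for _ in range(NUM_BUFFERS)]

def bench_cold_l2(kernel, iters=10):
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    times = []
    for i in range(iters):
        flush_bufs[i % NUM_BUFFERS].random_()  # dirty L2 with writes
        start.record()
        kernel()
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))  # milliseconds
    return min(times)
```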
William Zhang
a4049fc557
[#9413][fix] Minor fixes to nemotron H and custom models in AD (#9416)
* Why?

There were a couple of issues with the recently merged custom model
injection for AutoDeploy and the reference implementation of nemotron H:
- `d_mlp` was left in despite being mathematically always null (could
  lead to runtime issues during sharding).
- the custom model mapping was inherited by child factories (see the
  sketch after this entry).

* What?

This commit fixes these issues and, in addition, refactors the custom
implementation key to be based on the name of the configuration class.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-11-24 20:17:33 -08:00
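The inherited-mapping bug fixed above is the classic shared-class-attribute pitfall; here is a hypothetical sketch (names invented for illustration) of the problem and of a per-subclass registry keyed on the configuration class name:

```python
# A mutable mapping defined as a class attribute is one object shared
# with every subclass, so parent registrations leak into child factories.
class ParentFactory:
    _custom_models = {}

class ChildFactory(ParentFactory):
    pass

ParentFactory._custom_models["NemotronHConfig"] = "CustomNemotronH"
assert "NemotronHConfig" in ChildFactory._custom_models  # leaked!

# One fix: give each factory subclass its own registry, keyed on the
# configuration class name as in the commit above.
class Factory:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._custom_models = {}  # fresh, per-subclass mapping
```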
Suyog Gupta
efd503751f
[#9271][perf] Enable multi-stream MOE optimization in AutoDeploy (#9322)
Signed-off-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
2025-11-24 19:50:10 -08:00
kris1025
d1c724958d
[None][chore] unwaive ampere kernels test (#9389)
Signed-off-by: linquanh <linquanh@nvidia.com>
2025-11-25 11:28:43 +08:00
TensorRT LLM
bf0d1dc6a8
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2025-11-25 03:21:14 +00:00
xinhe-nv
0a9ae2e3e6
[None][chore] Remove closed bugs (#9381)
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-11-24 18:49:57 -08:00
Yuxian Qiu
8a0295015f
[None][chore] Reduce nested nvtx ranges. (#9347)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-11-25 09:58:41 +08:00
Fanrong Li
5a99c9734d
[TRTLLM-8777][feat] Update DeepGEMM to the latest commit to include optimizations for DeepSeek-v3.2 (#9380)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
2025-11-25 08:58:08 +08:00
QI JUN
786d308b88
[https://nvbugs/5685428][fix] fix test_openai_chat_multimodal.py (#9406)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-11-24 16:56:33 -08:00
bhsueh_NV
1a93583438
[None][feat] Support Yarn on QwQ-32B model (#9059)
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
Signed-off-by: Jiang Shao <91270701+StudyingShao@users.noreply.github.com>
Co-authored-by: NVJiangShao <91270701+StudyingShao@users.noreply.github.com>
2025-11-25 07:27:28 +08:00
Yibin Li
1ce483c999
[TRTLLM-7967][feat] Adding Starcoder2 PyTorch Backend Support (#8923)
Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>
2025-11-24 11:23:22 -08:00
YueWeng
336593cac5
[None][fix] Fix topk outIndices when using vectorized_process (#9404)
Signed-off-by: Yue Weng <25103990+yweng0828@users.noreply.github.com>
2025-11-24 09:08:00 -08:00
Chuang Zhu
f95edb53e1
[None][fix] enhance warning in cacheTransBuffer (#9390)
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-11-24 02:17:54 -08:00
Emma Qiao
2c869f2bda
[None][infra] Waive failed cases for main (#9400)
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-11-24 17:42:19 +08:00
cheshirekow
6e5384d03c
[TRTLLM-9299][infra] Add third-party docs for python (#9366)
In this change we rename 3rdparty/README.md (which contains the process
playbook for C++ dependencies) to 3rdparty/cpp-thirdparty.md and add a
new 3rdparty/py-thirdparty.md file which contains the process playbook
for python dependencies.

We also update the main 3rdparty/README.md file to serve as a
starting point referring to both of these files.

Signed-off-by: Josh Bialkowski <1309820+cheshirekow@users.noreply.github.com>
Co-authored-by: Josh Bialkowski <1309820+cheshirekow@users.noreply.github.com>
2025-11-23 22:58:25 -08:00
cheshirekow
2810be7b3b
[TRTLLM-9211][infra] Minor fixes to 3rdparty/CMakelists (#9365)
This change addresses the nitpick comments from coderabbit on the
previous pull request !8986. None of the changes appear to be critical
as the build is healthy without them, but they should provide some
protection against future breakages if we change the CMake version or
modify other build logic.

This change consists of the following:
1. Add GIT_SUBMODULE_RECURSE ON to FetchContent_Declare calls for
   deepgemm and flashmla to ensure submodules are initialized in
   cmake versions where it is not the default.
2. Modify error messages in deep_gemm and flash_mla CMakeLists to
   indicate that submodule initialization failed if the expected
   submodule directories are not present.
3. Remove the NVTX include directories if the build is configured
   with NVTX_DISABLE off, to avoid potential confusion if NVTX is
   included in the compile commands when disabled.
4. Fix a minor CMake syntax issue in cpp/CMakeLists.txt where a
   message() call was missing parentheses around a string.

Signed-off-by: Josh Bialkowski <1309820+cheshirekow@users.noreply.github.com>
Co-authored-by: Josh Bialkowski <1309820+cheshirekow@users.noreply.github.com>
2025-11-23 22:57:02 -08:00
Emma Qiao
af72d93fa9
[None][infra] Waive failed cases on main branch (#9384)
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-11-23 22:53:02 -08:00
Yukun He
960851f419
[None][chore] Remove unnecessary log in the short tuning profile (#9387)
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-11-24 12:31:26 +08:00
Yukun He
39076410a8
[https://nvbugs/5676748][fix] Fix mismatched nvfp4 gemm sf shape. (#9336)
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-11-24 12:16:32 +08:00
brb-nv
c045e359a7
[https://nvbugs/5637012][fix] Fix helix unit tests (#9369)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-11-23 19:34:22 -08:00
TensorRT LLM
5a44994d05
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2025-11-24 03:17:53 +00:00
QI JUN
34a6d2d28f
[TRTLLM-9302][chore] Move build config from BaseLlmArgs to TrtLlmArgs (#9249)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-11-24 10:54:41 +08:00
Yukun He
c3acf965a6
[TRTLLM-7963][fix] Several improvements of autotuning quality (#9348)
* Skip the shape-profile generation process if the profile is already found in the cache while in tuning mode. This is a prerequisite for nested autotuning, because host overhead might otherwise be included when profiling the high-level op.
* Enable profiling with CUDA graphs as the default profiling method.
* Apply a heuristic that caps the number of profiling repetitions based on a few-run time measurement (sketched below).
2025-11-24 10:38:45 +08:00
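The repetition cutoff heuristic could look like the following sketch (assumed names and thresholds; only the few-run measurement idea comes from the commit):

```python
def choose_num_repeats(few_run_ms, budget_ms=10.0, min_rep=3, max_rep=100):
    """Pick enough profiling repeats to fill a fixed time budget.

    few_run_ms: average kernel time measured over a few initial runs.
    """
    if few_run_ms <= 0:
        return max_rep
    return max(min_rep, min(max_rep, int(budget_ms / few_run_ms)))

# e.g. a 0.05 ms kernel gets 100 repeats, a 5 ms kernel only 3.
```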
Bo Li
fcfec93cad
[TRTLLM-9389][chore] Rename AlltoAll backend names (#9329)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
2025-11-23 13:52:57 -08:00
Chenghao Zhang
e1c9aa7d6a
[None][chore] AutoDeploy: Add the Nemotron MOE to CI (#9328)
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
Co-authored-by: Suyog Gupta <41447211+suyoggupta@users.noreply.github.com>
2025-11-23 12:12:12 -08:00
JadoTu
0582e54b61
[None][fix] modify qwen3-next sampling stop_tokens (#9331)
Signed-off-by: jiant <107457950+JadoTu@users.noreply.github.com>
2025-11-23 21:10:09 +08:00
William Zhang
11a0b276fb
[#9230][feat] Slimmed down implementation of nemotron H (#9235)
* Why?

The reference nemotron H code on HuggingFace is out of date, and
therefore buggy, and has several untested code paths. This makes an
already hairy patching system even hairier.

The proposal is to do away with those patches and replace the
original implementation with one that is heavily slimmed down.

* What?

This PR lays the groundwork for an alternative path with such a
slimmed-down implementation that:
- fixes bugs in the current HF implementation
- adds no new dependencies to TensorRT-LLM
- does away with features unnecessary for TensorRT-LLM/AutoDeploy:
  - no training-related code (dropout, gradient checkpointing, etc.)
  - no caching logic (we want to replace it with our own anyway)
  - no attention masking where possible
- reuses existing AD custom ops for mamba SSM update /
  causal conv1d / attention

In order for the above to be usable in the AD apparatus,
`AutoModelForCausalLMFactory` is extended to allow registrations
of custom model implementations.

Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-11-23 03:13:32 -08:00
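The registration mechanism mentioned above might be used roughly as follows; this is a hypothetical sketch in which the decorator and registry dict are invented, and only the extension of `AutoModelForCausalLMFactory` to accept custom implementations comes from the commit:

```python
# Hypothetical registry: map a config class name to a custom model class
# so the factory can bypass the patched HuggingFace reference code.
CUSTOM_MODEL_IMPLS = {}

def register_custom_model(config_cls_name):
    def deco(model_cls):
        CUSTOM_MODEL_IMPLS[config_cls_name] = model_cls
        return model_cls
    return deco

@register_custom_model("NemotronHConfig")
class SlimNemotronHForCausalLM:
    """Slimmed reference: no dropout, no HF cache, reuses AD custom ops."""

def build_model(config):
    impl = CUSTOM_MODEL_IMPLS.get(type(config).__name__)
    return impl() if impl is not None else None  # else: stock HF path
```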
Yan Chunwei
1ef69ecbb1
[None][ci] waive two ray tests (#9375)
Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
2025-11-23 15:39:01 +08:00
TensorRT LLM
a761585d9c
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2025-11-23 03:06:43 +00:00