Emma Qiao
865992b86b
[None][infra] Waive failed cases on 1/3 (#10391)
Signed-off-by: qqiao <qqiao@nvidia.com>
2026-01-03 05:54:09 -05:00
Bo Deng
9e7b50aefb
[TRTLLM-9752][fix] WAR: Disable PDL for quant kernels to fix accuracy issues (#10285)
Signed-off-by: Bo Deng <deemod@nvidia.com>
2026-01-03 14:34:55 +08:00
TensorRT LLM
45ffbf1f21
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2026-01-03 03:07:50 +00:00
Lucas Liebenwein
937f8f78a1
[None][doc] promote AutoDeploy to beta feature in docs (#10372)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2026-01-02 18:46:31 -05:00
Izzy Putterman
bdf6953ddc
[None][feat] Eagle: MLA Based Eagle (#9677)
Signed-off-by: Izzy Putterman <iputterman@nvidia.com>
2026-01-02 13:45:07 -05:00
Gal Hubara-Agam
f3dd6da080
[#10056][chore] AutoDeploy: Enable Nemo SuperV3 accuracy test (#10308)
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
2026-01-02 11:20:19 +02:00
chenfeiz0326
5e0e48144f
[None][fix] Minor updates on Perf Test System (#10375)
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
2026-01-02 17:17:42 +08:00
TensorRT LLM
098251648d
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2026-01-02 03:11:08 +00:00
fredricz-20070104
f631b25c85
[None][test] Unified slurm extra args management and session collection logic (#10332)
Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
Signed-off-by: yingguo-trt <244492186+yingguo-trt@users.noreply.github.com>
Co-authored-by: yingguo-trt <244492186+yingguo-trt@users.noreply.github.com>
2026-01-01 21:10:51 -05:00
Balaram Buddharaju
4a1b742aa0
[TRTLLM-9467][fix] Fix PP+CP combination with helix parallelism (#10312)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2026-01-01 13:42:53 -05:00
Gal Hubara-Agam
5845951538
[#10056][fix] AutoDeploy: Handle deletion of nested params in sharding (#10376)
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
2026-01-01 08:11:11 -05:00
tcherckez-nvidia
4868772ad7
[None][feat] Add export data to build and run script for AD (#10299)
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
2026-01-01 04:54:47 -05:00
Balaram Buddharaju
9f5b750a93
[None][chore] Waive tests blocking pre-merge 12/31 (#10373)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2026-01-01 03:00:24 -05:00
Balaram Buddharaju
0b75340223
[https://nvbugs/5744427][fix] Make Gemma3 multimodal test fp8 (#10368)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2026-01-01 01:11:34 -05:00
TensorRT LLM
edbcff0257
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2026-01-01 03:08:31 +00:00
Yuxian Qiu
ff836d4f41
[https://nvbugs/5740359][chore] Unwaive tests. (#10260)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-01 09:53:34 +08:00
Lucas Liebenwein
1bbe71b3ed
[#10244][feat] AutoDeploy: separate prefill/decode in flashinfer (#10252)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-12-31 17:01:24 -05:00
Mike Iovine
9085021aa4
[None][feat] Implement sampling for MTP 1-model (#10019)
Signed-off-by: Mike Iovine <miovine@nvidia.com>
2025-12-31 13:48:34 -05:00
Simeng Liu
84d107b2f0
[https://nvbugs/5717993][fix] Add execution_stream across PyExecutor, KVCacheManager, PeftCacheManager to ensure proper CUDA stream synchronization between KV cache transfer operations and model forward kernels. (#10060)
Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
2025-12-31 09:22:54 -08:00
xinhe-nv
0d2e2718ce
[None][chore] Add failed cases into waives.txt (#10354)
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-12-31 09:30:22 -05:00
chenfeiz0326
a23c6f1092
[TRTLLM-9834][feat] Transfer to TRTLLM-INFRA Database and Fail post-merge tests if regression (#10282)
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
2025-12-31 21:44:59 +08:00
tcherckez-nvidia
464847c6be
[#9717][chore] Standardize MoE weights interface (#10295)
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
2025-12-31 07:37:18 -05:00
Jin Li
ef1d4a40b5
[https://nvbugs/5727475][fix] Avoid using property with setter in nn.Mo… (#10212)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
2025-12-31 06:21:36 -05:00
Emma Qiao
d944430f96
[None][infra] Waive failed cases on 12/31 (#10353)
Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
2025-12-31 17:39:49 +08:00
Necofish
73870ae4ad
[None][feat] support Qwen3-VL dense model in pytorch backend (#9060)
Signed-off-by: Nekofish-L <liuxiangyang@mail.ustc.edu.cn>
2025-12-31 17:54:26 +09:00
xinhe-nv
827d12caaf
[https://nvbugs/5558516][test] add disaggregated stress test (#9354)
Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
2025-12-31 16:47:36 +08:00
Yuxian Qiu
910a633066
[https://nvbugs/5774869][chore] waive tests. (#10356)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-12-31 03:00:52 -05:00
Yiqing Yan
fdc03684cc
[TRTLLM-10016][infra] Use SlurmPartition attribute time as timeout threshold (#10254)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
2025-12-31 15:02:24 +08:00
Pengyun Lin
fad000589d
[None][chore] Unify DS tool parser names (#10239)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2025-12-31 14:40:07 +08:00
xinhe-nv
1e9c153b4c
[None][fix] disable thread leak check for kimi (#10337)
Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
2025-12-31 01:31:37 -05:00
xinhe-nv
6c1abf2d45
[None][chore] Add failed cases into waives.txt (#10344)
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-12-31 00:11:54 -05:00
TensorRT LLM
ed3a3097a4
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2025-12-31 03:11:56 +00:00
Jin Li
34c2fd50a9
[https://nvbugs/5707359][fix] Unwaive OOM case that should be fixed by #9446 (#10334)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
2025-12-31 10:41:39 +08:00
Yuxian Qiu
1f3afb8e6f
[None][feat] Implement send_object for TorchDist. (#10213)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-12-31 10:40:52 +08:00
Yuxian Qiu
ec8a388c25
[https://nvbugs/5769890][fix] Import get_free_port. (#10341)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-12-31 09:47:27 +08:00
Eran Geva
74832a1895
[https://nvbugs/5766986][fix] fixed the shard_all_unprocessed default value to align with the default.yml (#10271)
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
2025-12-30 08:54:13 -05:00
Bo Li
1f0365da36
[None][infra] Add LongBenchV1 to trtllm-eval. (#10265)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
2025-12-30 21:39:34 +08:00
Emma Qiao
6732c76414
[None][infra] Waive failed cases for main on 12/30 (#10338)
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-12-30 05:17:43 -05:00
Emma Qiao
fb05cd769a
[None][infra] Enable single-gpu CI on spark (#9304)
Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Emma Qiao <qqiao@nvidia.com>
Signed-off-by: Jenny Liu <JennyLiu-nv+JennyLiu@users.noreply.github.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
2025-12-30 17:22:14 +08:00
Emma Qiao
cce7247815
[https://nvbugs/5594703][infra] Unwaive the failed case to test (#10275)
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-12-30 16:38:54 +08:00
xinhe-nv
6accdbc6a6
[None][chore] Add failed cases into waives.txt (#10302)
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-12-30 03:11:52 -05:00
ruodil
0f4ed90560
[TRTLLM-9965][test] add long-context disagg test for GB300/GB200 and remove config_index in yaml (#10225)
Signed-off-by: Ruodi Lu <ruodil@users.noreply.github.com>
Co-authored-by: Ruodi Lu <ruodil@users.noreply.github.com>
2025-12-30 02:39:50 -05:00
binghanc
692d8f2023
[TRTLLM-9455][feat] support for new checkpoint (#10082)
Signed-off-by: binghanc <176802681+binghanc@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-12-30 14:46:39 +08:00
xinhe-nv
3e0344a53d
[None][chore] Add failed cases into waives.txt (#10301)
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
2025-12-30 14:04:28 +08:00
xinhe-nv
48fee8d0f6
[None][chore] Add failed cases into waives.txt (#10321)
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-12-30 00:11:49 -05:00
Emma Qiao
f396ad83b0
[None][infra] Remove duplicates in waives.txt (#10333)
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-12-29 22:32:52 -05:00
TensorRT LLM
fa4c7997c5
[None][infra] Check in most recent lock file from nightly pipeline
Signed-off-by: TensorRT LLM <90828364+tensorrt-cicd@users.noreply.github.com>
2025-12-30 03:07:48 +00:00
Balaram Buddharaju
4944192eae
[None][chore] Waive tests failing in pre-merge 12/28 (#10311)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-12-29 20:53:49 -05:00
Neta Zmora
966231d29c
[#9626][feat] Add an auto-deploy transform for using cutlass FP4 MoE kernels (#10304)
Add a transform to replace torch.ops.auto_deploy.torch_quant_nvfp4_moe
with the optimized torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused.
The fused op currently generates wrong results when the number of rows in the MoE FC1 weights is not divisible by 128,
so torch.ops.auto_deploy.trtllm_quant_nvfp4_moe_fused is not set as the default FP4 MoE implementation (i.e. the transform is disabled).
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
2025-12-29 23:18:15 +02:00
Yanchao Lu
965578ca21
[None][infra] Some improvements for Slurm execution path in the CI (#10316)
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
2025-12-29 06:49:44 -05:00