xiweny
48e779ae8c
[ https://nvbugs/5541494 ] [fix] add back missing sm100f bmm kernels ( #8051 )
...
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
2025-09-29 05:35:44 -04:00
yufeiwu-nv
3ba6727a68
[None][test] Update get_sysinfo.py to avoid UnboundLocalError ( #7982 )
...
Signed-off-by: yufeiwu <230315618+yufeiwu-nv@users.noreply.github.com>
Signed-off-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com>
2025-09-29 05:14:38 -04:00
Gal Hubara-Agam
b2095aa074
[ #4674 ][bugfix] AutoDeploy Fix memory leak in fuse_moe ( #7844 )
...
Delete the unstacked weights immediately to save GPU memory. Cleanup occurs automatically after the transformation, but for large models we would run out of memory during the transformation itself.
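A minimal sketch of the eager-free pattern described here, assuming PyTorch tensors; the function and variable names below are illustrative and are not the actual fuse_moe transform code:

```python
# Illustrative sketch: free each unstacked expert weight as soon as it has been
# copied into the fused buffer, so peak GPU memory stays bounded mid-transform.
import gc
import torch

def stack_expert_weights(expert_weights: list) -> torch.Tensor:
    """Stack per-expert weights into one fused tensor, freeing each source eagerly."""
    stacked = torch.empty(
        (len(expert_weights), *expert_weights[0].shape),
        dtype=expert_weights[0].dtype,
        device=expert_weights[0].device,
    )
    for i in range(len(expert_weights)):
        stacked[i].copy_(expert_weights[i])
        # Drop the unstacked weight right away instead of waiting for the
        # automatic cleanup after the transformation; otherwise large models
        # run out of GPU memory during the transformation itself.
        expert_weights[i] = None
    gc.collect()
    torch.cuda.empty_cache()
    return stacked
```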
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
2025-09-29 11:01:07 +03:00
xinhe-nv
20e6cd39f1
[None][chore] Add failed cases into waives.txt ( #8043 )
...
Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
2025-09-29 03:37:39 -04:00
Emma Qiao
ce381d6813
[None][infra] Waive failed cases for main on 0929 ( #8053 )
...
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-09-29 02:46:02 -04:00
Void
7f1e2dba92
[None][fix] only support deepep post quant all2all on nvfp4 ( #8041 )
...
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
2025-09-29 14:37:50 +08:00
HuiGao-NV
1339beb04e
[None][ci] Disable tensorRT cases in post-merge ( #8028 )
...
Signed-off-by: Hui Gao <huig@nvidia.com>
2025-09-29 14:21:52 +08:00
HuiGao-NV
7ac932d45e
[ https://nvbugs/5532087 ][CI] Enable test case ( #8029 )
...
Signed-off-by: Hui Gao <huig@nvidia.com>
2025-09-29 01:46:28 -04:00
Tailing Yuan
985b79ca82
[TRTLLM-8348][feat] Speed up concat k and copy k_nope in context phase using torch.compile ( #8044 )
...
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
2025-09-29 13:28:12 +08:00
Ivy Zhang
1e2e851db8
[None][chore] update test case constraint ( #8020 )
...
Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
2025-09-29 13:25:09 +08:00
Eran Geva
9cea6bfb30
[ #7288 ][feat] Added AutoDeploy backend support to test_perf.py ( #7588 )
...
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
2025-09-28 21:21:27 -07:00
Kaiyu Xie
030254f88a
[None] [doc] Document hang issue caused by UnpicklingError ( #8049 )
...
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-09-28 23:40:35 -04:00
Zhenhua Wang
4be533183f
[None][chore] Update cron schedule for closing inactive issues ( #8048 )
...
Signed-off-by: Zhenhua Wang <zhenhuaw@nvidia.com>
Signed-off-by: Zhenhua Wang <4936589+zhenhuaw-me@users.noreply.github.com>
2025-09-28 22:52:03 -04:00
Ivy Zhang
0ecafd84da
[None][chore] Update chunked prefill test case configs ( #7868 )
...
Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
2025-09-29 10:37:34 +08:00
Zongfei Jing
e9f26feeb6
[None][chore] Cherry-pick from ( #7598 ): Make low_precision_combine an llm arg ( #7898 )
...
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
2025-09-28 22:32:33 -04:00
Yukun He
28b9a81c58
[TRTLLM-4500][feat] Add serialization/deserialization options for AutoTuner profiling cache ( #7738 )
...
To achieve determinism for the AutoTuner profiling cache, serialization and deserialization are introduced to store the cache on disk in JSON format. Use TLLM_AUTOTUNER_CACHE_PATH to indicate the path where the cache file should be stored.
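A minimal usage sketch, assuming the variable is read from the process environment before the run; the path below is illustrative:

```python
# Hedged sketch: point the AutoTuner profiling cache at a JSON file via the
# environment variable named above, so later runs can reuse the profiled tactics.
import os

os.environ["TLLM_AUTOTUNER_CACHE_PATH"] = "/tmp/autotuner_cache.json"  # illustrative path

# ... construct and run the model as usual; with the variable set, the AutoTuner
# cache is serialized to (and deserialized from) this JSON file on disk.
```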
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-09-29 07:40:51 +08:00
WeiHaocheng
563e588e56
[None][doc] Scaffolding tech blog fix a typo ( #8042 )
...
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
2025-09-28 10:29:01 -04:00
Guoming Zhang
3ba4bf6e70
[None][chore] Disable concurrent weights loading for _load_weights_im… ( #8034 )
...
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
2025-09-28 07:11:16 -04:00
Emma Qiao
2be05cbd6e
[None][infra] Skip failed test for main branch on 9/28 ( #8040 )
...
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-09-28 07:00:55 -04:00
Guoming Zhang
51aefd1bac
[None][doc] Refine perf overview.md and correct the error link in per… ( #8035 )
...
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
2025-09-28 16:14:42 +08:00
ChristinaZ
95eac2cda7
[ https://nvbugs/5537738 ][fix] Add fp8 post-quant allgather support ( #8008 )
...
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>
2025-09-28 15:32:45 +08:00
Aurelien Chartier
77b68d9d7d
[ https://nvbugs/5461712 ] [fix] Use DG for Qwen3 Linear layers ( #8030 )
...
Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
2025-09-28 10:33:36 +08:00
Xianjie Qiao
c8f98b3065
[None] [feat] Update disagg gen-only benchmark. ( #7917 )
...
Signed-off-by: Xianjie <5410381+qiaoxj07@users.noreply.github.com>
2025-09-28 09:56:56 +08:00
Iman Tabrizian
33282351a2
[TRTLLM-6106][feat] Add support for KVCache transfer from KVCache reuse path ( #6348 )
...
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
2025-09-27 19:29:30 -04:00
Frida Hou
a36b48bcab
[ #5860 ][autodeploy] GPT-OSS MXFP4 support ( #7451 )
...
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
2025-09-26 15:36:06 -07:00
Jhao-Ting Chen
c33f43e13a
[ https://nvbugs/5518713 ][fix] Trtllm-gen moe backend for blockwise fp8 ckpt (Qwen3-235B-A22B-FP8) ( #7856 )
...
Signed-off-by: Jhao-Ting Chen <jhaotingc@nvidia.com>
2025-09-26 14:29:32 -07:00
Mike Iovine
d7087015f1
[TRTLLM-8271][fix] Fix CDL overlap scheduling performance ( #7971 )
...
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-09-26 16:05:10 -04:00
Emma Qiao
c8bef27ebb
[None][infra] Waive failed cases in post-merge 2305 ( #8019 )
...
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-09-26 10:20:12 -07:00
YueWeng
a4243f0da5
[TRTLLM-6393][feat] add static tree sampling and verification ( #7161 )
...
Signed-off-by: Yue Weng <25103990+yweng0828@users.noreply.github.com>
2025-09-26 13:16:16 -04:00
HuiGao-NV
f4d3be4bbc
[None][feat] Add a standalone buffer cache class and reuse buffers between cudagraph and no-graph flow ( #7669 )
...
Signed-off-by: Hui Gao <huig@nvidia.com>
2025-09-26 07:28:06 -07:00
Tailing Yuan
b11ee868c5
[ https://nvbugs/5495789 ][feat] Optionally disable server GC and worker GC ( #7995 )
...
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
2025-09-26 21:39:24 +08:00
Martin Marciniszyn Mehringer
6dc50ebcdd
[None][chore] Require NVIDIA developers to use their full name or NVIDIA account in GitHub profiles ( #8022 )
...
Signed-off-by: Martin Marciniszyn Mehringer <11665257+MartinMarciniszyn@users.noreply.github.com>
2025-09-26 21:16:58 +08:00
WeiHaocheng
35edad37f9
[None][doc] Add scaffolding tech blog to cover ( #8021 )
...
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
2025-09-26 02:22:11 -07:00
xinhe-nv
ba6ab62bd1
[None][chore] Add failed cases into waives.txt ( #8004 )
...
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-09-26 00:41:02 -07:00
xinhe-nv
f32f5730b2
[None][chore] Add failed cases into waives.txt ( #7986 )
...
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-09-25 23:50:09 -07:00
Yueh-Ting (eop) Chen
2db22fb4e5
[None][feature] Add environment variable to adjust block pool allocation ratio under kv cache manager ( #7923 )
...
By default, we allocate an equal share of memory to every window size (see the else case). With TRTLLM_WINDOW_SIZE_SHARES, this behavior can be overridden to adjust the memory share of each window size. For example, with window sizes [512, 32768], setting TRTLLM_WINDOW_SIZE_SHARES=0.4,0.6 allocates 40% of the memory to window size 512 and 60% to window size 32768.
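A small worked sketch of that split; the helper below is illustrative, not the kv cache manager's actual code, and assumes a total byte budget plus the shares parsed from the environment:

```python
# Illustrative helper: split a kv-cache memory budget across window sizes
# according to TRTLLM_WINDOW_SIZE_SHARES, falling back to equal shares.
import os

def split_block_pool(total_bytes: int, window_sizes: list) -> dict:
    raw = os.environ.get("TRTLLM_WINDOW_SIZE_SHARES")
    if raw:
        shares = [float(s) for s in raw.split(",")]
    else:
        shares = [1.0 / len(window_sizes)] * len(window_sizes)  # default: equal proportions
    assert len(shares) == len(window_sizes) and abs(sum(shares) - 1.0) < 1e-6
    return {w: int(total_bytes * s) for w, s in zip(window_sizes, shares)}

# Example from the description: TRTLLM_WINDOW_SIZE_SHARES=0.4,0.6 with windows [512, 32768]
# gives 40% of the budget to window size 512 and 60% to window size 32768.
```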
Signed-off-by: eopXD <yuehtingc@nvidia.com>
2025-09-26 14:09:01 +08:00
HuiGao-NV
a9965d84e0
[None][chore] Report NCCL error message but not OOM when NCCL error happens ( #8009 )
...
Signed-off-by: Hui Gao <huig@nvidia.com>
2025-09-25 23:07:32 -07:00
peaceh-nv
55ce70060e
[ https://nvbugs/5451740 ][fix] Add DP padding back on SM120 ( #7965 )
...
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
2025-09-26 13:59:54 +08:00
Lucas Liebenwein
3a96d75a3c
[ https://nvbugs/5527956 ][fix] AutoDeploy: fix IMA due to outdated metadata ( #8002 )
...
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2025-09-25 22:05:55 -07:00
sunnyqgg
2e5850c28a
[TRTLLM-7330][feat] Eagle3 cuda graph support for the first draft model inference ( #7363 )
...
Signed-off-by: qgai <qgai@nvidia.com>
2025-09-26 11:28:05 +08:00
Chuang Zhu
f98fa0cf8b
[None][feat] Optimize kv cache transfer TEP ( #7613 )
...
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-09-25 20:20:04 -07:00
QI JUN
4c0f8482f1
[None][ci] Waive test_mm_encoder_standalone.py::test_multi_request_batch_chat[llava-v1.6-mistral-7b-hf] ( #8010 )
...
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-09-26 11:07:54 +08:00
Yuan Tong
fae83c387b
[ #6102 ][fix] support non-system python installation ( #7763 )
...
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
2025-09-26 10:16:15 +08:00
Enwei Zhu
d650320de4
[None][infra] Improve the failure message for accuracy test suite ( #7994 )
...
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-09-26 10:04:47 +08:00
Yiqing Yan
108248ece1
[TRTLLM-7999][infra] Add B300/GB300 single gpu test ( #7951 )
...
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
2025-09-26 09:59:11 +08:00
Yanchao Lu
7e2521a7f0
[None][chore] Some clean-ups for CUDA 13.0 dependencies ( #7979 )
...
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
2025-09-26 08:46:11 +08:00
dongfengy
1eb653146a
[ https://nvbugs/5525951 ][fix] Clarify that PP is not supported for GPTOSS ( #7911 )
...
Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>
2025-09-25 12:54:18 -07:00
QI JUN
1529a6f22d
[None][chore] extract weights loading related logic to model loader ( #7579 )
...
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-09-25 10:19:22 -07:00
Emma Qiao
2dc93c6371
[None][infra] Waive failed tests on main ( #8001 )
...
Signed-off-by: qqiao <qqiao@nvidia.com>
2025-09-25 08:13:39 -07:00
WeiHaocheng
4b0570a0d6
[None][doc] Add acknowledgements in scaffolding tech blog ( #7983 )
...
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
2025-09-25 08:07:13 -07:00