Tian Zheng
5efee01da1
[None][feat] Add Skip Softmax MLA kernels for Blackwell and Fix an accuracy bug of NVFP4 KV (#10813)
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
2026-01-26 16:46:33 +08:00
Enwei Zhu
72ef732bcf
[TRTLLM-10147][perf] Balanced random MoE workload generator for CuteDSL kernel UT, autotuner and layerwise benchmark (#10279)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2026-01-25 21:02:30 +08:00
Yanchao Lu
ae58a7ed20
[None][chore] Revert NVIDIA/TensorRT-LLM#10819 (#10870)
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-01-25 18:12:21 +08:00
Yanchao Lu
18f63dfcec
[None][chore] Reduce tedious logs (#10819)
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-01-25 18:12:21 +08:00
mpikulski
0f7ec033f7
[https://nvbugs/5791242][fix] workaround for flashinfer.sampling.sampling_from_logits (#10713)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-01-25 18:12:21 +08:00
Yukun He
25bdc30162
[https://nvbugs/5782112][fix] Cherry-pick #10633: Fix hanging issue for MNNVL Allreduce under PP (#10750)
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-01-25 18:12:21 +08:00
Yuxian Qiu
2b3bb2e9b0
[https://nvbugs/5811697][fix] Fix buffer reuse. (#10716)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-01-25 18:12:21 +08:00
Mike Iovine
f02948d956
[https://nvbugs/5803813][fix] Fix llama 4 min latency (#10724)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2026-01-25 18:12:21 +08:00
Yao Yao
6f07fa81d7
[TRTLLM-7738][feat] Adding implementation of KVCacheManagerV2 (#10736)
Signed-off-by: Yao Yao <lowsfer@users.noreply.github.com>
KVCacheManagerV2 is a new Python-based implementation of the KV cache manager, featuring a cleaner API, better abstraction, and better code quality without the accumulated legacy.
2026-01-24 04:48:39 -05:00
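To make the description above concrete, here is a minimal sketch of what a block-based KV cache manager interface can look like. This is an illustration only: the class and method names below are hypothetical, not the actual KVCacheManagerV2 API from #10736.

```python
# Hypothetical sketch of a block-based KV cache manager; names are NOT
# the actual TensorRT-LLM KVCacheManagerV2 API.
from dataclasses import dataclass, field


@dataclass
class KVCacheManagerSketch:
    """Hands out fixed-size KV cache blocks to requests and reclaims them."""
    num_blocks: int
    tokens_per_block: int
    _free: list = field(default_factory=list)
    _allocated: dict = field(default_factory=dict)

    def __post_init__(self):
        self._free = list(range(self.num_blocks))

    def allocate(self, request_id: int, num_tokens: int) -> list:
        """Reserve enough blocks to hold num_tokens; raise if exhausted."""
        needed = -(-num_tokens // self.tokens_per_block)  # ceiling division
        if needed > len(self._free):
            raise RuntimeError("KV cache exhausted")
        blocks = [self._free.pop() for _ in range(needed)]
        self._allocated.setdefault(request_id, []).extend(blocks)
        return blocks

    def free(self, request_id: int) -> None:
        """Return all blocks owned by a finished request to the pool."""
        self._free.extend(self._allocated.pop(request_id, []))
```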
Yuxian Qiu
9fcc93ea7b
[https://nvbugs/5829097][fix] Re-init TRTLLM sampler to use sample stream in multi-stream cases. (#10918)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-24 14:04:10 +08:00
Kaiyu Xie
da967d0bd7
[TRTLLM-10334][feat] Support overlap scheduler for disagg ctx instances (#10755)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2026-01-23 22:29:37 -05:00
jthomson04
cf88da7eca
[None][feat] KV Connector Support for MTP (#10932)
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Co-authored-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
2026-01-23 18:58:26 -05:00
Taylor Yeonbok Lee
1fbbb1f3cd
[None][feat] AutoDeploy: Enhance memory consumption for MoE fusion transform (#10772)
Signed-off-by: Taylor Yeonbok Lee <249374542+taylor-yb-lee@users.noreply.github.com>
2026-01-23 15:22:54 -08:00
Yan Chunwei
54768f3f2c
[None][chore] refine placement group in ray executor (#10235)
Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
2026-01-23 19:31:20 +08:00
Leslie Fang
31d04dfa12
[TRTLLM-9108][feat] Add test configurable moe module multi gpu (#10699)
Signed-off-by: leslie-fang25 <leslief@nvidia.com>
2026-01-23 10:16:58 +08:00
William Zhang
2146c23786
[#9306][refactor] Refactor AutoDeployConfig into LlmArgs (#10613)
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2026-01-22 16:02:49 -05:00
Grzegorz Kwasniewski
d8e6e22060
[https://nvbugs/5819002][fix] fix sharding tests (#10775)
Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
2026-01-22 20:02:48 +01:00
Yi Zhang
d43be7b65e
[None][fix] Avoid Double update for previous batch (#9888)
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
2026-01-22 13:15:06 -05:00
Shi Xiaowei
944c304bbb
[TRTLLM-9527][feat] Python transceiver components (step 2) (#10494)
Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
2026-01-22 10:14:50 -08:00
Venky
b3146d095d
[TRTC-122][feat] Eagle3 Specdec UX improvements (#10124)
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
2026-01-22 07:24:11 -08:00
Yan Chunwei
30ffa58b54
[https://nvbugs/5783876][fix] fix hmac launch (#10434)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2026-01-22 23:20:53 +08:00
Pengyun Lin
5e34112b27
[TRTLLM-10388][feat] Support logprobs for Completions API (#10809)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2026-01-22 21:25:24 +08:00
彭晋韬(jtao peng)
9beb971827
[None][fix] Update RMSNorm custom op plumbing (#10843)
Signed-off-by: jintaop <jintaop@nvidia.com>
2026-01-22 21:03:22 +08:00
Jiayu Chang
1dc49b266e
[https://nvbugs/5322131][feat] Multi-LoRA serving with CUDA Graph (#8279)
Signed-off-by: Jiayu Chang <jiayuc@nvidia.com>
2026-01-22 14:01:18 +01:00
tcherckez-nvidia
128d4ac5be
[None][chore] NVFP4 MoE - Move weights transformation to fusion phase… (#10803)
Signed-off-by: Tal Cherckez <tcherckez@nvl72070-T11.cm.cluster>
Signed-off-by: Tal Cherckez <tcherckez@nvl72039-T03.cm.cluster>
Signed-off-by: Tal Cherckez <tcherckez@nvl72098-T11.cm.cluster>
Signed-off-by: tcherckez-nvidia <127761168+tcherckez-nvidia@users.noreply.github.com>
Co-authored-by: Tal Cherckez <tcherckez@nvl72070-T11.cm.cluster>
Co-authored-by: Tal Cherckez <tcherckez@nvl72039-T03.cm.cluster>
Co-authored-by: Tal Cherckez <tcherckez@nvl72098-T11.cm.cluster>
2026-01-22 13:08:05 +02:00
Yiqing Yan
0243abee22
[None][chore] Bump version to 1.3.0rc1 (#10923)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
2026-01-22 18:45:40 +08:00
tcherckez-nvidia
6e72aff866
[#10838][fix] Add missing dist strategy param. fix typo for ad_logger… (#10892)
Signed-off-by: Tal Cherckez <127761168+tcherckez-nvidia@users.noreply.github.com>
2026-01-22 10:38:31 +02:00
Pengbo Wang
9462d90ec7
[None][feat] Add KV cache cleanup (#7439)
Signed-off-by: Pengbo Wang <221450789+pengbowang-nv@users.noreply.github.com>
2026-01-22 15:14:17 +08:00
shuyixiong
fd2af8d58a
[TRTLLM-9771][feat] Support partial update weight for fp8 (#10456)
Signed-off-by: Shuyi Xiong <219646547+shuyixiong@users.noreply.github.com>
Signed-off-by: shuyixiong <219646547+shuyixiong@users.noreply.github.com>
2026-01-22 14:46:05 +08:00
Enwei Zhu
be4a431ffd
[TRTLLM-10154][feat] Enable guided decoding with reasoning parsers (#10890)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2026-01-22 14:14:28 +08:00
Taylor Yeonbok Lee
895bb94b3d
[#8241][feat] Support model_kwargs for pytorch backend (#10351)
Signed-off-by: Taylor Yeonbok Lee <249374542+taylor-yb-lee@users.noreply.github.com>
2026-01-21 20:51:38 -08:00
Yechan Kim
70caa779a4
[None][feat] K-EXAONE MTP support (#10796)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
2026-01-22 13:43:00 +09:00
Lizhi Zhou
f3a41c8d94
[TRTLLM-10059][feat] Use global unique id as disagg request id (#10187)
Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
2026-01-21 22:52:34 -05:00
Daniil
0434db5bf7
[None][feat] GLM-4.5-Air support (#10653)
Signed-off-by: Daniil Kulko <kulkodaniil@gmail.com>
2026-01-22 11:42:09 +08:00
dongxuy04
635cbf01ba
[https://nvbugs/5816267][fix] Remove weight tensor holder to release memory earlier (#10876)
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
2026-01-21 16:42:52 -08:00
Guiju Zhang
8cf8fbbe16
[TRTLLM-10325][feat] Refactor speculative decoding workers (#10768)
Signed-off-by: Guiju Zhang <7135567+cascade812@users.noreply.github.com>
2026-01-21 13:05:29 -05:00
xxi
9feebb3a27
[None][chore] switch to ConfigurableMoE as the default path (#10792)
Signed-off-by: xxi <xxi@nvidia.com>
2026-01-21 15:57:38 +08:00
Yukun He
a4152c80f6
[https://nvbugs/5814253][fix] unwaive test_autotuner_distributed_strategy tests (#10793)
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2026-01-21 15:37:11 +08:00
Yukun He
d60d6ff6fd
[None][fix] Cherry-pick #10715: Disable short profile for tunable ops with MERGE strategy (#10844)
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2026-01-21 13:53:04 +08:00
Yibin Li
9116dfbacd
[https://nvbugs/5775021][fix] Replace pickle.load with restricted Unpickler (#10622)
Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>
2026-01-21 11:42:54 +08:00
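The entry above replaces raw pickle.load calls with a restricted Unpickler. The standard pattern (described in the Python pickle module docs) subclasses pickle.Unpickler and overrides find_class to allow-list what may be reconstructed; the allow-list below is illustrative, not the one used in #10622.

```python
# General restricted-Unpickler pattern; the allow-list is an example only.
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    # Only these (module, name) pairs may be reconstructed during load.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")


def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads that rejects unexpected globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```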
Yanchao Lu
ccf4d79c6c
[None][chore] Revert NVIDIA/TensorRT-LLM#10847 (#10869)
2026-01-21 11:08:40 +08:00
Simeng Liu
3c8ed19440
[https://nvbugs/5670108][fix] Fix overlap scheduler race condition in… (#10610)
Signed-off-by: SimengLiu-nv <simengl@nvidia.com>
2026-01-20 10:56:56 -08:00
Izzy Putterman
864b61cadd
[None][feat] Speculative One Model: FlashInfer sampling (#10284)
Signed-off-by: Izzy Putterman <iputterman@nvidia.com>
2026-01-20 12:56:43 -05:00
jthomson04
2db3d7eeba
[None][chore] Async Transfer Manager (#9891)
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
2026-01-20 12:12:47 -05:00
Yanchao Lu
ae8f74b620
[None][chore] Reduce tedious logs (#10847)
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
2026-01-20 22:56:24 +08:00
Grzegorz Kwasniewski
eb326073d8
[TRTLLM-10785][feat] Fix sharding dashboard errors (#10786)
Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
2026-01-20 09:25:36 +01:00
Yi Zhang
58311b2345
[None][fix] Remove unused params in attn (#10652)
Signed-off-by: yizhang-nv <187001205+yizhang-nv@users.noreply.github.com>
2026-01-20 03:08:59 -05:00
benzh-2025
4c8468c5d3
[None][fix] default disable gemm+allreduce fusion (#10656)
2026-01-20 12:31:17 +08:00
Bo Li
f3a985ce27
[TRTLLM-10296][fix] Fix the potential misaligned access due to vectorized ld/st instructions in NVLinkOneSided A2A. (#10539)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
2026-01-20 11:08:04 +08:00
Liao Lanyu
dbb858ae0c
[TRTLLM-10029][scheduler] Re-implement MicroBatchScheduler and CapacityScheduler in Python (#10273)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Lanyu Liao <lancelly@users.noreply.github.com>
Signed-off-by: Lance Liao <108499334+lancelly@users.noreply.github.com>
Co-authored-by: junq <22017000+QiJune@users.noreply.github.com>
Co-authored-by: Lanyu Liao <lancelly@users.noreply.github.com>
2026-01-20 10:31:13 +08:00
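For the scheduler reimplementation above, a capacity scheduler in this style typically admits waiting requests only while the KV cache can hold them. A minimal sketch under assumed names (schedule_capacity and blocks_needed are hypothetical, not the #10273 code):

```python
# Illustrative FCFS capacity-scheduling step; not the actual TensorRT-LLM code.
from collections import deque


def schedule_capacity(waiting: deque, active: list, free_blocks: int,
                      blocks_needed) -> list:
    """Admit waiting requests while the KV cache has room for them."""
    admitted = []
    while waiting:
        request = waiting[0]
        cost = blocks_needed(request)  # KV cache blocks this request requires
        if cost > free_blocks:
            break  # FCFS: never skip ahead of the head-of-line request
        waiting.popleft()
        free_blocks -= cost
        admitted.append(request)
    active.extend(admitted)
    return admitted
```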
SamareshSingh
64ff5cac52
[None][chore] docs: clarify LoRA is not supported with --use_fp8_rowwise in Fp8RowwiseAttention (see #2603) (#10320)
Signed-off-by: Samaresh Kumar Singh <ssam3003@gmail.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: Kanghwan <861393+karljang@users.noreply.github.com>
Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2026-01-19 04:38:00 -05:00
Lucas Liebenwein
9879400479
[#10642][feat] AutoDeploy: optimized canonicalize_graph utilities [1/2] (#10675)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2026-01-18 13:42:30 -05:00
Eran Geva
4d2916d683
[#10688][fix] AutoDeploy Fix CUDA graph batch sizes exceeding max_batch_size (#10687)
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
2026-01-18 13:31:01 -05:00
Eran Geva
a11f0dbd61
[#10696][fix] AutoDeploy prevent torch.export from specializing batch dimension when max_batch_size=1 (#10697)
Signed-off-by: Eran Geva <19514940+MrGeva@users.noreply.github.com>
2026-01-18 10:42:49 +02:00
Grzegorz Kwasniewski
7bf4dd9f63
[TRTLLM-10318][feat] Fixing Nemotron sharding: support for sharding buffers (#10319)
Signed-off-by: greg-kwasniewski1 <213329731+greg-kwasniewski1@users.noreply.github.com>
Signed-off-by: Lucas <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: Grzegorz Kwasniewski <213329731+greg-kwasniewski1@users.noreply.github.com>
Co-authored-by: Lucas <11156568+lucaslie@users.noreply.github.com>
2026-01-17 04:02:06 -05:00
Yuxian Qiu
cef67b4f8d
[None][fix] convert to CUDA tensor before calling _resmooth_kernel. (#10770)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-17 16:18:34 +08:00
Chenghao Zhang
0b748d5bba
[None][chore] update flashinfer to 0.6.0 (#10522)
Signed-off-by: Chenghao Zhang <211069071+nvchenghaoz@users.noreply.github.com>
2026-01-16 16:22:06 -05:00
Chenghao Zhang
b6acd96616
[None][fix] AutoDeploy: Fix the nvfp4 fused_moe (#10727)
Signed-off-by: nvchenghaoz <211069071+nvchenghaoz@users.noreply.github.com>
2026-01-16 12:04:40 -08:00
Stefan Niebler
0cfd08745c
[TRTLLM-9735][feat] Add processed logprobs functionality to TorchSampler (#9675)
Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Co-authored-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Co-authored-by: Erin Ho <14718778+hchings@users.noreply.github.com>
2026-01-16 10:52:41 -08:00
Wanli Jiang
722978b837
[TRTLLM-10305][feat] Support customized seq len larger than model config (#10600)
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
2026-01-16 16:07:36 +08:00
dongfengy
6dfb8d7084
[None][fix] Fix Piecewise Cuda Graph for GPTOSS (#10631)
Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>
2026-01-16 15:47:34 +08:00
Necofish
03cdf5804f
[None][fix] impl fused triton kernel for e8m0 resmooth to reduce memory footprint (#10327)
Signed-off-by: Nekofish-L <liuxiangyang@mail.ustc.edu.cn>
Co-authored-by: Kanghwan <861393+karljang@users.noreply.github.com>
2026-01-15 22:13:18 -08:00
Yukun He
f001c4946d
[https://nvbugs/5782112][fix] Fix hanging issue for MNNVL Allreduce under PP (#10633)
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2026-01-16 13:03:36 +08:00
Enwei Zhu
7b8b9ccbaf
[https://nvbugs/5669671][fix] Support GuidedDecoder with sharded logits (#10698)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2026-01-16 11:04:26 +08:00
Lucas Liebenwein
49c6f73554
[None][bug] AutoDeploy: fix regression in kv cache resize memory estimation (#10726)
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
2026-01-16 09:52:03 +08:00
heyuhhh
dfac07c045
[None][feat] Support to export data in trtllm-eval (#10075)
Signed-off-by: yuhangh <58161490+heyuhhh@users.noreply.github.com>
2026-01-15 23:27:08 +08:00
Lizhi Zhou
93db0d5e18
[TRTLLM-9942][feat] new request states and kvcache transceiver APIs in generation-first disagg (#10406)
Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
2026-01-15 19:18:21 +08:00
Lizhi Zhou
ff277b591e
[https://nvbugs/5791830][fix] fix pp loop hang caused by i-sending new requests (#10665)
Signed-off-by: Lizhi Zhou <1432185+reasonsolo@users.noreply.github.com>
2026-01-15 16:33:55 +08:00
Yiqing Yan
f4ace99218
[None][chore] Bump version to 1.3.0rc0 (#10681)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
2026-01-15 13:55:44 +08:00
Anish Shanbhag
faa80e73fd
[None][feat] Auto download speculative models from HF for pytorch backend, add speculative_model field alias (#10099)
Signed-off-by: Anish Shanbhag <ashanbhag@nvidia.com>
2026-01-14 21:06:07 -08:00
Void
f7de285a82
[None][fix] add quantization check for DeepEP LL low precision combine in new moe comm api (#10072)
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
2026-01-14 22:15:29 -05:00
彭晋韬(jtao peng)
211c44b951
[None][feat] Adding torch ext API for FusedAddRMSNormQuant kernel (#9905)
Signed-off-by: jintaop <jintaop@nvidia.com>
2026-01-15 07:29:15 +08:00
Tzu-Ling Kan
c99faaed06
[#9760][fix] Use RequestError for validation errors to prevent engine shutdown (#9761)
Signed-off-by: tzulingk@nvidia.com <tzulingk@nvidia.com>
2026-01-14 10:22:36 -05:00
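The fix above relies on a common engine-loop pattern: raise a dedicated, recoverable error type for invalid requests so one bad request is rejected without tearing down the whole engine. A hedged sketch of the pattern (names are illustrative, not the #9761 code):

```python
# Illustrative pattern only: per-request validation failures become a
# recoverable error instead of crashing the serving loop.
class RequestError(ValueError):
    """Invalid request; reject this request but keep the engine alive."""


def handle_request(request: dict, engine_step) -> dict:
    try:
        return engine_step(request)
    except RequestError as e:
        # Report the failure to the client; the engine keeps serving others.
        return {"id": request.get("id"), "error": str(e)}
```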
Emma Qiao
01083b56bf
[TRTLLM-9849][infra] Update dependencies to 25.12 (#9818)
Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Emma Qiao <qqiao@nvidia.com>
Signed-off-by: xxi <xxi@nvidia.com>
Signed-off-by: xxi <95731198+xxi-nv@users.noreply.github.com>
Co-authored-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Co-authored-by: xxi <xxi@nvidia.com>
Co-authored-by: xxi <95731198+xxi-nv@users.noreply.github.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
2026-01-14 21:54:04 +08:00
HuiGao-NV
b10704428d
[https://nvbugs/5787566][fix] Only keep a limited number of performance statistic data (#10569)
Signed-off-by: Hui Gao <huig@nvidia.com>
2026-01-14 07:53:01 -05:00
Kyungmin Lee
25148d3fee
[None][feat] Support new Transformers RoPE configuration format (#10636)
Signed-off-by: lkm2835 <lkm2835@gmail.com>
2026-01-14 19:41:27 +09:00
xxi
e9817461ba
[None][chore] improve the readability of log for cutlass can only sup… (#10630)
Signed-off-by: xxi <xxi@nvidia.com>
2026-01-14 05:33:45 -05:00
xxi
d8862505b9
[None][chore] enable EPLB for DEEPGEMM (#10617)
Signed-off-by: xxi <xxi@nvidia.com>
2026-01-14 05:28:08 -05:00
jmydurant
e7882d5c74
[None][feat] MiniMax M2 support (#10532)
Signed-off-by: Mingyang Jiang <13463932+jmydurant@users.noreply.github.com>
2026-01-14 17:38:58 +08:00
mpikulski
052c36ddd2
[TRTLLM-9522][feat] support image_embeds in OpenAI API (#9715)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
2026-01-14 10:31:03 +01:00
Zhenhuan Chen
287f6c2e0f
[None][test] add log_samples and output_path for trtllm_eval (#10629)
Signed-off-by: Zhenhuan Chen <zhenhuanc@nvidia.com>
2026-01-14 16:01:38 +08:00
Yukun He
15281de799
[None][fix] Reduce host overhead for unified nvfp4 gemm tuning path. (#10503)
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2026-01-14 14:26:18 +08:00
Yuxian Qiu
39cefd6125
[None][refactor] Unify the usage of MPIDist and TorchDist. (#10380)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-14 14:05:47 +08:00
Leslie Fang
795e690bca
[https://nvbugs/5753788][chore] Padding empty chunk for configurable moe (#10451)
Signed-off-by: leslie-fang25 <leslief@nvidia.com>
2026-01-14 10:42:17 +08:00
Yuxian Qiu
d3f4fbb742
[None][fix] Avoid write-write race for async pp send. (#10488)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-14 09:39:36 +08:00
Yuxian Qiu
2acd03030a
[https://nvbugs/5781589][fix] Implement pp skip forward for all spec workers. (#10578)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-14 09:36:35 +08:00
Balaram Buddharaju
ccdfa43a6e
[https://nvbugs/5791900][fix] Fix HelixCpMnnvlMemory init with PP (#10533)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2026-01-13 15:48:42 -05:00
Frida Hou
bf16fbd86c
[#9283][feat] AutoDeploy: separate rms pattern detection from fusion (#9969)
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
2026-01-13 14:57:27 -05:00
Neta Zmora
7b7f1e2ba1
[None][feat] AutoDeploy: refactor memory usage logging (#8505)
Signed-off-by: Neta Zmora <96238833+nzmora-nvidia@users.noreply.github.com>
Signed-off-by: Gal Hubara-Agam <96368689+galagam@users.noreply.github.com>
Co-authored-by: Gal Hubara-Agam <96368689+galagam@users.noreply.github.com>
2026-01-13 21:03:09 +02:00
benzh-2025
6df2c8a074
[None][feat] add fp4 gemm + allreduce (#9729)
Signed-off-by: benzh
Signed-off-by: benzh-2025
2026-01-13 21:11:13 +08:00
Tailing Yuan
38296a472b
[None][feat] Layer-wise benchmarks: make model init more general and support weights loading (#10562)
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
2026-01-13 19:17:03 +08:00
Void
7d16f3a28b
[https://nvbugs/5788127][fix] Use uint64_t as the dtype of lamport_buffer_size to avoid overflow (#10499)
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
2026-01-13 17:16:22 +08:00
Guoming Zhang
bdaee87895
[TRTLLM-10060][feat] Enable attention dp for Nemotron Super v3. (#10347)
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
2026-01-13 17:13:55 +08:00
JunyiXu-nv
e291a834db
[TRTLLM-8462][feat] Support GET/DELETE v1/responses/{response_id} (#9937)
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
2026-01-13 03:57:14 -05:00
Yuxian Qiu
04b112651b
[None][feat] Hang detection for executor loop and worker. (#10480)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2026-01-13 02:34:32 -05:00
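Hang detection as in the entry above is commonly built as a heartbeat watchdog: the executor loop reports progress each iteration, and a background thread raises an alarm when heartbeats stop. A minimal sketch under assumed names (HangDetector is hypothetical, not the #10480 implementation):

```python
# Illustrative heartbeat watchdog; not the actual TensorRT-LLM code.
import threading
import time


class HangDetector:
    def __init__(self, timeout_s: float = 60.0):
        self.timeout_s = timeout_s
        self._last_beat = time.monotonic()
        threading.Thread(target=self._watch, daemon=True).start()

    def heartbeat(self) -> None:
        """Call once per executor-loop iteration to prove liveness."""
        self._last_beat = time.monotonic()

    def _watch(self) -> None:
        while True:
            time.sleep(self.timeout_s / 4)
            if time.monotonic() - self._last_beat > self.timeout_s:
                print("executor loop appears hung; consider dumping stacks")
```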
xxi
ba1037ca4a
[https://nvbugs/5762336][fix] support to parse the keyword modules_to_not_convert of the HF model config (#10527)
Signed-off-by: xxi <xxi@nvidia.com>
2026-01-12 20:21:01 -05:00
Iman Tabrizian
48b09e5a25
[https://nvbugs/5689235][fix] Fix cancellation+chunked prefill+disagg (#10111)
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
2026-01-12 18:23:26 -05:00
Gal Hubara-Agam
18a33764b5
[None][chore] Print correct backend name in benchmark report (#10597)
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
2026-01-12 14:46:00 -05:00
Xianjie Qiao
3a9a00b544
[None][feat] Add ExpertStatistic and DUMMY_ALLREDUCE for configurable_moe (#10401)
Signed-off-by: Xianjie <5410381+qiaoxj07@users.noreply.github.com>
2026-01-12 14:10:31 +08:00
Yechan Kim
8e0d20d901
[TRTLLM-10195][feat] K-EXAONE support (#10355)
Signed-off-by: Jaedeok Kim <jaedeokk@nvidia.com>
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
Co-authored-by: Jaedeok Kim <jaedeokk@nvidia.com>
2026-01-12 00:29:51 +09:00