Yuxian Qiu | b85c447ceb | 2026-01-08 10:32:50 +08:00
[https://nvbugs/5784543][fix] Setup dist before using autotuner. (#10491)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>

dongfengy | afc533193d | 2026-01-04 08:57:44 -05:00
[None][feat] Support nvfp4 for gptoss (#8956)
Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>

ZhichenJiang | 46e4af5688 | 2025-12-25 09:04:20 -05:00
[TRTLLM-9831][perf] Enable 2CTA with autotune for CuteDSL MoE and Grouped GEMM optimizations (#10201)
Signed-off-by: zhichen jiang <zhichenj@NVIDIA.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Balaram Buddharaju | 8c1cfc872b | 2025-12-23 18:14:30 -08:00
[TRTLLM-9493][feat] Custom AllToAll for helix parallelism (#9986)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

Balaram Buddharaju | 5266475014 | 2025-12-21 15:21:52 -05:00
[None][feat] Cudagraph updates for helix parallelism (#10141)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

xxi | 5ae154022a | 2025-12-21 08:14:50 -05:00
[TRTLLM-9872][fix] clear the failed test at CI when enable_configurab… (#10067)
Signed-off-by: xxi <xxi@nvidia.com>

Yuxian Qiu | bec864a78c | 2025-12-18 13:29:52 +08:00
[None][fix] avoid ID conversion for non enable_configurable_moe cases. (#10003)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>

Wanli Jiang | 8af51211c1 | 2025-12-16 12:41:17 +08:00
[FMDL-1222][feat] Support weight and weight_scale padding for NVFP4 MoE cutlass (#9358)
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>

xxi | f5696df285 | 2025-12-15 10:47:15 +08:00
[TRTLLM-8961][feat] ConfigurableMoE support DeepGemm (#9858)

xxi | 488d38f88d | 2025-12-12 00:22:13 +08:00
[TRTLLM-8959][feat] ConfigurableMoE support CUTLASS (#9772)

xxi | 8e27ce7084 | 2025-12-08 10:19:40 +08:00
[TRTLLM-9603][feat] Enable ConfigurableMoE test in the CI (#9645)

Enwei Zhu | 7cd5a67e25 | 2025-12-05 22:08:52 -08:00
[TRTLLM-9372][feat] Enable CuteDSL MoE with Large EP (#9592)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Jin Li | e5d4305c04 | 2025-12-04 18:17:24 +08:00
[https://nvbugs/5467531][fix] Unwaive fused_moe all to all test with … (#9617)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>

Wei-Ming Chen | d9fba85396 | 2025-12-03 19:47:13 +02:00
[OMNIML-2932][feat] nvfp4 awq support (#8698)
Signed-off-by: weimingc <17592131+meenchen@users.noreply.github.com>

brb-nv | 43f6ad7813 | 2025-12-03 15:13:59 +08:00
[https://nvbugs/5708475][fix] Fix e2e eval accuracy for helix parallelism (#9647)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

xxi | c12e67bb66 | 2025-12-01 08:37:07 +08:00
[TRTLLM-8958][TRTLLM-8960][feat] Create ConfigurableMoE and support TRTLLMGenFusedMoE as a backend (#9486)

brb-nv | b77f4ffe54 | 2025-11-29 15:17:30 -08:00
[TRTLLM-5971][feat] Integrate helix parallelism (#9342)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

dominicshanshan | 6345074686 | 2025-11-29 21:48:48 +08:00
[None][chore] Weekly mass integration of release/1.1 -- rebase (#9522)
Signed-off-by: yunruis <205571022+yunruis@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Mike Iovine <miovine@nvidia.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
Signed-off-by: qgai <qgai@nvidia.com>
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
Signed-off-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
Signed-off-by: Simeng Liu <simengl@nvidia.com>
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Signed-off-by: Vincent Zhang <vinczhang@nvidia.com>
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Signed-off-by: Michal Guzek <mguzek@nvidia.com>
Signed-off-by: Michal Guzek <moraxu@users.noreply.github.com>
Signed-off-by: Chang Liu (Enterprise Products) <9713593+chang-l@users.noreply.github.com>
Signed-off-by: leslie-fang25 <leslief@nvidia.com>
Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Co-authored-by: yunruis <205571022+yunruis@users.noreply.github.com>
Co-authored-by: sunnyqgg <159101675+sunnyqgg@users.noreply.github.com>
Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com>
Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Co-authored-by: JunyiXu-nv <219237550+JunyiXu-nv@users.noreply.github.com>
Co-authored-by: Simeng Liu <109828133+SimengLiu-nv@users.noreply.github.com>
Co-authored-by: Guoming Zhang <137257613+nv-guomingz@users.noreply.github.com>
Co-authored-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Co-authored-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Co-authored-by: Vincent Zhang <vcheungyi@163.com>
Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com>
Co-authored-by: Michal Guzek <moraxu@users.noreply.github.com>
Co-authored-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Co-authored-by: Leslie Fang <leslief@nvidia.com>
Co-authored-by: Shunkangz <182541032+Shunkangz@users.noreply.github.com>
Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>

Bo Li | 19f3f4e520 | 2025-11-28 10:45:22 +08:00
[https://nvbugs/5637037][chore] Update waive lists. (#9386)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Bo Li | 62b771877c | 2025-11-27 21:09:29 +08:00
[TRTLLM-9389][chore] Refactor AlltoallMethodType. (#9388)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>

Enwei Zhu | 13fbd4366a | 2025-11-21 14:03:38 -08:00
[TRTLLM-9370][feat] Integration of CuteDSL NVFP4 grouped GEMM (Part 2: SwiGLU Fusion and Finalize Fusion) (#9288)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Anthony Chang | 86cfb3ea7e | 2025-11-17 10:04:29 +08:00
[None][feat] Update TRTLLM MoE cubins; reduce mxfp4 weight padding requirement; tighten TMA bound (#9025)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

xiweny | ce23e24123 | 2025-11-04 16:42:31 +08:00
[https://nvbugs/5565565][fix] Remove waiver (#8450)
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>

Matthias Jouanneaux | d0f107e4dd | 2025-11-04 09:06:58 +08:00
[TRTLLM-5966][feat] Helix: add full MLA support (#8104)
Signed-off-by: Matthias Jouanneaux <mjoux@nvidia.com>

Anthony Chang | f666ad2f6b | 2025-10-30 13:11:25 +01:00
[None][feat] Autotuner can iterate through all tactics for test purposes (#8663)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

Kaiyu Xie | 227c288441 | 2025-10-29 07:56:48 +08:00
[TRTLLM-8827][feat] Enable low precision alltoall for Cutlass and TRTLLMGen backends (#8675)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

Kaiyu Xie | c9b08790c2 | 2025-10-27 21:39:44 +08:00
[None][test] Add MNNVL AlltoAll tests to pre-merge (#8601)

Anthony Chang | 8a3b870e09 | 2025-10-23 09:14:18 +08:00
[None][feat] Update TRTLLM MoE MxFP4 cubins; autotune tileN (#8156)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>

Min Yu | 0a0159fdd8 | 2025-10-16 11:07:48 +08:00
[https://nvbugs/5378031][feat] W4A8 AWQ MoE supports Per Expert Pre-quant Scale Factor for PyT backend (#7286)
Signed-off-by: Min Yu <171526537+yumin066@users.noreply.github.com>

mpikulski | 93a4b7f1b6 | 2025-10-15 17:09:30 +09:00
[None][chore] update torch_dtype -> dtype in 'transformers' (#8263)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>

Emma Qiao | ccd949ea5b | 2025-10-09 22:46:07 +08:00
[None][infra] Waive failed tests on main 10/09 (#8230)
Signed-off-by: qqiao <qqiao@nvidia.com>

xxi | e98616512f | 2025-10-07 22:54:34 -07:00
[https://nvbugs/5550283][fix] update test case to the latest MoE API (#8165)

sychen52 | ba8abeab10 | 2025-10-01 02:39:33 -04:00
[OMNIML-2336][feat] add W4A8 NVFP4 FP8 fused moe (#7968)
Signed-off-by: Shiyang Chen <shiychen@nvidia.com>

Emma Qiao | b1e3fef8aa | 2025-10-01 10:12:10 +08:00
[None][infra] Skip failed tests in post-merge for main (#8102)
Signed-off-by: qqiao <qqiao@nvidia.com>

brb-nv | 84aa3c981e | 2025-09-30 20:05:42 -04:00
[None][chore] Waive failing MNNVL alltoall multi-gpu test (#8106)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>

Kaiyu Xie | b0cb9ca50e | 2025-09-29 23:12:24 -04:00
[None][test] Add MNNVL AlltoAll tests to pre-merge (#7466)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>
Co-authored-by: Zongfei Jing <20381269+zongfeijing@users.noreply.github.com>

xxi | 57ff5f4c0d | 2025-09-25 07:53:42 -07:00
[None][fix] fix a bug when wideEp uses DeepEP with num_chunks > 1 (#7954)
Signed-off-by: xxi <xxi@nvidia.com>

Yuxian Qiu | 48fda86c56 | 2025-09-24 23:03:16 +08:00
[None][fix] Fix dummy load format for DeepSeek. (#7874)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>

xxi | d471655242 | 2025-09-23 08:41:38 +08:00
[TRTLLM-7831][feat] Cherry-pick from #7423: support fp8 block wide ep (#7712)

Yechan Kim | f77aca9f2c | 2025-09-22 03:40:02 -07:00
[TRTLLM-7385][feat] Optimize Qwen2/2.5-VL performance (#7250)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

xiweny | c076a02b38 | 2025-09-16 09:56:18 +08:00
[TRTLLM-4629][feat] Add support for CUDA13 and sm103 devices (#7568)
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Signed-off-by: Daniel Stokes <dastokes@nvidia.com>
Signed-off-by: Zhanrui Sun <zhanruis@nvidia.com>
Signed-off-by: Xiwen Yu <xiweny@nvidia.com>
Signed-off-by: Jiagan Cheng <jiaganc@nvidia.com>
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Signed-off-by: Bo Deng <deemod@nvidia.com>
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: xiweny <13230610+VALLIS-NERIA@users.noreply.github.com>
Co-authored-by: Tian Zheng <29906817+Tom-Zheng@users.noreply.github.com>
Co-authored-by: Daniel Stokes <dastokes@nvidia.com>
Co-authored-by: Zhanrui Sun <zhanruis@nvidia.com>
Co-authored-by: Jiagan Cheng <jiaganc@nvidia.com>
Co-authored-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Bo Deng <deemod@nvidia.com>
Co-authored-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>

Jin Li | d49374bc45 | 2025-09-09 12:18:56 -04:00
[TRTLLM-7408][feat] Wrap MOE with custom op. (#7277)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>

amitz-nv | a1e03af0f4 | 2025-08-25 10:37:40 +03:00
[TRTLLM-7346][fix] Improve performance of PyTorchModelEngine._get_lora_params_from_requests (#7033)
Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>

Emma Qiao | f84dd64250 | 2025-08-20 06:33:44 -04:00
[None][infra] Waive failed tests on main branch 8/20 (#7092)
Signed-off-by: qqiao <qqiao@nvidia.com>

Robin Kobus | b95cab2a7c | 2025-08-20 05:42:22 -04:00
[None][ci] move unittests to sub-directories (#6635)
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

Yi Zhang | a15af879ec | 2025-08-19 09:58:44 +08:00
[None][refactor] Refactor Torch Compile Backend, MoeLoadBalancer and warmup Logic (#6615)
Signed-off-by: yizhang-nv <187001205+yizhang-nv@users.noreply.github.com>
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>

Yuening Li | 1f8ae2b2db | 2025-08-15 17:15:49 -04:00
[TRTLLM-5863][feat] Support MoE INT8 Weight-Only-Quantization in PyTorch Workflow (#6629)
Signed-off-by: Yuening Li <62227368+yueningl@users.noreply.github.com>

dongfengy | 0ad0b967bb | 2025-08-15 16:58:42 -04:00
[None][fix] Make TP work for Triton MOE (in addition to the EP we are using) (#6722)
Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>

NVJiangShao | a700646132 | 2025-08-14 13:35:37 +08:00
[None][fix] Add FP4 all2all unit test and fix a bug in the WideEPMoE module (#6784)
Signed-off-by: Jiang Shao <91270701+StudyingShao@users.noreply.github.com>

Anthony Chang | 2198587b35 | 2025-08-13 21:24:40 +08:00
[https://nvbugs/5378031][feat] Hopper W4A8 MoE supports ModelOpt ckpt for PyT backend (#6200)
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>