Commit Graph

1812 Commits

Author SHA1 Message Date
PAN
8ad6e9d69b
Merge branch 'main' into fix/internvl_exmaple_1
2025-07-14 23:17:48 +08:00
brb-nv
f5f5be9e94
enh: Bidirectional mask with multiple images for Gemma3 (#5976)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-07-14 22:39:18 +08:00
brb-nv
1a2d96919c
feat: Update Gemma3 Vision Encoder (#5973)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-07-14 22:38:10 +08:00
Alex Zhang
6c30d78b78
[TRTLLM-5653][infra] Run docs build only if PR contains only doc changes (#5184)
Signed-off-by: Alex Zhang <13271672+zhanga5@users.noreply.github.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Alex Zhang <13271672+zhanga5@users.noreply.github.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
2025-07-14 21:40:33 +08:00
Yechan Kim
63139fdcff
feat: EXAONE4.0 support (#5696)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
2025-07-14 22:28:10 +09:00
Clay
dbf29184dc
fix #4974: A thread leak issue in scaffolding unittest (#5020)
Signed-off-by: Clay <ccs96307@gmail.com>
2025-07-14 20:22:03 +09:00
Kaiyu Xie
aa97fbb2ad
[Nvbug/5383670] fix: switch test case to non-fp4 ckpt for more GPU coverage (#5882)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
2025-07-14 20:21:46 +09:00
Yiqing Yan
c720d7f779
Waive L0 test (#6002)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
2025-07-14 19:55:34 +09:00
Zhanrui Sun
3a0ef73414
infra: [TRTLLM-6242] install cuda-toolkit to fix sanity check (#5709)
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
2025-07-14 18:52:13 +09:00
Zhenhuan Chen
30608a5e6d
[https://nvbugs/5355316] fix: update torch.compile option to fix triton store_cubin error (#5865)
Signed-off-by: Zhenhuan Chen <chenzhh3671@gmail.com>
2025-07-14 17:17:30 +08:00
Robin Kobus
5a61d64b5b
[nvbugs/5345391] fix: chunked prefill + overlap scheduling (#5761)
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Pengyun Lin
3fcaa8a310
[nvbug 5327706][fix] fix mgmn postprocess error (#5835)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
ruodil
347520494b
test: remove duplicate cases in perf sanity test (#5870)
Signed-off-by: ruodil <200874449+ruodil@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Yi Zhang
966e41a900
doc: Update gb200 doc (#5840)
Signed-off-by: yizhan <187001205+yizhang-nv@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Bo Li
6d79559f3e
fix: [https://nvbugs/5351130][https://nvbugs/5333654] Unwaive for bug 5351130 and 5333654. (#5821)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Bo Li
2991cf4b80
fix: [https://nvbugspro.nvidia.com/bug/5345215] Unwaive for bug 5345215. (#5606)
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Perkz Zheng
4a0b7a0cf1
[https://nvbugspro.nvidia.com/bug/5355054] fallback to cubins for fp8 fmha kernels on Ada. (#5779)
Signed-off-by: Qidi Sang <200703406+qsang-nv@users.noreply.github.com>
Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
Co-authored-by: qsang-nv <200703406+qsang-nv@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Yan Chunwei
3e1fd983c3
[nvbug5266240] chore: unwaive test_llm_with_dummy_weights (#5744)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Pengyun Lin
388b4919b8
[nvbug 5304752][fix] enhance _check_arguments to filter illegal requests for pytorch backend (#5541)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Martin Marciniszyn Mehringer
c321fb8f81
Fix docker cache mount (#5763)
Signed-off-by: Martin Marciniszyn Mehringer <11665257+MartinMarciniszyn@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Pengyun Lin
6992616c1f
[nvbug 5004744][fix] rewrite completion API to avoid repetitive tokens (#5201)
Signed-off-by: Pengyun Lin <81065165+LinPoly@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
ruodil
278a1a7df3
test: fix some test failure and add llama_nemotron models in perf sanity test, add more torch cases (#5693)
Signed-off-by: ruodil <200874449+ruodil@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Iman Tabrizian
c8874a7f94
[nvbug/5337601][fix] Fix disagg + speculative decoding (#5558)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
Co-authored-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Yi Zhang
9cc4e5d50e
[nvbugs/5336321][fix] Enable attention dp = False test case, Fix TRTLLM Gen Moe workspace allocation (#5463)
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
Signed-off-by: yizhan <187001205+yizhang-nv@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Yi Zhang
e5e87ecf34
test: Move some of the test from post merge to pre-merge, update dgx b200 test case (#5640)
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
brb-nv
869e88304a
[nvbug/5341178][fix] Fix OOM in Llama 4 accuracy test (#5735)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
Dom Brown
afaa388bee
[TRTLLM-6100] fix: Nvbug 5356427: autotuned TRTLLM Gen fp8 block scale MoE illegal memory access (#5676)
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
2025-07-14 17:17:30 +08:00
WeiHaocheng
4d8920982a
fix: set allreduce strategy to model config (#5955)
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
2025-07-14 17:59:11 +09:00
dominicshanshan
c9e7f831dc
Breaking change: perf: [TRTLLM-4662] Enable cuda graph by default (#5480)
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
2025-07-14 16:42:23 +08:00
dongxuy04
c04570a506
Use huge page mapping for host accessible memory on GB200 (#5963)
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
2025-07-14 16:11:04 +08:00
Yan Chunwei
9c673e9707
[TRTLLM-6160] chore: add sampling examples for pytorch (#5951)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-07-14 15:28:32 +09:00
Enwei Zhu
ed77ef2ff4
fix: Fix MoE benchmark (#5966)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-07-14 15:17:26 +09:00
Yan Chunwei
c30eead09f
[TRTLLM-6164][TRTLLM-6165] chore: add runtime example for pytorch (#5956)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-07-14 14:09:39 +08:00
wili
cfcb97af0e
[BUG5388075][fix] Fix error in post-merge-tests (#5949)
Signed-off-by: wili-65535 <wili-65535@users.noreply.github.com>
Co-authored-by: wili-65535 <wili-65535@users.noreply.github.com>
2025-07-14 14:33:39 +09:00
Xianjie Qiao
c7ffadf692
Fix errors in wide-ep scripts (#5992)
Signed-off-by: Xianjie <5410381+qiaoxj07@users.noreply.github.com>
2025-07-14 14:07:27 +09:00
QI JUN
ce39409530
fix cancel request logic (#5800)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-07-14 10:23:20 +08:00
Yuan Tong
a36ac45c4d
fix: fast redux detection in trtllm gen routing kernel (#5941)
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
2025-07-13 16:35:07 +08:00
wili
3dfc819849
[BUG5374319][fix] WAR for draft-target-model unit tests error (#5958)
Signed-off-by: wili-65535 <wili-65535@users.noreply.github.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: wili-65535 <wili-65535@users.noreply.github.com>
Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-07-12 23:48:57 +09:00
Mike Iovine
8950223f6f
[fix] Remove SpecConfig and fix thread leak issues (#5931)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-07-12 21:03:24 +09:00
Enwei Zhu
bc1d4fb5da
[NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-07-12 15:50:31 +09:00
Chang Liu
308776442a
[nvbug/5308432] fix: extend triton exit time for test_llava (#5971)
Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-12 12:56:37 +09:00
juney-nvidia
63cf929188
Added code owners for LLM API (#5960)
Signed-off-by: Jun Yang <143764042+juney-nvidia@users.noreply.github.com>
2025-07-12 10:30:17 +09:00
Thor Johnsen
041f1fa513
[TRTLLM-6264] Fix flaky test_e2e.py::test_openai_lora (#5885)
Signed-off-by: thorjohnsen <41591019+thorjohnsen@users.noreply.github.com>
2025-07-11 16:20:41 -07:00
2ez4bz
6304866ce8
[refactor] Move vision parts from processor to model for Gemma3 (#5888)
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
2025-07-11 15:13:51 -07:00
xinhe-nv
509363d858
tests: update sanity tests & fix tests (#5906)
Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
2025-07-11 19:48:19 +10:00
Shi Xiaowei
f4e0425a7b
doc: update the link of the diagram (#5953)
Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
2025-07-11 18:02:22 +09:00
Shi Xiaowei
49359574c1
[TRTLLM-5673] Doc: ensure the disagg doc is up to date (#5938)
2025-07-11 17:39:05 +09:00
ChristinaZ
c5fb692a7d
Refactor the rest routing part for the routing kernels in the MoE TRT-LLM backend (#5771)
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>
2025-07-11 16:37:56 +08:00
Shi Xiaowei
37293e4dfd
blog: add qwen3 disagg perf metrics (#5822)
2025-07-11 16:41:45 +09:00
William Tambellini
fbb4cc7379
[TRTLLM-4770][feat] Enhance cpp executor cmake to listen to ENABLE_MULTI_DEVICE (#5104)
Signed-off-by: William Tambellini <wtambellini@sdl.com>
2025-07-11 10:59:44 +08:00