Commit Graph

539 Commits

Yukun He
ff82aef99b
Fix the issues related to fused moe path. (#3435)
* One of the tactics is not supported during dispatch.
* final_hidden_states should be unpacked if it is not min_latency_mode.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-04-11 21:41:15 +08:00
liji-nv
b168adba70
feat: Add NVFP4 UB pattern optimization pass in torch compile (#3371)
* feat: Add NVFP4 UB pattern optimization pass in torch compile

* Add an additional flag for the UB fp4 pattern to avoid inverting the scale
* Add NVFP4 related UB patterns

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>

* Update atol; some points fail for B200 umbriel.

Signed-off-by: liji-nv <59594262+liji-nv@users.noreply.github.com>

---------

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: liji-nv <59594262+liji-nv@users.noreply.github.com>
2025-04-11 21:25:29 +08:00
Shunkangz
ea050084ad
feat: Add support for chat completion in PD (#2985)
* Add support for chat completion in PD

Add support for include_usage in PD


Reformat


* Remove redundant code

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>

* Refactor code

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>

* Add chat completion test

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>

* Refactor code

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>

---------

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>
Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.co>
2025-04-11 17:53:28 +08:00
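To illustrate the include_usage support added in the commit above, here is a minimal sketch of a streaming chat-completion request against an OpenAI-compatible serving endpoint; the host, port, and model name are placeholder assumptions, and the PD (prefill/decode disaggregated) server setup itself is not shown.

```python
# Minimal sketch: streaming chat completion with usage reporting against an
# OpenAI-compatible endpoint. Host, port, and model name are placeholders.
import json
import requests

payload = {
    "model": "my-model",                         # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,
    "stream_options": {"include_usage": True},   # request token usage in the final chunk
}

with requests.post("http://localhost:8000/v1/chat/completions",
                   json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        # Print streamed content deltas as they arrive.
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {}).get("content")
            if delta:
                print(delta, end="", flush=True)
        # With include_usage, the usage object is expected only on the final chunk.
        if chunk.get("usage"):
            print("\nusage:", chunk["usage"])
```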
Yechan Kim
5bc6f093c8
fix: mllama e2e pytorch flow fix (#3397)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
2025-04-11 17:33:15 +08:00
Ivy Zhang
20e54e5c89
test: add cuda visible device constraint for phi_1gpu test (#3364)
Signed-off-by: Ivy Zhang <yanzh@nvidia.com>
2025-04-11 17:14:52 +08:00
Ivy Zhang
d998832b33
test: add torch flow test case in qa test list (#3404)
Signed-off-by: Ivy Zhang <yanzh@nvidia.com>
2025-04-11 16:57:41 +08:00
pansicheng
143edc8153
fix partialMatch (#3413)
Signed-off-by: pansicheng <sicheng.pan.chn@gmail.com>
2025-04-11 16:42:52 +08:00
Yiqing Yan
0d351317c2
Waive failing post-merge tests (#3472)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
2025-04-11 16:23:07 +08:00
Yuan Tong
a139eae425
chore: Stabilize ABI boundary for internal kernel library (#3117)
chore: Stabilize ABI boundary for internal kernel library

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
2025-04-11 15:07:50 +08:00
Enwei Zhu
410f56357e
test: Waive torch compile tests (#3471)
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
2025-04-11 13:38:05 +08:00
QI JUN
16ca45747b
always trigger multi-GPU tests to protect modeling_llama.py and modeling_deepseekv3.py (#3434)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-11 13:19:23 +08:00
wili
5142c783c0
fix: Beam Search Diversity (#3375)
Signed-off-by: wili-65535 <wili-65535@user.noreply.github.com>
Co-authored-by: wili-65535 <wili-65535@user.noreply.github.com>
2025-04-11 11:58:59 +08:00
QI JUN
1e2a339642
waive unittest/_torch/multi_gpu (#3464)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-11 09:59:16 +08:00
QI JUN
6cef10068a
waive a test case of llama 3.1 with torch compile (#3461)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-11 09:15:19 +08:00
tburt-nv
5616c0d232
add precommit check to github actions (#3129)
Signed-off-by: Tyler Burt <195370667+tburt-nv@users.noreply.github.com>
2025-04-11 06:40:53 +08:00
Dom Brown
a8310b01dc
feat: trtllm-gen fp4 GEMM for pytorch workflow (#3423)
* feat: trtllm-gen fp4 GEMM

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

* Clean up

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

* Remove incorrect header

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

* Reviewer comment

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>

---------

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
2025-04-11 02:28:07 +08:00
Iman Tabrizian
d7f45e50c6
test: disable attention DP tests for single GPU (#3395)
Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
2025-04-11 01:38:17 +08:00
Zhihan Jiang
8300218d21
feat: support llama4 nope layers; support FP8 checkpoint loading; (#3382)
* Enable NOPE, fix a rotary embedding bug for gptj_stype_rope, address PR comments, and properly skip the rotary_embedding for Llama4 ROPE layers

* Add support for FP8 checkpoints, fix ckpt weight loading for FP8

* Temporarily disable min_latency_mode for llama4

---------

Co-authored-by: Yilin Fan <yilinf@nvidia.com>
Co-authored-by: Sharan Chetlur <116769508+schetlur-nv@users.noreply.github.com>
2025-04-10 10:16:42 -07:00
amitz-nv
a6a2ae6cc1
chore: Rename nvsmall to nemotron nas (#3447)
* Rename nvsmall to nemotron NAS

* Revert the nvsmall-to-nemotron_nas rename in paths for tests that access llm_models_root/nvsmall/tests

* Add NemotronNAS to pytorch supported models table

Signed-off-by: Amit Zuker <203509407+amitz-nv@users.noreply.github.com>
2025-04-10 23:16:52 +08:00
wm2012011492
af05749e90
feat: add qwen2 moe to torch flow; fix wrongly imported KvCacheConfig in gpqa… (#3369)
* add qwen2 moe to torch flow; fix wrongly imported KvCacheConfig in gpqa_llmapi.py

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>

* fix coding style

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>

* add unittest

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>

---------

Signed-off-by: mengw <12670782+wm2012011492@users.noreply.github.com>
Co-authored-by: mengw <12670782+wm2012011492@users.noreply.github.com>
2025-04-10 22:45:57 +08:00
QI JUN
f5281fffaa
waive some test cases of test_llm_multi_gpu.py (#3452)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-10 22:02:35 +08:00
Yan Chunwei
c5e803ba48
chore: code cleanup for error logging and SharedMemory in proxy.py (#3432)
* cleanup log

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

* remove shared-memory

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

* remove ExecutorResponse

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

* add assert for postproc

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

---------

Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-04-10 21:57:06 +08:00
Julien Debache
d7a0bf934c
fix: updating ucxx, which appears to avoid occasional segfaults when profiling (#3420)
Signed-off-by: jdebache <jdebache@nvidia.com>
2025-04-10 19:48:20 +08:00
HuiGao-NV
3ade9375ba
feat: Run PyExecutor's inference flow to estimate max_num_tokens for kv_cache_manager (#3092)
Signed-off-by: Hui Gao <huig@nvidia.com>
2025-04-10 18:29:40 +08:00
Yiqing Yan
10d2d16247
Waive L0 test (#3442)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
2025-04-10 17:43:45 +08:00
Emma Qiao
5023e0d0f4
infra: Update some test descriptions which are out of date (#3437)
* Update some test descriptions which are out of date

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

* Apply suggestions from code review

Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>

---------

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
2025-04-10 17:29:30 +08:00
Kefeng-Duan
67949f7c39
Update README and add benchmarking blog for DeepSeek-R1 (#3232)
- Added a new entry in the README for the published benchmarking best practices for DeepSeek-R1.
- Introduced a new blog post detailing performance benchmarking configurations and procedures for DeepSeek-R1 in TensorRT-LLM, including installation, dataset preparation, and benchmarking steps for both B200 and H200 GPUs.

Signed-off-by: taoli <litaotju@users.noreply.github.com>
Co-authored-by: taoli <litaotju@users.noreply.github.com>
2025-04-10 17:00:49 +08:00
bhsueh_NV
cec65bd09a
clean the waive.txt (#3441)
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
2025-04-10 16:20:08 +08:00
xinhe-nv
863d023fd0
test: fix memory leaks in tests (#3392)
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-04-10 14:31:40 +08:00
tburt-nv
b331d62f98
add sqlite to rocky container (#3114)
Signed-off-by: Tyler Burt <195370667+tburt-nv@users.noreply.github.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
2025-04-10 13:30:24 +08:00
yuxianq
16c8f39fc5
feat: Support TLLM_OVERRIDE_LAYER_NUM and TLLM_TRACE_MODEL_FORWARD for debugging (#3417)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-10 13:18:30 +08:00
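The two variable names come from the commit above; their exact semantics and value formats are not documented here, so the following is only a hypothetical sketch, assuming TLLM_OVERRIDE_LAYER_NUM caps the number of layers used and TLLM_TRACE_MODEL_FORWARD toggles tracing of model forward calls. The script path is a placeholder.

```python
# Hypothetical debugging sketch: variable names are from the commit above, but
# the values and their precise meaning are assumptions.
import os
import subprocess

env = dict(os.environ)
env["TLLM_OVERRIDE_LAYER_NUM"] = "4"    # assumed: run with only a few layers for faster debugging
env["TLLM_TRACE_MODEL_FORWARD"] = "1"   # assumed: enable tracing/logging of model forward passes

# Run an existing example script under the modified environment (placeholder path).
subprocess.run(["python", "examples/run_my_model.py"], env=env, check=True)
```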
hlu1
fbcf954d9c
[MLA] Deallocate tensors after use (#3286)
Signed-off-by: Hao Lu <haolu@nvidia.com>
2025-04-09 21:36:07 -07:00
brb-nv
c59abae436
feat: Add Gemma3 text-only model support (#3247)
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
2025-04-10 12:34:58 +08:00
QI JUN
b5473f7eca
waive llama3.1 8B test cases with pipeline parallelism (#3433)
* waive llama3.1 8B test cases with pipeline parallelism

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* update

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

---------

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-10 11:07:58 +08:00
Frank
9307ff95ae
fix: Add nested aliases for Llama 4 (#3381)
* Add nested aliases for Llama 4

Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>

* Fix missed alias.

Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>

---------

Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
2025-04-10 10:18:53 +08:00
peaceh-nv
215fb20567
chore: split GptExecutor tests out of gpt tests to reduce single-test time (#3412)
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
2025-04-10 09:08:15 +08:00
tburt-nv
8d164f40d7
update allowlist (#3428)
Signed-off-by: Tyler Burt <195370667+tburt-nv@users.noreply.github.com>
2025-04-10 06:41:40 +08:00
Yechan Kim
943218b54a
feat: Add Qwen2.5-VL and refactor Qwen2-VL (#3156)
* feat: Add Qwen2.5-VL and refactor Qwen2-VL

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* fix yapf and codespell

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* add test

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* fix test_e2e

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* generalize get_rope_index

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* fix qwen2.5-vl in README

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* fix test

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* fix image test

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

---------

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
2025-04-10 04:09:03 +08:00
Maximiliano Levi
996696203f
fix: #3137 speculative decoding and multimodal input support (#3276)
* fix: broadcast embeddings input when using speculative decoding

Signed-off-by: Maximiliano Levi <maxilevi77@gmail.com>

* fix: use shape tensor instead of tuple

Signed-off-by: Maximiliano Levi <maxilevi77@gmail.com>

* fix: comment

Signed-off-by: Maximiliano Levi <maxilevi77@gmail.com>

---------

Signed-off-by: Maximiliano Levi <maxilevi77@gmail.com>
Co-authored-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
2025-04-09 23:40:19 +08:00
danielafrimi
47f5cf6c0d
lora_tests (#3201)
LoRA tests and layers

Signed-off-by: Ubuntu <dafrimi@nvidia.com>
Co-authored-by: Ubuntu <dafrimi@nvidia.com>
2025-04-09 18:06:52 +03:00
WeiHaocheng
6eee15900e
feat: Enhance the integration robustness of scaffolding with __init__.py #3305 (#3312)
Signed-off-by: fredw (generated by with_the_same_user script) <20514172+WeiHaocheng@users.noreply.github.com>
2025-04-09 21:13:47 +08:00
石晓伟
c069abc7d8
Update gh pages build script (#3405)
Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
2025-04-09 19:58:38 +08:00
Gabriel Wu
4d78f51608
fix: remove DeepGEMM line info (#3411)
Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
2025-04-09 18:01:02 +08:00
wili
6f1b2cdb83
Doc: update the steps for using Draft-Target-Model (DTM) in the documentation (#3366)
Signed-off-by: wili-65535 <wili-65535@user.noreply.github.com>
2025-04-09 17:35:01 +08:00
QI JUN
d0671494cd
chore: fix wheel version <= 0.45.1 (#3391)
* fix wheel version to 0.45.1

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* relax version

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

---------

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-09 12:31:55 +08:00
sugunav14
64abb01a36
Fix failing DSV3 unit tests (#3385)
* Skipping DSV3 module patch unit tests

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

* update tests

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

* Fixed failing unit test

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>

---------

Signed-off-by: Suguna Velury <178320438+sugunav14@users.noreply.github.com>
2025-04-09 11:57:05 +08:00
tburt-nv
3a8443f1e1
extend allowlist (#3379)
Signed-off-by: Tyler Burt <195370667+tburt-nv@users.noreply.github.com>
2025-04-09 11:10:42 +08:00
Iman Tabrizian
8401722245
test: Add single gpu disaggregated tests (#3295)
* test: Add single gpu disaggregated tests

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

* Add deepseek with overlap tests

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

* Use updated prompt

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

* Move test to disaggregated folder

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

---------

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
2025-04-09 09:34:45 +08:00
Tracin
2a2b7bfc66
Fix missing bias add for FP4Linear. (#3361)
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
2025-04-09 09:17:54 +08:00
Mike Iovine
5bdf997963
Add Llama 4 (#3302)
Signed-off-by: Mike Iovine <miovine@nvidia.com>
2025-04-09 03:35:21 +08:00