liji-nv | e0d0dde058 | 2025-03-31 11:16:03 +08:00
None - Add one-shot version for UB AR NORM FP16/BF16 (#2995)
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>

Yan Chunwei | 794f61c997 | 2025-03-31 10:15:27 +08:00
fix: fix single-node cannot quit issue on slurm (#3140)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

Mike Iovine | 5416966ddb | 2025-03-29 22:31:24 +08:00
Add initial EAGLE-3 implementation (#3035)
Signed-off-by: Mike Iovine <miovine@nvidia.com>

Erin | c75d7cd684 | 2025-03-29 02:20:18 +08:00
move BuildConfig functional args to llmargs (#3036)
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

Aurelien Chartier | 3de82c41cd | 2025-03-28 00:11:19 +08:00
Pytorch PP + attention DP support (#3044)
Signed-off-by: Aurelien Chartier <achartier@nvidia.com>

Fanrong Li | ec03159e60 | 2025-03-27 21:38:52 +08:00
fix: Waive twoshot to fix acc issue (#3066)
* waive twoshot to fix acc issue
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

Yan Chunwei | 87ab794aa2 | 2025-03-27 18:45:43 +08:00
fix: fix hang in mgmn with trtllm-llmapi-launch command (#3119)
* init
* restore
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

Fanrong Li | 0976360204 | 2025-03-27 16:06:14 +08:00
add support for MTP+cuda_graph_padding. (#3096)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>

Yan Chunwei | 82edd90350 | 2025-03-27 09:31:40 +08:00
fix gpus_per_node in trtllm-bench when world_size < device_count (#3007)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>

Suyog Gupta | 047f2b234d | 2025-03-26 14:33:14 -07:00
perf: [AutoDeploy] Enable AutoDeploy as a backend in trtllm-bench (#3041)
* Enable AutoDeploy as a backend in trtllm-bench
* update how caches are resized
* fix: files permission from 100755 to 100644
* some comments
* lint
* Fix function name
* refactor
* Remove spurious change
* Add cursor generated doc strings
* re-enable ad test
* some perf cleanup
* debug ci
* ensure that overlap scheduler is enabled
* Reorder the tests
Signed-off-by: Suyog Gupta <suyogg@nvidia.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

wili | 3e035f2219 | 2025-03-26 23:31:29 +08:00
v1.2 (#3082)
Signed-off-by: wili <wili@nvidia.com>

Jinyang Yuan | 6b583f6f83 | 2025-03-26 21:09:25 +08:00
perf: Enable CUDA graphs when attention DP is used and active requests on different GPUs are uneven (#3010)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Co-authored-by: raccoonliukai <raccoonliu@tencent.com>

Enwei Zhu | 224469b096 | 2025-03-26 18:14:35 +08:00
test: [TRTLLM-4334] Create 1.0 criteria scope from API stability references (#3069)
* committed APIs validation
* fix
* clean name
* separate
* add TODOs
* fix naming
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>

Kaiyu Xie | ea3739ee62 | 2025-03-26 17:15:27 +08:00
Fix: fuse message not aligned on different processes (#3067)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Yechan Kim | 3c7cb6629c | 2025-03-26 14:24:04 +08:00
Add EXAONE-Deep (#3054)
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>

DylanChen-NV | 1ac0566a93 | 2025-03-26 12:39:02 +08:00
fix: fix for cp > kvHeadNum (#3002)
* fix for cp > kvHeadNum
* fix for None kv_head_num
Signed-off-by: Dylan Chen <191843203+DylanChen-NV@users.noreply.github.com>

HuiGao-NV | 25f2434495 | 2025-03-26 11:32:57 +08:00
fix: Set correct draft_token_nums to dummy requests for torch compilation with MTP (#3053)
Signed-off-by: Hui Gao <huig@nvidia.com>

yuxianq | 268933b5cc | 2025-03-26 11:01:07 +08:00
Refactor imports inside tensorrt_llm._torch. (#3015)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>

WeiHaocheng | 7ac04ada2a | 2025-03-25 13:58:01 +08:00
doc: Add README.md for scaffolding (#3048)
* Add README.md for scaffolding
* Update tensorrt_llm/scaffolding/README.md
Signed-off-by: fredw <20514172+WeiHaocheng@users.noreply.github.com>
Signed-off-by: WeiHaocheng <20514172+WeiHaocheng@users.noreply.github.com>
Co-authored-by: dongxuy04 <78518666+dongxuy04@users.noreply.github.com>

Aurelien Chartier | ef78518310 | 2025-03-24 21:54:51 -07:00
Only gather responses on rank 0 (#3040)
Signed-off-by: Aurelien Chartier <achartier@nvidia.com>

Zhanrui Sun | c2ffce7dbd | 2025-03-25 10:04:17 +08:00
chore: bump version to "0.19.0.dev2025032500" (#3019)
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>

bhsueh_NV | 11f9ecb2fd | 2025-03-25 08:36:45 +08:00
chore: remove useless param (#3023)
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>

Netanel Haber | da0b0e0ee3 | 2025-03-24 22:49:52 +08:00
fix: disable kv cache reuse when minimum window size is reached, instead of maximum window size (#2983)
* fix variable window size reuse - disable when *min attention window* starts sliding, not max
* isPreCyclic -> isCyclic, and invert logic, for clarity
* getDecoderState()
Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

Yan Chunwei | 531b98ed62 | 2025-03-24 16:16:17 +08:00
feat: Add several pure python configs to LlmArgs (#2997)
* add SchedulerConfig
* add PeftCacheConfig

Kaiyu Xie | 2631f21089 | 2025-03-23 16:39:35 +08:00
Update (#2978)
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Kaiyu Xie | 3aa6b11d13 | 2025-03-18 21:25:19 +08:00
Update TensorRT-LLM (#2936)
Co-authored-by: changcui <cuichang147@gmail.com>

Kaiyu Xie | 9b931c0f63 | 2025-03-11 21:13:42 +08:00
Update TensorRT-LLM (#2873)

Kaiyu Xie | 77d7fe1eb2 | 2025-03-04 18:44:00 +08:00
Update TensorRT-LLM (#2849)
Co-authored-by: aotman <chenhangatm@gmail.com>

Kaiyu Xie | ab5b19e027 | 2025-02-25 21:21:49 +08:00
Update TensorRT-LLM (#2820)

Kaiyu Xie | 2ea17cdad2 | 2025-02-18 21:27:39 +08:00
Update TensorRT-LLM (#2792)
Co-authored-by: jlee <jungmoolee@clika.io>

Kaiyu Xie | e88da961c5 | 2025-02-13 18:40:22 +08:00
Update TensorRT-LLM (#2783)

Dan Blanaru | 16d2467ea8 | 2025-02-11 03:01:00 +00:00
Update TensorRT-LLM (#2755)
Co-authored-by: Denis Kayshev <topenkoff@gmail.com>
Co-authored-by: akhoroshev <arthoroshev@gmail.com>
Co-authored-by: Patrick Reiter Horn <patrick.horn@gmail.com>

Denis Kayshev | d93a2dde84 | 2025-01-20 12:18:26 +08:00
Fix kwarg name (#2691)

Kaiyu Xie | be17881062 | 2024-12-16 21:50:47 -08:00
Update TensorRT-LLM (#2582)

Kaiyu Xie | aaacc9bd68 | 2024-12-11 00:31:05 -08:00
Update TensorRT-LLM (#2562)
Co-authored-by: Starrick Liu <73152103+StarrickLiu@users.noreply.github.com>

石晓伟 | 548b5b7310 | 2024-12-04 21:16:56 +08:00
Update TensorRT-LLM (#2532)
* blossom-ci.yml: run vulnerability scan on blossom
* open source efb18c1256f8c9c3d47b7d0c740b83e5d5ebe0ec
Co-authored-by: niukuo <6831097+niukuo@users.noreply.github.com>
Co-authored-by: pei0033 <59505847+pei0033@users.noreply.github.com>
Co-authored-by: Kyungmin Lee <30465912+lkm2835@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>

Kyungmin Lee | 4420547017 | 2024-12-02 10:11:27 +08:00
Fix typo (#2473)

Kaiyu Xie | 385626572d | 2024-11-26 16:51:34 +08:00
Update TensorRT-LLM (#2502)
Co-authored-by: 岑灿 <yunyi.hyy@alibaba-inc.com>

Kaiyu Xie | 535c9cc673 | 2024-11-19 18:30:34 +08:00
Update TensorRT-LLM (#2460)

Kaiyu Xie | c629546ce4 | 2024-11-12 15:27:49 +08:00
Update TensorRT-LLM (#2436)

Kaiyu Xie | b7868dd1bd | 2024-11-05 16:27:06 +08:00
Update TensorRT-LLM (#2413)

Kaiyu Xie | f14d1d433c | 2024-10-29 22:24:38 +08:00
Update TensorRT-LLM (#2389)
Co-authored-by: Alessio Netti <netti.alessio@gmail.com>

Kaiyu Xie | 1730a587d8 | 2024-10-22 20:27:35 +08:00
Update TensorRT-LLM (#2363)
Co-authored-by: tonylek <137782967+tonylek@users.noreply.github.com>

Kaiyu Xie | 75057cd036 | 2024-10-15 15:28:40 +08:00
Update TensorRT-LLM (#2333)
Co-authored-by: Puneesh Khanna <puneesh.khanna@tii.ae>
Co-authored-by: Ethan Zhang <26497102+ethnzhng@users.noreply.github.com>

Kaiyu Xie | 8681b3a4c0 | 2024-10-08 12:19:19 +02:00
open source 4dbf696ae9b74a26829d120b67ab8443d70c8e58 (#2297)
Co-authored-by: Bhuvanesh Sridharan <bhuvanesh.sridharan@sprinklr.com>
Co-authored-by: Qingquan Song <ustcsqq@gmail.com>

Dan Blanaru | 48686bca3a | 2024-09-30 13:51:19 +02:00
open source 7f370deb0090d885d7518c2b146399ba3933c004 (#2273)
Co-authored-by: Qingquan Song <ustcsqq@gmail.com>

Kaiyu Xie | 40274aac39 | 2024-09-26 10:26:16 +08:00
Bump version to 0.14.0.dev2024092401 (#2258)

Kaiyu Xie | e153372759 | 2024-09-24 17:27:31 +02:00
Update TensorRT-LLM (#2253)
Co-authored-by: Ivan Sorokin <isorokin@nvidia.com>
Co-authored-by: lkm2835 <lkm2835@gmail.com>

Kaiyu Xie | a65dba7aaf | 2024-09-18 08:58:36 +08:00
Bump version to 0.14.0.dev2024091700 (#2234)

Kaiyu Xie | fe7dc6ad4e | 2024-09-17 14:39:09 +08:00
Update TensorRT-LLM (#2230)
Co-authored-by: Yi Wang <yi.wang.2005@gmail.com>
Co-authored-by: lkm2835 <lkm2835@gmail.com>