Commit Graph

522 Commits

Yechan Kim
5460d18b10
feat: trtllm-serve multimodal support (#3590)
* feat: trtllm-serve multimodal support

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* remove disable argument

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* remove disable

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* add and separate tests and move the doc

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

* remove block_reuse arg from serve.py

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>

---------

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
Co-authored-by: Haohang Huang <31998628+symphonylyh@users.noreply.github.com>
2025-04-19 05:01:28 +08:00
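For context: trtllm-serve exposes an OpenAI-compatible HTTP API, so multimodal requests ride on the chat-completions schema. A minimal sketch of such a request (the port, model name, and image URL are illustrative assumptions, not taken from the commit):

```python
import requests

# Hypothetical multimodal chat request against a local trtllm-serve instance.
payload = {
    "model": "Qwen2-VL-7B-Instruct",  # placeholder model name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
    "max_tokens": 64,
}
resp = requests.post("http://localhost:8000/v1/chat/completions",
                     json=payload, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])
```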
mayani-nv
ce8329646f
Update run.py for draft_target_model (#3615)
This change makes the draft target model work without a vocab-size mismatch.

Signed-off-by: mayani-nv <67936769+mayani-nv@users.noreply.github.com>
Co-authored-by: rakib-hasan <rhasan@nvidia.com>
2025-04-19 01:01:50 +08:00
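A common way to make speculative decoding tolerate a draft/target vocab-size mismatch is to pad the smaller model's logits so both models score the same token-ID space. A minimal sketch of that idea (illustrative only, not the literal run.py change):

```python
import torch

def pad_draft_logits(draft_logits: torch.Tensor, target_vocab: int) -> torch.Tensor:
    """Pad draft logits with -inf so the padded token IDs can never win
    sampling; afterwards draft and target score the same ID space."""
    draft_vocab = draft_logits.shape[-1]
    if draft_vocab == target_vocab:
        return draft_logits
    pad = draft_logits.new_full(
        (*draft_logits.shape[:-1], target_vocab - draft_vocab), float("-inf"))
    return torch.cat([draft_logits, pad], dim=-1)

logits = pad_draft_logits(torch.randn(1, 32000), 32064)  # example sizes
assert logits.shape[-1] == 32064
```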
pcastonguay
ae5671644a
feat: Disaggregated router class (#3584)
* Add draft scheduler class

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>

* Refactor the design

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>

* feat: Introduce router class for disaggregated server

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Add unit tests for router class

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Adding tests for disagg_utils

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Fixing missing import

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Fixing disagg integration tests

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

* Addressing MR review comments

Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>

---------

Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>
Signed-off-by: Patrice Castonguay <55748270+pcastonguay@users.noreply.github.com>
Co-authored-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>
2025-04-19 00:34:12 +08:00
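The router class decides which context (prefill) or generation server receives each request in disaggregated serving. A toy round-robin stand-in with placeholder server URLs shows the shape of the interface (the actual class and its policies live in the PR, not here):

```python
import itertools

class RoundRobinRouter:
    """Toy stand-in for a disaggregated-serving router: it picks which
    context (prefill) or generation server handles the next request."""

    def __init__(self, servers: list[str]):
        self._cycle = itertools.cycle(servers)

    def get_next_server(self) -> str:
        return next(self._cycle)

# Placeholder URLs; a real deployment would discover these from config.
ctx_router = RoundRobinRouter(["http://ctx0:8001", "http://ctx1:8001"])
gen_router = RoundRobinRouter(["http://gen0:8002"])
print(ctx_router.get_next_server())  # http://ctx0:8001
print(ctx_router.get_next_server())  # http://ctx1:8001
print(gen_router.get_next_server())  # http://gen0:8002
```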
QI JUN
b9fce42717
enable test_ptp_quickstart_advanced_mixed_precision (#3667)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-18 05:06:24 -07:00
Zheng Duan
bce7ea8c38
test: add kv cache event tests for disagg workers (#3602)
2025-04-18 18:30:19 +08:00
Yan Chunwei
2a09826ec4
fix hmac in remote mpi session (#3649)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
Co-authored-by: Tao Li @ NVIDIA <tali@nvidia.com>
2025-04-18 17:47:51 +08:00
HuiGao-NV
d3608d6818
Remove dummy forward path (#3669)
2025-04-18 16:17:50 +08:00
Dom Brown
dbd9a83b0d
feat: Integrate GPUDirect Storage (GDS) into Executor API (#3582)
* feat: Integrate GPUDirect Storage (GDS) into Executor API

Squash of several dev commits

Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
2025-04-18 15:59:21 +08:00
Zheyu Fu
90a28b917f
feat: Add Dynasor-CoT in scaffolding examples. (#3501)
Signed-off-by: Zheyu Fu <zheyufu2@gmail.com>
Co-authored-by: Junda Chen <32371474+GindaChen@users.noreply.github.com>
Co-authored-by: Yichao Fu <57950249+fuyichao2000@users.noreply.github.com>
Co-authored-by: Andy Dai <zhongdongmin@nvidia.com>
2025-04-18 07:48:01 +00:00
Erin
4fedf0be5c
unwaive test for nvbug_5150466 (#3552)
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
2025-04-18 15:15:58 +08:00
Yuan Tong
0b0e6d8a0a
refactor: Clean up CMakeLists.txt (#3479)
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
2025-04-18 14:39:29 +08:00
Emma Qiao
2f48985b9c
infra: Add step to generate new duration file (#3298)
* Add step to generate new duration file

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

* Install python in earlier step

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

* Clone repo and add debug info

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

* Remove debug info and only generate duration for post-merge

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

* Test for the new duration file

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

* Update the duration file format

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

* Move generate_duration.py to the scripts folder and add a try-catch to avoid breaking anything

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>

---------

Signed-off-by: EmmaQiaoCh <qqiao@nvidia.com>
2025-04-18 12:56:31 +08:00
peaceh-nv
88cff61fa1
chore : Split more tests out of gpt tests (#3524)
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
2025-04-18 12:04:57 +08:00
dongfengy
b71a0f76b4
test: Add llama 4 to ci (#3520)
* Add llama 4 to ci

Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>

* Only test trtllm

Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>

* Disable Maverick

Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>

---------

Signed-off-by: Dongfeng Yu <dongfengy@nvidia.com>
2025-04-18 11:25:52 +08:00
Iman Tabrizian
fc88d67675
chore: Refactor test_disaggregated.py (#3154)
* Refactor test_disaggregated.py

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

* Address review comments

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>

* Remove waived tests

Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>

* fix: Fix streaming endpoint chat completions

Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>

---------

Signed-off-by: Iman Tabrizian <itabrizian@nvidia.com>
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
2025-04-18 11:04:06 +08:00
Chang Liu
b8818b45be
fix: llama4: address couple of issues in llama4 attention module (#3491)
* fix attn module for llama4

* Address comments

* Rebase to accommodate latest attn refactor and refactor l4attn

* Remove aux_stream from classic attn

* Use RMSNorm for L2Norm

* Update tensorrt_llm/_torch/models/modeling_llama.py

Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
Signed-off-by: Chang Liu <lc9114@gmail.com>

* Add typing information for _attn_qkv

* Remove redundant comment

* Simplify llama4 DecoderLayer logic

---------

Signed-off-by: Chang Liu <lc9114@gmail.com>
Co-authored-by: hlu1 <14827759+hlu1@users.noreply.github.com>
2025-04-18 01:54:59 +00:00
Jackch-NV
1b2b112d44
fix SageAttention head-size check error in bertAttentionPlugin.cpp (#3660)
Signed-off-by: Jackch-NV <69230184+Jackch-NV@users.noreply.github.com>
2025-04-18 09:28:04 +08:00
rakib-hasan
ff3b741045
feat: adding multimodal (only image for now) support in trtllm-bench (#3490)
* feat: adding multimodal (only image for now) support in trtllm-bench

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* fix: add  in load_dataset() calls to maintain the v2.19.2 behavior

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* re-adding prompt_token_ids and using that for prompt_len

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* updating the datasets version in examples as well

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* api changes are not needed

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* moving datasets requirement and removing a missed api change

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* addressing review comments

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

* refactoring the quickstart example

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>

---------

Signed-off-by: Rakib Hasan <rhasan@nvidia.com>
2025-04-18 07:06:16 +08:00
QI JUN
26ebd95302
chore: update multi gpu trigger file list (#3665)
* update multi gpu trigger file list

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

* update multi gpu trigger file list

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>

---------

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-17 11:19:01 -07:00
QI JUN
91660939fd
tests: waive test_llm_multi_node (#3664)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-18 01:59:16 +08:00
Frank
5a6cb2b985
fix: Correct reporting of text dtype for Llama 4 (#3494)
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
2025-04-18 00:07:49 +08:00
Yukun He
83b36ebecd
Fix fused_moe fallback issue. (#3652)
min_latency_mode is only ever set to False during the warmup phase, so when it becomes True during inference, all tactics miss the profiled set and fall back to the default one, causing a perf regression.

Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-04-17 23:17:04 +08:00
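The failure mode described above is generic to autotuners: if warmup only ever profiles one value of a flag, any other value seen at inference misses the tactic cache and silently falls back to a default kernel. A schematic sketch of that pattern (all names illustrative):

```python
_tactic_cache: dict[bool, str] = {}

def profile_best_tactic(min_latency_mode: bool) -> str:
    # Stand-in for kernel profiling; returns the fastest tactic's name.
    return "latency_kernel" if min_latency_mode else "throughput_kernel"

def run_moe(min_latency_mode: bool) -> str:
    # Cache miss outside warmup means a silent fall back to the default.
    return _tactic_cache.get(min_latency_mode, "default_kernel")

# Buggy warmup: only one flag value is ever profiled.
_tactic_cache[False] = profile_best_tactic(False)
assert run_moe(True) == "default_kernel"   # the perf-regression path

# Fixed warmup: profile every value the runtime can later see.
for mode in (False, True):
    _tactic_cache[mode] = profile_best_tactic(mode)
assert run_moe(True) == "latency_kernel"
```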
yuxianq
b9b1c1368c
feat: Support unfused rope in MLA. (#3610)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-17 16:50:49 +08:00
Ivy Zhang
ad19ca3cbf
remove benchmark test list (#3644)
Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
2025-04-17 16:23:41 +08:00
Netanel Haber
3c52ac098f
feat: allocate minimal blocks per window size (#3028)
* implement variable window attention by breaking the block manager into window block managers per window size

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* revert isCyclic to be true if the min attention window is reached, not per window size

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* add explanatory comment to mCyclicThreshold

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* load correct gemma config

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* don't shadow inputLength in addSequence - it should remain the function scope input length between window size loop iterations

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix KVCacheManagerVariableWindowAttentionWithReuseTest for multiple window block managers

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* if TYPE_CHECKING

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* set temp_attention_window_inputs to None explicitly

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* set temp_attention_window_inputs to None explicitly

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* pass dtype as well

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* test_gemma variable sliding window attention

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* allot a fraction of primary/secondaryBlocks to different window size heaps, depending on the window size's total contribution to the kvcache size (i.e., including all layers); the arithmetic is sketched below, after this entry

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* remove '|| mEnableBlockReuse', which erroneously triggered the beam-search path for the cyclic variable attention window code

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* turn off request delaying for MaxUtil

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* make comments better

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* windowSizesTotalSum using std::accumulate

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix error handling of forwardAsync - the cleanup code in forwardAsync's catch-all handler that runs terminateRequest can itself fail and must also be caught

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix comments

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* remove assert that kills disagg tests, since it isn't necessary

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix corrupted expression: 'isNewTask && (peftCacheManager ?' -> '(isNewTask && peftCacheManager) ?'; the misplaced parenthesis changed the boolean algebra. Main is correct

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* add Gemma3 to SUPPORTED_HF_ARCHITECTURES

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* support Gemma3

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* finally fix test_gemma - always spread at least {} into generate_summary_cmd, never None

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* finally fix test_gemma - always spread at least {} into generate_summary_cmd, never None

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix kvfactor field for deepseek

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix comment

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix gemma-3 entries in testlist to include vswa

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* only quantize gemma2 VSWA

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* remove misleading comment

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix test_gemma

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix test_gemma

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix test_gemma

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* in sendRequestInfo, fromOldAllocatedBlockIds->fromOldAllocatedBlockIds, like in main

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>

* fix: disable KV cache reuse if using attention sink (#3021)

* fix: disable KV cache reuse if using attention sink

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* fix: disable KV cache reuse if sink bubble

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

* add comment

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>

---------

Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
2025-04-17 16:04:57 +08:00
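The allotment rule from the 'allot a fraction of primary/secondaryBlocks' item above is simple arithmetic: each window size receives a share of the block pool proportional to window_size times the number of layers using it. A small illustration of that math (not the C++ implementation):

```python
def allot_blocks(total_blocks: int, layers_per_window: dict[int, int]) -> dict[int, int]:
    """Split a KV-cache block pool across attention-window sizes,
    proportional to each window's total cache contribution
    (window_size * number_of_layers_using_it). Rounding is naive here;
    a real allocator would reconcile any leftover blocks."""
    weights = {w: w * n for w, n in layers_per_window.items()}
    total = sum(weights.values())
    return {w: round(total_blocks * wt / total) for w, wt in weights.items()}

# e.g. a two-window stack: 4096-token window on 8 layers, 1024 on 24 layers.
print(allot_blocks(1000, {4096: 8, 1024: 24}))  # {4096: 571, 1024: 429}
```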
Yiqing Yan
1c6f3debbb
Waive L0 tests (#3651)
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
2025-04-17 15:13:56 +08:00
xinhe-nv
b82a4e8d01
test: [CI] Add failed cases into waives.txt (#3627)
* update waive list

Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>

* fix waives

Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>

---------

Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
2025-04-17 14:45:41 +08:00
Tao Li @ NVIDIA
e4476bf521
update fp8 doc (#3647) (#3650)
Signed-off-by: taoli <litaotju@users.noreply.github.com>
Co-authored-by: taoli <litaotju@users.noreply.github.com>
2025-04-17 13:37:08 +08:00
danielafrimi
0f084d9566
added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455)
LoraOp integration into torch modules

Signed-off-by: Ubuntu <dafrimi@nvidia.com>
2025-04-17 12:48:27 +08:00
yuxianq
239fe0ff26
chore: Use ellipsis as default value to detect whether residual argument is provided (#3626)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-17 12:31:58 +08:00
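The idiom in this title is worth spelling out: using `...` (Ellipsis) as the default lets a function distinguish "argument not passed" from an explicit `residual=None`, which stays a legal, meaningful value. A minimal sketch (the function shape is illustrative, not the actual module code):

```python
import torch

def forward(x: torch.Tensor, residual=...):
    """`...` as the default separates 'caller passed nothing' from an
    explicit `residual=None`, which remains a meaningful value."""
    if residual is ...:
        return x                 # no residual argument provided at all
    if residual is None:
        return x, None           # explicitly opted out; keep tuple shape
    return x + residual, residual
```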
Luis Vega
a06bff5052
Fix rotary_emb param in NemotronH attention (#3646)
Signed-off-by: Luis Vega <vegaluisjose@users.noreply.github.com>
2025-04-16 21:03:07 -07:00
Void
950cadf2bd
add support for smaller hidden_dim (#3609)
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
Co-authored-by: Yukun He <23156053+hyukn@users.noreply.github.com>
2025-04-17 12:00:32 +08:00
Ivy Zhang
b2fb0fe843
test: add quickstart test for nemotron-ultra (#3596)
* add quickstart test for nemotron-ultra

Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>

* fix test name

Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>

---------

Signed-off-by: Ivy Zhang <25222398+crazydemo@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
2025-04-17 11:16:41 +08:00
ruodil
5e2ebebe76
tests: change qa perf test to trtllm-bench (#3189)
Signed-off-by: Ruodi <200874449+ruodil@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
2025-04-17 09:53:32 +08:00
Chuang Zhu
f4ddc304f2
disable ib for ucx test (#3613)
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
2025-04-17 06:43:57 +08:00
QI JUN
57cafe7f9b
waive test_fp8_scaled_mm (#3637)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-16 15:07:30 -07:00
Luis Vega
0bda1f9780
feat: Nemotron-H model support (#3430)
* added files for nemotron-h

Signed-off-by: Luis Vega <lvega@nvidia.com>

* use try/except to import RMSNorm

Signed-off-by: Luis Vega <lvega@nvidia.com>

---------

Signed-off-by: Luis Vega <lvega@nvidia.com>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
2025-04-16 14:05:56 -07:00
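The 'try/except to import RMSNorm' item is a standard optional-dependency pattern: prefer an optimized kernel when installed, otherwise fall back to plain PyTorch. A sketch of the idiom; which module the real commit guards is an assumption here:

```python
import torch
import torch.nn as nn

try:
    # Hypothetical optimized import; the module the commit guards may differ.
    from flash_attn.ops.rms_norm import RMSNorm  # type: ignore
except ImportError:
    class RMSNorm(nn.Module):
        """Plain-PyTorch fallback used when the fused kernel is absent."""

        def __init__(self, hidden_size: int, eps: float = 1e-6):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(hidden_size))
            self.eps = eps

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            var = x.pow(2).mean(-1, keepdim=True)
            return self.weight * x * torch.rsqrt(var + self.eps)
```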
Mike Iovine
41a6c98544
Support CUDA graphs for EAGLE3 (#3176)
Signed-off-by: Mike Iovine <6158008+mikeiovine@users.noreply.github.com>
2025-04-17 04:53:50 +08:00
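CUDA graphs eliminate per-kernel launch overhead, which adds up across the many small draft steps a speculative decoder like EAGLE3 takes. A generic PyTorch capture/replay sketch (not the TRT-LLM integration; requires a CUDA device):

```python
import torch

assert torch.cuda.is_available()
model = torch.nn.Linear(1024, 1024).cuda().eval()
static_in = torch.randn(8, 1024, device="cuda")

with torch.no_grad():
    # Warm up on a side stream so capture doesn't record one-time setup.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            model(static_in)
    torch.cuda.current_stream().wait_stream(s)

    # Capture one forward pass into a graph.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_out = model(static_in)

# Replay: refill the static input buffer, then relaunch all kernels at once.
static_in.copy_(torch.randn(8, 1024, device="cuda"))
g.replay()
print(static_out.sum().item())
```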
hlu1
b6bae33453
Clean up linear.py, mlp.py, gated_mlp.py (#3553)
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
2025-04-16 12:21:44 -07:00
Yibin Li
351808efeb
fix: Use hmac authentication for pickle encryption (#3384)
* hmac initial implementation to encrypt worker and proxy queue

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* set different hmac key for each pair of server/client queue

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* fix comments

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

* fix style

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>

---------

Signed-off-by: Yibin Li <109242046+yibinl-nvidia@users.noreply.github.com>
2025-04-17 00:40:13 +08:00
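Unpickling attacker-controlled bytes executes arbitrary code, so queues that move pickled objects between proxy and worker need integrity checks. The per-queue-key design described above can be sketched generically (simplified; not the actual queue code):

```python
import hashlib
import hmac
import os
import pickle

KEY = os.urandom(32)  # in the real design, a distinct key per queue pair

def dumps_signed(obj) -> bytes:
    """Prepend an HMAC-SHA256 tag so the receiver can verify integrity."""
    payload = pickle.dumps(obj)
    return hmac.new(KEY, payload, hashlib.sha256).digest() + payload

def loads_verified(blob: bytes):
    """Verify the tag in constant time before ever touching pickle."""
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("HMAC mismatch: refusing to unpickle")
    return pickle.loads(payload)

assert loads_verified(dumps_signed({"req": 1})) == {"req": 1}
```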
QI JUN
fac1a905e9
waive test_llm_multi_node_with_postproc (#3628)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-16 05:49:39 -07:00
Olya Kozlova
b3e6723dbc
feat: Adding FP8 BMM from Codegen (#3541)
* Adding FP8 BMM from Codegen

Signed-off-by: Olya Kozlova <okozlova@s4124-0110.nvidia.com>

* Fixed licenses

Signed-off-by: Olya Kozlova <okozlova@s4124-0062.nvidia.com>

---------

Signed-off-by: Olya Kozlova <okozlova@s4124-0110.nvidia.com>
Signed-off-by: Olya Kozlova <okozlova@s4124-0062.nvidia.com>
Co-authored-by: Olya Kozlova <okozlova@6u1g-0018.nvidia.com>
Co-authored-by: Olya Kozlova <okozlova@s4124-0062.nvidia.com>
2025-04-16 10:37:15 +02:00
Yiteng Niu
ca88674210
update user list (#3614)
Signed-off-by: Yiteng Niu <6831097+niukuo@users.noreply.github.com>
2025-04-16 15:13:29 +08:00
Gabriel Wu
2e0cd7922e
fix: add SM90 guard for FP8 Blockscale GEMM (#3575)
* fix: add SM90 guard for FP8 Blockscale GEMM

Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>

* fix: add SM90 guard for FP8 Blockscale GEMM

Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>

---------

Signed-off-by: Zihua Wu <13583761+lucifer1004@users.noreply.github.com>
Co-authored-by: Tao Li @ NVIDIA <tali@nvidia.com>
2025-04-16 14:44:37 +08:00
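The guard gates FP8 block-scale GEMM to Hopper (SM90) parts. The real check lives in the C++/CUDA build, but its shape is easy to sketch from Python (treating SM90 as the only supported arch is an assumption for illustration):

```python
import torch

def supports_fp8_blockscale_gemm() -> bool:
    """True only on SM90 (Hopper) parts, where these kernels are built."""
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    return (major, minor) == (9, 0)

if not supports_fp8_blockscale_gemm():
    print("Falling back to a non-FP8 GEMM path")
```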
yuxianq
fd8ded2b2b
feat: Support cos_sin_cache in all cases. (#3517)
Signed-off-by: Yuxian Qiu <142763828+yuxianq@users.noreply.github.com>
2025-04-16 13:48:44 +08:00
QI JUN
ab29348db2
waive test_llm_phi_quantization_1gpu (#3603)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-16 13:33:46 +08:00
Jinyang Yuan
efabf6b443
chore: Add comments to modifications that fix TP size of DeepSeek-V3/R1 when using more than 16 GPUs (#3572)
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
2025-04-15 21:51:42 -07:00
Zhanrui Sun
9d88ee3e45
chore: bump version to 0.20.0rc0 (#3561)
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
2025-04-16 11:41:21 +08:00
narutolhy
ccd73c71a5
feat: Add stream generation task scaffolding examples (#3527)
* stream generation task/controller

Signed-off-by: narutolhy <582909902@qq.com>

* edit README

Signed-off-by: narutolhy <582909902@qq.com>

* rename README

Signed-off-by: narutolhy <582909902@qq.com>

---------

Signed-off-by: narutolhy <582909902@qq.com>
2025-04-16 11:33:55 +08:00
Yan Chunwei
409c294c4e
fix trtllm-bench mgmn (#3563)
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
2025-04-16 11:04:09 +08:00