TensorRT-LLM/jenkins
QI JUN 991939a0f4
chore: increase A30 for cpp test (#3811)

* increase A30 for cpp test
* enable parallel run test for gpt_executor
* clean
* decrease freeGpuMemoryFraction of cpp tests
* fix

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
2025-04-24 16:34:39 -07:00
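One of the changes above lowers freeGpuMemoryFraction, the fraction of remaining GPU memory that TensorRT-LLM's KV-cache manager is allowed to claim; lowering it leaves headroom so that several test processes can share a single GPU. The C++ tests set this through their own test configuration, but as a rough illustration of the knob, here is a minimal sketch using the Python LLM API (the model name and fraction are placeholders, not values taken from this commit):

```python
# Minimal sketch (not the commit's actual change): cap the KV cache at a
# smaller share of free GPU memory so that two test processes can coexist
# on one GPU. The model name and fraction below are placeholder values.
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import KvCacheConfig

kv_cache_config = KvCacheConfig(
    free_gpu_memory_fraction=0.5,  # default is ~0.9; lower leaves headroom
)

llm = LLM(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # placeholder model
    kv_cache_config=kv_cache_config,
)
```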
| File | Last commit | Date |
|---|---|---|
| Build.groovy | infra: Switch to urm.nvidia.com as a WAR for urm-rn.nvidia.com connection issue | 2025-03-31 13:05:29 +08:00 |
| BuildDockerImage.groovy | infra: [TRTLLM-4370] Fix the build error when build GH200 image (#3229) | 2025-04-03 17:33:50 +08:00 |
| controlCCache.groovy | chore: Mass integration of release/0.18 (#3421) | 2025-04-16 10:03:29 +08:00 |
| GH200ImageBuilder.groovy | infra: [TRTLLM-4370] Fix the build error when build GH200 image (#3229) | 2025-04-03 17:33:50 +08:00 |
| L0_MergeRequest.groovy | infra: [TRTLLM-4417] Support auto trigger special test stage for special file change (#3478) | 2025-04-23 20:32:19 +08:00 |
| L0_Test.groovy | chore: increase A30 for cpp test (#3811) | 2025-04-24 16:34:39 -07:00 |
| license_cpp.json | feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190) | 2025-04-07 15:14:13 +08:00 |