TensorRT-LLM/jenkins
Latest commit 16ca45747b by QI JUN (2025-04-11 13:19:23 +08:00):
always trigger multi gpu test to protect modeling_llama.py and modeling_deepseekv3.py (#3434)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
| File | Last commit | Date |
| --- | --- | --- |
| Build.groovy | infra: Switch to urm.nvidia.com as a WAR for urm-rn.nvidia.com connection issue | 2025-03-31 13:05:29 +08:00 |
| BuildDockerImage.groovy | infra: [TRTLLM-4370] Fix the build error when build GH200 image (#3229) | 2025-04-03 17:33:50 +08:00 |
| controlCCache.groovy | infra: Switch to urm.nvidia.com as a WAR for urm-rn.nvidia.com connection issue | 2025-03-31 13:05:29 +08:00 |
| GH200ImageBuilder.groovy | infra: [TRTLLM-4370] Fix the build error when build GH200 image (#3229) | 2025-04-03 17:33:50 +08:00 |
| L0_MergeRequest.groovy | always trigger multi gpu test to protect modeling_llama.py and modeling_deepseekv3.py (#3434) | 2025-04-11 13:19:23 +08:00 |
| L0_Test.groovy | Update gh pages build script (#3405) | 2025-04-09 19:58:38 +08:00 |
| license_cpp.json | feat: Add support for FP8 MLA on Hopper and Blackwell. (#3190) | 2025-04-07 15:14:13 +08:00 |
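The headline commit (#3434) makes the merge-request pipeline run the multi-GPU test stage unconditionally, rather than only when the changed files match a filter. Below is a minimal, hypothetical Jenkins declarative-pipeline sketch of that pattern; the stage name, agent label, and test command are assumptions for illustration and are not taken from the actual L0_MergeRequest.groovy.

```groovy
// Hypothetical sketch of "always trigger the multi-GPU test":
// the stage has no `when { changeset ... }` guard, so it runs on every
// merge request, even when modeling_llama.py / modeling_deepseekv3.py
// are untouched. Names below are illustrative, not from the real pipeline.
pipeline {
    agent { label 'multi-gpu' }          // assumed label for a multi-GPU node
    stages {
        stage('Multi-GPU tests') {
            steps {
                // assumed test selector; the real job wiring differs
                sh 'pytest tests/ -m multi_gpu'
            }
        }
    }
}
```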