TensorRT-LLM/tests/integration/defs/examples/serve
Latest commit: 2a5b8800e1 [https://nvbugs/5754977][fix] Use free port for serve test (#10878)
Author: JunyiXu-nv, 2026-02-02 16:26:46 +08:00
Signed-off-by: Junyi Xu <219237550+JunyiXu-nv@users.noreply.github.com>
Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
Name                           Last commit                                                                                                                    Last updated
test_configs                   [TRTLLM-7876][test] Test trtllm-serve with --extra_llm_api_options (#7492)                                                    2025-09-04 10:34:38 +08:00
test_serve_negative.py         [TRTLLM-9181][feat] improve disagg-server prometheus metrics; synchronize workers' clocks when workers are dynamic (#9726)   2025-12-16 05:16:32 -08:00
test_serve.py                  [https://nvbugs/5754977][fix] Use free port for serve test (#10878)                                                            2026-02-02 16:26:46 +08:00
test_spec_decoding_metrics.py  [None][feat] Auto download speculative models from HF for pytorch backend, add speculative_model field alias (#10099)        2026-01-14 21:06:07 -08:00