# TensorRT-LLM/tests/unittest/llmapi/apps
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| `_test_llm_chat.py` | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| `_test_llm_server.py` | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| `_test_openai_chat_json.py` | chore: update trtllm-serve usage doc by removing backend parameter when it uses torch as backend. (#6419) | 2025-07-30 11:11:06 -04:00 |
| `_test_openai_chat_multimodal.py` | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| `_test_openai_chat_structural_tag.py` | chore: update trtllm-serve usage doc by removing backend parameter when it uses torch as backend. (#6419) | 2025-07-30 11:11:06 -04:00 |
| `_test_openai_chat.py` | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| `_test_openai_completions.py` | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| `_test_openai_consistent_chat.py` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `_test_openai_lora.py` | [5830][feat] Improve LoRA cache memory control (#6220) | 2025-07-31 09:26:38 +03:00 |
| `_test_openai_metrics.py` | [BREAKING CHANGE]: change default backend to PyTorch in trtllm-serve (#5717) | 2025-07-21 21:09:43 +08:00 |
| `_test_openai_misc.py` | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| `_test_openai_multi_chat.py` | [TRTLLM-5208][BREAKING CHANGE] chore: make pytorch LLM the default (#5312) | 2025-06-20 03:01:10 +08:00 |
| `_test_openai_multi_gpu.py` | [BREAKING CHANGE]: change default backend to PyTorch in trtllm-serve (#5717) | 2025-07-21 21:09:43 +08:00 |
| `_test_openai_multi_nodes.py` | [BREAKING CHANGE]: change default backend to PyTorch in trtllm-serve (#5717) | 2025-07-21 21:09:43 +08:00 |
| `_test_openai_prometheus.py` | [None][feat] Core Metrics Implementation (#5785) | 2025-08-09 02:48:53 -04:00 |
| `_test_openai_reasoning.py` | [BREAKING CHANGE]: change default backend to PyTorch in trtllm-serve (#5717) | 2025-07-21 21:09:43 +08:00 |
| `_test_trtllm_serve_benchmark.py` | tests: [TRTQA-2906] add benchmark serving tests (#4901) | 2025-06-05 14:33:03 +08:00 |
| `_test_trtllm_serve_duplicated_args.py` | chore: update trtllm-serve usage doc by removing backend parameter when it uses torch as backend. (#6419) | 2025-07-30 11:11:06 -04:00 |
| `_test_trtllm_serve_example.py` | [None][chore] Enhance trtllm-serve example test (#6604) | 2025-08-06 20:30:35 +08:00 |
| `_test_trtllm_serve_lora.py` | [5830][feat] Improve LoRA cache memory control (#6220) | 2025-07-31 09:26:38 +03:00 |
| `_test_trtllm_serve_multimodal_example.py` | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |
| `openai_server.py` | feat: support abort disconnected requests (#3214) | 2025-04-07 16:14:58 +08:00 |
| `README.md` | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| `utils.py` | [TRTLLM-6785][feat] BREAKING CHANGE Enable TRTLLM sampler by default (#6216) | 2025-08-07 22:19:37 -04:00 |

This directory contains the end-to-end tests for the LLM API applications in `examples/apps`.

These tests are triggered from `test_e2e.py` rather than collected directly.
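
The test modules here carry a leading underscore (`_test_*.py`), which keeps them out of pytest's default `test_*.py` collection pattern, so a plain `pytest` run in this directory picks up nothing. As a rough sketch (the working directory and any required environment are assumptions, not verified against the repository), a single module can still be run directly by passing its path explicitly:

```shell
# Run one app test module directly from the repository root.
# The underscore prefix excludes it from default collection, so the
# file path must be spelled out; -v shows each test case as it runs.
pytest tests/unittest/llmapi/apps/_test_openai_chat.py -v
```

In the normal CI flow these modules are not invoked this way; `test_e2e.py` drives them so that server startup and teardown happen under its control.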