Files in this directory:

- `__init__.py`
- `_test_llm_chat.py`
- `_test_llm_server.py`
- `_test_openai_chat_multimodal.py`
- `_test_openai_chat_structural_tag.py`
- `_test_openai_chat.py`
- `_test_openai_completions.py`
- `_test_openai_consistent_chat.py`
- `_test_openai_lora.py`
- `_test_openai_metrics.py`
- `_test_openai_misc.py`
- `_test_openai_multi_chat.py`
- `_test_openai_multi_gpu.py`
- `_test_openai_multi_nodes.py`
- `_test_openai_reasoning.py`
- `_test_trtllm_serve_benchmark.py`
- `_test_trtllm_serve_duplicated_args.py`
- `_test_trtllm_serve_example.py`
- `_test_trtllm_serve_lora.py`
- `_test_trtllm_serve_multimodal_example.py`
- `openai_server.py`
- `README.md`

This directory contains the end-to-end tests for the LLM API applications in `examples/apps`.

These tests are not collected by pytest directly; they are triggered from `test_e2e.py`. The leading underscore in the filenames keeps them out of pytest's default `test_*.py` discovery pattern.
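The underscore-prefix convention can be illustrated with pytest's default collection glob, `test_*.py`: files named `_test_*.py` do not match it, so only an entry point such as `test_e2e.py` is discovered. A minimal sketch (the file list here is illustrative, taken from this directory):

```python
import fnmatch

# A sample of files from this directory plus a hypothetical entry point.
files = [
    "_test_llm_chat.py",
    "_test_openai_chat.py",
    "openai_server.py",
    "test_e2e.py",
]

# pytest's default python_files pattern is "test_*.py"; underscore-prefixed
# test modules are skipped by collection and must be invoked explicitly.
collected = [f for f in files if fnmatch.fnmatch(f, "test_*.py")]
print(collected)  # only test_e2e.py matches
```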