# TensorRT-LLM/tests/unittest/llmapi/apps
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | test: reorganize tests folder hierarchy (#2996) | 2025-03-27 12:07:53 +08:00 |
| `_test_disagg_serving_multi_nodes.py` | [https://nvbugs/5550671][fix] fix disagg-serving multinodes test failure (#8307) | 2025-10-16 22:46:19 +08:00 |
| `_test_llm_chat.py` | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| `_test_llm_server.py` | Add thread leak check and fix thread/memory leak issues. (#3270) | 2025-04-08 19:03:18 +08:00 |
| `_test_openai_cache_salt.py` | [TRTLLM-7398][feat] Support KV cache salting for secure KV cache reuse (#7106) | 2025-09-06 17:58:32 -04:00 |
| `_test_openai_chat_guided_decoding.py` | [TRTLLM-9295][fix] use greedy decoding in test_openai_compatible_json_schema (#9305) | 2025-11-20 08:32:23 +01:00 |
| `_test_openai_chat_harmony.py` | [None][chore] Enable multiple postprocess workers tests for chat completions api (#7602) | 2025-09-15 12:16:44 +08:00 |
| `_test_openai_chat_multimodal.py` | [https://nvbugs/5685428][fix] fix test_openai_chat_multimodal.py (#9406) | 2025-11-24 16:56:33 -08:00 |
| `_test_openai_chat.py` | [TRTLLM-8598][feat] enable n > 1 in OpenAI API with PyTorch backend (#8951) | 2025-11-07 17:47:35 -08:00 |
| `_test_openai_completions.py` | [TRTLLM-8598][feat] enable n > 1 in OpenAI API with PyTorch backend (#8951) | 2025-11-07 17:47:35 -08:00 |
| `_test_openai_consistent_chat.py` | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| `_test_openai_lora.py` | [https://nvbugs/5390853][fix] Fix _test_openai_lora.py - disable cuda graph (#6965) | 2025-08-17 16:56:16 +03:00 |
| `_test_openai_metrics.py` | [BREAKING CHANGE]: change default backend to PyTorch in trtllm-serve (#5717) | 2025-07-21 21:09:43 +08:00 |
| `_test_openai_misc.py` | [TRTLLM-8269][test] do not explicitly pass temperature=0 to select greedy sampling (#8110) | 2025-10-02 10:20:32 +02:00 |
| `_test_openai_mmencoder.py` | [TRTLLM-7326][feat] Add standalone multimodal encoder (#6743) | 2025-08-19 21:42:50 -07:00 |
| `_test_openai_multi_chat.py` | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| `_test_openai_multi_gpu.py` | [BREAKING CHANGE]: change default backend to PyTorch in trtllm-serve (#5717) | 2025-07-21 21:09:43 +08:00 |
| `_test_openai_multi_nodes.py` | [BREAKING CHANGE]: change default backend to PyTorch in trtllm-serve (#5717) | 2025-07-21 21:09:43 +08:00 |
| `_test_openai_perf_metrics.py` | [TRTLLM-6549][feat] add perf metrics endpoint to openai server and openai disagg server (#6985) | 2025-08-26 15:34:44 +08:00 |
| `_test_openai_prometheus.py` | [None][feat] Add trtllm_ prefix for exposed metrics (#8845) | 2025-11-06 15:27:18 +08:00 |
| `_test_openai_reasoning.py` | [None][feat] Support Qwen3 reasoning parser (#8000) | 2025-10-21 14:08:39 +08:00 |
| `_test_openai_responses.py` | [TRTLLM-7208][feat] Implement basic functionalities for Responses API (#7341) | 2025-09-02 07:08:22 -04:00 |
| `_test_openai_tool_call.py` | [TRTLLM-8214][feat] Support Qwen3 tool parser (#8216) | 2025-10-29 15:48:29 +08:00 |
| `_test_trtllm_serve_benchmark.py` | [TRTLLM-7070][feat] add gpt-oss chunked prefill tests (#7779) | 2025-09-22 00:12:43 -07:00 |
| `_test_trtllm_serve_duplicated_args.py` | chore: update trtllm-serve usage doc by removing backend parameter when it uses torch as backend. (#6419) | 2025-07-30 11:11:06 -04:00 |
| `_test_trtllm_serve_example.py` | [None][chore] Enhance trtllm-serve example test (#6604) | 2025-08-06 20:30:35 +08:00 |
| `_test_trtllm_serve_lora.py` | [5830][feat] Improve LoRA cache memory control (#6220) | 2025-07-31 09:26:38 +03:00 |
| `_test_trtllm_serve_multimodal_benchmark.py` | [https://nvbugs/5494698][fix] skip gemma3 27b on blackwell (#7505) | 2025-09-10 21:09:27 +08:00 |
| `_test_trtllm_serve_multimodal_example.py` | [TRTLLM-8737][feat] Support media_io_kwargs on trtllm-serve (#8528) | 2025-10-24 12:53:40 -04:00 |
| `_test_trtllm_serve_top_logprobs.py` | [TRTLLM-1302][feat] Topk logprobs for TRT backend and top1 logprob for PyT backend (#6097) | 2025-09-12 15:32:34 +08:00 |
| `openai_server.py` | [None][feat] perf_metrics endpoint functionality improvement (#8005) | 2025-10-02 17:43:25 -07:00 |
| `README.md` | [TRTLLM-8214][feat] Support Qwen3 tool parser (#8216) | 2025-10-29 15:48:29 +08:00 |
| `test_chat_utils.py` | [None][feat] Support custom chat template for tool calling (#9297) | 2025-11-25 22:07:04 +08:00 |
| `test_harmony_channel_validation.py` | [https://nvbugs/5521799][fix] add harmony channel validation (#8837) | 2025-11-03 02:31:54 -08:00 |
| `test_tool_parsers.py` | [None][fixes] Add tool call parsing fixes and Qwen3 coder parser (#8817) | 2025-11-13 04:34:38 -08:00 |
| `utils.py` | [TRTLLM-7182][test] add multi-nodes test for disagg-serving (#7470) | 2025-09-24 08:31:56 +08:00 |

This directory contains the end-to-end tests for `trtllm-serve`.

These tests are triggered from `test_e2e.py`; the leading underscore on the `_test_*.py` files keeps pytest's default `test_*.py` collection from picking them up directly.
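Most of the `_test_openai_*.py` files exercise the OpenAI-compatible HTTP API that `trtllm-serve` exposes. As a minimal illustration (the model name and prompt below are placeholders, not values taken from this suite), a chat-completions request body sent to `/v1/chat/completions` can be assembled like this:

```python
import json


def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,
        # temperature=0 selects greedy decoding, which keeps test
        # outputs reproducible across runs.
        "temperature": 0.0,
    }


body = build_chat_request("TinyLlama-1.1B-Chat", "Say hello.")
print(json.dumps(body, indent=2))
```

The tests here POST bodies of this shape to a locally launched server and assert on the response fields.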