Online Serving Examples with trtllm-serve

We provide a CLI command, trtllm-serve, to launch a FastAPI server compatible with the OpenAI API. This directory contains client examples for querying the server; you can check the source code of each example, or refer to the command documentation and examples for detailed information and usage guidelines.
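As a minimal sketch of the workflow, the snippet below builds a chat-completion request payload and shows how it would be sent to a locally running server. It assumes the server listens on localhost:8000 (a common default for OpenAI-compatible servers); the model name is a placeholder, not a recommendation.

```shell
# Build the JSON payload for an OpenAI-compatible chat completion request.
# The model name below is a placeholder; use the model you launched with
# e.g.  trtllm-serve <model-name-or-path>
PAYLOAD='{
  "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
  "messages": [{"role": "user", "content": "Where is New York?"}],
  "max_tokens": 32
}'
echo "$PAYLOAD"

# With the server running, uncomment to send the request
# (assumes the default local address and port):
# curl -s http://localhost:8000/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

The same request can be issued from any OpenAI-compatible client library by pointing its base URL at the local server.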