TensorRT-LLM/tensorrt_llm/serve
Latest commit: 6a5806b747 by Yilin Fan, 2025-09-05 18:10:22 -04:00
[TRTLLM-7292][feat] Support multi-threaded tokenizers for trtllm-serve (#7515)
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
File                     Last commit                                                                      Date
scripts                  [None] [fix] Minor fixes to slurm and benchmark scripts (#7453)                  2025-09-02 01:57:03 -04:00
__init__.py              Update TensorRT-LLM (#2820)                                                      2025-02-25 21:21:49 +08:00
chat_utils.py            [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828)                   2025-08-21 08:54:12 +08:00
harmony_adapter.py       [TRTLLM-7207][feat] Chat completions API for gpt-oss (#7261)                     2025-08-28 10:22:06 +08:00
metadata_server.py       feat: Add integration of etcd (#3738)                                            2025-06-03 20:01:44 +08:00
openai_disagg_server.py  [None][feat] Add logging for OAI disagg server (#7232)                           2025-08-26 21:02:03 -07:00
openai_protocol.py       [TRTLLM-7207][feat] Chat completions API for gpt-oss (#7261)                     2025-08-28 10:22:06 +08:00
openai_server.py         [TRTLLM-7292][feat] Support multi-threaded tokenizers for trtllm-serve (#7515)  2025-09-05 18:10:22 -04:00
postprocess_handlers.py  [None][fix] acceptance rate calculation fix in benchmark_serving (#6746)        2025-08-19 17:29:36 +08:00
router.py                feat: Dynamically remove servers in PD (#5270)                                   2025-06-25 09:50:04 +08:00