TensorRT-LLM/tensorrt_llm/serve
Latest commit 24fc1f9acf by Zheng Duan (2025-09-15 07:26:01 -04:00): [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553). Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com>
| Name | Last commit | Date |
| --- | --- | --- |
| scripts | [https://nvbugs/5369366] [fix] Report failing requests (#7060) | 2025-09-04 12:56:23 -07:00 |
| __init__.py | Update TensorRT-LLM (#2820) | 2025-02-25 21:21:49 +08:00 |
| chat_utils.py | [TRTLLM-6771][feat] Support MMMU for multimodal models (#6828) | 2025-08-21 08:54:12 +08:00 |
| harmony_adapter.py | [TRTLLM-7779][feat] Support multiple postprocess workers for chat completions API (#7508) | 2025-09-08 11:11:35 +08:00 |
| metadata_server.py | feat: Add integration of etcd (#3738) | 2025-06-03 20:01:44 +08:00 |
| openai_disagg_server.py | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| openai_protocol.py | [TRTLLM-1302][feat] Topk logprobs for TRT backend and top1 logprob for PyT backend (#6097) | 2025-09-12 15:32:34 +08:00 |
| openai_server.py | [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) | 2025-09-15 07:26:01 -04:00 |
| postprocess_handlers.py | [TRTLLM-1302][feat] Topk logprobs for TRT backend and top1 logprob for PyT backend (#6097) | 2025-09-12 15:32:34 +08:00 |
| responses_utils.py | [TRTLLM-7208][feat] Implement basic functionalities for Responses API (#7341) | 2025-09-02 07:08:22 -04:00 |
| router.py | feat: Dynamically remove servers in PD (#5270) | 2025-06-25 09:50:04 +08:00 |