TensorRT-LLMs/tensorrt_llm/serve
Commit 83f37614ef by Erin
feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388)
* support return logprob in llmapi

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

update and add test

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

stability test

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

* revert removal of old flag

Signed-off-by: Erin Ho <erinh@nvidia.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>

---------

Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Signed-off-by: Erin Ho <erinh@nvidia.com>
2025-05-01 12:47:14 -04:00
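The feature above returns token log-probabilities through the LLM API and, via `openai_protocol.py`, the OpenAI-compatible `trtllm-serve` endpoint. As a minimal sketch of what a client request enabling Top-K logprobs could look like: the `logprobs`/`top_logprobs` field names follow the standard OpenAI chat-completions schema, while the model name and endpoint URL below are placeholders, not taken from this commit.

```python
import json

# Hedged sketch of an OpenAI-style chat-completions request asking the
# server to return log-probabilities. Field names follow the public
# OpenAI chat-completions schema; the model name and endpoint URL are
# hypothetical placeholders.
payload = {
    "model": "my-served-model",                       # placeholder model id
    "messages": [{"role": "user", "content": "Hello!"}],
    "logprobs": True,           # return logprobs for each sampled token
    "top_logprobs": 5,          # Top-K alternative tokens per position
    "max_tokens": 16,
}

body = json.dumps(payload)
# A real client would POST `body` to the server's
# /v1/chat/completions route, e.g. http://localhost:8000/v1/chat/completions.
print(body)
```

In the response, each choice then carries a `logprobs` object whose entries list the sampled token's log-probability alongside its Top-K alternatives, mirroring the OpenAI response shape.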
__init__.py               Update TensorRT-LLM (#2820)                                      2025-02-25 21:21:49 +08:00
chat_utils.py             feat: trtllm-serve multimodal support (#3590)                    2025-04-19 05:01:28 +08:00
openai_disagg_server.py   feat: Disaggregated router class (#3584)                         2025-04-19 00:34:12 +08:00
openai_protocol.py        feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388)  2025-05-01 12:47:14 -04:00
openai_server.py          feat: trtllm-serve multimodal support (#3590)                    2025-04-19 05:01:28 +08:00
postprocess_handlers.py   chore: Unify Python NVTX call (#3450)                            2025-04-15 23:25:36 +08:00
router.py                 feat: Disaggregated router class (#3584)                         2025-04-19 00:34:12 +08:00