TensorRT-LLM/tensorrt_llm/executor
Erin e9d360180c
fix: [nvbug 5321627] handle cases when the TRT backend returns more logits than output tokens (#4921)
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
2025-06-06 07:12:42 +08:00
__init__.py Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
executor.py feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388) 2025-05-01 12:47:14 -04:00
ipc.py fix: llmapi-launch add trtllm-bench test with engine building (#4… (#4550) 2025-06-01 08:38:01 +08:00
postproc_worker.py feat: return logits in PyTorch flow (#3221) 2025-04-24 16:56:03 -07:00
proxy.py feat: Support Top-K logprobs and prompt_logprobs in LLMAPI (#3388) 2025-05-01 12:47:14 -04:00
request.py feat: Add multimodal embedding field in LlmRequest (#3855) 2025-05-01 12:23:30 +08:00
result.py fix: [nvbug 5321627] handle cases when the TRT backend returns more logits than output tokens (#4921) 2025-06-06 07:12:42 +08:00
utils.py fix: llmapi-launch add trtllm-bench test with engine building (#4… (#4550) 2025-06-01 08:38:01 +08:00
worker.py [https://nvbugspro.nvidia.com/bug/5243740][fix] deduce default max_tokens for trtllm-serve (#4265) 2025-05-19 00:34:40 +08:00