TensorRT-LLM/tensorrt_llm/executor
Cao Dong 62010c0ab7
[None][feat] Return topk logprobs in torch backend (#7976)
Signed-off-by: Cao Dong <87467313+dcaox@users.noreply.github.com>
2025-09-30 09:32:37 +08:00
__init__.py chore: rename ExecutorBindingsWorker/Proxy (#4716) 2025-05-29 10:32:35 +08:00
base_worker.py [None][feat] Return topk logprobs in torch backend (#7976) 2025-09-30 09:32:37 +08:00
executor.py [#7692][fix] recognize RequestError as per-request error in background handler (#7726) 2025-09-24 11:11:17 +08:00
ipc.py [https://nvbugs/5503440][fix] Fix potential hang due to wrong type of ZMQ socket and protocol for worker_init_status_queue (#7646) 2025-09-19 18:13:33 +08:00
postproc_worker.py [TRTLLM-1302][feat] Topk logprobs for TRT backend and top1 logprob for PyT backend (#6097) 2025-09-12 15:32:34 +08:00
proxy.py [https://nvbugs/5503440][fix] Fix potential hang due to wrong type of ZMQ socket and protocol for worker_init_status_queue (#7646) 2025-09-19 18:13:33 +08:00
request.py [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00
result.py [TRTLLM-7015] [feat] Enable prompt_logprobs in pytorch backend (#7580) 2025-09-23 18:48:10 -07:00
utils.py [#7692][fix] recognize RequestError as per-request error in background handler (#7726) 2025-09-24 11:11:17 +08:00
worker.py [https://nvbugs/5495789][feat] Optionally disable server GC and worker GC (#7995) 2025-09-26 21:39:24 +08:00
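The headline change for this directory (#7976) returns top-k log probabilities from the PyTorch backend. Below is a minimal sketch of how a caller might request them through the high-level LLM API; the `logprobs` and `prompt_logprobs` fields of `SamplingParams` and the shape of the returned `logprobs` data are assumptions inferred from the commit messages above (#6097, #7580), not a verbatim description of the executor's interface.

```python
# Minimal sketch (assumed API surface): ask the PyTorch backend for
# top-k logprobs via the high-level LLM API. Exact field names on the
# returned objects may differ between releases.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Assumption: `logprobs=N` requests the top-N log probabilities per
# generated token, and `prompt_logprobs` covers prompt tokens
# (see result.py, #7580).
params = SamplingParams(max_tokens=16, logprobs=3, prompt_logprobs=1)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    completion = out.outputs[0]
    print(completion.text)
    # Expected to hold per-step top-k logprob information.
    print(completion.logprobs)
```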