TensorRT-LLM/tensorrt_llm/executor
mpikulski 1c7f601265
[https://nvbugs/5508890][fix] gen. result cleanup when using PostprocWorker (#7771)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
2025-09-18 14:01:18 +08:00
__init__.py chore: rename ExecutorBindingsWorker/Proxy (#4716) 2025-05-29 10:32:35 +08:00
executor.py [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00
ipc.py [None][chore] Mass integration of release/1.0 - 3rd (#7519) 2025-09-08 14:03:04 +08:00
postproc_worker.py [TRTLLM-1302][feat] Topk logprobs for TRT backend and top1 logprob for PyT backend (#6097) 2025-09-12 15:32:34 +08:00
proxy.py [https://nvbugs/5508890][fix] gen. result cleanup when using PostprocWorker (#7771) 2025-09-18 14:01:18 +08:00
request.py [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00
result.py [TRTLLM-1302][feat] Topk logprobs for TRT backend and top1 logprob for PyT backend (#6097) 2025-09-12 15:32:34 +08:00
utils.py fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) 2025-06-19 06:13:53 +08:00
worker.py [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) 2025-09-15 07:26:01 -04:00