TensorRT-LLM/tensorrt_llm/executor
Latest commit: 1ebceb790d by ixlmar, 2025-08-05 18:27:43 +01:00
[TRTLLM-5508][feat] check input tokens + improve error handling (#5170)
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
__init__.py chore: rename ExecutorBindingsWorker/Proxy (#4716) 2025-05-29 10:32:35 +08:00
executor.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
ipc.py [fix]: Fall back to HMAC to Avoid IPC Serialization Churn (#5074) 2025-06-13 11:37:50 +08:00
postproc_worker.py feat: Support post_proc for bench (#5122) 2025-06-15 13:02:38 +08:00
proxy.py [TRTLLM-5508][feat] check input tokens + improve error handling (#5170) 2025-08-05 18:27:43 +01:00
request.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
result.py [fix] Add detokenization-based stop word logic to LLM API (#5948) 2025-07-29 10:16:59 -07:00
utils.py fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) 2025-06-19 06:13:53 +08:00
worker.py [TRTLLM-5508][feat] check input tokens + improve error handling (#5170) 2025-08-05 18:27:43 +01:00
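The file names suggest a proxy/worker layout: a front-end proxy (proxy.py) hands requests (request.py) to worker processes (worker.py, postproc_worker.py) over an IPC layer (ipc.py) and collects results (result.py). The sketch below is a minimal, generic illustration of that pattern using only the Python standard library; the names GenerationRequest, GenerationResult, ExecutorProxy, worker_main, and the token-range check are invented for illustration and do not reflect the actual TensorRT-LLM classes or APIs.

```python
import multiprocessing as mp
from dataclasses import dataclass, field
from typing import List

# Hypothetical request/result records, loosely mirroring the roles that
# request.py and result.py appear to play; NOT the actual TensorRT-LLM classes.
@dataclass
class GenerationRequest:
    request_id: int
    token_ids: List[int]

@dataclass
class GenerationResult:
    request_id: int
    output_token_ids: List[int] = field(default_factory=list)
    error: str = ""

def worker_main(requests, results, vocab_size):
    """Worker-process loop: validate input tokens, 'generate', return results."""
    while True:
        req = requests.get()
        if req is None:          # shutdown sentinel sent by the proxy
            break
        # Input-token check in the spirit of the commit title above: reject
        # out-of-vocabulary ids up front and report the error to the caller.
        bad = [t for t in req.token_ids if not 0 <= t < vocab_size]
        if bad:
            results.put(GenerationResult(req.request_id,
                                         error=f"token ids out of range: {bad}"))
            continue
        # Placeholder "generation": echo the prompt tokens back.
        results.put(GenerationResult(req.request_id, list(req.token_ids)))

class ExecutorProxy:
    """Front-end object that submits requests to a background worker process."""
    def __init__(self, vocab_size=100):
        self._requests = mp.Queue()
        self._results = mp.Queue()
        self._worker = mp.Process(target=worker_main,
                                  args=(self._requests, self._results, vocab_size))
        self._worker.start()

    def submit(self, request):
        self._requests.put(request)

    def await_result(self):
        return self._results.get()

    def shutdown(self):
        self._requests.put(None)   # tell the worker loop to exit
        self._worker.join()

if __name__ == "__main__":
    proxy = ExecutorProxy(vocab_size=100)
    proxy.submit(GenerationRequest(0, [1, 2, 3]))
    proxy.submit(GenerationRequest(1, [5, 999]))   # 999 is out of range -> error result
    for _ in range(2):
        print(proxy.await_result())
    proxy.shutdown()
```

In this sketch, errors are returned to the caller as data rather than raised inside the worker, and a sentinel value drives shutdown; these are common choices for queue-based IPC and are shown only to illustrate the kind of request validation and error propagation the commit titles hint at, not how TensorRT-LLM implements them.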