TensorRT-LLM/tensorrt_llm/executor
Zero Zeng 953f4fd69e
[None][fix] acceptance rate calculation fix in benchmark_serving (#6746)
Signed-off-by: Zero Zeng <38289304+zerollzeng@users.noreply.github.com>
2025-08-19 17:29:36 +08:00
__init__.py chore: rename ExecutorBindingsWorker/Proxy (#4716) 2025-05-29 10:32:35 +08:00
executor.py [None][fix] Refactoring to avoid circular import when importing torch models (#6720) 2025-08-11 18:00:42 -04:00
ipc.py [fix]: Fall back to HMAC to Avoid IPC Serialization Churn (#5074) 2025-06-13 11:37:50 +08:00
postproc_worker.py [None][feat] Core Metrics Implementation (#5785) 2025-08-09 02:48:53 -04:00
proxy.py [TRTLLM-5508][feat] check input tokens + improve error handling (#5170) 2025-08-05 18:27:43 +01:00
request.py [None][feat] Add support of scheduling attention dp request (#6246) 2025-08-01 20:38:01 -04:00
result.py [None][fix] acceptance rate calculation fix in benchmark_serving (#6746) 2025-08-19 17:29:36 +08:00
utils.py fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) 2025-06-19 06:13:53 +08:00
worker.py [https://nvbugs/5427043][fix] request length exceeds max_num_tokens (#6821) 2025-08-14 13:31:12 +08:00