TensorRT-LLM/tensorrt_llm/executor
Latest commit: 4a8ac8dd62 by QI JUN (QiJune), 2025-10-17 23:52:02 -04:00: [TRTLLM-8480][chore] clean create_py_executor API (#8412)
File                | Last commit | Date
rpc/                | [TRTLLM-8189][chore] enhance GenerationExecutor with RPC (part1) (#5543) | 2025-10-05 17:28:20 +08:00
__init__.py         | chore: rename ExecutorBindingsWorker/Proxy (#4716) | 2025-05-29 10:32:35 +08:00
base_worker.py      | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00
executor.py         | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00
ipc.py              | [TRTLLM-8189][chore] enhance GenerationExecutor with RPC (part1) (#5543) | 2025-10-05 17:28:20 +08:00
postproc_worker.py  | [None][feat] perf_metrics endpoint functionality improvement (#8005) | 2025-10-02 17:43:25 -07:00
proxy.py            | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00
ray_executor.py     | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00
ray_gpu_worker.py   | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00
request.py          | [None][fix] using arrival time in llmapi when creating LlmRequest in pytorch workflow (#7553) | 2025-09-15 07:26:01 -04:00
result.py           | [None][feat] Support cached tokens for Openai server (#7637) | 2025-10-16 20:51:37 +08:00
rpc_proxy.py        | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00
rpc_worker.py       | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00
utils.py            | [#7692][fix] recognize RequestError as per-request error in background handler (#7726) | 2025-09-24 11:11:17 +08:00
worker.py           | [TRTLLM-8480][chore] clean create_py_executor API (#8412) | 2025-10-17 23:52:02 -04:00