TensorRT-LLM/tensorrt_llm/executor
Latest commit: 1c6e490894 by QI JUN (2025-11-06 22:37:03 -08:00)
[TRTLLM-9065][chore] remove PyTorchConfig completely (#8856)
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| rpc/ | [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) | 2025-11-03 17:59:49 -08:00 |
| __init__.py | chore: rename ExecutorBindingsWorker/Proxy (#4716) | 2025-05-29 10:32:35 +08:00 |
| base_worker.py | [TRTLLM-9065][chore] remove PyTorchConfig completely (#8856) | 2025-11-06 22:37:03 -08:00 |
| executor.py | [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) | 2025-11-03 17:59:49 -08:00 |
| ipc.py | [None][chore] replace print_colored_debug with logger_debug (#8417) | 2025-10-22 17:54:38 +08:00 |
| postproc_worker.py | [None][feat] perf_metrics endpoint functionality improvement (#8005) | 2025-10-02 17:43:25 -07:00 |
| proxy.py | [None][chore] replace print_colored_debug with logger_debug (#8417) | 2025-10-22 17:54:38 +08:00 |
| ray_executor.py | [None][fix] Change Ray submit() to use async RPC (#8636) | 2025-10-28 00:56:13 -04:00 |
| ray_gpu_worker.py | [https://nvbugs/5527655][feat] Add NUMA-aware CPU affinity autoconfig (#8805) | 2025-11-06 11:59:46 -08:00 |
| request.py | [None][feat] Add opentelemetry tracing (#5897) | 2025-10-27 18:51:07 +08:00 |
| result.py | [None][feat] Return logprobs incrementally in torch backend (#8785) | 2025-11-07 10:23:39 +08:00 |
| rpc_proxy.py | [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) | 2025-11-03 17:59:49 -08:00 |
| rpc_worker.py | [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) | 2025-11-03 17:59:49 -08:00 |
| utils.py | [None][chore] replace print_colored_debug with logger_debug (#8417) | 2025-10-22 17:54:38 +08:00 |
| worker.py | [https://nvbugs/5527655][feat] Add NUMA-aware CPU affinity autoconfig (#8805) | 2025-11-06 11:59:46 -08:00 |