TensorRT-LLM/tensorrt_llm/executor
Latest commit: 639c939a4f by Venky Ganesh (2025-12-01 10:04:49 -08:00): [TRTC-1943][feat] Env vars override support in LLM API (#9104)
| Name | Last commit | Last updated |
| --- | --- | --- |
| `rpc` | [TRTLLM-8988][feat] Unify MPI & Ray's req/response handling with RPC Client/Server (#8765) | 2025-11-13 17:21:24 -08:00 |
| `__init__.py` | chore: rename ExecutorBindingsWorker/Proxy (#4716) | 2025-05-29 10:32:35 +08:00 |
| `base_worker.py` | [https://nvbugs/5564465][fix] Overwrite only if default_max_tokens is legal (#8538) | 2025-11-20 12:43:13 -05:00 |
| `executor.py` | [None][chore] Optimize perf for the RPC executor and add some profile utilities to llm-api (#8415) | 2025-11-03 17:59:49 -08:00 |
| `ipc.py` | [None][chore] replace print_colored_debug with logger_debug (#8417) | 2025-10-22 17:54:38 +08:00 |
| `postproc_worker.py` | [None][feat] perf_metrics endpoint functionality improvement (#8005) | 2025-10-02 17:43:25 -07:00 |
| `proxy.py` | [None][chore] replace print_colored_debug with logger_debug (#8417) | 2025-10-22 17:54:38 +08:00 |
| `ray_executor.py` | [TRTLLM-8988][feat] Unify MPI & Ray's req/response handling with RPC Client/Server (#8765) | 2025-11-13 17:21:24 -08:00 |
| `ray_gpu_worker.py` | [TRTLLM-8988][feat] Unify MPI & Ray's req/response handling with RPC Client/Server (#8765) | 2025-11-13 17:21:24 -08:00 |
| `request.py` | [None][feat] Add opentelemetry tracing (#5897) | 2025-10-27 18:51:07 +08:00 |
| `result.py` | [https://nvbugs/5687820][fix] Remove self.abort() in DetokenizedGenerationResult (#9449) | 2025-11-27 22:54:40 +08:00 |
| `rpc_proxy.py` | [TRTLLM-8988][feat] Unify MPI & Ray's req/response handling with RPC Client/Server (#8765) | 2025-11-13 17:21:24 -08:00 |
| `rpc_worker.py` | [TRTLLM-8988][feat] Unify MPI & Ray's req/response handling with RPC Client/Server (#8765) | 2025-11-13 17:21:24 -08:00 |
| `utils.py` | [None][chore] replace print_colored_debug with logger_debug (#8417) | 2025-10-22 17:54:38 +08:00 |
| `worker.py` | [TRTC-1943][feat] Env vars override support in LLM API (#9104) | 2025-12-01 10:04:49 -08:00 |
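
The files listed here form the executor layer that sits behind TensorRT-LLM's LLM API: the proxy/worker modules (`proxy.py`/`worker.py`, `rpc_proxy.py`/`rpc_worker.py`, `ray_executor.py`/`ray_gpu_worker.py`) dispatch generation requests to the backend, while `request.py` and `result.py` carry the request and result objects. For orientation only, the minimal sketch below drives this layer through the public `LLM` class, which constructs the appropriate executor internally; the model identifier, prompt, and sampling values are placeholder assumptions, not taken from this listing.

```python
# Minimal sketch (assumptions: model id, prompt, and sampling values are placeholders).
# The LLM class builds and manages the executor/proxy/worker machinery internally,
# so user code never touches these modules directly.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")   # HF model id or a local engine/checkpoint path
params = SamplingParams(max_tokens=32, temperature=0.8)  # per-request sampling settings

# Each prompt becomes a request handed to the executor; results stream back via result.py's types.
for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
```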