TensorRT-LLM/tensorrt_llm/executor
Latest commit: 833c0dea4a by Aurelien Chartier, 2025-06-27 17:03:05 +02:00
[TRTLLM-6104] feat: add request_perf_metrics to LLMAPI (#5497)
Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
File | Last commit | Date
__init__.py | chore: rename ExecutorBindingsWorker/Proxy (#4716) | 2025-05-29 10:32:35 +08:00
executor.py | [feat] Add llm args to tune python gc threshold (#5141) | 2025-06-16 17:45:22 +08:00
ipc.py | [fix]: Fall back to HMAC to Avoid IPC Serialization Churn (#5074) | 2025-06-13 11:37:50 +08:00
postproc_worker.py | feat: Support post_proc for bench (#5122) | 2025-06-15 13:02:38 +08:00
proxy.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00
request.py | [TRTLLM-5007][feat] Add multimodal hashing support (image hashing) (#4145) | 2025-06-10 01:59:56 +08:00
result.py | [TRTLLM-6104] feat: add request_perf_metrics to LLMAPI (#5497) | 2025-06-27 17:03:05 +02:00
utils.py | fix[nvbug5298640]: trtllm-llmapi-launch multiple LLM instances (#4727) | 2025-06-19 06:13:53 +08:00
worker.py | [TRTLLM-5921][feat] Prevent serialization of entire LoRA adapters in each request (#5080) | 2025-06-26 08:15:06 +03:00
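
The latest commit (#5497) plumbs request_perf_metrics through the executor's result path (result.py) up to the high-level LLM API. A minimal sketch of how a caller might opt into those metrics is shown below; the `return_perf_metrics` flag and the `request_perf_metrics` attribute are assumptions inferred from the commit title, while `LLM`, `SamplingParams`, and `generate` are the documented entry points of the LLM API.

```python
# Minimal sketch, not taken from this repository's docs or tests.
# Assumptions: SamplingParams accepts return_perf_metrics and each completion
# exposes a request_perf_metrics attribute once #5497 is in place.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")  # any supported model

params = SamplingParams(
    max_tokens=32,
    return_perf_metrics=True,  # assumed opt-in flag added by #5497
)

for output in llm.generate(["Hello, my name is"], params):
    print(output.outputs[0].text)
    # Assumed attribute populated by the executor's result path (result.py);
    # None simply means the running build does not expose it.
    print(getattr(output.outputs[0], "request_perf_metrics", None))
```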