TensorRT-LLM/tensorrt_llm/executor
Latest commit: 668a0335e4 by Yuan Tong, 2025-04-15 14:49:46 +08:00
fix: Proper error bubbling for PyExecutor (#3321)

* fix: Proper error bubbling for PyExecutor
* fix: Proper shutdown
* fix: multi gpu proper shutdown

Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
| File               | Last commit                                                                        | Date                      |
|--------------------|------------------------------------------------------------------------------------|---------------------------|
| __init__.py        | Update TensorRT-LLM (#2873)                                                        | 2025-03-11 21:13:42 +08:00 |
| executor.py        | Update (#2978)                                                                     | 2025-03-23 16:39:35 +08:00 |
| ipc.py             | chore: code cleanup for error logging and SharedMemory in proxy.py (#3432)         | 2025-04-10 21:57:06 +08:00 |
| postproc_worker.py | chore: code cleanup for error logging and SharedMemory in proxy.py (#3432)         | 2025-04-10 21:57:06 +08:00 |
| proxy.py           | fix: Proper error bubbling for PyExecutor (#3321)                                  | 2025-04-15 14:49:46 +08:00 |
| request.py         | Update (#2978)                                                                     | 2025-03-23 16:39:35 +08:00 |
| result.py          | fix: Fixing issue with first gen token being returned twice in streaming (#3427)   | 2025-04-13 22:45:09 -04:00 |
| utils.py           | make LLM-API slurm examples executable (#3402)                                     | 2025-04-13 21:42:45 +08:00 |
| worker.py          | fix: Proper error bubbling for PyExecutor (#3321)                                  | 2025-04-15 14:49:46 +08:00 |