TensorRT-LLMs/tensorrt_llm/hlapi/mgmn_worker_node.py

#!/usr/bin/env python3
import logging

from mpi4py.futures import MPICommExecutor
from mpi4py.MPI import COMM_WORLD

# For multi-node MPI, the worker ranks launch an MPICommExecutor server loop to
# accept tasks sent from rank 0; only rank 0 is handed a non-None executor.
with MPICommExecutor(COMM_WORLD) as executor:
    if executor is not None:
        raise RuntimeError(f"rank{COMM_WORLD.rank} should not have executor")

logging.warning(f"worker rank{COMM_WORLD.rank} quit")
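For reference, below is a minimal sketch of the rank-0 side of this pattern, assuming rank 0 runs a driver like this while the remaining ranks run the worker file above under the same MPI launch. The `square` task is a hypothetical example for illustration, not TensorRT-LLM code.

#!/usr/bin/env python3
# Hypothetical rank-0 driver sketch; `square` is an illustrative task only.
from mpi4py.futures import MPICommExecutor
from mpi4py.MPI import COMM_WORLD


def square(x):
    # Executed on whichever worker rank picks up the submitted task.
    return x * x


with MPICommExecutor(COMM_WORLD, root=0) as executor:
    if executor is not None:  # only rank 0 receives a real executor
        results = list(executor.map(square, range(8)))
        print(results)

All ranks must enter MPICommExecutor on the same communicator, so the driver and the worker script would typically be started together in one MPI job (for example via an MPMD-style mpirun invocation).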