[TRTLLM-7030][fix] Refactor the example doc of dist-serving (#6766)

Signed-off-by: Shixiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
This commit is contained in:
Shi Xiaowei 2025-08-13 17:39:27 +08:00 committed by GitHub
parent bc5f766e0e
commit fe7dda834d
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
18 changed files with 224 additions and 57 deletions

View File

@ -336,7 +336,7 @@ cd cpp/build
`disaggServerBenchmark` only supports `decoder-only` models.
Here is the basic usage:
```
export TRTLLM_USE_MPI_KVCACHE=1
export TRTLLM_USE_UCX_KVCACHE=1
mpirun -n ${proc} benchmarks/disaggServerBenchmark --context_engine_dirs ${context_engine_0},${context_engine_1}...,${context_engine_{m-1}} \
--generation_engine_dirs ${generation_engine_0},${generation_engine_1}...,${generation_engine_{n-1}} --dataset ${dataset_path}
```
@ -344,7 +344,7 @@ This command will launch m context engines and n generation engines. You need to
for example:
```
export TRTLLM_USE_MPI_KVCACHE=1
export TRTLLM_USE_UCX_KVCACHE=1
mpirun -n 7 benchmarks/disaggServerBenchmark --context_engine_dirs ${llama_7b_tp2_pp1_dir},${llama_7b_tp1_pp1_dir} --generation_engine_dirs ${llama_7b_tp1_pp1_dir},${llama_7b_tp2_pp1_dir} --dataset ${dataset_path}
# need 6 gpus and 7 processes to launch the benchmark.

View File

@ -66,17 +66,6 @@ A. Yes, it's recommended that different executors use different GPUs. We support
### Debugging FAQs
*Q. How to handle the error `Disaggregated serving is not enabled, please check the configuration`?*
A. Please set the `backendType` of `CacheTransceiverConfig`.
```cpp
ExecutorConfig executorConfig{...};
executorConfig.setCacheTransceiverConfig(texec::CacheTransceiverConfig(BackendType::DEFAULT));
```
When the environment variable `TRTLLM_USE_MPI_KVCACHE=1` is set, TRT-LLM will transfer the KV cache using `CUDA-aware MPI`. All executor processes involved must share the same MPI world communicator. Consequently, with `TRTLLM_USE_MPI_KVCACHE=1`, TRT-LLM only supports launching multiple executors via `MPI`. Additionally, the `CommunicationMode` for the executors must be set to `kLEADER` or `kORCHESTRATOR` with `SpawnProcesses=false` for the `disaggregated-service`. These restrictions do not apply when `TRTLLM_USE_UCX_KVCACHE=1` is set.
*Q. Does TRT-LLM support using GPU direct RDMA for inter-node KV Cache transfer?*
A. Yes, TRT-LLM supports using GPU direct RDMA for inter-node KV cache transfer.

View File

@ -277,7 +277,7 @@ We also conducted performance evaluations of Qwen 3 on GB200 GPUs. The data indi
### Reproducing Steps
We provide a set of scripts to reproduce the performance data presented in this paper. Please refer to the usage instructions described in [this document](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/disaggregated/slurm).
We provide a set of scripts to reproduce the performance data presented in this paper. Please refer to the usage instructions described in [this document](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/disaggregated/slurm/benchmark).
## Future Work

View File

@ -124,10 +124,10 @@ From the `examples/cpp/executor/build` folder, you can also run the `executorExa
```
./executorExampleDisaggregated -h
```
Note that setting `TRTLLM_USE_MPI_KVCACHE=1` is required to run the disaggregated executor.
Note that setting `TRTLLM_USE_UCX_KVCACHE=1` is required to run the disaggregated executor.
For example, you can run:
```
export TRTLLM_USE_MPI_KVCACHE=1
export TRTLLM_USE_UCX_KVCACHE=1
mpirun -n <num_ranks> --allow-run-as-root --oversubscribe ./executorExampleDisaggregated --context_engine_dir <path_to_context_engine_dir> --context_rank_size <num_ranks_for_context> --generation_engine_dir <path_to_generation_engine_dir> --generation_rank_size <num_ranks_for_generation> --input_tokens ../inputTokens.csv

View File

@ -1,38 +1,64 @@
# Disaggregated Serving
To run TensorRT-LLM in disaggregated mode, you must first launch context (prefill) and generation (decode) servers using `trtllm-serve`.
Disaggregated serving relies on the `trtllm-serve` command. Compared to the standard usage of `trtllm-serve`, it requires running the command multiple times to separately start the router and the worker (context and generation) serving components. This document focuses on this approach and provides a detailed guide on how to use it.
## Launching disaggregated servers locally on single node
Please note that disaggregated serving is currently an experimental feature, so the usage described in this document may change in the future.
We use the `cache_transceiver_config` configuration to set up disaggregated serving, which includes the following parameters:
## Startup Procedure
### Configuration File
The `trtllm-serve` command accepts an extra LLM API configuration file (for example, `extra-llm-config.yaml`) through its `--extra_llm_api_options` option. In this file, the `cache_transceiver_config` field is specific to disaggregated serving; it specifies the additional parameters required for the KV cache transmission process.
```yaml
cache_transceiver_config:
# KV cache transmission backend. Valid options include `DEFAULT` (i.e., UCX), `UCX`, `NIXL`.
backend: <str>
# KV cache buffer size. Set it ≥ the maximum ISL (Input Sequence Length) for best performance.
max_tokens_in_buffer: <int>
```
`backend` specifies the communication backend for transferring the KV cache. Valid options include `DEFAULT`, `UCX`, `NIXL`, and `MPI`; the default backend is UCX.
The following example shows the `ctx_extra-llm-api-config.yaml` and `gen_extra-llm-api-config.yaml` files that are needed in the sections below.
`max_tokens_in_buffer` defines the buffer size for KV cache transfers. For optimal performance, it is recommended to set this value greater than or equal to the maximum ISL (Input Sequence Length) across all requests.
```yaml
# ctx_extra-llm-api-config.yaml
You can use multiple `trtllm-serve` commands to launch the context and generation servers that will be used
for disaggregated serving. For example, you could launch two context servers and one generation server as follows:
# The overlap scheduler for context servers is currently disabled, as it is
# not yet supported in disaggregated context server architectures.
disable_overlap_scheduler: True
cache_transceiver_config:
backend: UCX
max_tokens_in_buffer: 2048
```
```yaml
# gen_extra-llm-api-config.yaml
cache_transceiver_config:
backend: UCX
max_tokens_in_buffer: 2048
```
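If you prefer to generate these files from the shell, here is a minimal sketch that writes the same content shown above into the two files referenced by the commands in the following sections:
```bash
# Write the context-server configuration shown above.
cat > ctx_extra-llm-api-config.yaml <<'EOF'
# The overlap scheduler for context servers is currently disabled, as it is
# not yet supported in disaggregated context server architectures.
disable_overlap_scheduler: True
cache_transceiver_config:
  backend: UCX
  max_tokens_in_buffer: 2048
EOF

# Write the generation-server configuration shown above.
cat > gen_extra-llm-api-config.yaml <<'EOF'
cache_transceiver_config:
  backend: UCX
  max_tokens_in_buffer: 2048
EOF
```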
### Basic Usage
For non-SLURM clusters, particularly single-node, multi-GPU setups, it is recommended to use the standard mode, since such environments do not restrict process creation or termination.
Suppose we have three CUDA devices on the same machine. The first two devices are used to launch one context model each, and the third device is used to launch one generation model. In this case, the following commands need to be executed.
```bash
# Generate context_extra-llm-api-config.yml
# Overlap scheduler for context servers are disabled because it's not supported for disaggregated context servers yet
echo -e "disable_overlap_scheduler: True\ncache_transceiver_config:\n backend: UCX\n max_tokens_in_buffer: 2048" > context_extra-llm-api-config.yml
# Start context servers
CUDA_VISIBLE_DEVICES=0 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8001 --extra_llm_api_options ./context_extra-llm-api-config.yml &> log_ctx_0 &
CUDA_VISIBLE_DEVICES=1 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8002 --extra_llm_api_options ./context_extra-llm-api-config.yml &> log_ctx_1 &
CUDA_VISIBLE_DEVICES=0 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--host localhost --port 8001 \
--extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_0 &
# Generate gen_extra-llm-api-config.yml
echo -e "cache_transceiver_config:\n backend: UCX\n max_tokens_in_buffer: 2048" > gen_extra-llm-api-config.yml
CUDA_VISIBLE_DEVICES=1 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--host localhost --port 8002 \
--extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_1 &
# Start generation servers
CUDA_VISIBLE_DEVICES=2 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8003 --extra_llm_api_options ./gen_extra-llm-api-config.yml &> log_gen_0 &
# Start generation server
CUDA_VISIBLE_DEVICES=2 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--host localhost --port 8003 \
--extra_llm_api_options ./gen_extra-llm-api-config.yaml &> log_gen_0 &
```
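Model loading can take a while, so before launching the disaggregated server you may want to wait until every worker is up. A minimal readiness-check sketch, assuming each `trtllm-serve` instance exposes the standard `/health` endpoint on its port:
```bash
# Poll each worker's health endpoint until it responds successfully.
# Adjust the ports to match the servers launched above.
for port in 8001 8002 8003; do
  until curl -sf "http://localhost:${port}/health" > /dev/null; do
    sleep 5
  done
  echo "server on port ${port} is ready"
done
```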
Once the context and generation servers are launched, you can launch the disaggregated
@ -40,11 +66,16 @@ server, which will accept requests from clients and do the orchestration between
and generation servers. The disaggregated server can be launched with:
```bash
# Start proxy
trtllm-serve disaggregated -c disagg_config.yaml
```
where `disagg_config.yaml` contains information about the context and generation servers. For the current example,
it would look like:
```yaml
# disagg_config.yaml
hostname: localhost
port: 8000
backend: pytorch
@ -61,13 +92,11 @@ generation_servers:
Clients can then send requests to the disaggregated server at `localhost:8000`, which is an OpenAI API compatible endpoint.
## Launching disaggregated servers on SLURM clusters
Refer to [Disaggregated Inference Benchmark Scripts](./slurm/).
## Sending requests to the disaggregated server
#### Sending requests to the disaggregated server
Once the context, generation and disaggregated servers are launched, you can send requests to the disaggregated server using curl:
```bash
curl http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
@ -78,33 +107,124 @@ curl http://localhost:8000/v1/completions \
"temperature": 0
}' -w "\n"
```
Alternatively, you can use the provided client, which parses prompts from a file and sends requests to the `chat` endpoint of the disaggregated server specified in the `disagg_config.yaml` file:
```
python3 ./clients/disagg_client.py -c disagg_config.yaml -p ./clients/prompts.json -e chat
```
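Because the endpoint is OpenAI API compatible, you can also hit the chat route directly with curl; a sketch, assuming the disaggregated server exposes the standard `/v1/chat/completions` path:
```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        "messages": [{"role": "user", "content": "What is NVIDIA famous for?"}],
        "max_tokens": 32,
        "temperature": 0
    }' -w "\n"
```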
### Launching disaggregated servers on SLURM clusters
To simplify usage, TensorRT-LLM internally relies on MPI to spawn processes. However, some clusters do not offer this kind of process flexibility. For these cases, we provide the `trtllm-llmapi-launch` tool, which launches all processes at once. Therefore, when using TensorRT-LLM on a SLURM cluster, please refer to the following method.
#### Single-Node Execution
After allocating the node and entering an interactive session, you can run the following commands; wrapping each command with `trtllm-llmapi-launch` prevents process spawning.
```bash
# Start context servers
CUDA_VISIBLE_DEVICES=0 trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--host localhost --port 8001 \
--extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_0 &
CUDA_VISIBLE_DEVICES=1 trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--host localhost --port 8002 \
--extra_llm_api_options ./ctx_extra-llm-api-config.yaml &> log_ctx_1 &
# Start generation server
CUDA_VISIBLE_DEVICES=2 trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--host localhost --port 8003 \
--extra_llm_api_options ./gen_extra-llm-api-config.yaml &> log_gen_0 &
# Start proxy
trtllm-llmapi-launch trtllm-serve disaggregated -c disagg_config.yaml
```
#### Multi-Node Execution
If the model you are running cannot fit within a single node and requires multiple nodes, you can use [srun](https://slurm.schedmd.com/srun.html) to launch the parallel jobs, as described below.
```bash
srun -A <account> -p <partition> -t <time> -N <num_nodes> --ntasks-per-node=<tasks_per_node> \
--container-image=<container_image> \
--container-mounts=<mount_paths> \
--mpi=<mpi_type> \
bash -c '<your_command>'
```
When using `srun`, the `-N` and `--ntasks-per-node` options are two critical parameters that
determine how your job is distributed across the cluster.
- `-N <num_nodes>`: Specifies how many physical nodes to use.
- `--ntasks-per-node=<num_tasks>`: Specifies how many tasks to run on each node.
Together, they define the total number of tasks your job will run:
$$
\text{Total tasks} = N \times \text{ntasks-per-node}
$$
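For example, the context launch below uses `-N 2 --ntasks-per-node=4`, which yields 2 × 4 = 8 tasks to match `--tp_size 8`, while the generation launch uses `-N 1 --ntasks-per-node=4`, which yields 4 tasks to match `--tp_size 4`.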
Therefore, the commands can be written as follows:
```bash
# The `container_image` must have the TensorRT-LLM wheel package pre-installed.
# Once the task is successfully launched, an API service will be available externally at http://host_ip:PORT.
# Launch a context with `tp_size=8` using two 4-GPU nodes.
srun -A <account> -p <partition> -t <time> \
-N 2 --ntasks-per-node=4 \
--container-image=<container_image> \
--container-mounts=<mount_paths> \
--mpi=pmix \
bash -c "trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --tp_size 8 --host 0.0.0.0 --port $PORT --extra_llm_api_options $WORK/ctx_extra-llm-api-config.yaml"
# Launch a generation with `tp_size=4` using one 4-GPU node.
srun -A <account> -p <partition> -t <time> \
-N 1 --ntasks-per-node=4 \
--container-image=<container_image> \
--container-mounts=<mount_paths> \
--mpi=pmix \
bash -c "trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --tp_size 4 --host 0.0.0.0 --port $PORT --extra_llm_api_options $WORK/gen_extra-llm-api-config.yaml"
# Launch a proxy.
# The `host_ip` mentioned above needs to be replaced with the IP address of the host machine that is accessible
# to external clients, and the context/generation server addresses need to be filled into the `disagg_config.yaml` file.
srun -A <account> -p <partition> -t <time> \
-N 1 --ntasks-per-node=1 \
--container-image=<container_image> \
--container-mounts=<mount_paths> \
--mpi=pmix \
bash -c "trtllm-llmapi-launch trtllm-serve disaggregated -c $WORK/disagg_config.yaml"
```
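For this multi-node setup, the `disagg_config.yaml` passed to the proxy must point at the nodes running the context and generation servers instead of `localhost`. A minimal sketch, where `ctx_hostname` and `gen_hostname` are placeholders for the actual node addresses and the ports must match the values passed to `trtllm-serve` via `--port` (for example, 8001 for the context server and 8002 for the generation server):
```bash
# Write a disagg_config.yaml for the multi-node example.
# Replace ctx_hostname/gen_hostname with the real node addresses.
cat > disagg_config.yaml <<'EOF'
hostname: localhost
port: 8000
backend: pytorch
context_servers:
  num_instances: 1
  urls:
    - "ctx_hostname:8001"
generation_servers:
  num_instances: 1
  urls:
    - "gen_hostname:8002"
EOF
```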
Additionally, we offer a fully executable script; please refer to [Disaggregated SLURM Scripts](./slurm/simple_example/).
## Dynamic scaling (Prototype)
Currently, TensorRT-LLM supports dynamic addition and removal of servers by leveraging ETCD. To enable this feature, you should start the context and generation servers with the additional flags ```--metadata_server_config_file``` and ```--server_role```.
Before launching the context and generation servers, you should first start the ETCD server. By default, the ETCD server listens for client requests at ```localhost:2379```.
```bash
etcd
```
After this, you can enable the dynamic scaling feature for the use case above as follows:
```bash
export TRTLLM_USE_UCX_KVCACHE=1
# Context servers
CUDA_VISIBLE_DEVICES=0 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8001 --server_role CONTEXT --extra_llm_api_options ./context_extra-llm-api-config.yml --metadata_server_config_file ./metadata_config.yml &> log_ctx_0 &
CUDA_VISIBLE_DEVICES=1 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8002 --server_role CONTEXT --extra_llm_api_options ./context_extra-llm-api-config.yml --metadata_server_config_file ./metadata_config.yml &> log_ctx_1 &
CUDA_VISIBLE_DEVICES=0 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8001 --server_role CONTEXT --extra_llm_api_options ./ctx_extra-llm-api-config.yaml --metadata_server_config_file ./metadata_config.yaml &> log_ctx_0 &
CUDA_VISIBLE_DEVICES=1 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8002 --server_role CONTEXT --extra_llm_api_options ./ctx_extra-llm-api-config.yaml --metadata_server_config_file ./metadata_config.yaml &> log_ctx_1 &
# Generation servers
CUDA_VISIBLE_DEVICES=2 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8003 --server_role GENERATION --extra_llm_api_options ./gen_extra-llm-api-config.yml --metadata_server_config_file ./metadata_config.yml &> log_gen_0 &
CUDA_VISIBLE_DEVICES=2 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --host localhost --port 8003 --server_role GENERATION --extra_llm_api_options ./gen_extra-llm-api-config.yaml --metadata_server_config_file ./metadata_config.yaml &> log_gen_0 &
```
For the disaggregated server, you should also specify the ```--metadata_server_config_file``` option, as shown below:
```bash
trtllm-serve disaggregated -c disagg_config.yaml -m ./metadata_config.yml
trtllm-serve disaggregated -c disagg_config.yaml -m ./metadata_config.yaml
```
The metadata config file looks like the following:
@ -120,27 +240,29 @@ The ```hostname``` and ```port``` must match those used when starting the ETCD s
### Dynamically adding servers
Users can add servers by directly launching them with trtllm-serve. For example, you can start an additional generation server as follows:
```bash
CUDA_VISIBLE_DEVICES=3 trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
--host localhost --port 8004 \
--server_role GENERATION \
--extra_llm_api_options ./gen_extra-llm-api-config.yml \
--metadata_server_config_file ./metadata_config.yml &> log_gen_0 &
--extra_llm_api_options ./gen_extra-llm-api-config.yaml \
--metadata_server_config_file ./metadata_config.yaml &> log_gen_0 &
```
TensorRT-LLM will automatically register any newly launched server with the ETCD server, allowing the router to send new requests to the added server.
### Dynamically removing servers
When removing servers, special attention is required in the current version. You need to first remove the corresponding key from the ETCD server. After you see the log message "Server xxxx is removed," you can then safely shut down the server. This part will be improved soon.
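If you need to inspect or delete the registration keys by hand, a sketch using the standard `etcdctl` client is shown below; the actual key layout is defined by TensorRT-LLM's metadata registration, so list the keys first rather than assuming a particular prefix:
```bash
# List all keys currently stored in ETCD (the TensorRT-LLM key layout is
# implementation-defined, so inspect it before deleting anything).
etcdctl get --prefix "" --keys-only

# Delete the key corresponding to the server you want to remove, then wait
# for the "Server xxxx is removed" log message before shutting it down.
etcdctl del <server_key>
```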
## Launching context and generation servers using MPI (Deprecated)
## Startup Procedure with MPI Worker (Deprecated)
In the past, we used `disaggregated_mpi_worker` to allow context nodes and generation nodes to operate within the same MPI world. However, this approach conflicts with the dynamic node addition and removal functionality. As a result, `disaggregated_mpi_worker` has been marked as deprecated, and the corresponding examples will be gradually removed.
One can also launch all context and generation servers using MPI. This can be done by issuing the following command:
```bash
export TRTLLM_USE_MPI_KVCACHE=1
mpirun -n <total_num_ranks> trtllm-serve disaggregated_mpi_worker -c disagg_config.yaml
```
where `<total_num_ranks>` is the sum of `TP*PP` for all context and generation servers. For the example above, `total_num_ranks` is 3
where `total_num_ranks` is the sum of `TP*PP` for all context and generation servers. For the example above, `total_num_ranks` is 3
since `TP` and `PP` are 1 for the two context servers and the one generation server.
The `disagg_config.yaml` file must now contain the configuration parameters of the context and generation servers. For example,
@ -174,10 +296,9 @@ generation_servers:
```
Once the context and generation servers are launched, you can again launch the disaggregated server with
```bash
trtllm-serve disaggregated -c disagg_config.yaml
```
## Known Issues
The MPI communication backend for kvCache transfer has been deprecated and may not be supported in the future. When using the MPI backend, the environment variable `TRTLLM_USE_MPI_KVCACHE=1` should be set to avoid conflicts between mpi4py and kvCache transfer.
The MPI communication backend for KV cache transfer has been deprecated and may not be supported in the future. When using the MPI backend, the environment variable `TRTLLM_USE_MPI_KVCACHE=1` should be set to avoid conflicts between mpi4py and KV cache transfer.

View File

@ -0,0 +1,6 @@
# The overlap scheduler for context servers is currently disabled, as it is
# not yet supported in disaggregated context server architectures.
disable_overlap_scheduler: True
cache_transceiver_config:
backend: UCX
max_tokens_in_buffer: 2048

View File

@ -0,0 +1,12 @@
# Please replace `ctx_hostname` and `gen_hostname` with the actual addresses.
hostname: localhost
port: 8000
backend: pytorch
context_servers:
num_instances: 1
urls:
- "ctx_hostname:8001"
generation_servers:
num_instances: 1
urls:
- "gen_hostname:8002"

View File

@ -0,0 +1,3 @@
cache_transceiver_config:
backend: UCX
max_tokens_in_buffer: 2048

View File

@ -0,0 +1,36 @@
#!/bin/bash
#SBATCH --partition=${partition}
#SBATCH --account=${account}
#SBATCH --job-name=${job_name}
#SBATCH --time=02:00:00
container_image=""
mount_paths=""
work_path=""
ctx_port=8001
gen_port=8002
# The `container_image` must have the TensorRT-LLM wheel package pre-installed.
# Once the task is successfully launched, an API service will be available externally at http://host_ip:PORT.
# Launch a context with `tp_size=8` using two 4-GPU nodes.
srun --container-image=${container_image} \
--container-mounts=${mount_paths} \
-N 2 --ntasks-per-node=4 \
--mpi=pmix \
bash -c "trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --tp_size 8 --host 0.0.0.0 --port ${ctx_port} --extra_llm_api_options ${work_path}/ctx_extra-llm-api-config.yaml" &
# Launch a generation with `tp_size=4` using one 4-GPU node.
srun --container-image=${container_image} \
--container-mounts=${mount_paths} \
-N 1 --ntasks-per-node=4 \
--mpi=pmix \
bash -c "trtllm-llmapi-launch trtllm-serve TinyLlama/TinyLlama-1.1B-Chat-v1.0 --tp_size 8 --host 0.0.0.0 --port ${gen_port} --extra_llm_api_options ${work_path}/gen_extra-llm-api-config.yaml" &
# Launch a proxy.
# The `host_ip` mentioned above needs to be replaced with the IP address of the host machine that is accessible
# to external clients, and the context/generation server addresses need to be filled into the `disagg_config.yaml` file.
srun --container-image=${container_image} \
--container-mounts=${mount_paths} \
-N 1 --ntasks-per-node=1 \
--mpi=pmix \
bash -c "trtllm-llmapi-launch trtllm-serve disaggregated -c ${work_path}/disagg_config.yaml"

View File

@ -17,7 +17,7 @@ Please note that:
### Core Scripts
Note that the core implementation of the slurm scripts is included in `examples/disaggregated/slurm`.
Note that the core implementation of the slurm scripts is included in `examples/disaggregated/slurm/benchmark`.
1. `submit.sh` - Main entry point for submitting benchmark jobs
2. `process_gen_iterlog.py` - Processes benchmark results and generates reports
@ -35,8 +35,8 @@ Before running the scripts, ensure you have:
### Running Benchmarks
```bash
# Refer to `examples/disaggregated/slurm/`
# Please find the `disaggr_torch.slurm` script in the `examples/disaggregated/slurm/` directory.
# Refer to `examples/disaggregated/slurm/benchmark/`
# Please find the `disaggr_torch.slurm` script in the `examples/disaggregated/slurm/benchmark/` directory.
# Make sure that SLURM parameters are correctly set in `disaggr_torch.slurm` before executing this script.
./submit.sh
```

View File

@ -1,6 +1,6 @@
#!/bin/bash
echo "Please find the \`disaggr_torch.slurm\` script in the \`examples/disaggregated/slurm/\` directory."
echo "Please find the \`disaggr_torch.slurm\` script in the \`examples/disaggregated/slurm/benchmark/\` directory."
partition=<partition>
account=<account>