TensorRT-LLM/examples/disaggregated/disagg_config.yaml

# Address and port where the disaggregated server listens for incoming requests
hostname: localhost
port: 8000
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
# Fraction of free GPU memory reserved for the KV cache on each worker
free_gpu_memory_fraction: 0.25
backend: "pytorch"
pytorch_backend_config:
  use_cuda_graph: False
  # Opt out of the overlap scheduler, which is enabled by default as of #4174
  disable_overlap_scheduler: True
# Context (prefill) servers
context_servers:
  num_instances: 1
  tensor_parallel_size: 1
  pipeline_parallel_size: 1
  kv_cache_config:
    # KV cache memory fraction for the context servers (overrides the top-level 0.25)
    free_gpu_memory_fraction: 0.2
  urls:
    - "localhost:8001"
# Generation (decode) servers
generation_servers:
  num_instances: 1
  tensor_parallel_size: 1
  pipeline_parallel_size: 1
  urls:
    - "localhost:8002"