mirror of https://github.com/NVIDIA/TensorRT-LLM.git (synced 2026-01-13 22:18:36 +08:00)
[None][chore] Change trt-server to trtllm-server in OpenTelemetry readme (#9173)
Signed-off-by: Stanley Sun <stsun@nvidia.com>
Co-authored-by: Larry Xu <197874197+LarryXFly@users.noreply.github.com>
parent 5e5300898b
commit 96cfdd8a72
@@ -49,7 +49,7 @@ export JAEGER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ja
 export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=grpc
 export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=grpc://$JAEGER_IP:4317
 export OTEL_EXPORTER_OTLP_TRACES_INSECURE=true
-export OTEL_SERVICE_NAME="trt-server"
+export OTEL_SERVICE_NAME="trtllm-server"
 ```
 
 Then run TensorRT-LLM with OpenTelemetry, and make sure to set `return_perf_metrics` to true in the model configuration:
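For context, a minimal end-to-end sketch of the setup this hunk documents: start a local Jaeger instance, export the OTLP variables with the corrected service name, and launch `trtllm-serve`. The `jaegertracing/all-in-one` image, the `COLLECTOR_OTLP_ENABLED` variable, and passing `return_perf_metrics` through an `--extra_llm_api_options` YAML file are assumptions about the surrounding README and CLI version, not part of this commit.

```bash
# Start Jaeger locally; port 16686 serves the UI, 4317 accepts OTLP over gRPC.
# COLLECTOR_OTLP_ENABLED may be needed on older all-in-one images (assumption).
docker run -d --name jaeger \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 -p 4317:4317 \
  jaegertracing/all-in-one:latest

# OTLP exporter settings, matching the README after this change.
export JAEGER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' jaeger)
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=grpc://$JAEGER_IP:4317
export OTEL_EXPORTER_OTLP_TRACES_INSECURE=true
export OTEL_SERVICE_NAME="trtllm-server"

# return_perf_metrics must be true for spans to carry timing data; supplying
# it via an --extra_llm_api_options YAML is an assumption about this CLI.
cat > otel_options.yaml <<'EOF'
return_perf_metrics: true
EOF

trtllm-serve models/Qwen3-8B/ \
  --otlp_traces_endpoint="$OTEL_EXPORTER_OTLP_TRACES_ENDPOINT" \
  --extra_llm_api_options otel_options.yaml
```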
@@ -61,7 +61,7 @@ trtllm-serve models/Qwen3-8B/ --otlp_traces_endpoint="$OTEL_EXPORTER_OTLP_TRACES
 ## Send requests and find traces in Jaeger
 
 You can send a request to the server and view the traces in [Jaeger UI](http://localhost:16686/).
-The traces should be visible under the service name "trt-server".
+The traces should be visible under the service name "trtllm-server".
 
 ## Configuration for Disaggregated Serving
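To exercise the renamed service end to end, here is a hedged request example; the default port 8000 and the OpenAI-compatible `/v1/completions` route are assumptions about `trtllm-serve`, not something this diff states.

```bash
# Send one completion request so a trace is emitted (port 8000 is assumed;
# the "model" value mirrors the path used when launching the server).
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "models/Qwen3-8B/", "prompt": "Hello", "max_tokens": 16}'

# Then open http://localhost:16686/ and select the "trtllm-server" service.
```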