dify/api/extensions/otel/semconv/dify.py
GareArc 8ceb1ed96f
feat(telemetry): add input/output token split to enterprise OTEL traces
- Add PROMPT_TOKENS and COMPLETION_TOKENS to WorkflowNodeExecutionMetadataKey
- Store prompt/completion tokens in node execution metadata JSON (no schema change)
- Calculate workflow-level token split by summing node executions on-the-fly
- Export gen_ai.usage.input_tokens and output_tokens to enterprise telemetry
- Add semantic convention constants for token attributes
- Maintain backward compatibility (historical data shows null)

BREAKING: None
MIGRATION: None (uses JSON metadata, no schema changes)
2026-02-05 20:12:30 -08:00
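
A minimal sketch of the on-the-fly workflow-level aggregation described in the commit message above. The node-execution object, its execution_metadata JSON field, and the metadata key strings are assumptions for illustration; only the idea of summing prompt/completion tokens across a run's node executions comes from the commit.

import json


def sum_workflow_token_split(node_executions) -> tuple[int, int]:
    """Sum prompt/completion tokens across one workflow run's node executions.

    Assumes each node execution stores its metadata as a JSON string on an
    `execution_metadata` attribute, keyed by the (assumed) values of
    WorkflowNodeExecutionMetadataKey.PROMPT_TOKENS / COMPLETION_TOKENS.
    """
    prompt_tokens = 0
    completion_tokens = 0
    for node_execution in node_executions:
        metadata = json.loads(node_execution.execution_metadata or "{}")
        prompt_tokens += int(metadata.get("prompt_tokens", 0))
        completion_tokens += int(metadata.get("completion_tokens", 0))
    return prompt_tokens, completion_tokens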

36 lines
1002 B
Python

"""Dify-specific semantic convention definitions."""
class DifySpanAttributes:
"""Attribute names for Dify-specific spans."""
APP_ID = "dify.app_id"
"""Application identifier."""
TENANT_ID = "dify.tenant_id"
"""Tenant identifier."""
USER_TYPE = "dify.user_type"
"""User type, e.g. Account, EndUser."""
STREAMING = "dify.streaming"
"""Whether streaming response is enabled."""
WORKFLOW_ID = "dify.workflow_id"
"""Workflow identifier."""
INVOKE_FROM = "dify.invoke_from"
"""Invocation source, e.g. SERVICE_API, WEB_APP, DEBUGGER."""
INVOKED_BY = "dify.invoked_by"
"""Invoked by, e.g. end_user, account, user."""
USAGE_INPUT_TOKENS = "gen_ai.usage.input_tokens"
"""Number of input tokens (prompt tokens) used."""
USAGE_OUTPUT_TOKENS = "gen_ai.usage.output_tokens"
"""Number of output tokens (completion tokens) generated."""
USAGE_TOTAL_TOKENS = "gen_ai.usage.total_tokens"
"""Total number of tokens used."""