
Observability - Python SDK feature guide

The observability section of the Temporal Developer's guide covers the many ways to view the current state of your Temporal Application—that is, ways to view which Workflow Executions are tracked by the Temporal Platform and the state of any specified Workflow Execution, either currently or at points of an execution.

This section covers features related to viewing the state of the application, including metrics, tracing, logging, and the Visibility APIs.

How to emit metrics

Each Temporal SDK is capable of emitting an optional set of metrics from either the Client or the Worker process. For a complete list of metrics capable of being emitted, see the SDK metrics reference.

Metrics can be scraped and stored in time-series databases such as Prometheus.

Temporal also provides a dashboard you can integrate with graphing services like Grafana.

Metrics in Python are configured globally; therefore, you should set a Prometheus endpoint before any other Temporal code.

The following example exposes a Prometheus endpoint on port 9000.

from temporalio.client import Client
from temporalio.runtime import PrometheusConfig, Runtime, TelemetryConfig

# Create a new runtime that has telemetry enabled. Create this first to avoid
# the default Runtime from being lazily created.
new_runtime = Runtime(
    telemetry=TelemetryConfig(metrics=PrometheusConfig(bind_address="0.0.0.0:9000"))
)
my_client = await Client.connect("my.temporal.host:7233", runtime=new_runtime)

How to set up Tracing

Tracing allows you to view the call graph of a Workflow along with its Activities and any Child Workflows.

Temporal Web's tracing capabilities mainly track Activity Execution within a Temporal context. If you need custom tracing specific for your use case, you should make use of context propagation to add tracing logic accordingly.

For information about how to configure exporters and instrument your code, see Tracing Temporal Services with OTEL.

To configure tracing in Python, install the opentelemetry dependencies.

# This command installs the `opentelemetry` dependencies.
# Quote the extras so the brackets are not interpreted by your shell.
pip install "temporalio[opentelemetry]"

Then pass an instance of the temporalio.contrib.opentelemetry.TracingInterceptor class in the interceptors argument of Client.connect().

When your Client is connected, spans are created for all Client calls, Activities, and Workflow invocations on the Worker. Span context is propagated through the server, giving a single trace for a Workflow Execution.

How to log from a Workflow

Send logs and errors to a logging service, so that when things go wrong, you can see what happened.

The SDK core uses WARN for its default logging level.

You can log from a Workflow with Python's standard library by importing the logging module.

Set your logging configuration to the level you want to expose. The following example sets the logging level to INFO.

logging.basicConfig(level=logging.INFO)

Then, in your Workflow, log through the Workflow-aware logger. The following example logs a Workflow input parameter.


# ...
workflow.logger.info("Workflow input parameter: %s", name)

How to provide a custom logger

Use Python's built-in logging facility to configure a custom logger.
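For instance, here is a minimal sketch using only the standard library. The logger name, level, and format string are illustrative choices, not requirements.

```python
import logging

# Configure a named logger with its own handler and format.
custom_logger = logging.getLogger("my_app.workflows")
custom_logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
custom_logger.addHandler(handler)

custom_logger.info("custom logger configured")
```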

How to use Visibility APIs

The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Cluster.

How to use Search Attributes

The typical method of retrieving a Workflow Execution is by its Workflow Id.

However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments.

You can do this with Search Attributes.

  • Default Search Attributes like WorkflowType, StartTime and ExecutionStatus are automatically added to Workflow Executions.
  • Custom Search Attributes can contain their own domain-specific data (like customerId or numItems).

The steps to using custom Search Attributes are:

  • Create a new Search Attribute in your Cluster in the CLI or Web UI.
    • For example: temporal operator search-attribute create --name CustomKeywordField --type Text
      • Replace CustomKeywordField with the name of your Search Attribute.
      • Replace Text with a type value associated with your Search Attribute: Text | Keyword | Int | Double | Bool | Datetime | KeywordList
  • Set the value of the Search Attribute for a Workflow Execution:
    • On the Client by including it as an option when starting the Execution.
    • In the Workflow by calling UpsertSearchAttributes.
  • Read the value of the Search Attribute:
    • On the Client by calling DescribeWorkflow.
    • In the Workflow by looking at WorkflowInfo.
  • Query Workflow Executions by the Search Attribute using a List Filter:

Here is how to query Workflow Executions:

Use the list_workflows() method on the Client handle and pass a List Filter as an argument to filter the listed Workflows.


# ...
async for workflow in client.list_workflows('WorkflowType="GreetingWorkflow"'):
    print(f"Workflow: {workflow.id}")

How to set custom Search Attributes

After you've created custom Search Attributes in your Cluster (using tctl search-attribute create or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

To set custom Search Attributes, use the search_attributes parameter of the start_workflow() method.


# ...
handle = await client.start_workflow(
    GreetingWorkflow.run,
    id="search-attributes-workflow-id",
    task_queue="search-attributes-task-queue",
    search_attributes={"CustomKeywordField": ["old-value"]},
)

How to upsert Search Attributes

You can upsert Search Attributes to add or update Search Attributes from within Workflow code.

To upsert custom Search Attributes, use the upsert_search_attributes() method.

The keys are added to or replace the existing Search Attributes, similar to dict.update().


# ...
workflow.upsert_search_attributes({"CustomKeywordField": ["new-value"]})

How to remove a Search Attribute from a Workflow

To remove a Search Attribute that was previously set, call the upsert_search_attributes() method with an empty list ([]) as that attribute's value.


# ...
workflow.upsert_search_attributes({"CustomKeywordField": []})