
Python SDK developer's guide - Observability

The observability section of the Temporal Developer's guide covers the many ways to view the current state of your Temporal Application: that is, ways to view which Workflow Executions are tracked by the Temporal Platform and the state of any specified Workflow Execution, either currently or at points of an execution.


This guide is a work in progress. Some sections may be incomplete or missing for some languages. Information may change at any time.

If you can't find what you are looking for in the Developer's guide, it could be in older docs for SDKs.

This section covers features related to viewing the state of the application, including Metrics, Tracing, Logging, and Visibility.


Each Temporal SDK can emit an optional set of metrics from either the Client or the Worker process. For a complete list of metrics that can be emitted, see the SDK metrics reference.

Metrics can be scraped and stored in time series databases, such as Prometheus.

Temporal also provides dashboards that you can integrate with graphing services like Grafana.

Metrics in Python are configured globally; therefore, you should set a Prometheus endpoint before any other Temporal code.

The following example exposes a Prometheus endpoint on port 9000.

from temporalio.client import Client
from temporalio.runtime import PrometheusConfig, Runtime, TelemetryConfig

# Create a new runtime that has telemetry enabled. Create this first so the
# default Runtime is not lazily created instead.
new_runtime = Runtime(
    telemetry=TelemetryConfig(metrics=PrometheusConfig(bind_address="0.0.0.0:9000"))
)
my_client = await Client.connect("localhost:7233", runtime=new_runtime)


Tracing allows you to view the call graph of a Workflow along with its Activities and any Child Workflows.

Temporal Web's tracing capabilities mainly track Activity Execution within a Temporal context. If you need custom tracing specific to your use case, use context propagation to add tracing logic accordingly.

For information about Workflow tracing, see Tracing Temporal Workflows with DataDog.

For information about how to configure exporters and instrument your code, see Tracing Temporal Services with OTEL.

To configure tracing in Python, install the opentelemetry dependencies.

# This command installs the `opentelemetry` dependencies.
pip install temporalio[opentelemetry]

Then pass an instance of the temporalio.contrib.opentelemetry.TracingInterceptor class in the interceptors argument of Client.connect().

When your Client is connected, spans are created for all Client calls, Activities, and Workflow invocations on the Worker. Spans are created and serialized through the server to give one trace for a Workflow Execution.


Send logs and errors to a logging service, so that when things go wrong, you can see what happened.

The SDK core uses WARN for its default logging level.

You can log from a Workflow using Python's standard library, by importing the logging module (import logging).

Set your logging configuration to the level at which you want logs to be emitted. The following example sets the logging level to INFO.


Then in your Workflow, use the logger that the SDK provides. The following example logs from within the Workflow.

@workflow.defn
class SayHelloWorkflow:
    @workflow.run
    async def run(self, name: str) -> str:
        workflow.logger.info(f"Running workflow with parameter {name}")
        return await workflow.execute_activity(
            your_activity, name, start_to_close_timeout=timedelta(seconds=10)
        )

The following is an example output:

INFO:temporalio.workflow:Running workflow with parameter Temporal ({'attempt': 1, 'namespace': 'default', 'run_id': 'your-run-id', 'task_queue': 'your-task-queue', 'workflow_id': 'your-workflow-id', 'workflow_type': 'SayHelloWorkflow'})

Logs are skipped during replay by default.

Custom logger

Use Python's built-in logging facility to set a custom logger.
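A minimal sketch (the handler and format here are illustrative assumptions, not requirements): attach a custom handler to the temporalio logger namespace so that SDK and Workflow log records flow through it.

```python
import logging

# Build a custom handler with its own format.
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s - %(message)s")
)

# Attaching it to the "temporalio" namespace routes records from SDK
# loggers (such as temporalio.workflow) through this handler.
sdk_logger = logging.getLogger("temporalio")
sdk_logger.setLevel(logging.INFO)
sdk_logger.addHandler(handler)
```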


The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Cluster.

Search Attributes

The typical method of retrieving a Workflow Execution is by its Workflow Id.

However, sometimes you'll want to retrieve one or more Workflow Executions based on another property. For example, imagine you want to get all Workflow Executions of a certain type that have failed within a time range, so that you can start new ones with the same arguments.

You can do this with Search Attributes.

The steps to using custom Search Attributes are:

1. Create a new custom Search Attribute in your Cluster.
2. Set the value of the Search Attribute when starting a Workflow, or upsert it from within Workflow code.

Here is how to query Workflow Executions:

Use the list_workflows() method on the Client handle and pass a List Filter as an argument to filter the listed Workflows.

async for workflow in client.list_workflows('WorkflowType="MyWorkflowClass"'):
    print(f"Workflow: {workflow.id}")

Custom Search Attributes

After you've created custom Search Attributes in your Cluster (using tctl search-attribute create or the Cloud UI), you can set the values of the custom Search Attributes when starting a Workflow.

To set custom Search Attributes, use the search_attributes parameter of the start_workflow() method.

handle = await client.start_workflow(
    YourWorkflow.run, id="your-workflow-id", task_queue="your-task-queue",
    search_attributes={"Your-Custom-Keyword-Field": ["value"]},
)

Upsert Search Attributes

You can add or update Search Attributes from within Workflow code by upserting them.

To upsert custom Search Attributes, use the upsert_search_attributes() method.

The keys are added to or replace the existing Search Attributes, similar to dict.update().

workflow.upsert_search_attributes({"Your-Custom-Keyword-Field": ["new-value"]})
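The merge behavior can be pictured with plain dicts (an illustration of the semantics only, not SDK code):

```python
# Existing Search Attributes on the Workflow Execution.
existing = {"Your-Custom-Keyword-Field": ["value"], "Other-Field": ["x"]}

# An upsert adds new keys and replaces existing ones, like dict.update();
# keys that are not mentioned are left untouched.
existing.update({"Your-Custom-Keyword-Field": ["new-value"]})

print(existing)
# {'Your-Custom-Keyword-Field': ['new-value'], 'Other-Field': ['x']}
```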

Remove Search Attribute

To remove a Search Attribute that was previously set, call the upsert_search_attributes() method with an empty list ([]) as its value.

workflow.upsert_search_attributes({"Your-Custom-Keyword-Field": []})