
Self-hosted Visibility feature setup

A Visibility store is set up as a part of your Persistence store to enable listing and filtering details about Workflow Executions that exist on your Temporal Cluster.

A Visibility store is required in a Temporal Cluster setup because the Temporal Web UI and CLI use it to pull Workflow Execution data; it also enables features such as batch operations on a group of Workflow Executions.

With the Visibility store, you can use List Filters with Search Attributes to list and filter Workflow Executions that you want to review.

Setting up advanced Visibility lets you create and use multiple custom Search Attributes in your List Filters.

For details, see Search Attributes.
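For example, once a Visibility store is configured, you can apply a List Filter from the CLI. A minimal sketch follows; ExecutionStatus and WorkflowType are default Search Attributes, and the Workflow Type value is a placeholder.

# List running Workflow Executions of a given type using a List Filter.
temporal workflow list --query 'ExecutionStatus = "Running" AND WorkflowType = "YourWorkflowType"'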

Note that, in addition to Elasticsearch, Temporal Server version 1.20 and later supports advanced Visibility features on MySQL (v8.0.17 and later), PostgreSQL (v12 and later), and SQLite (v3.31.0 and later).

To enable advanced Visibility on a SQL database, run a supported database version (MySQL v8.0.17 or later, or PostgreSQL v12 or later) with Temporal Server v1.20 or later, and upgrade your Visibility database schema as described in the sections below.

Beginning with Temporal Server v1.21, you can set up a secondary Visibility store in your Temporal Cluster to enable Dual Visibility. This is useful for migrating your Visibility store database.

Supported databases

The following databases are supported as Visibility stores:

  • MySQL v5.7 and later. Use v8.0.17 (or later) with Temporal Server v1.20 or later for advanced Visibility capabilities. Because standard Visibility is deprecated beginning with Temporal Server v1.21, support for older versions of MySQL will be dropped.
  • PostgreSQL v9.6 and later. Use v12 (or later) with Temporal Server v1.20 or later for advanced Visibility capabilities. Because standard Visibility is deprecated beginning with Temporal Server v1.21, support for older versions of PostgreSQL will be dropped.
  • SQLite v3.31.0 and later for advanced Visibility capabilities.
  • Cassandra. Support for Cassandra as a Visibility database is deprecated beginning with Temporal Server v1.21. For information on migrating from Cassandra to any of the supported databases, see Migrating Visibility database.
  • Elasticsearch (supported versions are listed in the Elasticsearch section below). We recommend operating a Temporal Cluster with Elasticsearch as your Visibility store for any use case that spawns more than a few Workflow Executions.

You can use any combination of the supported databases for your Persistence and Visibility stores. For updates, check Server release notes.
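For illustration, the following is a minimal sketch of a Persistence configuration that pairs a Cassandra default store with a PostgreSQL Visibility store; the store names and connection values are placeholders, and the layout follows the patterns shown in the sections below.

#...
persistence:
  defaultStore: default
  visibilityStore: postgres-visibility
  datastores:
    default:
      cassandra:
        hosts: "127.0.0.1"
        keyspace: "temporal"
    postgres-visibility:
      sql:
        pluginName: "postgres12"
        databaseName: "temporal_visibility"
        connectAddr: "127.0.0.1:5432"
        connectProtocol: "tcp"
        user: "username_for_auth"
        password: "password_for_auth"
#...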

How to set up MySQL Visibility store

Support, stability, and dependency info
  • MySQL v5.7 and later.
  • Support for MySQL v5.7 will be deprecated for all Temporal Server versions after v1.20.
  • With Temporal Server version 1.20 and later, advanced Visibility is available on MySQL v8.0.17 and later.

You can set MySQL as your Visibility store. Verify supported versions before you proceed.

If using MySQL v8.0.17 or later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster.

Persistence configuration

Set your MySQL Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store mysql-visibility and define the datastore configuration in your Temporal Cluster configuration YAML.

#...
persistence:
  #...
  visibilityStore: mysql-visibility
  #...
  datastores:
    default:
      #...
    mysql-visibility:
      sql:
        pluginName: "mysql8" # For MySQL v8.0.17 and later. For earlier versions, use the "mysql" plugin.
        databaseName: "temporal_visibility"
        connectAddr: " " # Remote address of this database; for example, 127.0.0.1:3306
        connectProtocol: " " # Protocol; for example, tcp
        user: "username_for_auth"
        password: "password_for_auth"
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: "1h"
#...

For details on the configuration parameters and values, see Cluster configuration.

To enable advanced Visibility features on your MySQL Visibility store, upgrade to MySQL v8.0.17 or later with Temporal Server v1.20 or later. See Upgrade Server for details on how to upgrade your Temporal Server and database schemas.

For example configuration templates, see MySQL Visibility store configuration.

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schema defined for your supported database version.

The following example shows how the auto-setup.sh script sets up your Visibility store.

#...
# set your MySQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${MYSQL_SEEDS:=}"
: "${MYSQL_USER:=}"
: "${MYSQL_PWD:=}"
: "${MYSQL_TX_ISOLATION_COMPAT:=false}"

#...
# set connection details
#...
# set up MySQL schema
setup_mysql_schema() {
    #...
    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/mysql/${MYSQL_VERSION_DIR}/visibility/versioned
    if [[ ${SKIP_DB_CREATE} != true ]]; then
        temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" create
    fi
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" setup-schema -v 0.0
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}

Note that the script uses temporal-sql-tool to run the setup.
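For reference, a manual run of the same steps might look like the following sketch; the endpoint, port, user, database name, and schema path are placeholders, and the password is supplied through the tool's password option or environment rather than on the command line.

# Create the Visibility database, initialize schema versioning, and apply the versioned schema.
temporal-sql-tool --ep 127.0.0.1 -u temporal -p 3306 --db temporal_visibility create
temporal-sql-tool --ep 127.0.0.1 -u temporal -p 3306 --db temporal_visibility setup-schema -v 0.0
temporal-sql-tool --ep 127.0.0.1 -u temporal -p 3306 --db temporal_visibility update-schema -d ./schema/mysql/v8/visibility/versioned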

How to set up PostgreSQL Visibility store

Support, stability, and dependency info
  • PostgreSQL v9.6 and later.
  • With Temporal Server version 1.20 and later, advanced Visibility is available on PostgreSQL v12 and later.
  • Support for PostgreSQL v9.6 through v11 will be deprecated for all Temporal Server versions after v1.20; we recommend upgrading to PostgreSQL 12 or later.

You can set PostgreSQL as your Visibility store. Verify supported versions before you proceed.

If using PostgreSQL v12 or later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster.

Persistence configuration

Set your PostgreSQL Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store postgres-visibility and define the datastore configuration in your Temporal Cluster configuration YAML.

#...
persistence:
  #...
  visibilityStore: postgres-visibility
  #...
  datastores:
    default:
      #...
    postgres-visibility:
      sql:
        pluginName: "postgres12" # For PostgreSQL v12 and later. For earlier versions, use the "postgres" plugin.
        databaseName: "temporal_visibility"
        connectAddr: " " # Remote address of this database; for example, 127.0.0.1:5432
        connectProtocol: " " # Protocol; for example, tcp
        user: "username_for_auth"
        password: "password_for_auth"
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: "1h"
#...

To enable advanced Visibility features on your PostgreSQL Visibility store, upgrade to PostgreSQL v12 or later with Temporal Server v1.20 or later. See Upgrade Server for details on how to upgrade your Temporal Server and database schemas.

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schema defined for your supported database version.

The following example shows how the auto-setup.sh script sets up your PostgreSQL Visibility store.

#...
# set your PostgreSQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${POSTGRES_SEEDS:=}"
: "${POSTGRES_USER:=}"
: "${POSTGRES_PWD:=}"

#...
# set connection details
#...
# set up PostgreSQL schema
setup_postgres_schema() {
    #...
    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/postgresql/${POSTGRES_VERSION_DIR}/visibility/versioned
    if [[ ${VISIBILITY_DBNAME} != "${POSTGRES_USER}" && ${SKIP_DB_CREATE} != true ]]; then
        temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" create
    fi
    temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}

Note that the script uses temporal-sql-tool to run the setup.
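As a quick sanity check after the setup completes, you can confirm that the executions_visibility table exists; the host, port, and user in this sketch are placeholders.

# List the tables in the Visibility database and look for executions_visibility.
psql -h 127.0.0.1 -p 5432 -U temporal -d temporal_visibility -c '\dt'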

How to set up SQLite Visibility store

Support, stability, and dependency info
  • SQLite v3.31.0 and later.

You can set SQLite as your Visibility store. Verify supported versions before you proceed.

Temporal supports only an in-memory database with SQLite; this means that the database is automatically created when Temporal Server starts and is destroyed when Temporal Server stops.

You can change the configuration to use a file-based database so that it is preserved when Temporal Server stops. However, if you use a file-based SQLite database, upgrading your database schema to enable advanced Visibility features is not supported; in this case, you must delete the database and create it again to upgrade.

If using SQLite v3.31.0 and later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster.

Persistence configuration

Set your SQLite Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store sqlite-visibility and define the datastore configuration in your Temporal Cluster configuration YAML.

persistence:
  # ...
  visibilityStore: sqlite-visibility
  # ...
  datastores:
    # ...
    sqlite-visibility:
      sql:
        user: "username_for_auth"
        password: "password_for_auth"
        pluginName: "sqlite"
        databaseName: "default"
        connectAddr: "localhost"
        connectProtocol: "tcp"
        connectAttributes:
          mode: "memory"
          cache: "private"
        maxConns: 1
        maxIdleConns: 1
        maxConnLifetime: "1h"
        tls:
          enabled: false
          caFile: ""
          certFile: ""
          keyFile: ""
          enableHostVerification: false
          serverName: ""

SQLite (v3.31.0 and later) has advanced Visibility enabled by default.
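If you opt for the file-based SQLite database described earlier, a hedged sketch of the datastore block follows; the file path is a placeholder, and the mode value is an assumption based on standard SQLite connection attributes.

sqlite-visibility:
  sql:
    pluginName: "sqlite"
    databaseName: "/path/to/temporal_visibility.db" # a file on disk instead of an in-memory database
    connectAddr: "localhost"
    connectProtocol: "tcp"
    connectAttributes:
      mode: "rwc" # read-write-create; replaces mode: "memory" from the example above
    maxConns: 1
    maxIdleConns: 1
    maxConnLifetime: "1h"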

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schemas defined (by supported versions) in https://github.com/temporalio/temporal/blob/master/schema/sqlite/v3/visibility/schema.sql.

For an example of setting up the SQLite schema, see Temporalite setup.

How to set up Cassandra Visibility store

Support, stability, and dependency info
  • Support for Cassandra as a Visibility database is deprecated beginning with Temporal Server v1.21. For updates, check the Temporal Server release notes.
  • We recommend migrating from Cassandra to any of the other supported databases for Visibility.

You can set Cassandra as your Visibility store. Verify supported versions before you proceed.

Advanced Visibility is not supported with Cassandra.

To enable advanced Visibility features, use any of the supported databases, such as MySQL, PostgreSQL, SQLite, or Elasticsearch, as your Visibility store. We recommend using Elasticsearch for any Temporal Cluster setup that handles more than a few Workflow Executions because it supports the request load on the Visibility store and helps optimize performance.

To migrate from Cassandra to a supported SQL database, see Migrating Visibility database.

Persistence configuration

Set your Cassandra Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store cass-visibility and define the datastore configuration in your Temporal Cluster configuration YAML.

#...
persistence:
  #...
  visibilityStore: cass-visibility
  #...
  datastores:
    default:
      #...
    cass-visibility:
      cassandra:
        hosts: "127.0.0.1"
        keyspace: "temporal_visibility"
#...

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schemas defined (by supported versions) in https://github.com/temporalio/temporal/tree/master/schema/cassandra/visibility.

The following example shows how the auto-setup.sh script sets up your Visibility store.

#...
# set your Cassandra environment variables
: "${KEYSPACE:=temporal}"
: "${VISIBILITY_KEYSPACE:=temporal_visibility}"

: "${CASSANDRA_SEEDS:=}"
: "${CASSANDRA_PORT:=9042}"
: "${CASSANDRA_USER:=}"
: "${CASSANDRA_PASSWORD:=}"
: "${CASSANDRA_TLS_ENABLED:=}"
: "${CASSANDRA_CERT:=}"
: "${CASSANDRA_CERT_KEY:=}"
: "${CASSANDRA_CA:=}"
: "${CASSANDRA_REPLICATION_FACTOR:=1}"
#...
# set connection details
#...
# set up Cassandra schema
setup_cassandra_schema() {
    #...
    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/cassandra/visibility/versioned
    if [[ ${SKIP_DB_CREATE} != true ]]; then
        temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" create -k "${VISIBILITY_KEYSPACE}" --rf "${CASSANDRA_REPLICATION_FACTOR}"
    fi
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" setup-schema -v 0.0
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}

How to integrate Elasticsearch into a Temporal Cluster

Support, stability, and dependency info
  • Elasticsearch v8 is supported beginning with Temporal Server version 1.18.0.
  • Elasticsearch v7.10 is supported beginning with Temporal Server version 1.17.0.
  • Elasticsearch v6.8 is supported through Temporal Server version 1.17.x.
  • Elasticsearch v6.8 and v7.10 are explicitly supported with AWS Elasticsearch.

You can integrate Elasticsearch with your Temporal Cluster as your Visibility store. We recommend using Elasticsearch for large-scale operations on the Temporal Cluster.

To integrate Elasticsearch with your Temporal Cluster, edit the persistence section of your development.yaml configuration file to add Elasticsearch as the visibilityStore, and run the index schema setup commands.

Persistence configuration

Set your Elasticsearch Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store named es-visibility and define the datastore configuration in your Temporal Cluster configuration YAML.

persistence:
  # ...
  visibilityStore: es-visibility
  datastores:
    # ...
    es-visibility: # Define the Elasticsearch datastore connection information under the `es-visibility` key
      elasticsearch:
        version: "v7"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1_dev

Index schema and index

The following example shows how the auto-setup.sh script sets up an Elasticsearch Visibility store.

#...
# Elasticsearch
: "${ENABLE_ES:=false}"
: "${ES_SCHEME:=http}"
: "${ES_SEEDS:=}"
: "${ES_PORT:=9200}"
: "${ES_USER:=}"
: "${ES_PWD:=}"
: "${ES_VERSION:=v7}"
: "${ES_VIS_INDEX:=temporal_visibility_v1}"
: "${ES_SEC_VIS_INDEX:=}"
: "${ES_SCHEMA_SETUP_TIMEOUT_IN_SECONDS:=0}"
#...
# Validate your ES environment
#...
# Wait for ES to start
#...
# ES_SERVER is the URL of Elasticsearch server; for example, "http://localhost:9200".
SETTINGS_URL="${ES_SERVER}/_cluster/settings"
SETTINGS_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/cluster_settings_${ES_VERSION}.json
TEMPLATE_URL="${ES_SERVER}/_template/temporal_visibility_v1_template"
SCHEMA_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/index_template_${ES_VERSION}.json
INDEX_URL="${ES_SERVER}/${ES_VIS_INDEX}"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${SETTINGS_URL}" -H "Content-Type: application/json" --data-binary "@${SETTINGS_FILE}" --write-out "\n"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${TEMPLATE_URL}" -H 'Content-Type: application/json' --data-binary "@${SCHEMA_FILE}" --write-out "\n"
curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${INDEX_URL}" --write-out "\n"
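After these commands complete, a quick way to confirm that the index template and the Visibility index were created is to query them back, using the same placeholder credentials and server URL.

# Sanity check: fetch the index template and the Visibility index that were just created.
curl --user "${ES_USER}":"${ES_PWD}" "${ES_SERVER}/_template/temporal_visibility_v1_template" --write-out "\n"
curl --user "${ES_USER}":"${ES_PWD}" "${ES_SERVER}/${ES_VIS_INDEX}" --write-out "\n"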

Elasticsearch privileges

Ensure that the required privileges are granted for the Elasticsearch Temporal index.

How to set up Dual Visibility

Support, stability, and dependency info
  • Supported from Temporal Server v1.21 onwards.

To enable Dual Visibility, set up a secondary Visibility store with your primary Visibility store, and configure your Temporal Cluster to enable read and/or write operations on the secondary Visibility store.

With Dual Visibility, you can read from only one Visibility store at a time, but can configure your Temporal Cluster to write to primary only, secondary only, or to both primary and secondary stores.

Set up secondary Visibility store

Set the secondary store with the secondaryVisibilityStore configuration key in your Persistence configuration, and then define the secondary Visibility store configuration under datastores.

You can configure any of the supported databases as your secondary store.

Examples:

To configure MySQL as a secondary store with Cassandra as your primary store, use a configuration like the following.

persistence:
  visibilityStore: cass-visibility # This is your primary Visibility store
  secondaryVisibilityStore: mysql-visibility # This is your secondary Visibility store
  datastores:
    cass-visibility:
      cassandra:
        hosts: "127.0.0.1"
        keyspace: "temporal_primary_visibility"
    mysql-visibility:
      sql:
        pluginName: "mysql8" # Verify supported versions. Use a database version that supports advanced Visibility.
        databaseName: "temporal_secondary_visibility"
        connectAddr: "127.0.0.1:3306"
        connectProtocol: "tcp"
        user: "temporal"
        password: "temporal"

To configure Elasticsearch as both your primary and secondary store, use the configuration key elasticsearch.indices.secondary_visibility, as shown in the following example.

persistence:
  visibilityStore: es-visibility
  datastores:
    es-visibility:
      elasticsearch:
        version: "v7"
        logLevel: "error"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1
          secondary_visibility: temporal_visibility_v1_new
        closeIdleConnectionsInterval: 15s

Database schema and setup

The database schema and setup for a secondary store depend on the database you plan to use.

For the Cassandra and MySQL configuration in the previous example, an example setup script would be as follows.

#...
# set your Cassandra environment variables
: "${KEYSPACE:=temporal}"
: "${VISIBILITY_KEYSPACE:=temporal_primary_visibility}"

: "${CASSANDRA_SEEDS:=}"
: "${CASSANDRA_PORT:=9042}"
: "${CASSANDRA_USER:=}"
: "${CASSANDRA_PASSWORD:=}"
: "${CASSANDRA_TLS_ENABLED:=}"
: "${CASSANDRA_CERT:=}"
: "${CASSANDRA_CERT_KEY:=}"
: "${CASSANDRA_CA:=}"
: "${CASSANDRA_REPLICATION_FACTOR:=1}"
#...
# set connection details
#...
# set up Cassandra schema
setup_cassandra_schema() {
    #...
    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/cassandra/visibility/versioned
    if [[ ${SKIP_DB_CREATE} != true ]]; then
        temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" create -k "${VISIBILITY_KEYSPACE}" --rf "${CASSANDRA_REPLICATION_FACTOR}"
    fi
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" setup-schema -v 0.0
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}
#...
# set your MySQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_secondary_visibility}"
: "${DB_PORT:=}"
: "${MYSQL_SEEDS:=}"
: "${MYSQL_USER:=}"
: "${MYSQL_PWD:=}"
: "${MYSQL_TX_ISOLATION_COMPAT:=false}"

#...
# set connection details
#...
# set up MySQL schema
setup_mysql_schema() {
    #...
    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/mysql/${MYSQL_VERSION_DIR}/visibility/versioned
    if [[ ${SKIP_DB_CREATE} != true ]]; then
        temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" create
    fi
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" setup-schema -v 0.0
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}

For the configuration in the previous example, where Elasticsearch serves as both the primary and secondary Visibility store, an example setup script is as follows.

#...
# Elasticsearch
: "${ENABLE_ES:=false}"
: "${ES_SCHEME:=http}"
: "${ES_SEEDS:=}"
: "${ES_PORT:=9200}"
: "${ES_USER:=}"
: "${ES_PWD:=}"
: "${ES_VERSION:=v7}"
: "${ES_VIS_INDEX:=temporal_visibility_v1_dev}"
: "${ES_SEC_VIS_INDEX:=temporal_visibility_v1_new}"
: "${ES_SCHEMA_SETUP_TIMEOUT_IN_SECONDS:=0}"

#...

# Validate your ES environment
#...
# Wait for ES to start
#...
# Set up Elasticsearch index
setup_es_index() {
    ES_SERVER="${ES_SCHEME}://${ES_SEEDS%%,*}:${ES_PORT}"
    # ES_SERVER is the URL of Elasticsearch server i.e. "http://localhost:9200".
    SETTINGS_URL="${ES_SERVER}/_cluster/settings"
    SETTINGS_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/cluster_settings_${ES_VERSION}.json
    TEMPLATE_URL="${ES_SERVER}/_template/temporal_visibility_v1_template"
    SCHEMA_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/index_template_${ES_VERSION}.json
    INDEX_URL="${ES_SERVER}/${ES_VIS_INDEX}"
    curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${SETTINGS_URL}" -H "Content-Type: application/json" --data-binary "@${SETTINGS_FILE}" --write-out "\n"
    curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${TEMPLATE_URL}" -H 'Content-Type: application/json' --data-binary "@${SCHEMA_FILE}" --write-out "\n"
    curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${INDEX_URL}" --write-out "\n"

    # Checks for and sets up Elasticsearch as a secondary Visibility store
    if [[ ! -z "${ES_SEC_VIS_INDEX}" ]]; then
        SEC_INDEX_URL="${ES_SERVER}/${ES_SEC_VIS_INDEX}"
        curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${SEC_INDEX_URL}" --write-out "\n"
    fi
}

Update Cluster configuration

With the primary and secondary stores set, update the system.secondaryVisibilityWritingMode and system.enableReadFromSecondaryVisibility configuration keys in your self-hosted Cluster's dynamic configuration YAML file to enable read and/or write operations to the secondary Visibility store.

For example, to enable write operations to both primary and secondary stores, but disable reading from the secondary store, use the following.

system.secondaryVisibilityWritingMode:
  - value: "dual"
    constraints: {}
system.enableReadFromSecondaryVisibility:
  - value: false
    constraints: {}

For details on these configuration options, see the Secondary Visibility dynamic configuration reference.

How to migrate Visibility database

Support, stability, and dependency info
  • Supported beginning with Temporal Server v1.21.

To migrate your Visibility database, set up a secondary Visibility store to enable Dual Visibility, and then adjust the read and write operations for the Visibility store through your Cluster's dynamic configuration.

Dual Visibility setup is optional but useful in gradually migrating your Visibility data to another database.

Before you begin, verify supported databases and versions for a Visibility store.

The following steps describe how to migrate your Visibility database.

After you make any changes to your Cluster configuration, ensure that you restart your services.
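For example, if you run your Cluster with the docker-compose setup referenced later on this page, restarting the Temporal Server container picks up the new configuration; the service name here is an assumption, so check your compose file.

# Restart the Temporal Server service so the updated persistence and dynamic configuration take effect.
docker compose restart temporal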

Set up secondary Visibility store

  1. In your Cluster configuration, add a secondary Visibility store to your Visibility setup under the Persistence configuration.

    Example: To migrate from Cassandra to Elasticsearch, add Elasticsearch as your secondary database and set it up. For details, see secondary Visibility database schema and setup.

    persistence:
      visibilityStore: cass-visibility
      secondaryVisibilityStore: es-visibility
      datastores:
        cass-visibility:
          cassandra:
            hosts: "127.0.0.1"
            keyspace: "temporal_visibility"
        es-visibility:
          elasticsearch:
            version: "v7"
            logLevel: "error"
            url:
              scheme: "http"
              host: "127.0.0.1:9200"
            indices:
              visibility: temporal_visibility_v1_dev
            closeIdleConnectionsInterval: 15s
  2. Update the dynamic configuration keys on your self-hosted Temporal Cluster to enable write operations to the secondary store and disable read operations. Example:

    system.secondaryVisibilityWritingMode:
      - value: "dual"
        constraints: {}
    system.enableReadFromSecondaryVisibility:
      - value: false
        constraints: {}

At this point, Visibility data is read from the primary store, and all Visibility data is written to both the primary and secondary store. This setting applies only to new Visibility data generated after Dual Visibility is enabled. It does not migrate any existing data in the primary store to the secondary store.

For details on write options to the secondary store, see Secondary Visibility dynamic configuration reference.

Run in dual mode

When you enable a secondary store, only new Visibility data is written to both primary and secondary stores. The primary store still holds the Workflow Execution data from before the secondary store was set up.

Running in dual mode lets you plan for closed and open Workflow Executions data from before the secondary store was set up in your self-hosted Temporal Cluster.

Example:

  • To manage closed Workflow Executions data, run in dual mode until the Namespace Retention Period is reached. After the Retention Period, Workflow Execution data is removed from the Persistence and Visibility stores. If you want to keep the closed Workflow Executions data after the set Retention Period, you must set up Archival.
  • To manage data for all open Workflow Executions, run in dual mode until all the Workflow Executions started before enabling Dual Visibility mode are closed. After the Workflow Executions are closed, verify the Retention Period and set up Archival if you need to keep the data beyond the Retention Period.

You can run your Visibility setup in dual mode for an indefinite period, or until you are ready to deprecate the primary store and move completely to the secondary store without losing data.

Deprecate primary Visibility store

When you are ready to deprecate your primary store, follow these steps.

  1. Update the dynamic configuration YAML to enable read operations from the secondary store. Example:

    system.secondaryVisibilityWritingMode:
      - value: "dual"
        constraints: {}
    system.enableReadFromSecondaryVisibility:
      - value: true
        constraints: {}

    At this point, Visibility data is read from the secondary store only. Verify that the data in the secondary store is correct (see the verification sketch after this list).

  2. When the secondary store is vetted and ready to replace your current primary store, change your Cluster configuration to set the secondary store as your primary, and remove the dynamic configuration set in the previous steps. Example:

    persistence:
      visibilityStore: es-visibility
      datastores:
        es-visibility:
          elasticsearch:
            version: "v7"
            logLevel: "error"
            url:
              scheme: "http"
              host: "127.0.0.1:9200"
            indices:
              visibility: temporal_visibility_v1_dev
            closeIdleConnectionsInterval: 15s
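To verify the secondary store before switching over (step 1 above), one hedged spot check for this Elasticsearch example is to compare the document count in the secondary Visibility index against what you expect from the primary store; the URL and index name match the example configuration.

# Count the documents in the secondary Elasticsearch Visibility index.
curl "http://127.0.0.1:9200/temporal_visibility_v1_dev/_count" --write-out "\n"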

Managing custom Search Attributes

To manage your custom Search Attributes on Temporal Cloud, use tcld. With Temporal Cloud, you can create and rename custom Search Attributes.

To manage your custom Search Attributes on self-hosted Temporal Clusters, use tctl. With a self-hosted Temporal Cluster, you can create and remove custom Search Attributes. Note that if you use a SQL database with Temporal Server v1.20 and later, creating a custom Search Attribute creates a mapping to a database field name in the Visibility store's custom_search_attributes table. Removing a custom Search Attribute removes this mapping but does not remove the data. If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed, which can cause unexpected results when you use the List API with the new custom Search Attribute. These constraints do not apply if you use Elasticsearch.
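If you want to inspect these mappings directly, the following sketch assumes a MySQL Visibility store and direct SQL access; the host, user, and database name are placeholders, and -p prompts for the password.

# Show the stored Search Attribute mappings; the exact columns depend on your schema version.
mysql -h 127.0.0.1 -u temporal -p temporal_visibility -e 'SELECT * FROM custom_search_attributes LIMIT 10;'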

How to create custom Search Attributes

Add custom Search Attributes to your Visibility store using tctl for a self-hosted Temporal Cluster and tcld for Temporal Cloud.

Creating a custom Search Attribute in your Visibility store makes it available to use in your Workflow metadata and List Filters.

On Temporal Cloud

To create custom Search Attributes on Temporal Cloud, use tcld namespace search-attributes add. For example, to add a custom Search Attribute "CustomSA" to your Temporal Cloud Namespace "YourNamespace", run the following command:

tcld namespace search-attributes add --namespace YourNamespace --search-attribute "CustomSA"

On self-hosted Temporal Cluster

If you're self-hosting your Temporal Cluster, verify whether your Visibility database version supports advanced Visibility features.

To create custom Search Attributes in your self-hosted Temporal Cluster Visibility store, use temporal operator search-attribute create with --name and --type command options.

For example, to create a Search Attribute called CustomSA of type Keyword, run the following command:

temporal operator search-attribute create --name CustomSA --type Keyword

Note that if you use a SQL database with advanced Visibility capabilities, you are required to specify a Namespace when creating a custom Search Attribute. For example:

temporal operator search-attribute create --name CustomSA --type Keyword --namespace yournamespace

You can also create multiple custom Search Attributes when you set up your Visibility store.

For example, the auto-setup.sh script that is used to set up your local docker-compose Temporal Cluster creates custom Search Attributes in the Visibility store, as shown in the following code snippet from the script (for SQL databases).

add_custom_search_attributes() {
    until temporal operator search-attribute list --namespace "${DEFAULT_NAMESPACE}"; do
        echo "Waiting for namespace cache to refresh..."
        sleep 1
    done
    echo "Namespace cache refreshed."

    echo "Adding Custom*Field search attributes."

    temporal operator search-attribute create --namespace "${DEFAULT_NAMESPACE}" --yes \
        --name CustomKeywordField --type Keyword \
        --name CustomStringField --type Text \
        --name CustomTextField --type Text \
        --name CustomIntField --type Int \
        --name CustomDatetimeField --type Datetime \
        --name CustomDoubleField --type Double \
        --name CustomBoolField --type Bool
}

Note that this script has been updated for Temporal Server v1.20, which requires associating every custom Search Attribute with a Namespace when using a SQL database.

For Temporal Server v1.19 and earlier, or if using Elasticsearch for advanced Visibility, you can create custom Search Attributes without a Namespace association, as shown in the following example.

add_custom_search_attributes() {
    echo "Adding Custom*Field search attributes."
    tctl --auto_confirm admin cluster add-search-attributes \
        --name CustomKeywordField --type Keyword \
        --name CustomStringField --type Text \
        --name CustomTextField --type Text \
        --name CustomIntField --type Int \
        --name CustomDatetimeField --type Datetime \
        --name CustomDoubleField --type Double \
        --name CustomBoolField --type Bool
}

When your Visibility store is set up and running, these custom Search Attributes are available to use in your Workflow code.
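For example, you can attach one of these custom Search Attributes when starting a Workflow from the CLI; treat the flag usage below as a sketch and confirm it against your CLI version's help output. All values are placeholders.

# Start a Workflow with a custom Search Attribute attached.
temporal workflow start \
  --type YourWorkflowType \
  --task-queue your-task-queue \
  --workflow-id your-workflow-id \
  --search-attribute 'CustomKeywordField="your-value"'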

How to remove custom Search Attributes

To remove a Search Attribute key from your self-hosted Temporal Cluster Visibility store, use the command tctl search-attribute remove. Removing Search Attributes is not supported on Temporal Cloud.

For example, if using Elasticsearch for advanced Visibility, to remove a custom Search Attribute called CustomSA of type Keyword, use the following command:

tctl search-attribute remove --name CustomSA

With Temporal Server v1.20 and later, if using a SQL database for advanced Visibility, you must specify the Namespace in your command, as shown in the following example:

tctl --ns yournamespace search-attribute remove --name CustomSA

To check whether the Search Attribute was removed, run tctl search-attribute list and check the list. If you're on Temporal Server v1.20 and later, specify the Namespace from which you removed the Search Attribute. For example, tctl --ns yournamespace search-attribute list.

Note that if you use SQL databases with Temporal Server v1.20 and later, a new custom Search Attribute is mapped to a database field name in the Visibility store custom_search_attributes table. Removing this custom Search Attribute removes the mapping with the database field name but does not remove the data. If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed. This might cause unexpected results when you use the List API to retrieve results using the new custom Search Attribute. These constraints do not apply if you use Elasticsearch.