
Temporal Cluster configuration reference

Much of the behavior of a Temporal Cluster is configured in the development.yaml file, which may contain the following top-level sections:

Changing any properties in the development.yaml file requires a process restart for the changes to take effect. The configuration parsing code is available in the Temporal server repository.


The global section contains process-wide configuration. See below for a minimal configuration (optional parameters are commented out):

```yaml
global:
  membership:
    broadcastAddress: ''
  metrics:
    prometheus:
      framework: 'tally'
      listenAddress: ''
```


The membership section controls the following membership layer parameters.


maxJoinDuration: The amount of time the service will attempt to join the gossip layer before failing.

The default is 10s.


broadcastAddress: Used by the gossip protocol to communicate with other hosts in the same Cluster for membership info. Use an IP address that is reachable by the other hosts in the Cluster; if there is only one host in the Cluster, the loopback address works. Check net.ParseIP for supported syntax; only IPv4 is supported.
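As a sketch, a membership configuration setting both parameters might look like the following (the broadcast address and join duration are placeholder values):

```yaml
global:
  membership:
    # Fail startup if the gossip layer cannot be joined within this window.
    maxJoinDuration: 30s
    # Placeholder; use an IPv4 address reachable by the other hosts in the Cluster.
    broadcastAddress: '10.0.0.5'
```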


Configures the Cluster's metrics subsystem. Specific providers are configured using provider names as the keys.


prefix: The prefix to be applied to all outgoing metrics.


tags: The set of key-value pairs to be reported as part of every metric.


excludeTags: A map from tag name to a list of tag values. This is useful to exclude tags that might have unbounded cardinality; the value list acts as an allowlist of values for the excluded tag that should still be reported. For example, you might exclude task_queue because it has unbounded cardinality, while still reporting an allowlisted set of task_queue values.
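For example, a hypothetical excludeTags entry (queue names are placeholders, and nesting under global follows the minimal example above) that drops the unbounded task_queue tag while still reporting two specific queues:

```yaml
global:
  metrics:
    excludeTags:
      # Exclude task_queue values, except the two allowlisted below.
      task_queue:
        - 'orders-queue'
        - 'billing-queue'
```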


The statsd section supports the following settings:

  • hostPort: The host:port of the statsd server.
  • prefix: Specific prefix in reporting to statsd.
  • flushInterval: Maximum interval for sending packets. (Default 300ms).
  • flushBytes: Specifies the maximum UDP packet size you wish to send. (Default 1432 bytes).
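Putting these together, a hypothetical statsd provider configuration might look like the following; the host, port, and prefix are placeholders:

```yaml
global:
  metrics:
    statsd:
      # Placeholder statsd endpoint.
      hostPort: '127.0.0.1:8125'
      prefix: 'temporal'
      # Send packets at least every 300ms, in chunks of at most 1432 bytes.
      flushInterval: '300ms'
      flushBytes: 1432
```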


The prometheus section supports the following settings:

  • framework: The framework to use; currently supports opentelemetry and tally, and the default is tally. The default is planned to switch to opentelemetry once its API becomes stable.
  • listenAddress: Address for Prometheus to scrape metrics from. The Temporal Server uses the Prometheus client API, and the listenAddress configuration is used to listen for metrics.
  • handlerPath: Metrics handler path for scraper; default is /metrics.
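A hypothetical prometheus provider configuration combining these settings (the listen address is a placeholder):

```yaml
global:
  metrics:
    prometheus:
      framework: 'tally'
      # Placeholder address for Prometheus to scrape.
      listenAddress: '0.0.0.0:9090'
      handlerPath: '/metrics'
```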


The m3 section supports the following settings:

  • hostPort: The host:port of the M3 server.
  • service: The service tag that this client emits.
  • queue: M3 reporter queue size; default is 4k.
  • packetSize: M3 reporter max packet size; default is 32k.
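A hypothetical m3 provider configuration using these settings (the host name is a placeholder):

```yaml
global:
  metrics:
    m3:
      # Placeholder M3 endpoint.
      hostPort: 'm3.internal:9052'
      service: 'temporal'
      queue: 4096
      packetSize: 32768
```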


The pprof section supports the following setting:

  • port: If specified, pprof is initialized on the listed port when the process starts.
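For example, a minimal pprof configuration (the port is a placeholder):

```yaml
global:
  pprof:
    # Placeholder port; pprof listens here upon process start.
    port: 7936
```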


The tls section controls the SSL/TLS settings for network communication and contains two subsections, internode and frontend. The internode section governs internal service communication among roles, while the frontend section governs SDK client communication with the Frontend Service role.

Each of these subsections contains a server section and a client section. The server section contains the following parameters:

  • certFile: The path to the file containing the PEM-encoded public key of the certificate to use.
  • keyFile: The path to the file containing the PEM-encoded private key of the certificate to use.
  • requireClientAuth: boolean - Requires clients to authenticate with a certificate when connecting, otherwise known as mutual TLS.
  • clientCaFiles: A list of paths to files containing the PEM-encoded public key of the Certificate Authorities you wish to trust for client authentication. This value is ignored if requireClientAuth is not enabled.

See the server samples repo for sample TLS configurations.

Below is an example enabling Server TLS (https) between SDKs and the Frontend APIs:

```yaml
tls:
  frontend:
    server:
      certFile: /path/to/cert/file
      keyFile: /path/to/key/file
    client:
      serverName: dnsSanInFrontendCertificate
```

Note, the client section generally needs to be provided to specify an expected DNS SubjectName contained in the presented server certificate via the serverName field; this is needed as Temporal uses IP to IP communication. You can avoid specifying this if your server certificates contain the appropriate IP Subject Alternative Names.

Additionally, the rootCaFiles field needs to be provided when the client's host does not trust the Root CA used by the server. The example below extends the above example to manually specify the Root CA used by the Frontend Services:

```yaml
tls:
  frontend:
    server:
      certFile: /path/to/cert/file
      keyFile: /path/to/key/file
    client:
      serverName: dnsSanInFrontendCertificate
      rootCaFiles:
        - /path/to/frontend/server/CA/files
```

Below is an additional example of a fully secured cluster using mutual TLS for both frontend and internode communication with manually specified CAs:

```yaml
tls:
  internode:
    server:
      certFile: /path/to/internode/cert/file
      keyFile: /path/to/internode/key/file
      requireClientAuth: true
      clientCaFiles:
        - /path/to/internode/serverCa
    client:
      serverName: dnsSanInInternodeCertificate
      rootCaFiles:
        - /path/to/internode/serverCa
  frontend:
    server:
      certFile: /path/to/frontend/cert/file
      keyFile: /path/to/frontend/key/file
      requireClientAuth: true
      clientCaFiles:
        - /path/to/internode/serverCa
        - /path/to/sdkClientPool1/ca
        - /path/to/sdkClientPool2/ca
    client:
      serverName: dnsSanInFrontendCertificate
      rootCaFiles:
        - /path/to/frontend/serverCa
```

Note: In the case that client authentication is enabled, the internode.server certificate is used as the client certificate among services. This adds the following requirements:

  • The internode.server certificate must be specified on all roles, even for a frontend-only configuration.

  • Internode server certificates must be minted with either no Extended Key Usages or both ServerAuth and ClientAuth EKUs.

  • If your Certificate Authorities are untrusted, such as in the previous example, the internode server CA will need to be specified in the following places:

    • internode.server.clientCaFiles
    • internode.client.rootCaFiles
    • frontend.server.clientCaFiles


The persistence section holds configuration for the data store/persistence layer. The following example shows a minimal specification for a password-secured Cluster using Cassandra.

```yaml
persistence:
  defaultStore: default
  visibilityStore: cass-visibility # The primary Visibility store.
  secondaryVisibilityStore: es-visibility # A secondary Visibility store added to enable Dual Visibility.
  numHistoryShards: 512
  datastores:
    default:
      cassandra:
        hosts: ''
        keyspace: 'temporal'
        user: 'username'
        password: 'password'
    cass-visibility:
      cassandra:
        hosts: ''
        keyspace: 'temporal_visibility'
    es-visibility:
      elasticsearch:
        version: 'v7'
        logLevel: 'error'
        url:
          scheme: 'http'
          host: ''
        indices:
          visibility: temporal_visibility_v1_dev
        closeIdleConnectionsInterval: 15s
```

The following top level configuration items are required:


numHistoryShards - Required - The number of history shards to create when initializing the Cluster.

Warning: This value is immutable and is ignored after the first run. Set this value high enough to scale with the worst-case peak load for this Cluster.


defaultStore - Required - The name of the data store definition that should be used by the Temporal server.


visibilityStore - Required - The name of the primary data store definition that should be used to set up Visibility on the Temporal Cluster.


secondaryVisibilityStore - Optional - The name of the secondary data store definition that should be used to set up Dual Visibility on the Temporal Cluster.


datastores - Required - Contains the named data store definitions to be referenced.

Each definition is declared under a heading with its name (such as default: in the example above), which contains the data store definition.

Data store definitions must be either cassandra or sql.


A cassandra data store definition can contain the following values:

  • hosts: Required - A comma-separated list of Cassandra endpoints.
  • port: Default: 9042 - Cassandra port used for connection by gocql client.
  • user: Cassandra username used for authentication by gocql client.
  • password: Cassandra password used for authentication by gocql client.
  • keyspace: Required - the Cassandra keyspace.
  • datacenter: The data center filter arg for Cassandra.
  • maxConns: The max number of connections to this data store for a single TLS configuration.
  • tls: See TLS below.


A sql data store definition can contain the following values:

  • user: Username used for authentication.
  • password: Password used for authentication.
  • pluginName: Required - SQL database type.
    • Valid values: mysql or postgres.
  • databaseName: Required - The name of the SQL database to connect to.
  • connectAddr: Required - The remote address of the database.
  • connectProtocol: Required - The protocol that goes with connectAddr.
    • Valid values: tcp or unix
  • connectAttributes: A map of key-value attributes sent as part of the connect data_source_name URL.
  • maxConns: The max number of connections to this data store.
  • maxIdleConns: The max number of idle connections to this data store.
  • maxConnLifetime: The maximum time a connection can be alive.
  • tls: See below.
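A hypothetical sql data store definition using these values (the address, credentials, and connection limits are placeholders):

```yaml
persistence:
  datastores:
    default:
      sql:
        pluginName: 'postgres'
        databaseName: 'temporal'
        # Placeholder address and credentials.
        connectAddr: 'db.internal:5432'
        connectProtocol: 'tcp'
        user: 'temporal'
        password: 'placeholder-password'
        maxConns: 20
        maxIdleConns: 20
        maxConnLifetime: '1h'
```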


The tls and mtls sections can contain the following values:

  • enabled - boolean.
  • serverName - name of the server hosting the data store.
  • certFile - path to the cert file.
  • keyFile - path to the key file.
  • caFile - path to the ca file.
  • enableHostVerification - boolean - true to verify the hostname and server certificate (like a wildcard for a Cassandra cluster). This option is basically the inverse of InsecureSkipVerify in Go's crypto/tls package.

Note: certFile and keyFile are optional, depending on the server configuration; to avoid presenting a client certificate, omit both fields.
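As a sketch, a Cassandra data store with server-authenticated TLS might look like the following (host name and paths are placeholders; no client certificate is presented):

```yaml
cassandra:
  hosts: 'cassandra.internal'
  keyspace: 'temporal'
  tls:
    enabled: true
    # Placeholder CA path; trusts the server's issuing CA.
    caFile: '/path/to/ca.pem'
    serverName: 'cassandra.internal'
    enableHostVerification: true
    # certFile/keyFile omitted: no client certificate is used.
```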


The log section is optional and contains the following possible values:

  • stdout - boolean - true if the output needs to go to standard out.
  • level - sets the logging level.
    • Valid values: debug, info, warn, error, or fatal; defaults to info.
  • outputFile - path to output log file.
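For example, a hypothetical log section using all three values (the output path is a placeholder):

```yaml
log:
  stdout: true
  level: 'info'
  # Placeholder path for the log file.
  outputFile: '/var/log/temporal/server.log'
```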


clusterMetadata contains the local cluster information. The information is used in Multi-Cluster Replication.

An example clusterMetadata section:

```yaml
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: 'active'
  currentClusterName: 'active'
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 0
      rpcAddress: ''
  #replicationConsumer:
  #  type: kafka
```
  • currentClusterName - Required - The name of the current cluster. Warning: This value is immutable and is ignored after the first run.
  • enableGlobalNamespace - Default: false.
  • replicationConsumer - Determines which method to use to consume replication tasks. The type may be either kafka or rpc.
  • failoverVersionIncrement - The increment applied to each cluster version when a failover happens.
  • masterClusterName - The master cluster name; only the master cluster can register or update namespaces. All clusters can perform namespace failover.
  • clusterInformation - Maps cluster names to ClusterInformation definitions. The local cluster name should be consistent with currentClusterName. Each ClusterInformation section consists of:
    • enabled - boolean - Whether a remote cluster is enabled for replication.
    • initialFailoverVersion
    • rpcAddress - Indicates the remote service address (host:port). The host can be a DNS name. Use the dns:/// prefix to enable round-robin among the IP addresses behind a DNS name.
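To illustrate, a hypothetical two-cluster replication setup might look like the following; the cluster names, failover versions, and addresses are placeholders:

```yaml
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 10
  masterClusterName: 'active'
  currentClusterName: 'active'
  clusterInformation:
    # The local cluster; the name matches currentClusterName.
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: 'dns:///active.example.com:7233'
    # A remote cluster enabled for replication.
    standby:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: 'dns:///standby.example.com:7233'
```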


The services section contains configuration keyed by service role type. There are four supported service roles:

  • frontend
  • matching
  • worker
  • history

Below is a minimal example of a frontend service definition under services:

```yaml
services:
  frontend:
    rpc:
      grpcPort: 8233
      membershipPort: 8933
      bindOnIP: ''
```

There are two sections defined under each service heading:



rpc contains settings related to the way a service interacts with other services. The following values are supported:

  • grpcPort: Port on which gRPC will listen.
  • membershipPort: Port used to communicate with other hosts in the same Cluster for membership info. Each service should use a different port. If there are multiple Temporal Clusters in your environment (Kubernetes, for example) and they have network access to each other, each Cluster should use a different membership port.
  • bindOnLocalHost: Determines whether the service uses the loopback address as the listener address.
  • bindOnIP: Used to bind the service to a specific IP address. Check net.ParseIP for supported syntax; only IPv4 is supported. Mutually exclusive with the bindOnLocalHost option.

Note: Port values are currently expected to be consistent among role types across all hosts.
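As a sketch, all four service roles with distinct gRPC and membership ports might be configured like this (the port numbers are placeholders):

```yaml
services:
  frontend:
    rpc:
      grpcPort: 7233
      membershipPort: 6933
  history:
    rpc:
      grpcPort: 7234
      membershipPort: 6934
  matching:
    rpc:
      grpcPort: 7235
      membershipPort: 6935
  worker:
    rpc:
      grpcPort: 7239
      membershipPort: 6939
```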


The publicClient is a required section describing the configuration needed for the Worker Service to connect to the Temporal server for background server maintenance.

  • hostPort: IPv4 host:port or DNS name used to reach the Temporal Frontend.


```yaml
publicClient:
  hostPort: 'localhost:8933'
```

Use the dns:/// prefix to enable round-robin among the IP addresses behind a DNS name.
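For example, a hypothetical DNS-based address (the host name is a placeholder):

```yaml
publicClient:
  # Round-robin across the IPs resolved for this placeholder DNS name.
  hostPort: 'dns:///frontend.temporal.internal:7233'
```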



Archival is an optional configuration needed to set up the Archival store. It can be enabled on history and visibility data.

The following list describes supported values for each configuration on the history and visibility data.

  • state: State for Archival setting. Supported values are enabled, disabled. This value must be enabled to use Archival with any Namespace in your Cluster.
    • enabled: Enables Archival in your Cluster setup. When set to enabled, URI and namespaceDefaults values must be provided.
    • disabled: Disables Archival in your Cluster setup. When set to disabled, the enableRead value must be set to false, and under namespaceDefaults, state must be set to disabled, with no values set for provider and URI fields.
  • enableRead: Supported values are true or false. Set to true to allow read operations from the archived Event History data.
  • provider: Location where data should be archived. Subprovider configs are filestore, gstorage, s3, or your_custom_provider. The default configuration specifies filestore.


To enable Archival in your Cluster configuration:

```yaml
# Cluster-level Archival config enabled.
archival:
  # Event History configuration.
  history:
    # Archival is enabled for the History Service data.
    state: 'enabled'
    enableRead: true
    provider:
      # Namespaces can use either the local filestore provider or the Google Cloud provider.
      filestore:
        fileMode: '0666'
        dirMode: '0766'
      gstorage:
        credentialsPath: '/tmp/gcloud/keyfile.json'
  # Configuration for archiving Visibility data.
  visibility:
    # Archival is enabled for Visibility data.
    state: 'enabled'
    enableRead: true
    provider:
      filestore:
        fileMode: '0666'
        dirMode: '0766'
```

To disable Archival in your Cluster configuration:

```yaml
# Cluster-level Archival config disabled.
archival:
  history:
    state: 'disabled'
    enableRead: false
  visibility:
    state: 'disabled'
    enableRead: false

namespaceDefaults:
  archival:
    history:
      state: 'disabled'
    visibility:
      state: 'disabled'
```

For more details on Archival setup, see Set up Archival.



Sets default Archival configuration for each Namespace using namespaceDefaults for history and visibility data.

  • state: Default state of the Archival for the Namespace. Supported values are enabled or disabled.
  • URI: Default URI for the Namespace.

For more details on setting Namespace defaults on Archival, see Namespace creation in Archival setup.


```yaml
# Default values for a Namespace if none are provided at creation.
namespaceDefaults:
  # Archival defaults.
  archival:
    # Event History defaults.
    history:
      state: 'enabled'
      # New Namespaces will default to the local provider.
      URI: 'file:///tmp/temporal_archival/development'
    visibility:
      state: 'disabled'
      URI: 'file:///tmp/temporal_vis_archival/development'
```



Contains the Frontend datacenter API redirection policy that you can use for cross-DC replication.

Supported values:

  • policy: Supported values are noop, selected-apis-forwarding, and all-apis-forwarding.
    • noop: Not setting a value or setting noop means no redirection. This is the default value.
    • selected-apis-forwarding: Sets up forwarding for the following APIs to the active Cluster based on the Namespace.
      • StartWorkflowExecution
      • SignalWithStartWorkflowExecution
      • SignalWorkflowExecution
      • RequestCancelWorkflowExecution
      • TerminateWorkflowExecution
      • QueryWorkflow
    • all-apis-forwarding: Sets up forwarding for all APIs on the Namespace in the active Cluster.


```yaml
dcRedirectionPolicy:
  policy: 'selected-apis-forwarding'
```



Configuration for setting up file-based dynamic configuration client for the Cluster.

This setting is required if specifying dynamic configuration. Supported configuration values are as follows:

  • filepath: Specifies the path where the dynamic configuration YAML file is stored. The path should be relative to the root directory.
  • pollInterval: Interval between the file-based client polls to check for dynamic configuration updates. The minimum period you can set is 5 seconds.


```yaml
dynamicConfigClient:
  filepath: 'config/dynamicconfig/development-cass.yaml'
  pollInterval: '10s'
```