
Temporal Cluster dynamic configuration reference

Temporal Cluster provides dynamic configuration keys that you can update and apply to a running Cluster without restarting your services.

The dynamic configuration keys are set with default values when you create your Cluster configuration. You can override these values as you test your Cluster setup for optimal performance according to your workload requirements.

For the complete list of dynamic configuration keys and their default values, see https://github.com/temporalio/temporal/blob/master/common/dynamicconfig/constants.go. Check the server release notes for any changes to these keys and values when you upgrade.

Setting dynamic configuration is optional. Change these values only if you need to override the defaults to achieve better performance on your Temporal Cluster, and test your changes before applying them in production.
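
On a self-hosted Cluster, the Temporal Server reads the dynamic configuration file through the dynamicConfigClient block of its static configuration and polls it at the configured interval, which is how changes take effect without a restart. The following is a minimal sketch based on the sample configuration files in the Temporal repository; the file path and poll interval are placeholders, so adjust them to your deployment.

dynamicConfigClient:
  filepath: 'config/dynamicconfig/production.yaml' # Path to your dynamic configuration file.
  pollInterval: '10s' # How often the server re-reads the file for changes.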

Format

To override the default dynamic configuration values, specify your custom values and constraints for the dynamic configuration keys that you want to change in a YAML configuration file. Use the following format when creating your dynamic configuration file.

testGetBoolPropertyKey:
  - value: false
  - value: true
    constraints:
      namespace: 'your-namespace'
  - value: false
    constraints:
      namespace: 'your-other-namespace'
testGetDurationPropertyKey:
  - value: '1m'
    constraints:
      namespace: 'your-namespace'
      taskQueueName: 'longIdleTimeTaskqueue'
testGetFloat64PropertyKey:
  - value: 12.0
    constraints:
      namespace: 'your-namespace'
testGetMapPropertyKey:
  - value:
      key1: 1
      key2: 'value 2'
      key3:
        - false
        - key4: true
          key5: 2.0

Constraints

You can define constraints on some dynamic configuration keys to set values that apply at the Namespace or Task Queue level. If you do not define constraints on a key, its value applies across the entire Cluster.

  • To set a global value for a configuration key with no constraints, use the following:

    frontend.globalNamespaceRPS: # Total per-Namespace RPC rate limit applied across the Cluster.
      - value: 5000
  • For keys that can be customized at the Namespace level, you can specify multiple values for different Namespaces in addition to one default value that applies to all Namespaces. To set values at the Namespace level, use namespace (String) as shown in the following example.

    frontend.persistenceNamespaceMaxQPS: # Rate limit on the number of queries the Frontend Service sends to the Persistence store.
      - constraints: {} # Sets the default value that applies to all Namespaces.
        value: 2000 # The default value for this key is 0.
      - constraints: { namespace: 'namespace1' } # Limits the number of queries that the "namespace1" Namespace can send to the Persistence store.
        value: 4000
      - constraints: { namespace: 'namespace2' }
        value: 1000
  • For keys that can be customized at the Task Queue level, you can specify a Task Queue name and Task type in addition to the Namespace. To set values at the Task Queue level, use taskQueueName (String), optionally with taskType (supported values: Workflow and Activity).

    For example, if your Workflow Executions create a large number of Workflow and Activity Tasks per second, you can add more partitions to your Task Queues (the default is 4) to handle the higher throughput. To do this, add the following to your dynamic configuration file. Note that when changing the number of partitions, you must set the same count for both read and write operations on the Task Queue.

    matching.numTaskqueueReadPartitions: # Number of Task Queue partitions for read operations.
      - constraints: { namespace: 'namespace1', taskQueueName: 'tq' } # Applies to the "tq" Task Queue for both Workflows and Activities.
        value: 8 # The default value for this key is 4. Task Queues that need to support high traffic require a higher number of partitions. Set these values in accordance with your poller count.
      - constraints: {
          namespace: 'namespace1',
          taskQueueName: 'other-tq',
          taskType: 'Activity',
        } # Applies to the "other-tq" Task Queue for Activities specifically.
        value: 20
      - constraints: { namespace: 'namespace2' } # Applies to all Task Queues in "namespace2".
        value: 10
      - constraints: {} # Applies to all other Task Queues in "namespace1" and all other Namespaces.
        value: 16
    matching.numTaskqueueWritePartitions: # Number of Task Queue partitions for write operations.
      - constraints: { namespace: 'namespace1', taskQueueName: 'tq' } # Applies to the "tq" Task Queue for both Workflows and Activities.
        value: 8 # The default value for this key is 4. Task Queues that need to support high traffic require a higher number of partitions. Set these values in accordance with your poller count.
      - constraints: {
          namespace: 'namespace1',
          taskQueueName: 'other-tq',
          taskType: 'Activity',
        } # Applies to the "other-tq" Task Queue for Activities specifically.
        value: 20
      - constraints: { namespace: 'namespace2' } # Applies to all Task Queues in "namespace2".
        value: 10
      - constraints: {} # Applies to all other Task Queues in "namespace1" and all other Namespaces.
        value: 16

For more examples of how dynamic configuration is set, see the sample files in the Temporal repository at https://github.com/temporalio/temporal/tree/master/config/dynamicconfig.

Commonly used dynamic configuration keys

The following tables list commonly used dynamic configuration keys, such as those used for rate limiting requests to the Temporal Cluster.

Setting dynamic configuration keys is optional. If you choose to update these values for your Temporal Cluster, ensure that you are provisioning enough resources to handle the load.

All values listed here are for Temporal Server v1.21. Check the server release notes for potential breaking changes when you upgrade.

Service-level RPS limits

The Requests Per Second (RPS) dynamic configuration keys set the rate at which requests can be made to each service in your Cluster.

When scaling your services, tune these RPS limits against your workload to establish acceptable provisioning benchmarks. Requests that exceed these limits are rejected with ResourceExhausted errors.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| Frontend | | | |
| frontend.rps | Int | Rate limit (requests/second) for requests accepted by each Frontend Service host. | 2400 |
| frontend.namespaceRPS | Int | Rate limit (requests/second) for requests accepted by each Namespace on the Frontend Service. | 2400 |
| frontend.namespaceCount | Int | Limit on the number of concurrent Task Queue polls per Namespace per Frontend Service host. | 1200 |
| frontend.globalNamespaceRPS | Int | Rate limit (requests/second) for requests accepted per Namespace, applied across the Cluster. The limit is evenly distributed among available Frontend Service instances. If this is set, it overrides the per-instance limit (frontend.namespaceRPS). | 0 |
| internal-frontend.globalNamespaceRPS | Int | Rate limit (requests/second) for requests accepted on each Internal-Frontend Service host, applied across the Cluster. | 0 |
| History | | | |
| history.rps | Int | Rate limit (requests/second) for requests accepted by each History Service host. | 3000 |
| Matching | | | |
| matching.rps | Int | Rate limit (requests/second) for requests accepted by each Matching Service host. | 1200 |
| matching.numTaskqueueReadPartitions | Int | Number of read partitions for a Task Queue. Must be set together with matching.numTaskqueueWritePartitions. | 4 |
| matching.numTaskqueueWritePartitions | Int | Number of write partitions for a Task Queue. | 4 |
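
For example, to apply the per-host and Cluster-wide Frontend limits from the table above to a specific Namespace, a dynamic configuration file could contain entries such as the following sketch. The Namespace name and the numbers are illustrative only; derive your own values from load testing and the resources you have provisioned.

frontend.namespaceRPS:
  - value: 4800 # Per-Frontend-host limit for one busy Namespace (the default is 2400).
    constraints:
      namespace: 'your-busy-namespace'
frontend.globalNamespaceRPS:
  - value: 5000 # Per-Namespace limit applied across all Frontend hosts; overrides frontend.namespaceRPS when set.
    constraints:
      namespace: 'your-busy-namespace'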

QPS limits for Persistence store

The Queries Per Second (QPS) dynamic configuration keys set the maximum number of queries a service can make per second to the Persistence store.

Persistence store rate limits are evaluated synchronously. Adjust these keys according to your database capacity and workload. If the number of queries made to the Persistence store exceeds these values, you will see increased latency and timeouts on your tasks.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| Frontend | | | |
| frontend.persistenceMaxQPS | Int | Maximum number of queries per second that the Frontend Service host can send to the Persistence store. | 2000 |
| frontend.persistenceNamespaceMaxQPS | Int | Maximum number of queries per second that each Namespace on the Frontend Service host can send to the Persistence store. If this value is less than or equal to 0, the value of frontend.persistenceMaxQPS applies. | 0 |
| History | | | |
| history.persistenceMaxQPS | Int | Maximum number of queries per second that the History Service host can send to the Persistence store. | 9000 |
| history.persistenceNamespaceMaxQPS | Int | Maximum number of queries per second that each Namespace on the History Service host can send to the Persistence store. If this value is less than or equal to 0, the value of history.persistenceMaxQPS applies. | 0 |
| Matching | | | |
| matching.persistenceMaxQPS | Int | Maximum number of queries per second that the Matching Service host can send to the Persistence store. | 9000 |
| matching.persistenceNamespaceMaxQPS | Int | Maximum number of queries per second that each Namespace on the Matching Service host can send to the Persistence store. If this value is less than or equal to 0, the value of matching.persistenceMaxQPS applies. | 0 |
| Worker | | | |
| worker.persistenceMaxQPS | Int | Maximum number of queries per second that the Worker Service host can send to the Persistence store. | 100 |
| worker.persistenceNamespaceMaxQPS | Int | Maximum number of queries per second that each Namespace on the Worker Service host can send to the Persistence store. If this value is less than or equal to 0, the value of worker.persistenceMaxQPS applies. | 0 |
| Visibility | | | |
| system.visibilityPersistenceMaxReadQPS | Int | Maximum number of queries per second that the Visibility database can receive for read operations. | 9000 |
| system.visibilityPersistenceMaxWriteQPS | Int | Maximum number of queries per second that the Visibility database can receive for write operations. | 9000 |
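
As a sketch, the following entries raise the host-wide History limit and cap one Namespace that generates heavy Persistence traffic. The Namespace name and numbers are illustrative; size them against your database capacity.

history.persistenceMaxQPS:
  - value: 12000 # Host-wide ceiling for History queries to the Persistence store (the default is 9000).
history.persistenceNamespaceMaxQPS:
  - value: 3000 # Cap for one heavy Namespace; values <= 0 fall back to history.persistenceMaxQPS.
    constraints:
      namespace: 'your-heavy-namespace'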

Activity and Workflow default policy setting

You can define default values for Activity and Workflow Retry Policies at the Cluster level with the following dynamic configuration keys.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| history.defaultActivityRetryPolicy | Map (key-value pairs) | Server-side Retry Policy applied to an Activity when a Retry Policy is not explicitly set for the Activity in your code. | Default Retry Policy values |
| history.defaultWorkflowRetryPolicy | Map (key-value pairs) | Retry Policy values applied to fields that are left unset when a user provides an explicit Retry Policy for a Workflow. | Default Retry Policy values |
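
The following sketch sets a Cluster-level default Retry Policy for Activities that do not specify one in code. The field names shown (InitialIntervalInSeconds, MaximumIntervalCoefficient, BackoffCoefficient, MaximumAttempts) and the values are assumptions based on the server's default retry settings; verify them against constants.go for your server version before using them.

history.defaultActivityRetryPolicy:
  - value:
      InitialIntervalInSeconds: 1 # Wait 1 second before the first retry (assumed field name).
      MaximumIntervalCoefficient: 100.0 # Maximum retry interval = coefficient x initial interval (assumed field name).
      BackoffCoefficient: 2.0 # Double the wait after each attempt (assumed field name).
      MaximumAttempts: 0 # 0 means unlimited attempts; other timeouts still apply (assumed field name).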

Size limit settings

The Persistence store in the Cluster has default size limits set for optimal performance. The dynamic configuration keys for some of these limits are listed below.

The default values on these keys are based on extensive testing. You can change these values, but ensure that you are provisioning enough database resources to handle the changed values.

For details on platform limits, see the Temporal Platform limits sheet.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| limit.maxIDLength | Int | Length limit for various IDs, including: Namespace, TaskQueue, WorkflowID, ActivityID, TimerID, WorkflowType, ActivityType, SignalName, MarkerName, ErrorReason/FailureReason/CancelCause, Identity, and RequestID. | 1000 |
| limit.blobSize.warn | Int | Limit, in bytes, on the blob size in an Event at which a warning is written to the server logs. | 512 KB (512 × 1024 bytes) |
| limit.blobSize.error | Int | Limit, in bytes, on the blob size in an Event at which an error occurs in the transaction. | 2 MB (2 × 1024 × 1024 bytes) |
| limit.historySize.warn | Int | Limit, in bytes, on the Workflow Execution Event History size at which a warning is logged. | 10 MB (10 × 1024 × 1024 bytes) |
| limit.historySize.error | Int | Limit, in bytes, on the Workflow Execution Event History size at which an error occurs in the Workflow Execution for exceeding the allowed size. | 50 MB (50 × 1024 × 1024 bytes) |
| limit.historyCount.warn | Int | Limit on the number of Events in the Workflow Execution Event History at which a warning is logged. | 10,240 Events |
| limit.historyCount.error | Int | Limit on the number of Events in the Workflow Execution Event History at which an error occurs in the Workflow Execution for exceeding the allowed number of Events. | 51,200 Events |
| limit.numPendingActivities.error | Int | Maximum number of pending Activities that a Workflow Execution can have before the ScheduleActivityTask command fails with an error. | 2000 |
| limit.numPendingSignals.error | Int | Maximum number of pending Signals that a Workflow Execution can have before SignalExternalWorkflowExecution commands from this Workflow fail with an error. | 2000 |
| history.maximumSignalsPerExecution | Int | Maximum number of Signals that a Workflow Execution can receive before it throws an Invalid Argument error. | 10,000 |
| limit.numPendingCancelRequests.error | Int | Maximum number of pending requests to cancel other Workflows that a Workflow Execution can have before RequestCancelExternalWorkflowExecution commands fail with an error. | 2000 |
| limit.numPendingChildExecutions.error | Int | Maximum number of pending Child Workflows that a Workflow Execution can have before StartChildWorkflowExecution commands fail with an error. | 2000 |
| frontend.visibilityMaxPageSize | Int | Maximum number of Workflow Executions returned in one page by the ListWorkflowExecutions API. | 1000 |
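
As an illustrative sketch, the following entries lower the Cluster-wide blob-size warning threshold and raise the Event-count warning threshold for one Namespace that is known to produce long Event Histories. The Namespace name and thresholds are placeholders; keep any changes within your provisioned database capacity.

limit.blobSize.warn:
  - value: 262144 # Warn at 256 KB instead of the 512 KB default, Cluster-wide.
limit.historyCount.warn:
  - value: 20480 # Raise the Event-count warning threshold for one Namespace only.
    constraints:
      namespace: 'your-long-history-namespace'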

Secondary Visibility settings

Secondary Visibility configuration keys enable Dual Visibility on your Temporal Cluster. This can be useful when migrating a Visibility database or creating a backup Visibility store.

| Dynamic configuration key | Type | Description | Default value |
| --- | --- | --- | --- |
| system.enableReadFromSecondaryVisibility | Boolean | Enables reading from the secondary Visibility store; can be set per Namespace. Allowed values are true and false. | false |
| system.secondaryVisibilityWritingMode | String | Enables writing Visibility data to the secondary Visibility store; can be set per Namespace. Setting this value to on disables write operations to the primary Visibility store. Allowed values: off (write to the primary Visibility store only), on (write to the secondary Visibility store only), and dual (write to both the primary and secondary Visibility stores). | off |
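
For example, during a Visibility store migration you might write to both stores while still reading from the primary store, and validate reads from the secondary store on a single Namespace first. This sketch assumes a secondary Visibility store is already configured in your static configuration; the Namespace name is a placeholder.

system.secondaryVisibilityWritingMode:
  - value: 'dual' # Write Visibility data to both the primary and secondary stores.
system.enableReadFromSecondaryVisibility:
  - value: false # Keep serving reads from the primary store by default.
  - value: true # Validate reads from the secondary store on one Namespace first.
    constraints:
      namespace: 'your-test-namespace'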