
Clusters

Temporal Clusters explained.

A Temporal Cluster is the group of services, known as the Temporal Server, combined with persistence stores, that together act as a component of the Temporal Platform.

A Temporal Cluster (Server + persistence)

Persistence

A Temporal Cluster's only required dependency for basic operation is a database. Multiple types of databases are supported.


The database stores the following types of data:

  • Tasks: Tasks to be dispatched.
  • State of Workflow Executions:
    • Execution table: A capture of the mutable state of Workflow Executions.
    • History table: An append only log of Workflow Execution History Events.
  • Namespace metadata: Metadata of each Namespace in the Cluster.
  • Visibility data: Enables operations like "show all running Workflow Executions". For production environments, we recommend using Elasticsearch.

An Elasticsearch database can be added to enable Advanced Visibility.

Versions

Temporal tests compatibility by spanning the minimum and maximum stable non-EOL major versions of each supported database. As of this writing, the following versions are used in our test pipelines and actively tested before we release any version of Temporal:

  • Cassandra v3.11 and v4.0
  • PostgreSQL v10.18 and v13.4
  • MySQL v5.7 and v8.0 (specifically 8.0.19+ due to a bug)

We update these support ranges once a year. The release notes of each Temporal Server declare when we plan to drop support for database versions reaching End of Life.

  • Because Temporal Server primarily relies on core database functionality, we do not expect compatibility to break often. Temporal has no opinions on database upgrade paths; as long as you can upgrade your database according to each project's specifications, Temporal should work with any version within supported ranges.
  • We do not run tests with vendors like Vitess and CockroachDB, so you rely on their compatibility claims if you use them. Feel free to discuss them with fellow users in our forum.
  • Temporal is working on official SQLite v3.x persistence, but this is meant only for development and testing, not production usage. Cassandra, MySQL, and PostgreSQL schemas are supported and thus can be used as the Server's database.

Monitoring & observation

Temporal emits metrics by default in a format that Prometheus supports. Monitoring and observing those metrics is optional. Any tool that can scrape metrics in that format can be used, but we verify compatibility only with the following Prometheus and Grafana versions:

  • Prometheus >= v2.0
  • Grafana >= v2.5

Visibility

Temporal has built-in Visibility features. To enhance this feature, Temporal supports an integration with Elasticsearch.

  • Elasticsearch v7.10 is supported from Temporal version 1.7.0 onwards
  • Elasticsearch v6.8 is supported in all Temporal versions
  • Both versions are explicitly supported with AWS Elasticsearch

Temporal Server

The Temporal Server consists of four independently scalable services:

  • Frontend gateway: for rate limiting, routing, authorizing
  • History subsystem: maintains data (mutable state, queues, and timers)
  • Matching subsystem: hosts Task Queues for dispatching
  • Worker service: for internal background workflows

For example, a real-life production deployment can have 5 Frontend, 15 History, 17 Matching, and 3 Worker services per cluster.

The Temporal Server services can run independently or be grouped together into shared processes on one or more physical or virtual machines. For live (production) environments, we recommend that each service runs independently, because each one has different scaling requirements and troubleshooting becomes easier. The History, Matching, and Worker services can scale horizontally within a Cluster. The Frontend Service scales differently than the others because it has no sharding or partitioning; it is just stateless.

Each service is aware of the others, including scaled instances, through a membership protocol via Ringpop.

Versions and support

All Temporal Server releases abide by the Semantic Versioning Specification.

Fairly precise upgrade paths and support have been established starting from Temporal v1.7.0.

We provide maintenance support for previously published minor and major versions by continuing to release critical bug fixes related to security, the prevention of data loss, and reliability, whenever they are found.

We aim to publish incremental upgrade guides for each minor and major version, which include specifics about dependency upgrades that we have tested for (such as Cassandra 3.0 -> 3.11).

We offer maintenance support of the last three minor versions after a release and do not plan to "backport" patches beyond that.

We offer maintenance support of major versions for at least 12 months after a GA release, and we provide at least 6 months' notice before EOL/deprecating support.

Dependencies

Temporal offers official support for, and is tested against, dependencies with the exact versions described in the go.mod file of the corresponding release tag. (For example, v1.5.1 dependencies are documented in the go.mod for v1.5.1.)

Frontend Service

The Frontend Service is a stateless gateway service that exposes a strongly typed Proto API. The Frontend Service is responsible for rate limiting, authorizing, validating, and routing all inbound calls.


Types of inbound calls include the following:

  • Domain CRUD
  • External events
  • Worker polls
  • Visibility requests
  • Admin operations via tctl (the Temporal CLI)
  • Calls from a remote Cluster related to Multi-Cluster Replication

Every inbound request related to a Workflow Execution must have a Workflow Id, which is hashed for routing purposes. The Frontend Service has access to the hash rings that maintain service membership information, including how many nodes (instances of each service) are in the Cluster.
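
To illustrate the idea, here is a minimal Go sketch of hashing a Workflow Id to one of a fixed number of shards; the hash function, inputs, and shard count are stand-ins for illustration, not the Server's actual routing code.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardID illustrates deterministic routing: a Workflow Id (scoped by its
// Namespace) is hashed, and the hash selects one of the Cluster's History
// shards. The real Server uses its own hash function and inputs; this is
// only a sketch.
func shardID(namespaceID, workflowID string, numHistoryShards uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(namespaceID))
	h.Write([]byte(workflowID))
	// Shard IDs are 1-based in this sketch.
	return (h.Sum32() % numHistoryShards) + 1
}

func main() {
	// Hypothetical identifiers; the same inputs always route to the same shard.
	fmt.Println(shardID("default", "order-12345", 512))
}
```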

Inbound call rate limiting is applied per host and per namespace.

The Frontend service talks to the Matching service, History service, Worker service, the database, and Elasticsearch (if in use).

  • It uses the grpcPort 7233 to host the service handler.
  • It uses port 6933 for membership-related communication.

History service

The History Service tracks the state of Workflow Executions.


The History Service scales horizontally via individual shards, configured during the Cluster's creation. The number of shards remains static for the life of the Cluster (so you should plan to scale and over-provision).

Each shard maintains data (routing identifiers, mutable state) and queues. A History shard maintains four types of queues:

  • Transfer queue: transfers internal tasks to the Matching Service. Whenever a new Workflow Task needs to be scheduled, the History Service transactionally dispatches it to the Matching Service.
  • Timer queues: durably persists Timers.
  • Replicator queue: asynchronously replicates Workflow Executions from active Clusters to other passive Clusters (experimental Multi-Cluster feature).
  • Visibility queue: pushes data to the visibility index (Elasticsearch).

The History service talks to the Matching Service and the Database.

  • It uses grpcPort 7234 to host the service handler.
  • It uses port 6934 for membership-related communication.

Matching service

The Matching Service is responsible for hosting Task Queues for Task dispatching.


It is responsible for matching Workers to Tasks and routing new Tasks to the appropriate queue. This service can scale internally by having multiple instances.

It talks to the Frontend service, History service, and the database.

  • It uses grpcPort 7235 to host the service handler.
  • It uses port 6935 for membership-related communication.

Worker service

The Worker Service runs background processing for the replication queue, system Workflows, and (in versions older than 1.5.0) the Kafka visibility processor.


It talks to the Frontend service.

  • It uses port 6939 for membership-related communication.

Retention Period

A Retention Period is the amount of time a Workflow Execution Event History remains in the Cluster's persistence store.

A Retention Period applies to a single Namespace and is set when the Namespace is registered.

If the Retention Period isn't set, it defaults to 2 days. The minimum Retention Period is 1 day. The maximum Retention Period is 30 days. Setting the Retention Period to 0 results in the error "A valid retention period is not set on request".
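
The rules above can be summarized in a small Go sketch; the constants mirror the values stated here, but the function itself is illustrative rather than Server code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const (
	defaultRetention = 2 * 24 * time.Hour  // used when no Retention Period is set
	minRetention     = 1 * 24 * time.Hour  // minimum allowed
	maxRetention     = 30 * 24 * time.Hour // maximum allowed
)

// resolveRetention applies the rules described above to a requested
// Retention Period. A zero value is rejected rather than defaulted,
// matching the "valid retention period is not set" error behavior.
func resolveRetention(requested time.Duration, wasSet bool) (time.Duration, error) {
	if !wasSet {
		return defaultRetention, nil
	}
	if requested == 0 {
		return 0, errors.New("a valid retention period is not set on request")
	}
	if requested < minRetention || requested > maxRetention {
		return 0, fmt.Errorf("retention period %v outside allowed range [%v, %v]",
			requested, minRetention, maxRetention)
	}
	return requested, nil
}

func main() {
	d, err := resolveRetention(0, false)
	fmt.Println(d, err) // 48h0m0s <nil>
}
```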

Archival

Archival is a feature that automatically backs up Event Histories and Visibility records from Temporal Cluster persistence to a custom blob store.

Workflow Execution Event Histories are backed up after the Retention Period is reached. Visibility records are backed up immediately after a Workflow Execution reaches a Closed status.

Archival enables Workflow Execution data to persist as long as needed, while not overwhelming the Cluster's persistence store.

This feature is helpful for compliance and debugging.

Temporal's Archival feature is considered experimental and not subject to normal versioning and support policy.

Archival is not supported when running Temporal via docker-compose. It is disabled by default when installing the system manually and when deploying via Helm charts, but it can be enabled in the config.

Multi-Cluster Replication

Multi-Cluster Replication is a feature which asynchronously replicates Workflow Executions from active Clusters to other passive Clusters, for backup and state reconstruction. When necessary, for higher availability, Cluster operators can failover to any of the backup Clusters.

Temporal's Multi-Cluster Replication feature is considered experimental and not subject to normal versioning and support policy.

Namespace Versions

A version is a concept in Multi-Cluster Replication that describes the chronological order of events per Namespace.

With Multi-Cluster Replication, all Namespace change events and Workflow Execution History events are replicated asynchronously for high throughput. This means that data across clusters is not strongly consistent. To guarantee that Namespace data and Workflow Execution data will achieve eventual consistency (especially when there is a data conflict during a failover), a version is introduced and attached to Namespaces. All Workflow Execution History entries generated in a Namespace will also come with the version attached to that Namespace.

All participating Clusters are pre-configured with a unique initial version and a shared version increment:

  • initial version < shared version increment

When performing failover for a Namespace from one Cluster to another Cluster, the version attached to the Namespace will be changed according to the following rule (see the sketch below):

  • Among all versions v that satisfy v % (shared version increment) == (new active Cluster's initial version), choose the smallest v such that v >= the Namespace's current version.
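
To make the rule concrete, here is a minimal Go sketch of the version computation; the function name and types are illustrative, not Server code.

```go
package main

import "fmt"

// nextNamespaceVersion applies the failover rule above: among all versions v
// with v % sharedVersionIncrement == targetInitialVersion, pick the smallest v
// that is >= the Namespace's current version.
func nextNamespaceVersion(currentVersion, targetInitialVersion, sharedVersionIncrement int64) int64 {
	v := targetInitialVersion
	for v < currentVersion {
		v += sharedVersionIncrement
	}
	return v
}

func main() {
	// Values from the example below: Cluster A initial version 1,
	// Cluster B initial version 2, shared version increment 10.
	fmt.Println(nextNamespaceVersion(1, 2, 10)) // Namespace α fails over to Cluster B -> 2
	fmt.Println(nextNamespaceVersion(2, 1, 10)) // Namespace β fails over to Cluster A -> 11
}
```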

When there is a data conflict, a comparison will be made and Workflow Execution History entries with the highest version will be considered the source of truth.

When a Cluster attempts to mutate a Workflow Execution History, the version is checked. A Cluster can mutate a Workflow Execution History only if both of the following are true (a sketch follows the list):

  • The version attached to the Namespace belongs to this Cluster, that is, (version in namespace) % (shared version increment) == (this cluster's initial version)
  • The version of this Workflow Execution History's last entry (event) is less than or equal to the version in the Namespace, that is, (last event's version) <= (version in namespace)
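
The two conditions can be expressed as a small Go sketch; the names are illustrative, and this is not the Server's implementation.

```go
package main

import "fmt"

// canMutate captures the two conditions listed above: the version attached to
// the Namespace must belong to this Cluster, and the last history event must
// not be newer than the Namespace's version.
func canMutate(namespaceVersion, lastEventVersion, clusterInitialVersion, sharedVersionIncrement int64) bool {
	ownsVersion := namespaceVersion%sharedVersionIncrement == clusterInitialVersion
	historyNotAhead := lastEventVersion <= namespaceVersion
	return ownsVersion && historyNotAhead
}

func main() {
	// With a shared version increment of 10 and a Namespace at version 2:
	fmt.Println(canMutate(2, 2, 2, 10)) // true: a Cluster with initial version 2 owns version 2
	fmt.Println(canMutate(2, 2, 1, 10)) // false: a Cluster with initial version 1 does not
}
```
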
Namespace version change example

Assuming the following scenario:

  • Cluster A comes with initial version: 1
  • Cluster B comes with initial version: 2
  • Shared version increment: 10

T = 0: Namespace α is registered, with its active Cluster set to Cluster A

Namespace α's version is 1.
All Workflow Execution History events generated within this Namespace will come with version 1.

T = 1: Namespace β is registered, with its active Cluster set to Cluster B

Namespace β's version is 2.
All Workflow Execution History events generated within this Namespace will come with version 2.

T = 2: Namespace α is updated, with its active Cluster set to Cluster B

Namespace α's version is 2.
All Workflow Execution History events generated within this Namespace will come with version 2.

T = 3: Namespace β is updated, with its active Cluster set to Cluster A

Namespace β's version is 11.
All Workflow Execution History events generated within this Namespace will come with version 11.

Version history

Version history is a concept that provides a high-level summary of version information with regard to Workflow Execution History.

Whenever a new Workflow Execution History entry is generated, the version from the Namespace is attached to it. The Workflow Execution's mutable state keeps track of all history entries (events) and their corresponding versions.
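
This bookkeeping can be pictured with a small Go sketch that mirrors the (Event ID, Version) pairs shown in the tables below; the types and function are illustrative only.

```go
package main

import "fmt"

// versionHistoryItem records the last event ID seen for a given version,
// matching the (Event ID, Version) pairs shown in the tables below.
type versionHistoryItem struct {
	LastEventID int64
	Version     int64
}

// addEvent updates the version history when a new history event is appended:
// if the event carries the same version as the latest item, that item's
// LastEventID advances; otherwise a new item is appended. This sketches the
// bookkeeping, not the Server's implementation.
func addEvent(history []versionHistoryItem, eventID, version int64) []versionHistoryItem {
	if n := len(history); n > 0 && history[n-1].Version == version {
		history[n-1].LastEventID = eventID
		return history
	}
	return append(history, versionHistoryItem{LastEventID: eventID, Version: version})
}

func main() {
	var h []versionHistoryItem
	// Events from the first example below: IDs 1-3 at version 1, then 4-5 at version 2.
	for _, e := range []struct{ id, version int64 }{{1, 1}, {2, 1}, {3, 1}, {4, 2}, {5, 2}} {
		h = addEvent(h, e.id, e.version)
	}
	fmt.Println(h) // [{3 1} {5 2}]
}
```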

Version history example (without data conflict)
  • Cluster A comes with initial version: 1
  • Cluster B comes with initial version: 2
  • Shared version increment: 10

T = 0: adding event with event ID == 1 & version == 1

View in both Cluster A & B

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 1               | 1       |

T = 1: adding event with event ID == 2 & version == 1

View in both Cluster A & B

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 2               | 1       |
| 2        | 1             |                 |         |

T = 2: adding event with event ID == 3 & version == 1

View in both Cluster A & B

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 3               | 1       |
| 2        | 1             |                 |         |
| 3        | 1             |                 |         |

T = 3: Namespace failover is triggered and the Namespace version is now 2; adding event with event ID == 4 & version == 2

View in both Cluster A & B

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 3               | 1       |
| 2        | 1             | 4               | 2       |
| 3        | 1             |                 |         |
| 4        | 2             |                 |         |

T = 4: adding event with event ID == 5 & version == 2

View in both Cluster A & B

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 3               | 1       |
| 2        | 1             | 5               | 2       |
| 3        | 1             |                 |         |
| 4        | 2             |                 |         |
| 5        | 2             |                 |         |

Since Temporal is AP, during failover (change of the active Cluster for a Namespace) there can be cases where more than one Cluster modifies a Workflow Execution, causing the Workflow Execution History to diverge. The following shows what the version history looks like under such conditions.

Version history example (with data conflict)

The following shows the version history of the same Workflow Execution in two different Clusters.

  • Cluster A comes with initial version: 1
  • Cluster B comes with initial version: 2
  • Cluster C comes with initial version: 3
  • Shared version increment: 10

T = 0:

View in both Cluster B & C

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 2               | 1       |
| 2        | 1             | 3               | 2       |
| 3        | 2             |                 |         |

T = 1: adding event with event ID == 4 & version == 2 in Cluster B

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 2               | 1       |
| 2        | 1             | 4               | 2       |
| 3        | 2             |                 |         |
| 4        | 2             |                 |         |

T = 1: namespace failover to Cluster C, adding event with event ID == 4 & version == 3 in Cluster C

| Events   |               | Version History |         |
| -------- | ------------- | --------------- | ------- |
| Event ID | Event Version | Event ID        | Version |
| 1        | 1             | 2               | 1       |
| 2        | 1             | 3               | 2       |
| 3        | 2             | 4               | 3       |
| 4        | 3             |                 |         |

T = 2: replication task from Cluster C arrives in Cluster B

Note: The structures below are trees. The shared rows form the trunk, and the two entries for Event ID 4 are divergent branches (version 2 written by Cluster B, version 3 written by Cluster C).

Events (shared trunk):

| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |

Branch from Cluster B:

| Event ID | Event Version |
| -------- | ------------- |
| 4        | 2             |

Branch from Cluster C:

| Event ID | Event Version |
| -------- | ------------- |
| 4        | 3             |

Version History (shared trunk):

| Event ID | Version |
| -------- | ------- |
| 2        | 1       |
| 3        | 2       |

Branch from Cluster B:

| Event ID | Version |
| -------- | ------- |
| 4        | 2       |

Branch from Cluster C:

| Event ID | Version |
| -------- | ------- |
| 4        | 3       |

T = 2: replication task from Cluster B arrives in Cluster C, same as above

Conflict resolution

When a Workflow Execution History diverges, proper conflict resolution should be applied.

In Multi-cluster Replication, Workflow Execution History entries (events) are modeled as a tree, as shown in the second example in Version History.

Workflow Execution Histories that diverge will have more than one history branch. Among all history branches, the history branch with the highest version is considered the current branch and the Workflow Execution's mutable state is a summary of the current branch. Whenever there is a switch between Workflow Execution History branches, a complete rebuild of the Workflow Execution's mutable state will occur.
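
As a rough Go sketch of the selection rule (assuming each branch is summarized by the version of its last event), the current branch is simply the one with the highest such version:

```go
package main

import "fmt"

// branch summarizes a history branch by the version of its last event; the
// branch whose last event has the highest version is the current branch.
// This is a simplified sketch of the selection rule described above.
type branch struct {
	Name             string
	LastEventVersion int64
}

func currentBranch(branches []branch) branch {
	current := branches[0]
	for _, b := range branches[1:] {
		if b.LastEventVersion > current.LastEventVersion {
			current = b
		}
	}
	return current
}

func main() {
	// The diverged histories from the earlier example: event 4 at version 2
	// (written by Cluster B) versus event 4 at version 3 (written by Cluster C).
	fmt.Println(currentBranch([]branch{
		{Name: "from Cluster B", LastEventVersion: 2},
		{Name: "from Cluster C", LastEventVersion: 3},
	})) // {from Cluster C 3}
}
```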

Zombie Workflows

There is an existing contract that, for any Namespace and Workflow Id combination, at most one Run (Namespace + Workflow Id + Run Id) can be open / executing at a time.

Multi-cluster Replication aims to keep the Workflow Execution History as up-to-date as possible among all participating Clusters.

Due to the nature of Multi-cluster Replication (for example, Workflow Execution History events are replicated asynchronously) different Runs (same Namespace and Workflow Id) can arrive at the target Cluster at different times, sometimes out of order, as shown below:

  • Cluster A sends Run 1 Replication Events to the Network Layer.
  • Cluster A then sends Run 2 Replication Events to the Network Layer.
  • The Network Layer delivers Run 2 Replication Events to Cluster B first.
  • The Network Layer then delivers Run 1 Replication Events to Cluster B.

Since Run 2 appears in Cluster B first, Run 1 cannot be replicated as "runnable" because of the rule that at most one Run can be open (see above); thus the "zombie" Workflow Execution state is introduced. A "zombie" state is one in which a Workflow Execution cannot be actively mutated by a Cluster (assuming the corresponding Namespace is active in this Cluster). A zombie Workflow Execution can be changed only by replication Tasks.

Run 1 will be replicated in the same way as Run 2, except that Run 1's execution will be in the "zombie" state before Run 1 reaches completion.
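
A minimal Go sketch of this admission rule, under the assumption that a Cluster tracks at most one open Run per Namespace + Workflow Id; all names are illustrative.

```go
package main

import "fmt"

type runState string

const (
	stateRunning runState = "Running"
	stateZombie  runState = "Zombie"
)

// openRuns tracks the one Run that may be open per Namespace + Workflow Id,
// mirroring the contract described above. Keys and values are illustrative.
var openRuns = map[string]string{} // "namespace/workflowID" -> open runID

// admitReplicatedRun decides the state of an incoming replicated Run: if a
// different Run is already open for the same Namespace + Workflow Id, the
// new Run becomes a zombie and can only be changed by replication tasks.
func admitReplicatedRun(namespace, workflowID, runID string) runState {
	key := namespace + "/" + workflowID
	if open, ok := openRuns[key]; ok && open != runID {
		return stateZombie
	}
	openRuns[key] = runID
	return stateRunning
}

func main() {
	// Run 2's replication events arrive at Cluster B first, then Run 1's.
	fmt.Println(admitReplicatedRun("default", "wf", "run-2")) // Running
	fmt.Println(admitReplicatedRun("default", "wf", "run-1")) // Zombie
}
```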

Workflow Task processing

In the context of Multi-cluster Replication, a Workflow Execution's mutable state is an entity that tracks all pending tasks. Prior to the introduction of Multi-cluster Replication, Workflow Execution History entries (events) came from a single branch, and the Temporal Server would only append new entries (events) to the Workflow Execution History.

After the introduction of Multi-cluster Replication, it is possible that a Workflow Execution can have multiple Workflow Execution History branches. Tasks generated according to one history branch may become invalidated by switching history branches during conflict resolution.

Example:

T = 0: task A is generated according to Event Id: 4, version: 2

Events (shared history):

| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |

Continuation that task A belongs to:

| Event ID | Event Version |
| -------- | ------------- |
| 4        | 2             |

T = 1: conflict resolution happens; the Workflow Execution's mutable state is rebuilt, and history Event Id: 4, version: 3 is written to persistence

Events (shared trunk):

| Event ID | Event Version |
| -------- | ------------- |
| 1        | 1             |
| 2        | 1             |
| 3        | 2             |

Branch that task A belongs to (no longer current):

| Event ID | Event Version |
| -------- | ------------- |
| 4        | 2             |

Current branch / mutable state:

| Event ID | Event Version |
| -------- | ------------- |
| 4        | 3             |

T = 2: task A is loaded.

At this time, due to the rebuild of the Workflow Execution's mutable state (conflict resolution), task A is no longer relevant (the event that task A corresponds to belongs to a non-current branch). Task processing logic will verify both the Event Id and the version of the task against the corresponding Workflow Execution's mutable state, and then discard task A.
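
A minimal Go sketch of this check, assuming the current branch is summarized by the (last event ID, version) pairs used earlier; the function and its inputs are illustrative, not the Server's task-processing code.

```go
package main

import "fmt"

// versionHistoryItem is the same (last event ID, version) pair used earlier.
type versionHistoryItem struct {
	LastEventID int64
	Version     int64
}

// taskStillRelevant sketches the check described above: a task generated for
// (eventID, version) is processed only if that event is still part of the
// current branch's version history; otherwise it is discarded.
func taskStillRelevant(current []versionHistoryItem, eventID, version int64) bool {
	var prevLastEventID int64 // event IDs covered by earlier items
	for _, item := range current {
		if version == item.Version {
			return eventID > prevLastEventID && eventID <= item.LastEventID
		}
		prevLastEventID = item.LastEventID
	}
	return false
}

func main() {
	// Current branch after the conflict resolution above:
	// events 1-2 at version 1, event 3 at version 2, event 4 at version 3.
	current := []versionHistoryItem{{2, 1}, {3, 2}, {4, 3}}
	fmt.Println(taskStillRelevant(current, 4, 2)) // false: task A is discarded
	fmt.Println(taskStillRelevant(current, 4, 3)) // true: event 4 is on the current branch
}
```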