Temporal Server security

Overview

A secured Temporal server has its network communication encrypted and has authentication and authorization protocols set up for API calls made to it. Without these, your server could be accessed by unwanted entities.

This page documents the built-in, opt-in security measures that come with Temporal. However, users may also choose to design their own security architecture with reverse proxies, or run unsecured instances inside a VPC environment.

Server Samples

The https://github.com/temporalio/samples-server repo offers two examples, which are further explained below:

  • TLS: how to configure Transport Layer Security (TLS) to secure network communication with and within a Temporal cluster.
  • Authorizer: how to inject a low-level authorizer component that can control access to all API calls.

Encryption in transit with mTLS

Temporal supports Mutual Transport Layer Security (mTLS) for encrypting network traffic between the services of a Cluster and between application processes and a Cluster. Either self-signed or properly minted certificates can be used for mTLS. mTLS is set in Temporal's TLS configuration. The configuration has two sections, so that intra-Cluster and external traffic can be encrypted with different sets of certificates and settings:

  • internode: Configuration for encrypting communication between nodes in the cluster.
  • frontend: Configuration for encrypting the Frontend's public endpoints.

A customized configuration can be passed using either the WithConfig or WithConfigLoader server options.

See TLS configuration reference for more details.
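As a rough sketch, a configuration with both sections might look like the following. Property names follow the TLS configuration reference; the file paths are placeholders:

```yaml
tls:
  internode:
    server:
      certFile: /path/to/internode.crt
      keyFile: /path/to/internode.key
    client:
      rootCaFiles:
        - /path/to/internode-ca.crt
  frontend:
    server:
      certFile: /path/to/frontend.crt
      keyFile: /path/to/frontend.key
```

Check the TLS configuration reference for the exact property names supported by your server version.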

Encryption at rest with Data Converter

A Data Converter is a Temporal SDK component that encodes and decodes data entering and exiting a Temporal Server.

Data Converter encodes and decodes data

Data is encoded before it is sent to a Temporal Server, and it is decoded when it is received from a Temporal Server.

The main pieces of data that run through the Data Converter are arguments and return values:

  • The Client:
    • Encodes Workflow, Signal, and Query arguments.
    • Decodes Workflow and Query return values.
  • The Worker:
    • Decodes Workflow, Signal, and Query arguments.
    • Encodes Workflow and Query return values.
    • Decodes and encodes Activity arguments and return values.

Each piece of data (like a single argument or return value) is encoded as a Payload Protobuf message, which consists of binary data and key-value metadata.
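For illustration, a JSON rendering of such a Payload might look like the following. Because Payload metadata values and data are bytes fields, both appear base64-encoded in proto3 JSON; here the metadata encoding is "json/plain" and the data is the JSON value {"count":3}:

```json
{
    "metadata": {
        "encoding": "anNvbi9wbGFpbg=="
    },
    "data": "eyJjb3VudCI6M30="
}
```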

Default Data Converter

Each Temporal SDK includes a default Data Converter. In most SDKs, the default converter supports binary, JSON, and Protobufs. (In SDKs that cannot determine parameter types at runtime—like TypeScript—Protobufs aren't included in the default converter.) It tries to encode values in the following order:

  • Null
  • Binary
  • Protobuf JSON
  • JSON

For example:

  • If a value is an instance of a Protobuf message, it will be encoded with proto3 JSON.
  • If a value isn't null, binary, or a Protobuf, it will be encoded as JSON. If any part of it is not serializable as JSON (for example, a Date—see JSON data types), an error will be thrown.

The default converter also supports decoding binary Protobufs.
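The ordering above can be sketched in Go. This is a simplified stand-in, not the SDK's actual converter: the Payload type is a local struct, and the Protobuf steps are omitted. The encoding metadata values ("binary/null", "binary/plain", "json/plain") match the ones the default converter uses:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Payload is a simplified stand-in for Temporal's Payload message:
// binary data plus key-value metadata.
type Payload struct {
	Metadata map[string]string
	Data     []byte
}

// toPayload mimics the default converter's ordering: nil, binary, then JSON.
// (Protobuf handling is omitted in this sketch.)
func toPayload(v interface{}) (Payload, error) {
	switch val := v.(type) {
	case nil:
		return Payload{Metadata: map[string]string{"encoding": "binary/null"}}, nil
	case []byte:
		return Payload{Metadata: map[string]string{"encoding": "binary/plain"}, Data: val}, nil
	default:
		data, err := json.Marshal(val)
		if err != nil {
			return Payload{}, fmt.Errorf("value is not JSON serializable: %w", err)
		}
		return Payload{Metadata: map[string]string{"encoding": "json/plain"}, Data: data}, nil
	}
}

func main() {
	p, _ := toPayload(map[string]int{"count": 3})
	fmt.Println(p.Metadata["encoding"], string(p.Data)) // prints: json/plain {"count":3}
}
```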

Custom Data Converter

Applications can create their own custom Data Converters to alter the format (for example using MessagePack instead of JSON) or add compression or encryption.

To use a custom Data Converter, provide it as an option when creating the Client; Workers created from that Client use it as well.

Custom Data Converters are not applied to all data. For example, Search Attributes must remain readable by the Temporal Server for indexing, so they are always handled by the default converter.

Payload Codecs

In TypeScript and Go, data conversion happens in two stages:

  1. A Payload Converter converts a value into a Payload.
  2. A Payload Codec transforms an array of Payloads (for example, a list of Workflow arguments) into another array of Payloads.

The Payload Codec is an optional step that happens between the wire and the Payload Converter:

Temporal Server <--> Wire <--> Payload Codec <--> Payload Converter <--> User code

Common Payload Codec transformations are compression and encryption.

In codec implementations, we recommend running the function (compression, encryption, and so on) on the entire input Payload and putting the result in a new Payload's data field. That way, the input Payload's headers are preserved.
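A minimal Go sketch of this pattern, using gzip compression and a local Payload stand-in rather than the SDK's codec interface:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/json"
	"fmt"
	"io"
)

// Payload is a simplified JSON stand-in for Temporal's Payload message.
type Payload struct {
	Metadata map[string]string `json:"metadata,omitempty"`
	Data     []byte            `json:"data,omitempty"`
}

// encode compresses the entire serialized input Payload and stores the
// result in a new Payload's data field, so the input's metadata survives
// inside the compressed bytes.
func encode(p Payload) (Payload, error) {
	raw, err := json.Marshal(p)
	if err != nil {
		return Payload{}, err
	}
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(raw); err != nil {
		return Payload{}, err
	}
	if err := zw.Close(); err != nil {
		return Payload{}, err
	}
	return Payload{
		Metadata: map[string]string{"encoding": "binary/gzip"},
		Data:     buf.Bytes(),
	}, nil
}

// decode reverses encode, recovering the original Payload.
func decode(p Payload) (Payload, error) {
	zr, err := gzip.NewReader(bytes.NewReader(p.Data))
	if err != nil {
		return Payload{}, err
	}
	raw, err := io.ReadAll(zr)
	if err != nil {
		return Payload{}, err
	}
	var out Payload
	err = json.Unmarshal(raw, &out)
	return out, err
}

func main() {
	in := Payload{Metadata: map[string]string{"encoding": "json/plain"}, Data: []byte(`"hello"`)}
	enc, _ := encode(in)
	out, _ := decode(enc)
	fmt.Println(out.Metadata["encoding"], string(out.Data)) // prints: json/plain "hello"
}
```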

Encryption

Doing encryption in a custom Data Converter ensures that all application data is encrypted during the following actions:

  • Being sent to/from Temporal Server.
  • Moving inside Temporal Server.
  • Stored by Temporal Server.

Data then exists unencrypted in memory only in the Client and in the Worker Process that executes Workflows and Activities, on hosts that the application developer controls.

Our encryption samples use AES GCM with 256-bit keys.
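A self-contained sketch of AES-256-GCM in Go, the same primitive the samples use. The zero-valued key is for demonstration only; in practice, use a securely stored random 32-byte key:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encrypt seals plaintext with AES-256-GCM, prepending the random nonce
// to the ciphertext so decrypt can recover it.
func encrypt(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 32 bytes for AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt splits off the nonce and opens the ciphertext.
func decrypt(key, data []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32) // demonstration key only
	ct, _ := encrypt(key, []byte("payload data"))
	pt, _ := decrypt(key, ct)
	fmt.Println(string(pt)) // prints: payload data
}
```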

Codec Server

A Codec Server performs additional levels of encoding and decoding on Payloads that are handled by tctl or the Web UI.

The Web UI and tctl both use a default Data Converter, which is capable of serialization only.

Codec Servers can encrypt, compress, and change the format of a Payload object. These measures can further secure your data.

Use case: tctl

Suppose that you want to view Workflow History. This information needs to be decoded before it can be viewed.

You can use tctl workflow showid to view a Workflow Execution Event History.

    tctl workflow showid <workflowID>

With a Codec Server configured, the Payload is decoded before being deserialized by tctl's default Data Converter. tctl sends the Payload to the configured endpoint and receives a decoded Payload if the request succeeds.

The Data Converter passes this result back to the command line, which prints the decoded result.

Use case: Web UI

Workflow Execution Event History is available in the Web UI. Payload information for each Event is captured within Event 'input' and 'result' fields. Without a Codec Server, this information remains encoded.

Passing these Payloads through a Codec Server returns decoded results to the Web UI. Make sure to enter a valid URL and port for the codec endpoint when configuring the Codec Server.

Codec Server setup

The Codec Server Go sample shows how to decode an encoded Payload so that it can be displayed by tctl and the Web UI.

The codec HTTP protocol specifies two endpoints that handle Payload encoding and decoding.

Implementations must do the following:

  • Send and receive the Payloads protobuf message as JSON.
  • Check only the final part of the incoming URL to determine whether the request is for /encode or /decode.
Note: A Temporal Cluster should already be running before you start the Codec Server.

tctl

Start up the Codec Server.

Configure the codec endpoint:

tctl --codec_endpoint 'http://localhost:{PORT}/{namespace}' workflow show --wid codecserver_workflowID

Web UI

codec:
    endpoint: {{ default .Env.TEMPORAL_CODEC_ENDPOINT "{namespace}"}}

The codec endpoint can be specified in the configuration file, as above. It can also be changed at runtime.

Select the button with two arrows in the top right area of the screen. This action displays the codec endpoint dialog.

Enter the URL and port number for your codec endpoint. Exit the dialog, go back to the previous page, and refresh the page.

The button should now be light blue, and your Payloads should be displayed in a readable format.

Authentication

Several authentication mechanisms are available to prevent unwanted access, covering servers, client connections, and users.

Servers

To prevent spoofing and man-in-the-middle (MITM) attacks, specify serverName in the client section of your mTLS configuration. This enables established connections to authenticate the endpoint, ensuring that the server certificate presented to any connecting Client carries the expected server name in its CN property. It can be used for both internode and frontend endpoints.

More guidance on mTLS setup can be found in the samples-server repo and you can reach out to us for further guidance.

Client connections

To restrict network access to cluster endpoints, you can limit connections to clients holding certificates issued by a specific Certificate Authority (CA). Use the clientCAFiles / clientCAData and requireClientAuth properties in both the internode and frontend sections of the mTLS configuration.
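A sketch combining these settings with the serverName check from the previous section. Property names follow the TLS configuration reference; the paths and server name are placeholders:

```yaml
tls:
  frontend:
    server:
      certFile: /path/to/frontend.crt
      keyFile: /path/to/frontend.key
      requireClientAuth: true
      clientCaFiles:
        - /path/to/client-ca.crt
    client:
      serverName: temporal-frontend   # must match the server certificate's CN
```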

Users

To restrict access to specific users, authentication and authorization are performed through extensibility points and plugins, as described in the Authorization section below.

Authorization

Temporal offers two plugin interfaces for implementing API call authorization:

  • ClaimMapper
  • Authorizer

The authorization and claim mapping logic is customizable, making it adaptable to a variety of use cases and identity schemes. When these plugins are provided, the Frontend invokes them before executing the requested operation.

See https://github.com/temporalio/samples-server/blob/main/extensibility/authorizer for a sample implementation.

Authorizer plugin interface

The Authorizer has a single Authorize method which is invoked for each incoming API call that is received by the Frontend gRPC service. The Authorize method receives information about the API call and the role/permission claims of the caller.

common/authorization/authorizer.go

// Authorizer is an interface for implementing authorization logic
type Authorizer interface {
    Authorize(ctx context.Context, caller *Claims, target *CallTarget) (Result, error)
}

Authorizer allows for a wide range of authorization logic, because information such as the call target, the caller's role/permission claims, and any other data available to the system can be used in the decision. The Authorize method receives the following arguments:

  • context.Context: General context of the call.
  • authorization.Claims: Claims about the roles assigned to the caller. Its intended use is described below.
  • authorization.CallTarget: Target of the API call.

common/authorization/authorizer.go

// CallTarget contains information for the Authorizer to make a decision.
// It can be extended to include resources like WorkflowType and TaskQueue
type CallTarget struct {
    // APIName must be the full API function name.
    // Example: "/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution".
    APIName string
    // If a Namespace is not being targeted, this is set to an empty string.
    Namespace string
    // Request contains a deserialized copy of the API request object
    Request interface{}
}

The Authorize method then returns one of two possible decisions within the Result.Decision field:

  • DecisionDeny: the requested API call is not invoked and an error is returned to the caller.
  • DecisionAllow: the requested API call is invoked.
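As an illustration of the kind of logic an Authorizer can express, the following self-contained sketch uses simplified local stand-ins for the types in common/authorization (not the real package) and allows writes only for writers and admins of the targeted Namespace:

```go
package main

import (
	"fmt"
	"strings"
)

// Role, Claims, CallTarget, and Result are simplified local stand-ins
// for the types in go.temporal.io/server/common/authorization.
type Role int

const (
	RoleWorker Role = 1 << iota
	RoleReader
	RoleWriter
	RoleAdmin
	RoleUndefined Role = 0
)

type Claims struct {
	Subject    string
	System     Role
	Namespaces map[string]Role
}

type CallTarget struct {
	APIName   string
	Namespace string
}

type Decision int

const (
	DecisionDeny Decision = iota
	DecisionAllow
)

type Result struct{ Decision Decision }

// authorize allows read-style calls for any granted role and everything
// else only for RoleWriter or RoleAdmin, within the targeted Namespace.
// The read-call heuristic here is deliberately crude.
func authorize(claims *Claims, target *CallTarget) Result {
	role := claims.System | claims.Namespaces[target.Namespace]
	isRead := strings.Contains(target.APIName, "Get") ||
		strings.Contains(target.APIName, "Describe") ||
		strings.Contains(target.APIName, "List")
	if isRead && role&(RoleReader|RoleWriter|RoleAdmin) != 0 {
		return Result{DecisionAllow}
	}
	if !isRead && role&(RoleWriter|RoleAdmin) != 0 {
		return Result{DecisionAllow}
	}
	return Result{DecisionDeny}
}

func main() {
	claims := &Claims{Namespaces: map[string]Role{"accounting": RoleReader}}
	r := authorize(claims, &CallTarget{
		APIName:   "/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution",
		Namespace: "accounting",
	})
	fmt.Println(r.Decision == DecisionDeny) // prints true: a reader cannot start Workflows
}
```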

If you don't want to create your own, you can use the default Authorizer:

a := authorization.NewDefaultAuthorizer()

Configure your Authorizer when you start the server via the temporal.WithAuthorizer server option.

If an Authorizer is not set in the server options, Temporal uses the nopAuthority authorizer that unconditionally allows all API calls to pass through.

ClaimMapper plugin interface

ClaimMapper has a single method, GetClaims, that is responsible for translating information from the authorization token and/or mTLS certificate of the caller into Claims about the caller's roles within Temporal. This component is customizable and can be set via the temporal.WithClaimMapper server option, enabling a wide range of options for interpreting a caller's identity.

common/authorization/claim_mapper.go

// ClaimMapper converts authorization info of a subject into Temporal claims (permissions) for authorization
type ClaimMapper interface {
    GetClaims(authInfo *AuthInfo) (*Claims, error)
}

A typical approach is for ClaimMapper to interpret custom Claims from a caller's JWT access token, such as membership in groups, and map them to Temporal roles for the user. Another approach is to use the subject information from the caller's TLS certificate as a parameter for determining roles. See the default JWT ClaimMapper as an example.

AuthInfo

The AuthInfo struct that is passed to the claim mapper's GetClaims method contains an authorization token extracted from the authorization header of the gRPC request. It also includes a pointer to the pkix.Name struct that contains an X.509 distinguished name from the caller's mTLS certificate.

common/authorization/claim_mapper.go

// Authentication information from subject's JWT token or/and mTLS certificate
type AuthInfo struct {
    AuthToken     string
    TLSSubject    *pkix.Name
    TLSConnection *credentials.TLSInfo
    ExtraData     string
    Audience      string
}

Claims

The Claims struct contains information about permission claims granted to the caller. The Authorizer assumes that the caller has been properly authenticated and trusts the Claims that are passed to it for making an authorization decision.

common/authorization/roles.go

// Claims contains the identity of the subject and subject's roles at the system level and for individual namespaces
type Claims struct {
    // Identity of the subject
    Subject string
    // Role within the context of the whole Temporal cluster or a multi-cluster setup
    System Role
    // Roles within specific namespaces
    Namespaces map[string]Role
    // Free form bucket for extra data
    Extensions interface{}
}

Role is a bit mask that is a combination of one or more of the role constants:

common/authorization/roles.go

// User authz within the context of an entity, such as system, namespace or workflow.
// User may have any combination of these authz within each context, except for RoleUndefined, as a bitmask.
const (
    RoleWorker = Role(1 << iota)
    RoleReader
    RoleWriter
    RoleAdmin
    RoleUndefined = Role(0)
)

For example, a role can be set as follows:

role := authorization.RoleReader | authorization.RoleWriter

Default JWT ClaimMapper

Temporal offers a default JSON Web Token ClaimMapper that extracts claims from JWT access tokens and translates them into Temporal Claims. The default JWT ClaimMapper needs a public key to validate tokens' digital signatures and expects JWT tokens to be in the format described below.

You can use the default JWT ClaimMapper as an example to build your own ClaimMapper for translating a caller's authorization information from other formats and conventions into Temporal Claims.

To get an instance of the default JWT ClaimMapper, call NewDefaultJWTClaimMapper and provide it with an instance of a TokenKeyProvider, a pointer to a config.Authorization config, and a logger.

claimMapper := authorization.NewDefaultJWTClaimMapper(tokenKeyProvider, authCfg, logger)

TokenKeyProvider

To obtain public keys from issuers of JWT tokens and to refresh them over time, the default JWT ClaimMapper uses another pluggable component, the TokenKeyProvider.

common/authorization/token_key_provider.go

// Provides keys for validating JWT tokens
type TokenKeyProvider interface {
    EcdsaKey(alg string, kid string) (*ecdsa.PublicKey, error)
    HmacKey(alg string, kid string) ([]byte, error)
    RsaKey(alg string, kid string) (*rsa.PublicKey, error)
    SupportedMethods() []string
    Close()
}

// RawTokenKeyProvider is a TokenKeyProvider that provides keys for validating JWT tokens
type RawTokenKeyProvider interface {
    GetKey(ctx context.Context, token *jwt.Token) (interface{}, error)
    SupportedMethods() []string
    Close()
}

Temporal provides an implementation of the TokenKeyProvider, rsaTokenKeyProvider, that dynamically obtains public keys from specified issuers' URIs that adhere to the JWK format.

provider := authorization.NewRSAKeyProvider(cfg)

Note that the rsaTokenKeyProvider returned by NewRSAKeyProvider only implements the RsaKey and Close methods, and returns an error from the EcdsaKey and HmacKey methods. It is configured via config.Config.Global.Authorization.JWTKeyProvider:

common/config/config.go

// Contains the config for signing key provider for validating JWT tokens
JWTKeyProvider struct {
    KeySourceURIs   []string      `yaml:"keySourceURIs"`
    RefreshInterval time.Duration `yaml:"refreshInterval"`
}

KeySourceURIs are the HTTP endpoints that return public keys of token issuers in the JWK format. RefreshInterval defines how frequently keys should be refreshed. For example, Auth0 exposes such endpoints as https://YOUR_DOMAIN/.well-known/jwks.json.
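A configuration sketch for this provider. The domain is a placeholder; key names follow config.Config.Global.Authorization, so verify them against your server version:

```yaml
global:
  authorization:
    jwtKeyProvider:
      keySourceURIs:
        - https://YOUR_DOMAIN/.well-known/jwks.json
      refreshInterval: 1h
    permissionsClaimName: permissions
```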

config.Authorization

  • permissionsClaimName: Name of the Permissions Claim to be used by the default JWT ClaimMapper. "permissions" is used as a default name. Use config.Config.Global.Authorization.PermissionsClaimName configuration property to override the name.

Format of JSON Web Tokens

The default JWT ClaimMapper expects authorization tokens to be in the following format:

Bearer <token>
  • <token>: Must be the Base64 url-encoded value of the token.

The default JWT ClaimMapper expects Permissions Claim in the JWT token to be named "permissions", unless overridden in configuration.

Permissions Claim is expected to be a collection of Individual Permission Claims. Each Individual Permission Claim is expected to be in the following format:

<namespace>:<permission>
  • <namespace>: This can be either a Temporal Namespace name or "system" to represent system-wide permissions.
  • <permission>: This can be one of the four values:
    • read
    • write
    • worker
    • admin

The default JWT claim mapper converts these permissions into Temporal roles for the caller as described above.

Multiple permissions for the same namespace get OR'ed. For example, when accounting:read and accounting:write are found in a token, they are translated into authorization.RoleReader | authorization.RoleWriter.
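This mapping can be sketched as a small Go function. The Role constants are local stand-ins and the parsing is simplified relative to the actual default JWT ClaimMapper:

```go
package main

import (
	"fmt"
	"strings"
)

// Role mirrors the bitmask roles in common/authorization (simplified).
type Role int

const (
	RoleWorker Role = 1 << iota
	RoleReader
	RoleWriter
	RoleAdmin
)

// rolesFromPermissions ORs together the roles granted per namespace,
// mirroring how the default JWT ClaimMapper treats the permissions claim.
func rolesFromPermissions(perms []string) map[string]Role {
	toRole := map[string]Role{
		"read":   RoleReader,
		"write":  RoleWriter,
		"worker": RoleWorker,
		"admin":  RoleAdmin,
	}
	out := map[string]Role{}
	for _, p := range perms {
		parts := strings.SplitN(p, ":", 2)
		if len(parts) != 2 {
			continue // this sketch silently skips malformed claims
		}
		out[parts[0]] |= toRole[parts[1]]
	}
	return out
}

func main() {
	roles := rolesFromPermissions([]string{"accounting:read", "accounting:write", "system:read"})
	fmt.Println(roles["accounting"] == RoleReader|RoleWriter) // prints true
}
```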

Example of a JWT payload for the default JWT ClaimMapper:

{
    "permissions": [
        "system:read",
        "namespace1:write"
    ],
    "aud": [
        "audience"
    ],
    "exp": 1630295722,
    "iss": "Issuer"
}

Single sign-on integration

Temporal can be integrated with a single sign-on (SSO) experience by utilizing the ClaimMapper and Authorizer plugins. The default JWT ClaimMapper implementation can be used as is or as a base for a custom implementation of a similar plugin.

Temporal Web

To enable SSO for the Temporal Web UI, edit the web service's configuration per the Temporal Web README.