

 Amazon Redshift will no longer support the creation of new Python UDFs starting Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://amazonaws-china.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/). 

# Cluster operations

After you create a cluster, you can perform cluster operations to optimize performance, control costs, and ensure high availability. Cluster operations allow you to resize, pause, resume, or even recreate clusters as your data warehousing needs evolve. 

Common use cases include scaling compute capacity for peak workloads, pausing clusters during inactive periods to reduce costs, and recreating clusters with different configurations or in different Availability Zones for disaster recovery. The following sections cover the details of performing various cluster operations to effectively manage your Amazon Redshift environment.

# Creating a cluster


With Amazon Redshift, you can create a provisioned cluster to launch a new data warehouse. A provisioned cluster is a collection of computing resources called nodes, which are organized into a single, massively parallel processing (MPP) system. 

Before you create a cluster, read [Amazon Redshift provisioned clusters](working-with-clusters.md) and [Clusters and nodes in Amazon Redshift](working-with-clusters.md#rs-about-clusters-and-nodes).

**To create a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. The clusters for your account in the current Amazon Region are listed. A subset of properties of each cluster is displayed in columns in the list. 

1. Choose **Create cluster** to create a cluster. 

1. Follow the instructions on the console page to enter the properties for **Cluster configuration**. 

   The following step describes an Amazon Redshift console that is running in an Amazon Web Services Region that supports RA3 node types. For a list of Amazon Web Services Regions that support RA3 node types, see [Overview of RA3 node types](https://docs.amazonaws.cn/redshift/latest/mgmt/working-with-clusters.html#rs-ra3-node-types) in the *Amazon Redshift Management Guide*. 

   If you don't know how large to size your cluster, choose **Help me choose**. Doing this starts a sizing calculator that asks you questions about the size and query characteristics of the data that you plan to store in your data warehouse. If you know the required size of your cluster (that is, the node type and number of nodes), choose **I'll choose**. Then choose the **Node type** and number of **Nodes** to size your cluster for the proof of concept.
**Note**  
If your organization is eligible and your cluster is being created in an Amazon Web Services Region where Amazon Redshift Serverless is unavailable, you might be able to create a cluster under the Amazon Redshift free trial program. Choose either **Production** or **Free trial** to answer the question **What are you planning to use this cluster for?** When you choose **Free trial**, you create a configuration with the dc2.large node type. For more information about choosing a free trial, see [Amazon Free Tier](https://www.amazonaws.cn/redshift/pricing).  

1. In the **Database configuration** section, specify a value for **Admin user name**. For **Admin password**, you can choose from the following options:
   +  **Generate a password** – Use a password generated by Amazon Redshift. 
   +  **Manually add an admin password** – Use your own password. 
   +  **Manage admin credentials in Amazon Secrets Manager** – Amazon Redshift uses Amazon Secrets Manager to generate and manage your admin password. Using Amazon Secrets Manager to generate and manage your password's secret incurs a fee. For information on Amazon Secrets Manager pricing, see [Amazon Secrets Manager Pricing](https://www.amazonaws.cn/secrets-manager/pricing/). 

1. (Optional) Follow the instructions on the console page to enter properties for **Cluster permissions**. Provide cluster permissions if your cluster needs to access other Amazon services for you, for example to load data from Amazon S3. 

1. Choose **Create cluster** to create the cluster. The cluster might take several minutes to be ready to use.
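If you prefer the Amazon CLI, the console procedure above maps to a single `create-cluster` call. The following is a minimal sketch; the cluster identifier, node type, and credentials are placeholder values to replace with your own.

```
aws redshift create-cluster \
    --cluster-identifier my-first-cluster \
    --node-type ra3.xlplus \
    --number-of-nodes 2 \
    --master-username adminuser \
    --master-user-password 'ChangeMe123!' \
    --db-name dev
```

You can also pass `--iam-roles` to attach cluster permissions at creation time, which corresponds to the **Cluster permissions** step above.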

## Additional configurations


When you create a cluster, you can specify additional properties to customize it. You can find more details about some of these properties in the following list. 

**IP address type**  
Choose the IP address type for your cluster. You can choose to have your resources communicate only over the IPv4 addressing protocol, or choose dual-stack mode, which lets your resources communicate over both IPv4 and IPv6. This feature is only available in the Amazon GovCloud (US-East) and Amazon GovCloud (US-West) Regions. For more information on Amazon Regions, see [Regions and Availability Zones](https://www.amazonaws.cn/about-aws/global-infrastructure/regions_az/).

**Virtual private cloud (VPC)**  
Choose a VPC that has a cluster subnet group. After the cluster is created, the cluster subnet group can't be changed. 

**Parameter groups**  
Choose a cluster parameter group to associate with the cluster. If you don't choose one, the cluster uses the default parameter group. 

**Encryption**  
Choose whether you want to encrypt all data within the cluster and its snapshots. If you leave the default setting, **None**, encryption is not enabled. If you want to enable encryption, choose whether you want to use Amazon Key Management Service (Amazon KMS) or a hardware security module (HSM), and then configure the related settings. For more information about encryption in Amazon Redshift, see [Amazon Redshift database encryption](working-with-db-encryption.md).  
+ **KMS**

  Choose **Use Amazon Key Management Service (Amazon KMS)** if you want to enable encryption and use Amazon KMS to manage your encryption key. Also, choose the key to use. You can choose a default key, a key from the current account, or a key from a different account.
**Note**  
If you want to use a key from another Amazon account, then enter the Amazon Resource Name (ARN) for the key to use. You must have permission to use the key. For more information about access to keys in Amazon KMS, see [Controlling access to your keys](https://docs.amazonaws.cn/kms/latest/developerguide/control-access.html) in the *Amazon Key Management Service Developer Guide*.

  For more information about using Amazon KMS encryption keys in Amazon Redshift, see [Encryption using Amazon KMS](working-with-db-encryption.md#working-with-aws-kms).
+ **HSM**

  Choose **HSM** if you want to enable encryption and use a hardware security module (HSM) to manage your encryption key.

  If you choose **HSM**, choose values from **HSM Connection** and **HSM Client Certificate**. These values are required for Amazon Redshift and the HSM to form a trusted connection over which the cluster key can be passed. The HSM connection and client certificate must be set up in Amazon Redshift before you launch a cluster. For more information about setting up HSM connections and client certificates, see [Encryption using hardware security modules](working-with-db-encryption.md#working-with-HSM).

**Maintenance track**  
You can choose the **Current**, **Trailing**, or, when available, **Preview** maintenance track for the cluster version. 

**Monitoring**  
You can choose whether to create CloudWatch alarms. 

**Configure cross-region snapshot**  
You can choose whether to enable cross-Region snapshots. 

**Automated snapshot retention period**  
You can choose a retention period of up to 35 days for automated snapshots. If the node type is DC2, you can choose zero (0) days so that automated snapshots aren't created.

**Manual snapshot retention period**  
You can choose the number of days or `Indefinitely` to retain these snapshots. 

**Extra compute resources for automatic optimizations**  
You can choose whether to allocate extra compute resources to perform automatic optimizations, even during periods of heavy usage. For more information, see [Allocating extra compute resources for automatic database optimization](https://docs.amazonaws.cn/redshift/latest/dg/t_extra-compute-autonomics.html) in the *Amazon Redshift Database Developer Guide*.
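Several of these additional configurations correspond to `create-cluster` flags in the Amazon CLI. For example, the following sketch creates a KMS-encrypted cluster; the key ARN and other values are placeholders.

```
aws redshift create-cluster \
    --cluster-identifier my-encrypted-cluster \
    --node-type ra3.xlplus \
    --number-of-nodes 2 \
    --master-username adminuser \
    --master-user-password 'ChangeMe123!' \
    --encrypted \
    --kms-key-id arn:aws-cn:kms:cn-north-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab
```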

# Creating a disk space alarm


You can monitor disk space usage and set alarms to be notified when disk space exceeds a specified threshold for a cluster. Creating a disk space usage alarm allows you to proactively manage storage capacity and prevent issues caused by insufficient disk space, such as query failures or data ingestion errors. The following procedure guides you through the process of creating a disk space usage alarm.

**To create a disk space usage alarm for a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Alarms**. 

1. For **Actions**, choose **Create alarm**. The **Create alarm** page appears.

1. Follow the instructions on the page. 

1. Choose **Create alarm**.
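The console alarm wraps a standard CloudWatch alarm on the `PercentageDiskSpaceUsed` metric. The following CLI sketch creates an equivalent alarm that notifies an SNS topic when average disk usage exceeds 80 percent over five minutes; the alarm name, threshold, and topic ARN are placeholder values.

```
aws cloudwatch put-metric-alarm \
    --alarm-name redshift-disk-space-high \
    --namespace AWS/Redshift \
    --metric-name PercentageDiskSpaceUsed \
    --dimensions Name=ClusterIdentifier,Value=mycluster \
    --statistic Average \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws-cn:sns:cn-north-1:123456789012:my-alerts-topic
```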

# Viewing a cluster


Viewing a cluster allows you to monitor and manage your cluster's configuration, status, and performance metrics. By viewing cluster details, you can gain insights into resource utilization, query execution times, and system health. The following procedure shows you how to access cluster information.

**To view a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. The clusters for your account in the current Amazon Region are listed. A subset of properties of each cluster is displayed in columns in the list. If you don't have any clusters, choose **Create cluster** to create one.

1. Choose the cluster name in the list to view more details about a cluster.
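You can retrieve the same details from the Amazon CLI with `describe-clusters`. The following sketch uses a `--query` filter to show a few common properties; `mycluster` is a placeholder identifier.

```
aws redshift describe-clusters \
    --cluster-identifier mycluster \
    --query 'Clusters[0].{Status:ClusterStatus,NodeType:NodeType,Nodes:NumberOfNodes,Endpoint:Endpoint.Address}'
```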

# Modifying a cluster


When you modify a cluster, changes to the following options are applied immediately:
+ **VPC security groups** 
+ **Publicly accessible** 
+ **Admin user password** 
+ **HSM Connection** 
+ **HSM Client Certificate** 
+ **Maintenance detail** 
+ **Snapshot preferences** 

 Changes to the following options take effect only after the cluster is restarted:
+ **Cluster identifier**

  Amazon Redshift restarts the cluster automatically when you change **Cluster identifier**.
+ **Enhanced VPC routing**

  Amazon Redshift restarts the cluster automatically when you change **Enhanced VPC routing**.
+ **Cluster parameter group** 
+ **IP address type** 

  This feature is only available in the Amazon GovCloud (US-East) and Amazon GovCloud (US-West) Regions. For more information on Amazon Regions, see [Regions and Availability Zones](https://www.amazonaws.cn/about-aws/global-infrastructure/regions_az/).

If you decrease the automated snapshot retention period, existing automated snapshots whose settings fall outside of the new retention period are deleted. For more information, see [Amazon Redshift snapshots and backups](working-with-snapshots.md). 

For more information about cluster properties, see [Additional configurations](create-cluster.md#cluster-create-console-configuration). 

**To modify a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. 

1. Choose the cluster to modify. 

1. Choose **Edit**. The **Edit cluster** page appears.

1. Update the cluster properties. Some of the properties you can modify are: 
   + Cluster identifier
   + Snapshot retention
   + Cluster relocation

   To edit settings for **Network and security**, **Maintenance**, and **Database configurations**, the console provides links to the appropriate cluster details tab.

1. Choose **Save changes**.
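The same modifications are available through the `modify-cluster` CLI command. The following sketches show one change that restarts the cluster and one that applies immediately; the identifiers are placeholders.

```
# Rename the cluster (Amazon Redshift restarts the cluster automatically)
aws redshift modify-cluster \
    --cluster-identifier mycluster \
    --new-cluster-identifier mycluster-renamed

# Change the automated snapshot retention period (applies immediately)
aws redshift modify-cluster \
    --cluster-identifier mycluster \
    --automated-snapshot-retention-period 14
```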

# Resizing a cluster

As your data warehousing capacity and performance needs change, you can resize your cluster to make the best use of Amazon Redshift's computing and storage options. 

 When you resize a cluster, you specify a number of nodes or node type that is different from the current configuration of the cluster. While the cluster is in the process of resizing, you cannot run any write or read/write queries on the cluster; you can run only read queries. 

 For more information about resizing clusters, including walking through the process of resizing clusters using different approaches, see [Resizing a cluster](#resizing-cluster). 

**To resize a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. 

1. Choose the cluster to resize. 

1. For **Actions**, choose **Resize**. The **Resize cluster** page appears.

1. Follow the instructions on the page. You can resize the cluster now, once at a specific time, or increase and decrease the size of your cluster on a schedule.

1. Depending on your choices, choose **Resize now** or **Schedule resize**. 
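To resize immediately from the Amazon CLI, you can use `resize-cluster`. In this sketch, the identifier and target configuration are placeholders; passing `--no-classic` requests an elastic resize where the configuration allows it.

```
aws redshift resize-cluster \
    --cluster-identifier mycluster \
    --node-type ra3.4xlarge \
    --number-of-nodes 4 \
    --no-classic
```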

If you have reserved nodes, you can upgrade to RA3 reserved nodes. You can do this when you use the console to restore from a snapshot or to perform an elastic resize. You can use the console to guide you through this process. For more information about upgrading to RA3 nodes, see [Upgrading to RA3 node types](https://docs.amazonaws.cn/redshift/latest/mgmt/working-with-clusters.html#rs-upgrading-to-ra3). 

When you perform a resize operation to upgrade from a DC2.large node type to an RA3.large node type, Amazon Redshift automatically converts interleaved sort keys to compound sort keys. This conversion enables access to the concurrency scaling feature, which does not support queries on tables with interleaved sort keys. While this automatic conversion ensures compatibility with RA3 features, it may impact your existing query performance patterns. 

If you want to maintain interleaved sort keys after upgrading to RA3 nodes, you can recreate your tables with the desired sort key configuration after the resize operation completes. However, choosing this option means you won't be able to use concurrency scaling for these tables.

A resize operation comes in two types:
+ **Elastic resize** – You can add nodes to or remove nodes from your cluster. You can also change the node type, such as from DC2 nodes to RA3 nodes. An elastic resize typically completes quickly, taking ten minutes on average. For this reason, we recommend it as a first option. When you perform an elastic resize, it redistributes data slices, which are partitions that are allocated memory and disk space in each node. Elastic resize is appropriate when you:
  + *Add or reduce nodes in an existing cluster, but you don't change the node type* – This is commonly called an *in-place* resize. When you perform this type of resize, some running queries complete successfully, but others can be dropped as part of the operation.
  + *Change the node type for a cluster* – When you change the node type, a snapshot is created and data is redistributed from the source cluster to a cluster comprised of the new node type. On completion, running queries are dropped. Like the *in-place* resize, it completes quickly.
+ **Classic resize** – You can change the node type, number of nodes, or both, in a similar manner to elastic resize. Classic resize takes more time to complete, but it can be useful in cases where the change in node count or the node type to migrate to doesn't fall within the bounds for elastic resize. This can apply, for instance, when the change in node count is really large. 

**Topics**
+ [Elastic resize](#elastic-resize)
+ [Classic resize](#classic-resize-faster)

## Elastic resize


An elastic resize operation, when you add or remove nodes of the same type, has the following stages:

1. Elastic resize takes a cluster snapshot. No-backup tables are only supported for DC2 nodes. For all other types of cluster, no-backup tables are included in the snapshot. For more information, see [Excluding tables from snapshots](working-with-snapshots.md#snapshots-no-backup-tables). If your cluster doesn't have a recent snapshot, because you disabled automated snapshots, the backup operation can take longer. (To minimize the time before the resize operation begins, we recommend that you enable automated snapshots or create a manual snapshot before starting the resize.) When you start an elastic resize and a snapshot operation is in progress, the resize can fail if the snapshot operation doesn't complete within a few minutes. For more information, see [Amazon Redshift snapshots and backups](working-with-snapshots.md).

1. The operation migrates cluster metadata. The cluster is unavailable for a few minutes. The majority of queries are temporarily paused and connections are held open. It is possible, however, for some queries to be dropped. This stage is short.

1. Session connections are reinstated and queries resume. 

1. Elastic resize redistributes data to node slices, in the background. The cluster is available for read and write operations, but some queries can take longer to run.

1. After the operation completes, Amazon Redshift sends an event notification.

When you use elastic resize to change the node type, it works similarly to when you add or subtract nodes of the same type. First, a snapshot is created. A new target cluster is provisioned with the latest data from the snapshot, and data is transferred to the new cluster in the background. During this period, data is read only. When the resize nears completion, Amazon Redshift updates the endpoint to point to the new cluster and all connections to the source cluster are dropped.

It's unlikely that an elastic resize will fail. If it does, rollback happens automatically in the majority of cases, without manual intervention.

If you have reserved nodes, for example DC2 reserved nodes, you can upgrade to RA3 reserved nodes when you perform a resize. You can do this when you perform an elastic resize or use the console to restore from a snapshot. The console guides you through this process. For more information about upgrading to RA3 nodes, see [Upgrading to RA3 node types](https://docs.amazonaws.cn/redshift/latest/mgmt/working-with-clusters.html#rs-upgrading-to-ra3). 

Elastic resize doesn't sort tables or reclaim disk space, so it isn't a substitute for a vacuum operation. For more information, see [Vacuuming tables](https://docs.amazonaws.cn/redshift/latest/dg/t_Reclaiming_storage_space202.html).

Elastic resize has the following constraints:
+ *Elastic resize and data sharing clusters* - When you add or subtract nodes on a cluster that's a producer for data sharing, you can’t connect to it from consumers while Amazon Redshift migrates cluster metadata. Similarly, if you perform an elastic resize and choose a new node type, data sharing is unavailable while connections are dropped and transferred to the new target cluster. In both types of elastic resize, the producer is unavailable for several minutes.
+ *Data transfer from a shared snapshot* - To run an elastic resize on a cluster that is transferring data from a shared snapshot, at least one backup must be available for the cluster. You can view your backups on the Amazon Redshift console snapshots list, the `describe-cluster-snapshots` CLI command, or the `DescribeClusterSnapshots` API operation.
+ *Platform restriction* - Elastic resize is available only for clusters that use the EC2-VPC platform. For more information, see [Use EC2 to create your cluster](working-with-clusters.md#cluster-platforms). 
+ *Storage considerations* - Make sure that your new node configuration has enough storage for existing data. You may have to add additional nodes or change configuration. 
+ *Source vs target cluster size* - The number of nodes and node type that it's possible to resize to with elastic resize is determined by the number of nodes in the source cluster and the node type chosen for the resized cluster. To determine the possible configurations available, you can use the console. Or you can use the `describe-node-configuration-options` Amazon CLI command with the `action-type resize-cluster` option. For more information about resizing using the Amazon Redshift console, see [Resizing a cluster](#resizing-cluster). 

  The following example CLI command describes the configuration options available. In this example, the cluster named `mycluster` is a `dc2.large` 8-node cluster.

  ```
  aws redshift describe-node-configuration-options --cluster-identifier mycluster --region eu-west-1 --action-type resize-cluster
  ```

  This command returns an option list with recommended node types, number of nodes, and disk utilization for each option. The configurations returned can vary based on the specific input cluster. You can choose one of the returned configurations when you specify the options of the `resize-cluster` CLI command. 
+ *Ceiling on additional nodes* - Elastic resize has limits on the nodes that you can add to a cluster. For example, a dc2 cluster supports elastic resize up to double the number of nodes. To illustrate, you can add a node to a 4-node dc2.8xlarge cluster to make it a five-node cluster, or add more nodes until you reach eight.
**Note**  
The growth and reduction limits are based on the original node type and the number of nodes in the original cluster or its last classic resize. If an elastic resize will exceed the growth or reduction limits, use a classic resize.

  With some ra3 node types, you can increase the number of nodes up to four times the existing count. Specifically, suppose that your cluster consists of ra3.4xlarge or ra3.16xlarge nodes. You can then use elastic resize to increase the number of nodes in an 8-node cluster to 32. Or you can pick a value below the limit. (Keep in mind that the ability to grow the cluster by 4x depends on the source cluster size.) If your cluster has ra3.xlplus nodes, the limit is double.

  All ra3 node types support a decrease in the number of nodes to a quarter of the existing count. For example, you can decrease the size of a cluster with ra3.4xlarge nodes from 12 nodes to 3, or to a number above the minimum.

  The following table lists growth and reduction limits for each node type that supports elastic resize.    
[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/redshift/latest/mgmt/resizing-cluster.html)
**Note**  
 **Choosing legacy node types when you resize an RA3 cluster** – If you attempt to resize from a cluster with RA3 nodes to a legacy node type, such as DC2, a validation warning appears in the console and the resize operation won't complete. Resizing to legacy node types isn't supported; this prevents resizing to a node type that's deprecated or soon to be deprecated. This applies to both elastic resize and classic resize. 

## Classic resize


Classic resize handles use cases where the change in cluster size or node type isn't supported by elastic resize. When you perform a classic resize, Amazon Redshift creates a target cluster and migrates your data and metadata to it from the source cluster. 

### Classic resize to RA3 can provide better availability


Classic resize is enhanced when the target node type is RA3: the resize uses a backup and restore operation between the source and target cluster. When the resize begins, the source cluster restarts and is unavailable for a few minutes. After that, the cluster is available for read and write operations while the resize continues in the background.

#### Checking your cluster


To get the best performance and results when you perform a classic resize to an RA3 cluster, complete the following checklist. If you don't, you might not get some of the benefits of classic resize with RA3 nodes, such as the ability to perform read and write operations during the resize.

1. The size of the data must be below 2 petabytes. (A petabyte is equal to 1,000 terabytes.) To validate the size of your data, create a snapshot and check its size. You can also run the following query to check the size: 

   ```
   SELECT
       SUM(CASE WHEN LOWER(diststyle) LIKE '%key%' THEN size ELSE 0 END) AS distkey_blocks,
       SUM(size) AS total_blocks,
       (distkey_blocks / (total_blocks * 1.00)) * 100 AS blocks_need_redist
   FROM svv_table_info;
   ```

   The `svv_table_info` table is visible only to superusers.

1. Before you initiate a classic resize, make sure you have a manual snapshot that is no more than 10 hours old. If not, take a snapshot.

1. The snapshot used to perform the classic resize can't be used for a table restore or other purpose.

1. The cluster must be in a VPC.

#### Sorting and distribution operations that result from classic resize to RA3


During classic resize to RA3, tables with KEY distribution that are migrated as EVEN distribution are converted back to their original distribution style. The duration of this conversion depends on the size of the data and how busy your cluster is. Query workloads are given higher priority to run than data migration. For more information, see [Distribution styles](https://docs.amazonaws.cn/redshift/latest/dg/c_choosing_dist_sort.html). Both reads and writes to the database work during this migration process, but queries can take longer to complete. However, concurrency scaling can boost performance during this time by adding resources for query workloads. You can see the progress of data migration by viewing results from the [SYS_RESTORE_STATE](https://docs.amazonaws.cn/redshift/latest/dg/SYS_RESTORE_STATE.html) and [SYS_RESTORE_LOG](https://docs.amazonaws.cn/redshift/latest/dg/SYS_RESTORE_LOG.html) views. More information about monitoring follows.

After the cluster is fully resized, the following sort behavior occurs:
+ If the resize results in the cluster having more slices, KEY distribution tables become partially unsorted, but EVEN tables remain sorted. Additionally, the information about how much data is sorted may not be up to date, directly following the resize. After key recovery, automatic vacuum sorts the table over time.
+ If the resize results in the cluster having fewer slices, both KEY distribution and EVEN distribution tables become partially unsorted. Automatic vacuum sorts the table over time.

For more information about automatic table vacuum, see [Vacuuming tables](https://docs.amazonaws.cn/redshift/latest/dg/t_Reclaiming_storage_space202.html). For more information about slices in compute nodes, see [Data warehouse system architecture](https://docs.amazonaws.cn/redshift/latest/dg/c_high_level_system_architecture.html).

#### Classic resize steps when the target cluster is RA3


Classic resize consists of the following steps, when the target cluster type is RA3 and you've met the prerequisites detailed in the previous section.

1. Migration initiates from the source cluster to the target cluster. When the new, target cluster is provisioned, Amazon Redshift sends an event notification that the resize has started. It restarts your existing cluster, which closes all connections. If your existing cluster is a datasharing producer cluster, connections with consumer clusters are also closed. The restart takes a few minutes. 

1. After the restart, the database is available for reads and writes. Additionally, data sharing resumes, which takes an additional few minutes.

1. Data is migrated to the target cluster. When the target node type is RA3, reads and writes are available during data migration.

1. When the resize process nears completion, Amazon Redshift updates the endpoint to the target cluster, and all connections to the source cluster are dropped. The target cluster becomes the producer for data sharing.

1. The resize completes. Amazon Redshift sends an event notification.

You can view the resize progress on the Amazon Redshift console. The time it takes to resize a cluster depends on the amount of data. 


#### Monitoring a classic resize when the target cluster is RA3


To monitor a classic resize of a provisioned cluster in progress, including KEY distribution, use [SYS_RESTORE_STATE](https://docs.amazonaws.cn/redshift/latest/dg/SYS_RESTORE_STATE.html). It shows the percentage completed for the table being converted. You must be a superuser to access the data.
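For example, you can poll the view from any SQL client while the resize runs. Selecting all columns keeps this sketch portable, because the exact columns available can vary by cluster version.

```
SELECT *
FROM sys_restore_state;
```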

Drop tables that you don't need when you perform a classic resize. When you do this, existing tables can be distributed more quickly.

### Classic resize steps when the target cluster isn't RA3


Classic resize consists of the following steps when the target node type is anything other than RA3 (for example, DC2).

1. Migration initiates from the source cluster to the target cluster. When the new, target cluster is provisioned, Amazon Redshift sends an event notification that the resize has started. It restarts your existing cluster, which closes all connections. If your existing cluster is a datasharing producer cluster, connections with consumer clusters are also closed. The restart takes a few minutes.

   Note that any database relation, such as a table or materialized view, created with `BACKUP NO` is not retained during the classic resize. For more information, see [CREATE MATERIALIZED VIEW](https://docs.amazonaws.cn/redshift/latest/dg/materialized-view-create-sql-command.html).

1. Following the restart, the database is available as read only. Data sharing resumes, which takes an additional few minutes.

1. Data is migrated to the target cluster. The database remains read only.

1. When the resize process nears completion, Amazon Redshift updates the endpoint to the target cluster, and all connections to the source cluster are dropped. The target cluster becomes the producer for data sharing.

1. The resize completes. Amazon Redshift sends an event notification.

You can view the resize progress on the Amazon Redshift console. The time it takes to resize a cluster depends on the amount of data.

**Note**  
It can take days or possibly weeks to resize a cluster with a large amount of data when the target cluster isn't RA3, or it doesn't meet the prerequisites for an RA3 target cluster detailed in the previous section.  
Also note that used storage capacity for the cluster can go up after a classic resize. This is normal system behavior when the cluster has additional data slices that result from the classic resize. This use of additional capacity can occur even when the number of nodes in the cluster stays the same.

### Elastic resize vs classic resize


The following table compares behavior between the two resize types.


| Behavior | Elastic resize | Classic resize | Comments | 
| --- | --- | --- | --- | 
| System data retention | Elastic resize retains system log data. | Classic resize doesn't retain system tables and data. | If you have audit logging enabled in your source cluster, you can continue to access the logs in Amazon S3 or in CloudWatch, following a resize. You can keep or delete these logs as your data policies specify. | 
| Changing node types | Elastic resize, when the node type doesn't change: in-place resize, and most queries are held. Elastic resize, with a new node type selected: a new cluster is created. Queries are dropped as the resize process completes. | Classic resize: a new cluster is created. Queries are dropped during the resize process. |  | 
| Session and query retention | Elastic resize retains sessions and queries when the node type is the same in the source cluster and target. If you choose a new node type, queries are dropped. | Classic resize doesn't retain sessions and queries. Queries are dropped. | When queries are dropped, you can expect some performance degradation. It's best to perform a resize operation during a period of light use. | 
| Cancelling a resize operation | You can't cancel an elastic resize. | You can cancel a classic resize operation before it completes by choosing **Cancel resize** from the cluster details in the Amazon Redshift console.  | The amount of time it takes to cancel a resize depends on the stage of the resize operation when you cancel. When you do this, the cluster isn't available until the cancel operation completes. If the resize operation is in the final stage, you can't cancel. For classic resize to an RA3 cluster, you can't cancel. | 

### Scheduling a resize


You can schedule resize operations for your cluster to scale up to anticipate high use or to scale down for cost savings. Scheduling works for both elastic resize and classic resize. You can set up a schedule on the Amazon Redshift console. For more information, see [Resizing a cluster](#resizing-cluster), under **Managing clusters using the console**. You can also use Amazon CLI or Amazon Redshift API operations to schedule a resize. For more information, see [create-scheduled-action](https://docs.amazonaws.cn/cli/latest/reference/redshift/create-scheduled-action.html) in the *Amazon CLI Command Reference* or [CreateScheduledAction](https://docs.amazonaws.cn/redshift/latest/APIReference/API_CreateScheduledAction.html) in the *Amazon Redshift API Reference*.
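
The scheduled-action call can be sketched as follows. This is a minimal, hypothetical example: the action name, IAM role ARN, account ID, and node count are placeholders, and the cron expression schedules a nightly elastic resize.

```
aws redshift create-scheduled-action \
    --scheduled-action-name mycluster-scale-up \
    --schedule "cron(0 22 * * ? *)" \
    --iam-role arn:aws-cn:iam::111122223333:role/RedshiftScheduler \
    --target-action '{"ResizeCluster":{"ClusterIdentifier":"mycluster","NumberOfNodes":4,"Classic":false}}'
```

Setting `"Classic": true` in the target action would request a classic resize instead of an elastic resize.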

### Snapshot, restore, and resize


[Elastic resize](#elastic-resize) is the fastest method to resize an Amazon Redshift cluster. If elastic resize isn't an option for you and you require near-constant write access to your cluster, use the snapshot and restore operations with classic resize as described in the following section. This approach requires that any data that is written to the source cluster after the snapshot is taken must be copied manually to the target cluster after the switch. Depending on how long the copy takes, you might need to repeat this several times until you have the same data in both clusters. Then you can make the switch to the target cluster. This process might have a negative impact on existing queries until the full set of data is available in the target cluster. However, it minimizes the amount of time that you can't write to the database. 

The snapshot, restore, and classic resize approach uses the following process: 

1. Take a snapshot of your existing cluster. The existing cluster is the source cluster. 

1. Note the time that the snapshot was taken. Doing this means that you can later identify the point when you need to rerun extract, transform, load (ETL) processes to load any post-snapshot data into the target database. 

1. Restore the snapshot into a new cluster. This new cluster is the target cluster. Verify that the sample data exists in the target cluster. 

1. Resize the target cluster. Choose the new node type, number of nodes, and other settings for the target cluster. 

1. Review the loads from your ETL processes that occurred after you took a snapshot of the source cluster. Be sure to reload the same data in the same order into the target cluster. If you have ongoing data loads, repeat this process several times until the data is the same in both the source and target clusters. 

1. Stop all queries running on the source cluster. To do this, you can reboot the cluster, or you can log on as a superuser and use the [PG\_CANCEL\_BACKEND](https://docs.amazonaws.cn/redshift/latest/dg/PG_CANCEL_BACKEND.html) and the [PG\_TERMINATE\_BACKEND](https://docs.amazonaws.cn/redshift/latest/dg/PG_TERMINATE_BACKEND.html) commands. Rebooting the cluster is the easiest way to make sure that the cluster is unavailable. 

1. Rename the source cluster. For example, rename it from `examplecluster` to `examplecluster-source`. 

1. Rename the target cluster to use the name of the source cluster before the rename. For example, rename the target cluster to `examplecluster`. From this point on, any applications that use the endpoint containing `examplecluster` connect to the target cluster. 

1. Delete the source cluster after you switch to the target cluster, and verify that all processes work as expected. 

Alternatively, you can rename the source and target clusters before reloading data into the target cluster. This approach works if you don't require that any dependent systems and reports be immediately up to date with those for the target cluster. In this case, step 6 moves to the end of the process described previously. 

The rename process is only required if you want applications to continue using the same endpoint to connect to the cluster. If you don't require this, you can instead update any applications that connect to the cluster to use the endpoint of the target cluster without renaming the cluster. 

There are a couple of benefits to reusing a cluster name. First, you don't need to update application connection strings because the endpoint doesn't change, even though the underlying cluster changes. Second, related items such as Amazon CloudWatch alarms and Amazon Simple Notification Service (Amazon SNS) notifications are tied to the cluster name. This tie means that you can continue using the same alarms and notifications that you set up for the cluster. This continued use is primarily a concern in production environments where you want the flexibility to resize the cluster without reconfiguring related items, such as alarms and notifications. 

# Renaming a cluster


You can rename a cluster if you want the cluster to use a different name. Because the endpoint to your cluster includes the cluster name (also referred to as the *cluster identifier*), the endpoint changes to use the new name after the rename finishes. For example, if you have a cluster named `examplecluster` and rename it to `newcluster`, the endpoint changes to use the `newcluster` identifier. Any applications that connect to the cluster must be updated with the new endpoint. 

You may rename a cluster if you want to change the cluster your applications connect to without having to change the endpoint in those applications. In this case, you must first rename the original cluster and then change the second cluster to reuse the name of the original cluster before the rename. Doing this is necessary because the cluster identifier must be unique within your account and region, so the original cluster and second cluster cannot have the same name. You might do this if you restore a cluster from a snapshot and don't want to change the connection properties of any dependent applications. 
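
From the CLI, a rename is a `modify-cluster` call. A minimal sketch, with hypothetical identifiers:

```
aws redshift modify-cluster \
    --cluster-identifier examplecluster \
    --new-cluster-identifier examplecluster-source
```

The cluster endpoint changes to use the new identifier once the rename finishes.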

**Note**  
 If you delete the original cluster, you are responsible for deleting any unwanted cluster snapshots. 

When you rename a cluster, the cluster status changes to `renaming` until the process finishes. The old DNS name that was used by the cluster is immediately deleted, although it could remain cached for a few minutes. The new DNS name for the renamed cluster becomes effective within about 10 minutes. The renamed cluster is not available until the new name becomes effective. The cluster will be rebooted and any existing connections to the cluster will be dropped. After this completes, the endpoint will change to use the new name. For this reason, you should stop queries from running before you start the rename and restart them after the rename finishes. 

 Cluster snapshots are retained, and all snapshots associated with a cluster remain associated with that cluster after it is renamed. For example, suppose that you have a cluster that serves your production database and the cluster has several snapshots. If you rename the cluster and then replace it in the production environment with a snapshot, the cluster that you renamed still has those existing snapshots associated with it. 

 Amazon CloudWatch alarms and Amazon Simple Notification Service (Amazon SNS) event notifications are associated with the name of the cluster. If you rename the cluster, you must update these accordingly. You can update the CloudWatch alarms in the CloudWatch console, and you can update the Amazon SNS event notifications in the Amazon Redshift console on the **Events** pane. The load and query data for the cluster continues to display data from before the rename and after the rename. However, performance data is reset after the rename process finishes. 

For more information, see [Modifying a cluster](modify-cluster.md).

# Upgrading the release version of a cluster


You can upgrade the release maintenance version of a cluster that has a **Release Status** value of **New release available**. When you upgrade the maintenance version, you can choose to upgrade immediately or upgrade in the next maintenance window.

**Important**  
If you upgrade immediately, your cluster is offline until the upgrade completes.

**To upgrade a cluster to a new release version**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. 

1. Choose the cluster to upgrade. 

1. For **Actions**, choose **Upgrade cluster version**. The **Upgrade cluster version** page appears.

1. Follow the instructions on the page. 

1. Choose **Upgrade cluster version**. 

# Pausing and resuming a cluster


If you have a cluster that only needs to be available at specific times, you can pause the cluster and later resume it. While the cluster is paused, on-demand billing is suspended. Only the cluster's storage incurs charges. For more information about pricing, see the [Amazon Redshift pricing page](http://www.amazonaws.cn/redshift/pricing/). 

When you pause a cluster, Amazon Redshift creates a snapshot, begins terminating queries, and puts the cluster in a pausing state. If you delete a paused cluster without requesting a final snapshot, then you can't restore the cluster. You can't cancel or roll back a pause or resume operation after it's initiated. 

You can pause and resume a cluster on the Amazon Redshift console, with the Amazon CLI, or with Amazon Redshift API operations. 
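
From the CLI, the operations look like the following sketch (the cluster identifier is a placeholder):

```
aws redshift pause-cluster --cluster-identifier mycluster

aws redshift resume-cluster --cluster-identifier mycluster
```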

You can schedule actions to pause and resume a cluster. When you use the new Amazon Redshift console to create a recurring schedule to pause and resume, then two scheduled actions are created for the date range that you choose. The scheduled action names are suffixed with `-pause` and `-resume`. The total length of the name must fit within the maximum size of a scheduled action name. 
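
A scheduled pause can be sketched with `create-scheduled-action`; the action name, IAM role ARN, and account ID below are hypothetical, and a matching `-resume` action would use a `ResumeCluster` target.

```
aws redshift create-scheduled-action \
    --scheduled-action-name mycluster-pause \
    --schedule "cron(0 18 ? * MON-FRI *)" \
    --iam-role arn:aws-cn:iam::111122223333:role/RedshiftScheduler \
    --target-action '{"PauseCluster":{"ClusterIdentifier":"mycluster"}}'
```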

You can't pause the following types of clusters: 
+ EC2-Classic clusters. 
+ Clusters that are not active, for example, a cluster that is currently being modified. 
+ Hardware security module (HSM) clusters. 
+ Clusters that have automated snapshots turned off. 

When deciding to pause a cluster, consider the following: 
+ Connections or queries to the cluster aren't available.
+ You can't see query monitoring information of a paused cluster on the Amazon Redshift console. 
+ You can't modify a paused cluster. Any scheduled actions on the cluster aren't run. These include creating snapshots, resizing clusters, and cluster maintenance operations. 
+ Hardware metrics aren't created. Update your CloudWatch alarms if you have alarms set on missing metrics. 
+ You can't copy the latest automated snapshots of a paused cluster to manual snapshots. 
+ While a cluster is pausing, it can't be resumed until the pause operation is complete. 
+ When you pause a cluster, billing is suspended. However, the pause operation typically completes within 15 minutes, depending upon the size of the cluster. 
+ Audit logs are archived and not restored on resume. 
+ After a cluster is paused, traces and logs might not be available for troubleshooting problems that occurred before the pause. 
+  If you're managing your admin credentials using Amazon Secrets Manager and pause your cluster, your cluster's secret won't be deleted and you'll continue to be billed for the secret. For more information on managing your Redshift admin password with Amazon Secrets Manager, see [Managing Amazon Redshift admin passwords using Amazon Secrets Manager](redshift-secrets-manager-integration.md). 
+ No-backup tables on the cluster are restored on resume for RA3 instance types. They aren't restored on resume for DC2 instance types. For more information about no-backup tables, see [Excluding tables from snapshots](working-with-snapshots.md#snapshots-no-backup-tables).

When you resume a cluster, consider the following: 
+ The cluster version of the resumed cluster is updated to the maintenance version based on the maintenance window of the cluster. 
+ If you delete the subnet associated with a paused cluster, you might have an incompatible network. In this case, restore your cluster from the latest snapshot. 
+ If you delete an Elastic IP address while the cluster is paused, then a new Elastic IP address is requested. 
+ If Amazon Redshift can't resume the cluster with its previous elastic network interface, then Amazon Redshift tries to allocate a new one. 
+ When you resume a cluster, your node IP addresses might change. You might need to update your VPC settings to support these new IP addresses for features like COPY from Secure Shell (SSH) or COPY from Amazon EMR.
+ If you try to resume a cluster that isn't paused, the resume operation returns an error. If the resume operation is part of a scheduled action, modify or delete the scheduled action to prevent future errors. 
+ Depending upon the size of the cluster, it can take several minutes to resume a cluster before queries can be processed. In addition, query performance can be impacted for some period of time while the cluster is being re-hydrated after resume completes. 

# Rebooting a cluster


Rebooting a cluster is a cluster operation that restarts the cluster with the same configuration as before the reboot. You can reboot a cluster to apply pending maintenance updates, reset configuration changes, recover from certain issues, or troubleshoot cluster problems. Rebooting a cluster can help ensure optimal performance, security, and stability of the Amazon Redshift environment. The following procedure provides detailed steps for rebooting an Amazon Redshift cluster.

When you reboot a cluster, the cluster status is set to `rebooting` and a cluster event is created when the reboot is completed. Any pending cluster modifications are applied at this reboot.

**To reboot a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. 

1. Choose the cluster to reboot. 

1. For **Actions**, choose **Reboot cluster**. The **Reboot cluster** page appears.

1. Choose **Reboot cluster**. 

# Relocating a cluster


By using *relocation* in Amazon Redshift, you allow Amazon Redshift to move a cluster to another Availability Zone (AZ) without any loss of data or changes to your applications. With relocation, you can continue operations when there is an interruption of service on your cluster with minimal impact. 

When cluster relocation is turned on, Amazon Redshift might choose to relocate clusters in some situations. In particular, this happens where issues in the current Availability Zone prevent optimal cluster operation or to improve service availability. You can also invoke the relocation function in cases where resource constraints in a given Availability Zone are disrupting cluster operations. An example is the ability to resume or resize a cluster. Amazon Redshift offers the relocation feature at no extra charge.

When an Amazon Redshift cluster is relocated to a new Availability Zone, the new cluster has the same endpoint as the original cluster. Your applications can reconnect to the endpoint and continue operations without modifications or loss of data. However, relocation might not always be possible due to potential resource constraints in a given Availability Zone.

Amazon Redshift cluster relocation is supported for the RA3 instance types only. RA3 instance types use Redshift Managed Storage (RMS) as a durable storage layer. The latest copy of a cluster's data is always available in other Availability Zones in an Amazon Region. In other words, you can relocate an Amazon Redshift cluster to another Availability Zone without any loss of data. 

When you turn on relocation for your cluster, Amazon Redshift migrates your cluster to be behind a proxy. Doing this helps implement location-independent access to a cluster's compute resources. The migration causes the cluster to be rebooted. When a cluster is relocated to another Availability Zone, an outage occurs while the new cluster is brought back online in the new Availability Zone. However, you don't have to make any changes to your applications because the cluster endpoint remains unchanged even after the cluster is relocated to the new Availability Zone. 

Cluster relocation is enabled by default on newly created or restored RA3 clusters whose subnet group includes multiple Availability Zones. Amazon Redshift assigns 5439 as the default port while creating a provisioned cluster. You can change to another port from the port range of 5431-5455 or 8191-8215. (Don't change to a port outside the ranges. It results in an error.) To change the default port for a provisioned cluster, use the Amazon Redshift console, Amazon CLI, or Amazon Redshift API. To change the default port for a serverless workgroup, use the Amazon CLI or the Amazon Redshift Serverless API.
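
The allowed port ranges above can be expressed as a simple check. The following is an illustrative shell sketch (not part of the Amazon Redshift tooling) that validates a port against the documented ranges:

```
# port_ok returns success only for ports valid with cluster relocation:
# 5431-5455 or 8191-8215 (5439 is the default)
port_ok() {
  p=$1
  { [ "$p" -ge 5431 ] && [ "$p" -le 5455 ]; } || { [ "$p" -ge 8191 ] && [ "$p" -le 8215 ]; }
}

port_ok 5439 && echo "5439 valid"
port_ok 8200 && echo "8200 valid"
port_ok 6000 || echo "6000 invalid"
```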

If you turn on relocation and you currently use the leader node IP address to access your cluster or Enhanced VPC Routing, make sure to change that access. Instead, use the IP address associated with the cluster's virtual private cloud (VPC) endpoint. To find this cluster IP address, find and use the VPC endpoint in the **Network and security** section of the cluster details page. To get more details on the VPC endpoint, sign in to the Amazon VPC console. 

You can also use the Amazon Command Line Interface (Amazon CLI) command `describe-vpc-endpoints` to get the elastic network interface associated with the endpoint. You can use the `describe-network-interfaces` command to get the associated IP address. For more information on Amazon Redshift Amazon CLI commands, see [ Available commands](https://docs.amazonaws.cn/cli/latest/reference/redshift/index.html) in the *Amazon CLI Command Reference.* 

## Limitations


When using Amazon Redshift relocation, be aware of the following limitations:
+ Cluster relocation might not be possible in all scenarios due to potential resource limitations in a given Availability Zone. If this happens, Amazon Redshift doesn't change the original cluster.
+ Relocation isn't supported on DC2 instance families of products.
+ You can't perform a relocation across Amazon Regions.
+ Amazon Redshift relocation defaults to port number 5439. You can also change to another port in the ranges 5431-5455 or 8191-8215.

## Managing relocation using the console


You can manage the settings for cluster relocation using the Amazon Redshift console.

### Turning off relocation when creating a new cluster


Use the following procedure to turn off relocation when creating a new cluster. 

**To turn off relocation for a new cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. 

1. Choose **Create cluster** to create a new cluster. For more information on how to create a cluster, see [Get started with Amazon Redshift provisioned data warehouses](https://docs.amazonaws.cn/redshift/latest/gsg/new-user.html) in *Amazon Redshift Getting Started Guide*.

1. Under **Backup**, for **Cluster relocation**, choose **Disabled**. Relocation is turned on by default.

1. Choose **Create cluster**.

### Modifying relocation for an existing cluster


Use the following procedure to change the relocation setting for an existing cluster.

**To modify the relocation setting for an existing cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. The clusters for your account in the current Amazon Region are listed. A subset of properties of each cluster is displayed in columns in the list.

1. Choose the name of the cluster that you want to modify from the list. The cluster details page appears.

1. Choose the **Maintenance** tab, then in the **Backup details** section choose **Edit**.

1. Under **Backup**, choose **Disabled**. Relocation is turned on by default. 

1. Choose **Modify cluster**.

### Relocating a cluster


Use the following procedure to manually relocate a cluster to another Availability Zone. This is especially useful when you want to test your network setup in secondary Availability Zones or when you are running into resource constraints in the current Availability Zone. 

**To relocate a cluster to another Availability Zone**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. The clusters for your account in the current Amazon Region are listed. A subset of properties of each cluster is displayed in columns in the list.

1. Choose the name of the cluster that you want to move from the list. The cluster details page appears.

1. For **Actions**, choose **Relocate**. The **Relocate cluster** page appears.

1. (Optional) Choose an **Availability Zone**. If you don't choose an Availability Zone, Amazon Redshift chooses one for you.

Amazon Redshift starts the relocation and displays the cluster as relocating. After the relocation completes, the cluster status changes to available.

## Managing relocation using the Amazon Redshift CLI


You can manage the settings for cluster relocation using the Amazon Command Line Interface (CLI).

With the Amazon CLI, the following example command creates an Amazon Redshift cluster named **mycluster** that has relocation turned on.

```
aws redshift create-cluster --cluster-identifier mycluster --number-of-nodes 2 --master-username enter a username --master-user-password enter a password --node-type ra3.4xlarge --port 5439 --availability-zone-relocation
```

If your current cluster is using a different port, you must modify it to use a port from the range 5431-5455 or 8191-8215 before you can turn on relocation. The default is 5439. The following example command modifies the port in case your cluster doesn't use one from the given range.

```
aws redshift modify-cluster --cluster-identifier mycluster --port 5439
```

The following example command turns on the availability-zone-relocation parameter on the Amazon Redshift cluster.

```
aws redshift modify-cluster --cluster-identifier mycluster --availability-zone-relocation
```

The following example command turns off the availability-zone-relocation parameter on the Amazon Redshift cluster.

```
aws redshift modify-cluster --cluster-identifier mycluster --no-availability-zone-relocation
```

The following example command invokes relocation on the Amazon Redshift cluster.

```
aws redshift modify-cluster --cluster-identifier mycluster --availability-zone us-east-1b
```

# Setting a usage limit on a cluster

You can add up to four usage limits to control usage for each of the following:
+  Concurrency scaling 
+  Automatic optimizations run using extra compute resources 
+  Redshift Spectrum usage 
+  Cross-Region data sharing 

## Setting a usage limit for a provisioned cluster


Following is the procedure for setting a usage limit on a provisioned cluster:

**To set a usage limit for a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. Navigate to the provisioned cluster that you want to set a limit for.

1.  From the cluster’s details page, select **Manage usage limit** from the **Actions** drop-down menu. You can also select the **Maintenance** tab for a cluster, then scroll down and select **Create usage limits**. 

1.  Select **Add limit** for the usage limit that you want to set. You can add up to four limits for a given feature. 

1.  Set a **Time period** for the usage limit, which is either **Daily**, **Weekly**, or **Monthly**. 

1.  Set a **Usage limit**. 
   +  For concurrency scaling and automatic optimizations run using extra compute resources limits, the usage limit is the amount of time that Amazon Redshift spends using the feature in the given time period. In this case, the usage limit is set in hours and minutes. 
   +  For Redshift Spectrum, the usage limit is the amount of data scanned from Amazon S3. In this case, the usage limit is set in terabytes (TB). 
   +  For cross-Region data sharing, the usage limit is the amount of data transferred from the producer Region to consumer Regions that consumers can query. In this case, the usage limit is set in terabytes (TB). 

1.  Set the **Action** for Amazon Redshift to take when your cluster reaches the limit. The actions are the following: 
   +  **Log to system table** ‐ Adds a record to the system view [SYS\_QUERY\_HISTORY](https://docs.amazonaws.cn/redshift/latest/dg/SYS_QUERY_HISTORY.html). You can query the usage\_limit column in this view to determine whether a query exceeded the limit. 
   +  **Alert** ‐ Uses Amazon SNS to set up notification subscriptions and send notifications if a limit is breached. You can choose an existing Amazon SNS topic, create a new topic, or proceed without one. 
   +  **Disable feature** ‐ Disables the feature. You can also choose to use Amazon SNS to send a notification. Users can continue to use the cluster for other tasks. 

   The first two actions are informational, but the last turns off use of the feature.

1.  Choose **Save changes** at the bottom of the page to save the limit. If you set more than one limit, **Save changes** saves all of them at once. 
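
You can also create a usage limit from the CLI with `create-usage-limit`. A hedged sketch, with a hypothetical cluster identifier; for time-based limits the amount is in minutes, so the example caps concurrency scaling at 60 minutes per day and logs breaches:

```
aws redshift create-usage-limit \
    --cluster-identifier mycluster \
    --feature-type concurrency-scaling \
    --limit-type time \
    --amount 60 \
    --period daily \
    --breach-action log
```

For Redshift Spectrum or cross-Region data sharing, the limit type is `data-scanned` and the amount is in terabytes.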

# Shutting down and deleting a cluster


You can shut down your cluster if you want to stop it from running and incurring charges. When you shut it down, you can optionally create a final snapshot. If you create a final snapshot, Amazon Redshift will create a manual snapshot of your cluster before shutting it down. If you plan to provision a new cluster with the same data and configuration as the one you are deleting, you need a manual snapshot. By using a manual snapshot, you can restore the snapshot later and resume using the cluster. 

If you no longer need your cluster and its data, you can shut it down without creating a final snapshot. In this case, the cluster and data are deleted permanently.

Regardless of whether you shut down your cluster with a final manual snapshot, all automated snapshots associated with the cluster will be deleted after the cluster is shut down. Any manual snapshots associated with the cluster are retained. Any manual snapshots that are retained, including the optional final snapshot, are charged at the Amazon Simple Storage Service storage rate if you have no other clusters running when you shut down the cluster, or if you exceed the available free storage that is provided for your running Amazon Redshift clusters. For more information about snapshot storage charges, see the [Amazon Redshift pricing page](http://www.amazonaws.cn/redshift/pricing/). 

Deleting a cluster also deletes any associated Amazon Secrets Manager secrets.
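
From the CLI, both paths are a `delete-cluster` call. The identifiers below are hypothetical:

```
# delete with a final manual snapshot, so the cluster can be restored later
aws redshift delete-cluster \
    --cluster-identifier mycluster \
    --final-cluster-snapshot-identifier mycluster-final

# delete without a final snapshot; the cluster and its data are deleted permanently
aws redshift delete-cluster \
    --cluster-identifier mycluster \
    --skip-final-cluster-snapshot
```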

**To delete a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**.

1. Choose the cluster to delete. 

1. For **Actions**, choose **Delete**. The **Delete cluster** page appears. 

1. Choose **Delete cluster**. 

**Note**  
When you delete a cluster and choose to create a final snapshot, Amazon Redshift will stop the delete request if a restore operation is in progress on the cluster. If this occurs, you can delete the cluster without a final snapshot, or you can delete it with a final snapshot after the restore completes. 

# Amazon Redshift snapshots and backups

Snapshots are point-in-time backups of a cluster. There are two types of snapshots: *automated* and *manual*. Amazon Redshift stores these snapshots internally in Amazon S3 by using an encrypted Secure Sockets Layer (SSL) connection. 

Amazon Redshift automatically takes incremental snapshots that track changes to the cluster since the previous automated snapshot. Automated snapshots retain all of the data required to restore a cluster from a snapshot. You can create a snapshot schedule to control when automated snapshots are taken, or you can take a manual snapshot any time.

When you restore from a snapshot, Amazon Redshift creates a new cluster and makes the new cluster available before all of the data is loaded, so you can begin querying the new cluster immediately. The cluster streams data on demand from the snapshot in response to active queries, then loads the remaining data in the background. 

When you launch a cluster, you can set the retention period for automated and manual snapshots. You can change the default retention period for automated and manual snapshots by modifying the cluster. You can change the retention period for a manual snapshot when you create the snapshot or by modifying the snapshot. 
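
Taking a manual snapshot with a retention period set at creation time can be sketched as follows; the identifiers are placeholders:

```
aws redshift create-cluster-snapshot \
    --cluster-identifier mycluster \
    --snapshot-identifier mycluster-manual-snap \
    --manual-snapshot-retention-period 7
```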

You can monitor the progress of snapshots by viewing the snapshot details in the Amazon Web Services Management Console, or by calling [describe-cluster-snapshots](https://docs.amazonaws.cn/cli/latest/reference/redshift/describe-cluster-snapshots.html) in the CLI or the [DescribeClusterSnapshots](https://docs.amazonaws.cn/redshift/latest/APIReference/API_DescribeClusterSnapshots.html) API action. For an in-progress snapshot, these display information such as the size of the incremental snapshot, the transfer rate, the elapsed time, and the estimated time remaining. 
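
For example, the following sketch lists the automated snapshots for a hypothetical cluster, including any in-progress snapshot:

```
aws redshift describe-cluster-snapshots \
    --cluster-identifier mycluster \
    --snapshot-type automated
```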

To ensure that your backups are always available to your cluster, Amazon Redshift stores snapshots in an Amazon S3 bucket that Amazon Redshift manages internally. To manage storage charges, evaluate how many days you need to keep automated snapshots and configure their retention period accordingly. Delete any manual snapshots that you no longer need. For more information about the cost of backup storage, see the [Amazon Redshift pricing](http://www.amazonaws.cn/redshift/pricing/) page. 

You can also create and restore snapshots using Amazon Backup, a fully managed service that helps you centralize and automate data protection across Amazon services, in the cloud, and on premises. For more information, see [Amazon Backup integration with Amazon Redshift](managing-aws-backup.md). For information on Amazon Backup, see [What is Amazon Backup?](https://docs.amazonaws.cn/aws-backup/latest/devguide/whatisbackup.html) in the *Amazon Backup Developer Guide*. 

## Working with snapshots and backups in Amazon Redshift Serverless


Amazon Redshift Serverless, like a provisioned cluster, enables you to take a backup as a point-in-time representation of the objects and data in the namespace. There are two types of backups in Amazon Redshift Serverless: snapshots that are manually created and recovery points that Amazon Redshift Serverless creates automatically. You can find more information about working with snapshots for Amazon Redshift Serverless at [Snapshots and recovery points](https://docs.amazonaws.cn/redshift/latest/mgmt/serverless-snapshots-recovery-points.html). 

You can also restore a snapshot from a provisioned cluster to a serverless namespace. For more information, see [Restoring a serverless namespace from a snapshot](https://docs.amazonaws.cn/redshift/latest/mgmt/serverless-snapshot-restore.html).

## Automated snapshots


When automated snapshots are enabled for a cluster, Amazon Redshift periodically takes snapshots of that cluster. By default, Amazon Redshift takes a snapshot about every eight hours or following every 5 GB per node of data changes, whichever comes first. If your data is larger than 5 GB times the number of nodes, the shortest amount of time between automated snapshot creation is 15 minutes. Alternatively, you can create a snapshot schedule to control when automated snapshots are taken. If you're using custom schedules, the minimum amount of time between automated snapshots is one hour. Automated snapshots are enabled by default when you create a cluster.
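The default trigger can be sketched as the following check. This is illustrative only, not the service's actual logic, and the node count and change volume are hypothetical inputs:

```python
GIB = 1024 ** 3  # bytes in one GiB

def snapshot_due(hours_since_last, changed_bytes, node_count):
    """Sketch of the default automated-snapshot trigger: roughly every
    eight hours, or after 5 GB of data changes per node, whichever
    comes first."""
    return hours_since_last >= 8 or changed_bytes >= 5 * GIB * node_count

# A 4-node cluster reaches the change threshold at 5 GB x 4 nodes = 20 GB.
print(snapshot_due(hours_since_last=2, changed_bytes=20 * GIB, node_count=4))  # True
```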

Automated snapshots are deleted at the end of a retention period. The default retention period is one day, but you can modify it by using the Amazon Redshift console or programmatically by using the Amazon Redshift API or CLI.

To disable automated snapshots, set the retention period to zero. If you disable automated snapshots, Amazon Redshift stops taking snapshots and deletes any existing automated snapshots for the cluster. You can't disable automated snapshots for RA3 node types. You can set an RA3 node type automated retention period from 1–35 days. 

Only Amazon Redshift can delete an automated snapshot; you can't delete automated snapshots manually. Amazon Redshift deletes automated snapshots at the end of a snapshot's retention period, when you disable automated snapshots for the cluster, or when you delete the cluster. *Amazon Redshift retains the latest automated snapshot until you disable automated snapshots or delete the cluster.*

If you want to keep an automated snapshot for a longer period, you can create a copy of it as a manual snapshot. The automated snapshot is retained until the end of its retention period, but the corresponding manual snapshot is retained until you manually delete it or until the end of its own retention period.

## Automated snapshot schedules


To precisely control when snapshots are taken, you can create a snapshot schedule and attach it to one or more clusters. When you modify a snapshot schedule, the schedule is modified for all associated clusters. If a cluster doesn't have a snapshot schedule attached, the cluster uses the default automated snapshot schedule. 

A *snapshot schedule* is a set of schedule rules. You can define a simple schedule rule based on a specified interval, such as every 8 hours or every 12 hours. You can also add rules to take snapshots on certain days of the week, at specific times, or during specific periods. Rules can also be defined using Unix-like cron expressions. 

## Snapshot schedule format


On the Amazon Redshift console, you can create a snapshot schedule. Then, you can attach a schedule to a cluster to trigger the creation of a system snapshot. A schedule can be attached to multiple clusters, and you can create multiple cron definitions in a schedule to trigger a snapshot.

You can define a schedule for your snapshots using a modified Unix-like [cron](http://en.wikipedia.org/wiki/Cron) syntax. You specify time in [Coordinated Universal Time (UTC)](http://en.wikipedia.org/wiki/Coordinated_Universal_Time). You can create schedules with a maximum frequency of one hour and minimum precision of one minute.

Amazon Redshift modified cron expressions have three required fields, which are separated by white space. 

**Syntax**

```
cron(Minutes Hours Day-of-week)
```


| **Fields** | **Values** | **Wildcards** | 
| --- | --- | --- | 
|  Minutes  |  0–59  |  , - * /   | 
|  Hours  |  0–23  |  , - * /   | 
|  Day-of-month  |  1–31  |  , - * ? / L W  | 
|  Month  |  1–12 or JAN-DEC  |  , - * /  | 
|  Day-of-week  |  1–7 or SUN-SAT  |  , - * ? L #  | 
|  Year  |  1970–2199  |  , - * /  | 

**Wildcards**
+ The **,** (comma) wildcard includes additional values. In the `Day-of-week` field, `MON,WED,FRI` would include Monday, Wednesday, and Friday. Total values are limited to 24 per field.
+ The **-** (dash) wildcard specifies ranges. In the `Hours` field, 1–15 would include hours 1 through 15 of the specified day.
+ The **\*** (asterisk) wildcard includes all values in the field. In the `Hours` field, **\*** would include every hour.
+ The **/** (forward slash) wildcard specifies increments. In the `Hours` field, you could enter **1/10** to specify every 10th hour, starting from the first hour of the day (for example, 01:00, 11:00, and 21:00).
+ The **?** (question mark) wildcard specifies one or another. In the `Day-of-month` field you could enter **7**, and if you didn't care what day of the week the seventh was, you could enter **?** in the `Day-of-week` field.
+ The **L** wildcard in the `Day-of-month` or `Day-of-week` fields specifies the last day of the month or week.
+ The **W** wildcard in the `Day-of-month` field specifies a weekday. In the `Day-of-month` field, `3W` specifies the weekday closest to the third day of the month.
+ The **#** (hash) wildcard in the `Day-of-week` field specifies a certain instance of the specified day of the week within a month. For example, `3#2` would be the second Tuesday of the month: the 3 refers to Tuesday because it is the third day of each week, and the 2 refers to the second day of that type within the month.
**Note**  
If you use a '#' character, you can define only one expression in the `Day-of-week` field. For example, `"3#1,6#3"` is not valid because it is interpreted as two expressions. 

**Limits**
+ You can't specify the `Day-of-month` and `Day-of-week` fields in the same cron expression. If you specify a value in one of the fields, you must use a **?** (question mark) in the other.
+ Snapshot schedules don't support the following frequencies: 
  + Snapshots scheduled more frequently than 1 per hour.
  + Snapshots scheduled less frequently than 1 per day (24 hours).

  If you have overlapping schedules that result in scheduling snapshots within a 1 hour window, a validation error results.
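These limits can be illustrated with a small sketch that checks one day's worth of scheduled snapshot times. The helper and its inputs are hypothetical; the service performs this validation for you:

```python
def validate_daily_times(minutes_of_day):
    """Sketch of the schedule limits: no two snapshots within a one-hour
    window. (A nonempty daily-repeating schedule already satisfies the
    at-least-once-per-day rule.) Input is the minutes past midnight (UTC)
    at which snapshots occur; the schedule repeats each day."""
    times = sorted(minutes_of_day)
    # Gaps between consecutive snapshots, including the wrap to the next day.
    gaps = [b - a for a, b in zip(times, times[1:])]
    gaps.append(times[0] + 24 * 60 - times[-1])
    if min(gaps) < 60:
        raise ValueError("snapshots scheduled within a 1 hour window")

# cron(30 0/6 *) resolves to 00:30, 06:30, 12:30, and 18:30 -- valid.
validate_daily_times([30, 390, 750, 1110])
```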

When creating a schedule, you can use the following sample cron strings.


| Minutes | Hours | Day of week | Meaning | 
| --- | --- | --- | --- | 
|  0  |  14-20/1  |  TUE  |  Every hour between 2pm and 8pm on Tuesday.  | 
|  0  |  21  |  MON-FRI  |  Every night at 9pm Monday–Friday.  | 
|  30  |  0/6  |  SAT-SUN  |  Every 6 hour increment on Saturday and Sunday starting at 30 minutes after midnight (00:30) that day. This results in a snapshot at [00:30, 06:30, 12:30, and 18:30] each day.  | 
|  30  |  12/4  |  *  |  Every 4 hour increment starting at 12:30 each day. This resolves to [12:30, 16:30, 20:30].  | 

For example, to run on a schedule with a 2-hour increment starting at 15:15 each day (resolving to [15:15, 17:15, 19:15, 21:15, 23:15]), specify:

```
cron(15 15/2 *)   
```
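As a rough illustration of how the `*` and `/` wildcards expand, the following sketch resolves a single cron field. The helper is hypothetical and not part of any Amazon Redshift API:

```python
def resolve_field(field, lo, hi):
    """Expand one cron field over the inclusive range [lo, hi],
    handling the '*' (all values) and '/' (increment) wildcards."""
    if field == "*":
        return list(range(lo, hi + 1))
    if "/" in field:
        start, step = field.split("/")
        return list(range(int(start), hi + 1, int(step)))
    return [int(field)]

# The Hours field of cron(15 15/2 *): every 2 hours starting at 15:00.
print(resolve_field("15/2", 0, 23))  # [15, 17, 19, 21, 23]
```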

You can create multiple cron schedule definitions within a schedule. For example, the following Amazon CLI command contains two cron schedules in one schedule.

```
create-snapshot-schedule --schedule-identifier "my-test" --schedule-definition "cron(0 17 SAT,SUN)" "cron(0 9,17 MON-FRI)"   
```

## Manual snapshots


You can take a manual snapshot any time. By default, manual snapshots are retained indefinitely, even after you delete your cluster. You can specify the retention period when you create a manual snapshot, or you can change the retention period by modifying the snapshot. For more information about changing the retention period, see [Modifying the manual snapshot retention period](snapshot-manual-retention-period.md).

If a snapshot is deleted, you can't start any new operations that reference that snapshot. However, if a restore operation is in progress, that restore operation will run to completion. 

Amazon Redshift has a quota that limits the total number of manual snapshots that you can create; this quota is per Amazon account per Amazon Region. The default quota is listed at [Quotas and limits in Amazon Redshift](amazon-redshift-limits.md). 

## Snapshot storage


Because snapshots accrue storage charges, it's important that you delete them when you no longer need them. Amazon Redshift deletes automated and manual snapshots at the end of their respective snapshot retention periods. You can also delete manual snapshots using the Amazon Web Services Management Console or with the [batch-delete-cluster-snapshots](https://docs.amazonaws.cn/cli/latest/reference/redshift/batch-delete-cluster-snapshots.html) CLI command. 

You can change the retention period for a manual snapshot by modifying the manual snapshot settings. 

You can get information about how much storage your snapshots are consuming using the Amazon Redshift Console or using the [describe-storage](https://docs.amazonaws.cn/cli/latest/reference/redshift/describe-storage.html) CLI command. 

## Excluding tables from snapshots


By default, all user-defined permanent tables are included in snapshots. If a table, such as a staging table, doesn't need to be backed up, you can exclude it from snapshots by making it a no-backup table. Doing so can significantly reduce the time needed to create snapshots and restore from snapshots, and it reduces storage space on Amazon S3. To create a no-backup table, include the BACKUP NO parameter when you create the table. For more information, see [CREATE TABLE](https://docs.amazonaws.cn/redshift/latest/dg/r_CREATE_TABLE_NEW.html) and [CREATE TABLE AS](https://docs.amazonaws.cn/redshift/latest/dg/r_CREATE_TABLE_AS.html) in the *Amazon Redshift Database Developer Guide*.

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot. To avoid snapshot costs for no-backup tables, truncate them before taking a snapshot.

# Creating a manual snapshot


You can create a manual snapshot of a cluster from the snapshots list as follows. Or, you can take a snapshot of a cluster in the cluster configuration pane. For more information, see [Amazon Redshift snapshots and backups](working-with-snapshots.md).

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot. To avoid snapshot costs for no-backup tables, truncate them before taking a snapshot.

**To create a manual snapshot**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose **Create snapshot**. The snapshot page to create a manual snapshot is displayed. 

1. Enter the properties of the snapshot definition, then choose **Create snapshot**. It might take some time for the snapshot to be available. 

# Creating a snapshot schedule


Amazon Redshift takes automatic, incremental snapshots of your data periodically and saves them to Amazon S3. Additionally, you can take manual snapshots of your data whenever you want. 

All snapshot tasks in the Amazon Redshift console start from the snapshot list. You can filter the list by using a time range, the snapshot type, and the cluster associated with the snapshot. In addition, you can sort the list by date, size, and snapshot type. Depending on the snapshot type that you select, you might have different options available for working with the snapshot. 

To precisely control when snapshots are taken, you can create a snapshot schedule and attach it to one or more clusters. You can attach a schedule when you create a cluster or by modifying the cluster. For more information, see [Automated snapshot schedules](working-with-snapshots.md#automated-snapshot-schedules).

**To create a snapshot schedule**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose the **Snapshot schedules** tab. The snapshot schedules are displayed. 

1. Choose **Add schedule** to display the page to add a schedule. 

1. Enter the properties of the schedule definition, then choose **Add schedule**. 

1. On the page that appears, you can attach clusters to your new snapshot schedule, then choose **OK**. 

# Sharing a snapshot


You can share an existing manual snapshot with other Amazon customer accounts by authorizing access to the snapshot. You can authorize up to 20 accounts for each snapshot and 100 accounts for each Amazon Key Management Service (Amazon KMS) key. That is, if you have 10 snapshots that are encrypted with a single KMS key, then you can authorize 10 Amazon accounts to restore each snapshot, or other combinations that add up to 100 accounts and do not exceed 20 accounts for each snapshot. A person logged in as a user in one of the authorized accounts can then describe the snapshot or restore it to create a new Amazon Redshift cluster under their account. For example, if you use separate Amazon customer accounts for production and test, a user can log on using the production account and share a snapshot with users in the test account. Someone logged on as a test account user can then restore the snapshot to create a new cluster that is owned by the test account for testing or diagnostic work. 
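The two quotas interact, so a sharing plan must satisfy both at once. A minimal sketch of the bookkeeping follows; the helper is hypothetical, and the service enforces these limits for you:

```python
ACCOUNTS_PER_SNAPSHOT_QUOTA = 20   # authorized accounts per snapshot
ACCOUNTS_PER_KMS_KEY_QUOTA = 100   # total authorizations per KMS key

def within_sharing_quotas(accounts_per_snapshot):
    """Check a plan for snapshots encrypted with one KMS key: at most
    20 authorized accounts for each snapshot, and at most 100
    authorizations in total across all snapshots that use the key."""
    return (all(n <= ACCOUNTS_PER_SNAPSHOT_QUOTA for n in accounts_per_snapshot)
            and sum(accounts_per_snapshot) <= ACCOUNTS_PER_KMS_KEY_QUOTA)

# Ten snapshots under one key, each shared with ten accounts: allowed.
print(within_sharing_quotas([10] * 10))  # True
```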

A manual snapshot is permanently owned by the Amazon customer account under which it was created. Only users in the account owning the snapshot can authorize other accounts to access the snapshot, or to revoke authorizations. Users in the authorized accounts can only describe or restore any snapshot that has been shared with them; they cannot copy or delete snapshots that have been shared with them. An authorization remains in effect until the snapshot owner revokes it. If an authorization is revoked, the previously authorized user loses visibility of the snapshot and cannot launch any new actions referencing the snapshot. If the account is in the process of restoring the snapshot when access is revoked, the restore runs to completion. You cannot delete a snapshot while it has active authorizations; you must first revoke all of the authorizations.

Amazon customer accounts are always authorized to access snapshots owned by the account. Attempts to authorize or revoke access to the owner account will receive an error. You cannot restore or describe a snapshot that is owned by an inactive Amazon customer account. 

Even after you have authorized access to an Amazon customer account, users in that account can't perform any actions on the snapshot unless they assume a role with policies that allow them to do so.
+ Users in the snapshot owner account can authorize and revoke access to a snapshot only if they assume a role with an IAM policy that allows them to perform those actions with a resource specification that includes the snapshot. For example, the following policy allows a user or role in Amazon account `012345678912` to authorize other accounts to access a snapshot named `my-snapshot20130829`:

------
#### [ JSON ]

****  

  ```
  {
    "Version":"2012-10-17",
    "Statement":[
      {
        "Effect":"Allow",
        "Action":[
            "redshift:AuthorizeSnapshotAccess",
            "redshift:RevokeSnapshotAccess"
            ],
        "Resource":[
             "arn:aws-cn:redshift:us-east-1:012345678912:snapshot:*/my-snapshot20130829"
            ]
      }
    ]
  }
  ```

------
+ Users in an Amazon account with which a snapshot has been shared cannot perform actions on that snapshot unless they have permissions allowing those actions. You can do this by assigning the policy to a role and assuming the role. 
  + To list or describe a snapshot, they must have an IAM policy that allows the `DescribeClusterSnapshots` action. The following code shows an example:

------
#### [ JSON ]

****  

    ```
    {
      "Version":"2012-10-17",
      "Statement":[
        {
          "Effect":"Allow",
          "Action":[
              "redshift:DescribeClusterSnapshots"
              ],
          "Resource":[
               "*"
              ]
        }
      ]
    }
    ```

------
  + To restore a snapshot, a user must assume a role with an IAM policy that allows the `RestoreFromClusterSnapshot` action and has a resource element that covers both the cluster they are attempting to create and the snapshot. For example, if a user in account `012345678912` has shared snapshot `my-snapshot20130829` with account `219876543210`, in order to create a cluster by restoring the snapshot, a user in account `219876543210` must assume a role with a policy such as the following:

------
#### [ JSON ]

****  

    ```
    {
      "Version":"2012-10-17",
      "Statement":[
        {
          "Effect":"Allow",
          "Action":[
              "redshift:RestoreFromClusterSnapshot"
              ],
          "Resource":[
               "arn:aws-cn:redshift:us-east-1:012345678912:snapshot:*/my-snapshot20130829",
               "arn:aws-cn:redshift:us-east-1:219876543210:cluster:from-another-account"
              ]
        }
      ]
    }
    ```

------
  + After access to a snapshot has been revoked from an Amazon account, no users in that account can access the snapshot. This is the case even if those accounts have IAM policies that allow actions on the previously shared snapshot resource.

## Sharing a cluster snapshot using the console


On the console, you can authorize other users to access a manual snapshot you own, and you can later revoke that access when it is no longer required.

**To share a snapshot with another account**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose the manual snapshot to share. 

1. For **Actions**, choose **Manual snapshot settings** to display the properties of the manual snapshot. 

1. Enter the account or accounts to share with in the **Manage access** section, then choose **Save**. 

## Security considerations for sharing encrypted snapshots


When you provide access to an encrypted snapshot, Amazon Redshift requires that the Amazon KMS customer managed key used to create the snapshot is shared with the account or accounts performing the restore. If the key isn't shared, attempting to restore the snapshot results in an access-denied error. The receiving account doesn't need any extra permissions to restore a shared snapshot. When you authorize snapshot access and share the key, the identity authorizing access must have `kms:DescribeKey` permissions on the key used to encrypt the snapshot. This permission is described in more detail in [Amazon KMS permissions](https://docs.amazonaws.cn/kms/latest/developerguide/kms-api-permissions-reference.html). For more information, see [DescribeKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_DescribeKey.html) in the *Amazon Key Management Service API Reference*. 

The customer managed key policy can be updated programmatically or in the Amazon Key Management Service console.

**Note**  
If you're using a default KMS key, you don't need to take action or change anything in Amazon KMS in order to share a snapshot.

### Allowing access to the Amazon KMS key for an encrypted snapshot


To share the Amazon KMS customer managed key for an encrypted snapshot, update the key policy by performing the following steps:

1. Update the KMS key policy with the Amazon Resource Name (ARN) of the Amazon account that you are sharing to as `Principal` in the KMS key policy.

1.  Allow the `kms:Decrypt` action. 

In the following key-policy example, user `111122223333` is the owner of the KMS key, and user `444455556666` is the account that the key is shared with. This key policy gives the Amazon account access to the sample KMS key by including the ARN for the root Amazon account identity for user `444455556666` as a `Principal` for the policy, and by allowing the `kms:Decrypt` action. 

------
#### [ JSON ]

****  

```
{
    "Id": "key-policy-1",
    "Version":"2012-10-17",
    "Statement": [
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws-cn:iam::111122223333:user/KeyUser",
                    "arn:aws-cn:iam::444455556666:root"
                ]
            },
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": "*"
        }
    ]
}
```

------

After access is granted to the customer managed KMS key, the account that restores the encrypted snapshot must create an Amazon Identity and Access Management (IAM) role, or user, if it doesn't already have one. In addition, that Amazon account must also attach an IAM policy to that IAM role or user that allows them to restore an encrypted database snapshot, using your KMS key. 

For more information about giving access to an Amazon KMS key, see [Allowing users in other accounts to use a KMS key](https://docs.amazonaws.cn/kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console), in the Amazon Key Management Service developer guide.

For an overview of key policies, see [How Amazon Redshift uses Amazon KMS](https://docs.amazonaws.cn/kms/latest/developerguide/services-redshift.html).

# Copying an automated snapshot


Automated snapshots are automatically deleted when their retention period expires, when you disable automated snapshots, or when you delete a cluster. If you want to keep an automated snapshot, you can copy it to a manual snapshot. 

**To copy an automated snapshot**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose the snapshot to copy. 

1. For **Actions**, choose **Copy automated snapshot** to copy the snapshot. 

1. Update the properties of the new snapshot, then choose **Copy**. 

# Copying a snapshot to another Amazon Region


You can configure Amazon Redshift to automatically copy snapshots (automated or manual) for a cluster to another Amazon Region. When a snapshot is created in the cluster's primary Amazon Region, it's copied to a secondary Amazon Region. The two Amazon Regions are known respectively as the *source Amazon Region* and *destination Amazon Region*. If you store a copy of your snapshots in another Amazon Region, you can restore your cluster from recent data if anything affects the primary Amazon Region. You can configure your cluster to copy snapshots to only one destination Amazon Region at a time. For a list of Amazon Redshift Regions, see [Regions and endpoints](https://docs.amazonaws.cn/general/latest/gr/rande.html) in the *Amazon Web Services General Reference*.

When you enable Amazon Redshift to automatically copy snapshots to another Amazon Region, you specify the destination Amazon Region to copy the snapshots to. For automated snapshots, you can also specify the retention period to keep them in the destination Amazon Region. After an automated snapshot is copied to the destination Amazon Region and it reaches the retention time period there, it's deleted from the destination Amazon Region. Doing this keeps your snapshot usage low. To keep the automated snapshots for a shorter or longer time in the destination Amazon Region, change this retention period.

The retention period that you set for automated snapshots that are copied to the destination Amazon Region is separate from the retention period for automated snapshots in the source Amazon Region. The default retention period for copied snapshots is seven days. That seven-day period applies only to automated snapshots. In both the source and destination Amazon Regions, manual snapshots are deleted at the end of the snapshot retention period or when you manually delete them.

You can disable automatic snapshot copy for a cluster at any time. When you disable this feature, snapshots are no longer copied from the source Amazon Region to the destination Amazon Region. Any automated snapshots copied to the destination Amazon Region are deleted as they reach the retention period limit, unless you create manual snapshot copies of them. These manual snapshots, and any manual snapshots that were copied to the destination Amazon Region, are kept in the destination Amazon Region until you manually delete them.

To change the destination Amazon Region that you copy snapshots to, first disable the automatic copy feature. Then re-enable it, specifying the new destination Amazon Region.

After a snapshot is copied to the destination Amazon Region, it becomes active and available for restoration purposes.

To copy snapshots for Amazon KMS–encrypted clusters to another Amazon Region, create a grant for Amazon Redshift to use a customer managed key in the destination Amazon Region. Then choose that grant when you enable copying of snapshots in the source Amazon Region. For more information about configuring snapshot copy grants, see [Copying Amazon KMS–encrypted snapshots to another Amazon Web Services Region](working-with-db-encryption.md#configure-snapshot-copy-grant).

# Restoring a cluster from a snapshot


A snapshot contains data from any databases that are running on your cluster. It also contains information about your cluster, including the number of nodes, node type, and admin user name. If you restore your cluster from a snapshot, Amazon Redshift uses the cluster information to create a new cluster. Then it restores all the databases from the snapshot data.

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot.

For the new cluster created from the original snapshot, you can choose the configuration, such as node type and number of nodes. The cluster is restored in the same Amazon Region and a random, system-chosen Availability Zone, unless you specify another Availability Zone in your request. When you restore a cluster from a snapshot, you can optionally choose a compatible maintenance track for the new cluster.

**Note**  
When you restore a snapshot to a cluster with a different configuration, the snapshot must have been taken on a cluster with cluster version 1.0.10013, or later. 

When a restore is in progress, events are typically emitted in the following order:

1. RESTORE_STARTED – REDSHIFT-EVENT-2008 sent when the restore process begins. 

1. RESTORE_SUCCEEDED – REDSHIFT-EVENT-3003 sent when the new cluster has been created. 

   The cluster is available for queries. 

1. DATA_TRANSFER_COMPLETED – REDSHIFT-EVENT-3537 sent when the data transfer is complete. 

**Note**  
RA3 clusters only emit RESTORE_STARTED and RESTORE_SUCCEEDED events. There is no explicit data transfer to be done after a RESTORE succeeds because RA3 node types store data in Amazon Redshift managed storage. With RA3 nodes, data is continuously transferred between RA3 nodes and Amazon Redshift managed storage as part of normal query processing. RA3 nodes cache hot data locally and keep less frequently queried blocks in Amazon Redshift managed storage automatically. 

You can monitor the progress of a restore by either calling the [DescribeClusters](https://docs.amazonaws.cn/redshift/latest/APIReference/API_DescribeClusters.html) API operation, or viewing the cluster details in the Amazon Web Services Management Console. For an in-progress restore, these display information such as the size of the snapshot data, the transfer rate, the elapsed time, and the estimated time remaining. For a description of these metrics, see [RestoreStatus](https://docs.amazonaws.cn/redshift/latest/APIReference/API_RestoreStatus.html).
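For example, the estimated time remaining follows from the size, progress, and transfer-rate fields. The field names below follow the `RestoreStatus` data type in the API reference; the values are made up for illustration:

```python
def estimated_seconds_remaining(restore_status):
    """Sketch: derive the time remaining for an in-progress restore
    from RestoreStatus-style fields (illustrative, not live API output)."""
    remaining_mb = (restore_status["SnapshotSizeInMegaBytes"]
                    - restore_status["ProgressInMegaBytes"])
    return remaining_mb / restore_status["CurrentRestoreRateInMegaBytesPerSecond"]

print(estimated_seconds_remaining({
    "SnapshotSizeInMegaBytes": 10240,
    "ProgressInMegaBytes": 4096,
    "CurrentRestoreRateInMegaBytesPerSecond": 2.0,
}))  # 3072.0
```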

You can't use a snapshot to revert an active cluster to a previous state.

**Note**  
When you restore a snapshot into a new cluster, the default security group and parameter group are used unless you specify different values. 

You might want to restore a snapshot to a cluster with a different configuration for these reasons:
+ When a cluster is made up of smaller node types and you want to consolidate it into a larger node type with fewer nodes. 
+ When you have monitored your workload and determined the need to move to a node type with more CPU and storage. 
+ When you want to measure performance of test workloads with different node types. 

Restore has the following constraints: 
+ The new node configuration must have enough storage for existing data. Even when you add nodes, your new configuration might not have enough storage because of the way that data is redistributed. 
+ The restore operation checks if the snapshot was created on a cluster version that is compatible with the cluster version of the new cluster. If the new cluster has a version level that is too early, then the restore operation fails and reports more information in an error message.
+ The possible configurations (number of nodes and node type) that you can restore to are determined by the number of nodes in the original cluster and the target node type of the new cluster. To determine the possible configurations available, you can use the Amazon Redshift console or the `describe-node-configuration-options` Amazon CLI command with `action-type restore-cluster`. For more information about restoring using the Amazon Redshift console, see [Restoring a cluster from a snapshot](#working-with-snapshot-restore-cluster-from-snapshot). 

The following steps take a cluster with many nodes and consolidate it into a bigger node type with a smaller number of nodes using the Amazon CLI. For this example, we start with a source cluster of 24 nodes. In this case, suppose that we already created a snapshot of this cluster and want to restore it into a bigger node type.

1.  Run the following command to get the details of our 24-node cluster. 

   ```
   aws redshift describe-clusters --region eu-west-1 --cluster-identifier mycluster-123456789012
   ```

1. Run the following command to get the details of the snapshot. 

   ```
   aws redshift describe-cluster-snapshots --region eu-west-1 --snapshot-identifier mycluster-snapshot
   ```

1. Run the following command to describe the options available for this snapshot. 

   ```
   aws redshift describe-node-configuration-options --snapshot-identifier mycluster-snapshot --region eu-west-1 --action-type restore-cluster
   ```

   This command returns an option list with recommended node types, number of nodes, and disk utilization for each option. For this example, the preceding command lists the following possible node configurations. We choose to restore into a three-node cluster.

   ```
   {
       "NodeConfigurationOptionList": [
           {
               "EstimatedDiskUtilizationPercent": 65.26134808858235,
               "NodeType": "dc2.large",
               "NumberOfNodes": 24
           },
           {
               "EstimatedDiskUtilizationPercent": 32.630674044291176,
               "NodeType": "dc2.large",
               "NumberOfNodes": 48
           },
           {
               "EstimatedDiskUtilizationPercent": 65.26134808858235,
               "NodeType": "dc2.8xlarge",
               "NumberOfNodes": 3
           },
           {
               "EstimatedDiskUtilizationPercent": 48.94601106643677,
               "NodeType": "dc2.8xlarge",
               "NumberOfNodes": 4
           },
           {
               "EstimatedDiskUtilizationPercent": 39.156808853149414,
               "NodeType": "dc2.8xlarge",
               "NumberOfNodes": 5
           },
           {
               "EstimatedDiskUtilizationPercent": 32.630674044291176,
               "NodeType": "dc2.8xlarge",
               "NumberOfNodes": 6
           }
       ]
   }
   ```

1. Run the following command to restore the snapshot into the cluster configuration that we chose. After this cluster is restored, we have the same content as the source cluster, but the data has been consolidated into three `dc2.8xlarge` nodes. 

   ```
   aws redshift restore-from-cluster-snapshot --region eu-west-1 --snapshot-identifier mycluster-snapshot --cluster-identifier mycluster-123456789012-x --node-type dc2.8xlarge --number-of-nodes 3
   ```
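
The steps above leave the choice of target configuration to you. The following sketch shows one way to pick a configuration programmatically from the `describe-node-configuration-options` output; the JSON shape matches the example response above, and the 70 percent utilization threshold is an illustrative assumption, not an Amazon Redshift recommendation.

```python
import json

# Trimmed copy of the NodeConfigurationOptionList shape returned by
# `aws redshift describe-node-configuration-options` (see the example above).
options_json = """
{
    "NodeConfigurationOptionList": [
        {"EstimatedDiskUtilizationPercent": 65.26, "NodeType": "dc2.large", "NumberOfNodes": 24},
        {"EstimatedDiskUtilizationPercent": 32.63, "NodeType": "dc2.large", "NumberOfNodes": 48},
        {"EstimatedDiskUtilizationPercent": 65.26, "NodeType": "dc2.8xlarge", "NumberOfNodes": 3},
        {"EstimatedDiskUtilizationPercent": 48.95, "NodeType": "dc2.8xlarge", "NumberOfNodes": 4}
    ]
}
"""

def smallest_config(options, node_type, max_utilization=70.0):
    """Return the option with the fewest nodes of the given node type whose
    estimated disk utilization stays at or under max_utilization percent."""
    candidates = [
        o for o in options["NodeConfigurationOptionList"]
        if o["NodeType"] == node_type
        and o["EstimatedDiskUtilizationPercent"] <= max_utilization
    ]
    return min(candidates, key=lambda o: o["NumberOfNodes"], default=None)

choice = smallest_config(json.loads(options_json), "dc2.8xlarge")
print(choice["NumberOfNodes"])  # 3
```

With the example data, the sketch selects the three-node `dc2.8xlarge` configuration, matching the choice made in the walkthrough.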

If you have reserved nodes, for example DC2 reserved nodes, you can upgrade to RA3 reserved nodes. You can do this when you restore from a snapshot or perform an elastic resize. You can use the console to guide you through this process. For more information about upgrading to RA3 nodes, see [Upgrading to RA3 node types](https://docs.amazonaws.cn/redshift/latest/mgmt/working-with-clusters.html#rs-upgrading-to-ra3). 

**To restore a cluster from a snapshot on the console**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose the snapshot to restore. 

1. Choose **Restore from snapshot** to view the **Cluster configuration** and **Cluster details** values of the new cluster to be created using the snapshot information. 

1. Update the properties of the new cluster, then choose **Restore cluster from snapshot**. 

After restoring your cluster snapshot, the restored data warehouse is encrypted with the same custom Amazon KMS key that it was using at the time that the snapshot was taken. If the snapshot didn't have a custom KMS key, Amazon Redshift's backup encryption logic depends on the following factors:
+ The type of Amazon Redshift data warehouse you're restoring the snapshot to.
+ The encryption type of the cluster at the time the snapshot was taken.

To learn how your data warehouse is encrypted after you restore it from your cluster snapshot, see the following table:


| Destination type | Snapshot encryption type | Destination encryption type | 
| --- | --- | --- | 
|  Provisioned cluster  |  Encrypted with an Amazon managed key  |  Encrypted with an Amazon managed key  | 
|  Provisioned cluster  |  Encrypted with an Amazon owned key  |  Encrypted with an Amazon owned key  | 
|  Serverless namespace  |  Encrypted with an Amazon managed key  |  Encrypted with an Amazon owned key  | 
|  Serverless namespace  |  Encrypted with an Amazon owned key  |  Encrypted with an Amazon owned key  | 

If Amazon Secrets Manager managed your cluster's admin password at the time the snapshot was taken, you must continue using Amazon Secrets Manager to manage the admin password. You can opt out of using a secret after restoring the cluster by updating the cluster's admin credentials in the cluster detail page.


# Restoring a table from a snapshot


You can restore a single table from a snapshot instead of restoring an entire cluster. When you restore a single table from a snapshot, you specify the source snapshot, database, schema, and table name, and the target database, schema, and a new table name for the restored table.

**Note**  
No-backup tables aren't supported for RA3 provisioned clusters and Amazon Redshift Serverless workgroups. A table marked as no-backup in an RA3 cluster or serverless workgroup is treated as a permanent table that will always be backed up while taking a snapshot, and always restored when restoring from a snapshot. However, selective restoration of no-backup tables isn't supported.

The new table name cannot be the name of an existing table. To replace an existing table with a restored table from a snapshot, rename or drop the existing table before you restore the table from the snapshot.

The target table is created using the source table's column definitions, table attributes, and column attributes except for foreign keys. To prevent conflicts due to dependencies, the target table doesn't inherit foreign keys from the source table. Any dependencies, such as views or permissions granted on the source table, aren't applied to the target table. 

If the owner of the source table exists, then that database user is the owner of the restored table, provided that the user has sufficient permissions to become the owner of a relation in the specified database and schema. Otherwise, the restored table is owned by the admin user that was created when the cluster was launched.

The restored table returns to the state it was in at the time the backup was taken. This includes the transaction visibility rules defined by Amazon Redshift's adherence to [serializable isolation](https://docs.amazonaws.cn/redshift/latest/dg/c_serial_isolation.html), meaning that the data is immediately visible to in-flight transactions started after the backup.

Restoring a table from a snapshot has the following limitations:
+ You can restore a table only to the current, active running cluster and from a snapshot that was taken of that cluster.
+ You can restore only one table at a time.
+ You can't restore a table from a cluster snapshot that was taken prior to a cluster being resized. An exception is that you can restore a table after an elastic resize if the node type didn't change. 
+ Any dependencies, such as views or permissions granted on the source table, aren't applied to the target table.
+ If row-level security is turned on for a table being restored, Amazon Redshift restores the table with row-level security turned on. 

**To restore a table from a snapshot**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster that you want to use to restore a table. 

1. For **Actions**, choose **Restore table** to display the **Restore table** page. 

1. Enter the information about which snapshot, source table, and target table to use, and then choose **Restore table**. 

**Example: Restoring a table from a snapshot using the Amazon CLI**  
The following example uses the `restore-table-from-cluster-snapshot` Amazon CLI command to restore the `my-source-table` table from the `sample-database` database in the `my-snapshot-id` snapshot. The example restores the snapshot to the `mycluster-example` cluster with a new table name of `my-new-table`. You can use the `describe-table-restore-status` Amazon CLI command to review the status of your restore operation.  

```
aws redshift restore-table-from-cluster-snapshot --cluster-identifier mycluster-example \
                                                 --new-table-name my-new-table \
                                                 --snapshot-identifier my-snapshot-id \
                                                 --source-database-name sample-database \
                                                 --source-table-name my-source-table
```
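
A table restore runs asynchronously, so you typically poll `describe-table-restore-status` until it reports a terminal state. The following sketch summarizes such a response; the response shape (a `TableRestoreStatusDetails` list whose entries carry a `Status` field) and the request ID value are assumptions here, so verify them against your CLI version.

```python
def summarize_restores(response):
    """Map each table-restore request ID to its reported status
    (for example, IN_PROGRESS or SUCCEEDED)."""
    return {
        d["TableRestoreRequestId"]: d["Status"]
        for d in response.get("TableRestoreStatusDetails", [])
    }

# Hypothetical response for the restore started above.
sample = {
    "TableRestoreStatusDetails": [
        {
            "TableRestoreRequestId": "example-request-id",
            "Status": "IN_PROGRESS",
            "NewTableName": "my-new-table",
        }
    ]
}
print(summarize_restores(sample))  # {'example-request-id': 'IN_PROGRESS'}
```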

# Restoring a serverless namespace from a snapshot


 Restoring a serverless namespace from a snapshot replaces all of the namespace’s databases with databases in the snapshot. For more information about serverless snapshots, see [Snapshots and recovery points](https://docs.amazonaws.cn/redshift/latest/mgmt/serverless-snapshots-recovery-points.html). Amazon Redshift automatically converts tables with interleaved keys into compound keys when you restore a provisioned cluster snapshot to an Amazon Redshift Serverless namespace. For more information about sort keys, see [Working with sort keys](https://docs.amazonaws.cn/redshift/latest/dg/t_Sorting_data.html). 

**To restore a snapshot from a provisioned cluster to a serverless namespace**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose the snapshot to use.

1. Choose **Restore from snapshot**, **Restore to serverless namespace**.

1. Choose the namespace you want to restore to.

1. Confirm that you want to restore from your snapshot, then choose **Restore**. This action replaces all the databases in the serverless namespace with the data from your provisioned cluster.

# Configuring cross-Region snapshot copy for a nonencrypted cluster


You can configure Amazon Redshift to copy snapshots for a cluster to another Amazon Region. To configure cross-Region snapshot copy, you need to enable this copy feature for each cluster and configure where to copy snapshots and how long to keep copied automated or manual snapshots in the destination Amazon Region. When cross-Region copy is enabled for a cluster, all new manual and automated snapshots are copied to the specified Amazon Region. Copied snapshot names are prefixed with **copy:**.

**To configure a cross-Region snapshot**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster that you want to move snapshots for.

1. For **Actions**, choose **Configure cross-region snapshot**.

   The Configure cross-Region dialog box appears.

1. For **Copy snapshots**, choose **Yes**.

1. In **Destination Amazon Region**, choose the Amazon Region to which to copy snapshots.

1. In **Automated snapshot retention period (days)**, choose the number of days for which you want automated snapshots to be retained in the destination Amazon Region before they are deleted.

1. In **Manual snapshot retention period**, choose the number of days for which you want manual snapshots to be retained in the destination Amazon Region before they are deleted. If you choose **Custom value**, the retention period must be between 1 and 3,653 days.

1. Choose **Save**.

# Configuring cross-Region snapshot copy for an Amazon KMS–encrypted cluster


 When you launch an Amazon Redshift cluster, you can configure a snapshot copy grant for a root key in your account in the destination Amazon Web Services Region. By doing this, you enable Amazon Redshift to perform encryption operations in the destination Amazon Region. If you don't configure a grant, snapshots in the destination Region are encrypted with a default Amazon-owned key.

The following procedure describes the process of enabling cross-Region snapshot copy for an Amazon KMS-encrypted cluster. For more information about encryption in Amazon Redshift and snapshot copy grants, see [Copying Amazon KMS–encrypted snapshots to another Amazon Web Services Region](working-with-db-encryption.md#configure-snapshot-copy-grant). 

**To configure a cross-Region snapshot for an Amazon KMS–encrypted cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster that you want to move snapshots for.

1. For **Actions**, choose **Configure cross-region snapshot**.

   The Configure cross-Region dialog box appears.

1. For **Copy snapshots**, choose **Yes**.

1. In **Destination Amazon Region**, choose the Amazon Region to which to copy snapshots.

1. In **Automated snapshot retention period (days)**, choose the number of days for which you want automated snapshots to be retained in the destination Amazon Region before they are deleted.

1. In **Manual snapshot retention period**, choose the number of days for which you want manual snapshots to be retained in the destination Amazon Region before they are deleted. If you choose **Custom value**, the retention period must be between 1 and 3,653 days.

1. Choose **Save**.

# Modifying the manual snapshot retention period


You can change the retention period for a manual snapshot by modifying the snapshot settings.

**To change the manual snapshot retention period**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose the manual snapshot to change. 

1. For **Actions**, choose **Manual snapshot settings** to display the properties of the manual snapshot. 

1. Enter the revised properties of the snapshot definition, then choose **Save**. 

# Modifying the retention period for cross-Region snapshot copy


After you configure cross-Region snapshot copy, you might want to change the settings. You can easily change the retention period by selecting a new number of days and saving the changes. 

**Warning**  
You can't modify the destination Amazon Region after cross-Region snapshot copy is configured.   
If you want to copy snapshots to a different Amazon Region, first disable cross-Region snapshot copy. Then re-enable it with a new destination Amazon Region and retention period. Any copied automated snapshots are deleted after you disable cross-Region snapshot copy. Thus, you should determine if there are any that you want to keep and copy them to manual snapshots before disabling cross-Region snapshot copy.
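
Because disabling cross-Region copy deletes the copied automated snapshots, it helps to list which automated snapshots you would need to copy to manual snapshots first. The following sketch filters a `describe-cluster-snapshots` response; the response shape (`Snapshots` entries with `SnapshotIdentifier` and a `SnapshotType` of `automated` or `manual`) is assumed, and the identifiers are illustrative.

```python
def snapshots_to_preserve(response):
    """Return identifiers of automated snapshots, which are deleted when
    cross-Region snapshot copy is disabled; copy any you want to keep to
    manual snapshots before disabling the feature."""
    return [
        s["SnapshotIdentifier"]
        for s in response.get("Snapshots", [])
        if s.get("SnapshotType") == "automated"
    ]

# Hypothetical describe-cluster-snapshots output.
sample = {
    "Snapshots": [
        {"SnapshotIdentifier": "rs:mycluster-2025-01-01", "SnapshotType": "automated"},
        {"SnapshotIdentifier": "mycluster-manual-1", "SnapshotType": "manual"},
    ]
}
print(snapshots_to_preserve(sample))  # ['rs:mycluster-2025-01-01']
```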

**To modify a cross-Region snapshot**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster that you want to modify snapshots for.

1. For **Actions**, choose **Configure cross-region snapshot** to display the properties of the snapshot. 

1. Enter the revised properties of the snapshot definition, then choose **Save**. 

# Deleting a manual snapshot


You can delete manual snapshots by selecting one or more snapshots in the snapshot list.

**To delete a manual snapshot**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, **Snapshots**, then choose the snapshot to delete. 

1. For **Actions**, choose **Delete snapshot** to delete the snapshot. 

1. Confirm the deletion of the listed snapshots, then choose **Delete**. 

# Registering a cluster to the Amazon Glue Data Catalog


You can register entire clusters to the Amazon Glue Data Catalog and create catalogs managed by Amazon Glue. You can access these catalogs with any SQL engine that supports the Apache Iceberg REST API. For more information about creating Apache Iceberg-compatible catalogs from Amazon Redshift, see [Apache Iceberg compatibility for Amazon Redshift](https://docs.amazonaws.cn/redshift/latest/dg/iceberg-integration_overview.html) in the Amazon Redshift Database Developer Guide.

**To register a cluster to the Amazon Glue Data Catalog**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**. The clusters for your account in the current Amazon Web Services Region are listed. A subset of properties of each cluster is displayed in columns in the list. If you don't have any clusters, choose **Create cluster** to create one.

1. Choose the name of the cluster that you want to register.

1.  From **Actions**, choose **Register to Amazon Glue Data Catalog**. The **Register to Amazon Glue Data Catalog** pop-up box appears. 

1. Under **Destination account ID**, enter the Amazon account ID that you want to register the cluster to. This is the account ID that will hold the catalog in the Amazon Glue Data Catalog.

1.  Enter a name under **Register namespace as**. This will be the cluster’s name in the Data Catalog. 

1.  Choose **Register**. You’ll be taken to the Amazon Lake Formation console. 

1.  Follow the catalog creation process in Amazon Lake Formation. For information about creating a catalog, see [ Bringing Amazon Redshift data into the Amazon Glue Data Catalog](https://docs.amazonaws.cn/lake-formation/latest/dg/managing-namespaces-datacatalog.html) in the Amazon Lake Formation Developer Guide. 