

# Working with Amazon Keyspaces (for Apache Cassandra) features
<a name="working-with-overview"></a>

This chapter provides details about working with Amazon Keyspaces and its database features, for example backup and restore, Time to Live (TTL), and multi-Region replication.
+ **Time to Live** – Amazon Keyspaces expires data from tables automatically based on the Time to Live value you set. Learn how to configure TTL and how to use it in your tables.
+ **PITR** – Protect your Amazon Keyspaces tables from accidental write or delete operations by creating continuous backups of your table data. Learn how to configure PITR on your tables and how to restore a table to a specific point in time or how to restore a table that has been accidentally deleted.
+ **Working with multi-Region tables** – Multi-Region tables in Amazon Keyspaces must have write throughput capacity configured in either on-demand mode or provisioned capacity mode with auto scaling. Plan your throughput capacity needs by estimating the required write capacity units (WCUs) for each Region, and provision the sum of the writes from all Regions to ensure sufficient capacity for replicated writes.
+ **Amazon Keyspaces change data capture (CDC)** – Amazon Keyspaces CDC streams record row-level change events from your table in near-real time. Learn how to use the Kinesis Client Library (KCL) to consume and process data from Amazon Keyspaces CDC streams.
+ **Queries and pagination** – Amazon Keyspaces supports advanced querying capabilities like using the `IN` operator with `SELECT` statements, ordering results with `ORDER BY`, and automatic pagination of large result sets. This section explains how Amazon Keyspaces processes these queries and provides examples.
+ **Partitioners** – Amazon Keyspaces provides three partitioners: `Murmur3Partitioner` (default), `RandomPartitioner`, and `DefaultPartitioner`. You can change the partitioner per Region at the account level using the Amazon Web Services Management Console or Cassandra Query Language (CQL). 
+ **Client-side timestamps** – Client-side timestamps are Cassandra-compatible timestamps that Amazon Keyspaces persists for each cell in your table. Use client-side timestamps for conflict resolution and to let your client application determine the order of writes. 
+ **User-defined types (UDTs)** – With UDTs you can define data structures in your applications that represent real-world data hierarchies. 
+ **Tagging resources** – You can label Amazon Keyspaces resources like keyspaces and tables using tags. Tags help categorize resources, enable cost tracking, and let you configure access control based on tags. This section covers tagging restrictions, operations, and best practices for Amazon Keyspaces.
+ **Amazon CloudFormation templates** – Amazon CloudFormation helps you model and set up your Amazon Keyspaces keyspaces and tables so that you can spend less time creating and managing your resources and infrastructure.

**Topics**
+ [System keyspaces in Amazon Keyspaces](working-with-keyspaces.md)
+ [User-defined types (UDTs) in Amazon Keyspaces](udts.md)
+ [Working with CQL queries in Amazon Keyspaces](working-with-queries.md)
+ [Working with change data capture (CDC) streams in Amazon Keyspaces](cdc.md)
+ [Working with partitioners in Amazon Keyspaces](working-with-partitioners.md)
+ [Client-side timestamps in Amazon Keyspaces](client-side-timestamps.md)
+ [Multi-Region replication for Amazon Keyspaces (for Apache Cassandra)](multiRegion-replication.md)
+ [Backup and restore data with point-in-time recovery for Amazon Keyspaces](PointInTimeRecovery.md)
+ [Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)](TTL.md)
+ [Using this service with an Amazon SDK](sdk-general-information-section.md)
+ [Working with tags and labels for Amazon Keyspaces resources](tagging-keyspaces.md)
+ [Create Amazon Keyspaces resources with Amazon CloudFormation](creating-resources-with-cloudformation.md)

# System keyspaces in Amazon Keyspaces
<a name="working-with-keyspaces"></a>

System keyspaces and system tables in Amazon Keyspaces (for Apache Cassandra) are read-only resources that store metadata about your Amazon Keyspaces resources. System keyspaces are present in every Amazon Web Services account, regardless of whether you have created any keyspaces or tables. They are a compatibility feature with Apache Cassandra, and they are provided at no additional charge.

You cannot modify or delete system keyspaces. The Amazon Keyspaces console displays only user-created keyspaces. System keyspaces are accessible programmatically through CQL and appear in services such as Amazon CloudFormation and Amazon Config.

Amazon Keyspaces uses four system keyspaces: 
+ `system`
+ `system_schema`
+ `system_schema_mcs`
+ `system_multiregion_info`

The following sections provide details about the system keyspaces and the system tables that are supported in Amazon Keyspaces.

## `system`
<a name="keyspace_system_list"></a>

This is a Cassandra keyspace. Amazon Keyspaces uses the following tables.


| Table names | Column names | Comments | 
| --- | --- | --- | 
|  `local`  |  `key, bootstrapped, broadcast_address, cluster_name, cql_version, data_center, gossip_generation, host_id, listen_address, native_protocol_version, partitioner, rack, release_version, rpc_address, schema_version, thrift_version, tokens, truncated_at`  |  Information about the local node.  | 
|  `peers`  |  `peer, data_center, host_id, preferred_ip, rack, release_version, rpc_address, schema_version, tokens`  |  Query this table to see the available endpoints. For example, if you're connecting through a public endpoint, you see a list of nine available IP addresses. If you're connecting through a FIPS endpoint, you see a list of three IP addresses. If you're connecting through an Amazon PrivateLink VPC endpoint, you see the list of IP addresses that you have configured. For more information, see [Populating `system.peers` table entries with interface VPC endpoint information](vpc-endpoints.md#system_peers).  | 
|  `size_estimates`  |  `keyspace_name, table_name, range_start, range_end, mean_partition_size, partitions_count`  | This table defines the total size and number of partitions for each token range for every table. This is needed for the Apache Cassandra Spark Connector, which uses the estimated partition size to distribute the work. | 
|  `prepared_statements`  |  `prepared_id, logged_keyspace, query_string`  |  This table contains information about saved queries.  | 
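As a quick sketch of the kind of query these tables enable (assuming a `cqlsh` connection to Amazon Keyspaces), you can read `system.peers` to see the endpoints that are available to your driver:

```
SELECT peer, data_center, rack FROM system.peers;
```

The number of rows returned depends on the endpoint type that you connect through, as described in the `peers` row of the table above.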

## `system_schema`
<a name="keyspace_system_schema"></a>

This is a Cassandra keyspace. Amazon Keyspaces uses the following tables.


| Table names | Column names | Comments | 
| --- | --- | --- | 
|  `keyspaces`  |  `keyspace_name, durable_writes, replication`  |  Information about a specific keyspace.  | 
|  `tables`  |  `keyspace_name, table_name, bloom_filter_fp_chance, caching, comment, compaction, compression, crc_check_chance, dclocal_read_repair_chance, default_time_to_live, extensions, flags, gc_grace_seconds, id, max_index_interval, memtable_flush_period_in_ms, min_index_interval, read_repair_chance, speculative_retry`  |  Information about a specific table.  | 
|  `types`  |  `keyspace_name, type_name, field_names, field_types`  |  Information about a specific user-defined type (UDT).  | 
|  `columns`  |  `keyspace_name, table_name, column_name, clustering_order, column_name_bytes, kind, position, type`  |  Information about a specific column.  | 

## `system_schema_mcs`
<a name="keyspace_system_schema_mcs"></a>

This is an Amazon Keyspaces keyspace that stores information about settings specific to Amazon Keyspaces.


| Table names | Column names | Comments | 
| --- | --- | --- | 
|  `keyspaces`  |  `keyspace_name, durable_writes, replication`  |  Query this table to find out programmatically if a keyspace has been created. For more information, see [Check keyspace creation status in Amazon Keyspaces](keyspaces-create.md).  | 
|  `tables`  |  `keyspace_name, creation_time, speculative_retry, cdc, gc_grace_seconds, crc_check_chance, min_index_interval, bloom_filter_fp_chance, flags, custom_properties, dclocal_read_repair_chance, table_name, caching, default_time_to_live, read_repair_chance, max_index_interval, extensions, compaction, comment, id, compression, memtable_flush_period_in_ms, cdc_specification, latest_stream_arn, status`  |  Query this table to find out the status of a specific table. For more information, see [Check table creation status in Amazon Keyspaces](tables-create.md). You can also query this table to list settings that are specific to Amazon Keyspaces and are stored as `custom_properties`. For example: [\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/keyspaces/latest/devguide/working-with-keyspaces.html)  | 
|  `tables_history`  |  `keyspace_name, table_name, event_time, creation_time, custom_properties, event`  |  Query this table to learn about schema changes for a specific table.  | 
|  `columns`  |  `keyspace_name, table_name, column_name, clustering_order, column_name_bytes, kind, position, type`  |  This table is identical to the Cassandra table in the `system_schema` keyspace.  | 
|  `tags`  |  `resource_id, keyspace_name, resource_name, resource_type, tags`  |  Query this table to find out if a keyspace has tags. For more information, see [View the tags of a table](Tagging.Operations.view.table.md).  | 
|  `types`  |  `keyspace_name, type_name, field_names, field_types, max_nesting_depth, last_modified_timestamp, status, direct_referring_tables, direct_parent_types`  |  Query this table to find out information about user-defined types (UDTs). For example you can query this table to list all UDTs for a given keyspace. For more information, see [User-defined types (UDTs) in Amazon Keyspaces](udts.md).  | 
|  `autoscaling`  |  `keyspace_name, table_name, provisioned_read_capacity_autoscaling_update, provisioned_write_capacity_autoscaling_update`  |  Query this table to get the auto scaling settings of a provisioned table. Note that these settings won't be available until the table is active. To query this table, you have to specify `keyspace_name` and `table_name` in the `WHERE` clause. For more information, see [View your table's Amazon Keyspaces auto scaling configuration](autoscaling.viewPolicy.md).  | 
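For example, the following statement is a minimal sketch (the keyspace and table names are placeholders) of checking the status of a table through `system_schema_mcs.tables`:

```
SELECT keyspace_name, table_name, status
FROM system_schema_mcs.tables
WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
```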

## `system_multiregion_info`
<a name="keyspace_system_multiregion_info"></a>

This is an Amazon Keyspaces keyspace that stores information about multi-Region replication.


| Table names | Column names | Comments | 
| --- | --- | --- | 
|  `tables`  |  `keyspace_name, table_name, region, status`   |  This table contains information about multi-Region tables—for example, the Amazon Web Services Regions that the table is replicated in and the table's status. You can also query this table to list settings that are specific to Amazon Keyspaces that are stored as `custom_properties`. For example: [\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/keyspaces/latest/devguide/working-with-keyspaces.html) To query this table, you have to specify `keyspace_name` and `table_name` in the `WHERE` clause. For more information, see [Create a multi-Region keyspace in Amazon Keyspaces](keyspaces-mrr-create.md).  | 
|  `keyspaces`  |  `keyspace_name, region, status, tables_replication_progress`   |  This table contains information about the progress of an `ALTER KEYSPACE` operation that adds a replica to a keyspace — for example, how many tables have already been created in the new Region, and how many tables are still in progress. For an example, see [Check the replication progress when adding a new Region to a keyspace](keyspaces-multi-region-replica-status.md).  | 
|  `autoscaling`  |  `keyspace_name, table_name, provisioned_read_capacity_autoscaling_update, provisioned_write_capacity_autoscaling_update, region`  |  Query this table to get the auto scaling settings of a multi-Region provisioned table. Note that these settings won't be available until the table is active. To query this table, you have to specify `keyspace_name` and `table_name` in the `WHERE` clause. For more information, see [Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-autoscaling.md).  | 
|  `types`  |  `keyspace_name, type_name, field_names, field_types, max_nesting_depth, last_modified_timestamp, status, direct_referring_tables, direct_parent_types, region`  |  Query this table to find out information about user-defined types (UDTs) in multi-Region keyspaces. For example, you can query this table to list all table replicas and their respective Amazon Regions that use UDTs for a given keyspace. For more information, see [User-defined types (UDTs) in Amazon Keyspaces](udts.md).  | 
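For example, the following statement is a sketch (the keyspace name is a placeholder) of checking the per-Region replication status of a multi-Region keyspace through `system_multiregion_info.keyspaces`:

```
SELECT keyspace_name, region, status, tables_replication_progress
FROM system_multiregion_info.keyspaces
WHERE keyspace_name = 'mykeyspace';
```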

# User-defined types (UDTs) in Amazon Keyspaces
<a name="udts"></a>

A user-defined type (UDT) is a grouping of fields and data types that you can use to define a single column in Amazon Keyspaces. Valid data types for UDTs are all supported Cassandra data types, including collections and other UDTs that you've already created in the same keyspace. For more information about supported Cassandra data types, see [Cassandra data type support](cassandra-apis.md#cassandra-data-type).

You can use user-defined types (UDTs) in Amazon Keyspaces to organize data more efficiently. For example, you can create UDTs with nested collections, which lets you implement more complex data models in your applications. You can also use the `FROZEN` keyword when defining UDTs.

UDTs are bound to a keyspace and available to all tables and UDTs in the same keyspace. You can create UDTs in single-Region and multi-Region keyspaces.

You can create new tables or alter existing tables and add new columns that use a UDT. To create a UDT with a nested UDT, the nested UDT has to be frozen.
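For example, the following statement is a minimal sketch (the keyspace, table, column, and type names are hypothetical) of adding a column that uses a UDT to an existing table:

```
ALTER TABLE my_keyspace.my_table ADD my_udt_column my_udt;
```

If the column stores the UDT inside a collection, the UDT has to be frozen, for example `list<FROZEN <my_udt>>`.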

To review how many UDTs are supported per keyspace, supported levels of nesting, and other default values and quotas related to UDTs, see [Quotas and default values for user-defined types (UDTs) in Amazon Keyspaces](quotas.md#quotas-udts).

For information about how to calculate the encoded size of UDTs, see [Estimate the encoded size of data values based on data type](calculating-row-size.md#calculating-row-size-data-types).

For more information about CQL syntax, see [User-defined types (UDTs)](cql.ddl.type.md).

To learn more about UDTs and point-in-time restore, see [PITR restore of tables with user-defined types (UDTs)](PointInTimeRecovery_HowItWorks.md#howitworks_backup_udt).

**Topics**
+ [Configure permissions](configure-udt-permissions.md)
+ [Create a UDT](keyspaces-create-udt.md)
+ [View UDTs](keyspaces-view-udt.md)
+ [Delete a UDT](keyspaces-delete-udt.md)

# Configure permissions to work with user-defined types (UDTs) in Amazon Keyspaces
<a name="configure-udt-permissions"></a>

Like tables, UDTs are bound to a specific keyspace. But unlike tables, you can't define permissions directly for UDTs. UDTs are not considered Amazon Web Services resources, and they have no unique identifiers in the format of an Amazon Resource Name (ARN). Instead, to give an IAM principal permissions to perform specific actions on a UDT, you define permissions for the keyspace that the UDT is bound to. To work with UDTs in multi-Region keyspaces, additional permissions are required.

To be able to create, view, or delete UDTs, the principal, for example an IAM user or role, needs the same permissions that are required to perform the same action on the keyspace that the UDT is bound to.

For more information about Amazon Identity and Access Management, see [Amazon Identity and Access Management for Amazon Keyspaces](security-iam.md).

## Permissions to create a UDT
<a name="udt-permissions-create"></a>

To create a UDT in a single-Region keyspace, the principal needs `Create` permissions for the keyspace.

The following IAM policy is an example of this.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cassandra:Create",
            "Resource": [
                "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}
```

To create a UDT in a multi-Region keyspace, in addition to `Create` permissions, the principal also needs permissions for the `CreateMultiRegionResource` action for the specified keyspace.

The following IAM policy is an example of this.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action":  [ "cassandra:Create", "cassandra:CreateMultiRegionResource" ],
            "Resource": [
                "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}
```

## Permissions to view a UDT
<a name="udt-permissions-view"></a>

To view or list UDTs in a single-Region keyspace, the principal needs read permissions for the system keyspace. For more information, see [`system_schema_mcs`](working-with-keyspaces.md#keyspace_system_schema_mcs).

The following IAM policy is an example of this.

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"cassandra:Select",
         "Resource":[
             "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/system*"
         ]
      }
   ]
}
```

To view or list UDTs for a multi-Region keyspace, the principal needs permissions for the `Select` and `SelectMultiRegionResource` actions for the system keyspace. For more information, see [`system_multiregion_info`](working-with-keyspaces.md#keyspace_system_multiregion_info).

The following IAM policy is an example of this.

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action": ["cassandra:Select", "cassandra:SelectMultiRegionResource"],
         "Resource":[
             "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/system*"
         ]
      }
   ]
}
```

## Permissions to delete a UDT
<a name="udt-permissions-drop"></a>

To delete a UDT from a single-Region keyspace, the principal needs permissions for the `Drop` action for the specified keyspace.

The following IAM policy is an example of this.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cassandra:Drop",
            "Resource": [
                "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}
```

To delete a UDT from a multi-Region keyspace, the principal needs permissions for the `Drop` action and for the `DropMultiRegionResource` action for the specified keyspace.

The following IAM policy is an example of this.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action":  [ "cassandra:Drop", "cassandra:DropMultiRegionResource" ],
            "Resource": [
                "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/"
            ]
        }
    ]
}
```

# Create a user-defined type (UDT) in Amazon Keyspaces
<a name="keyspaces-create-udt"></a>

To create a UDT in a single-Region keyspace, you can use the `CREATE TYPE` statement in CQL, the `create-type` command with the Amazon CLI, or the console.

UDT names can be up to 48 characters long, must begin with an alphabetic character, and can contain only alphanumeric characters and underscores. Amazon Keyspaces automatically converts uppercase characters to lowercase.

Alternatively, you can declare a UDT name in double quotes. When you declare a UDT name inside double quotes, Amazon Keyspaces preserves the uppercase characters and allows special characters.

You can also use double quote characters as part of the name, but you must escape each double quote with an additional double quote character.

The following table shows examples of allowed UDT names. The first column shows how to enter the name when you create the type, and the second column shows how Amazon Keyspaces formats the name internally. Amazon Keyspaces expects the formatted name for operations like `GetType`.


| Entered name | Formatted name | Note | 
| --- | --- | --- | 
|  MY\$1UDT  | my\$1udt | Without double-quotes, Amazon Keyspaces converts all upper-case characters to lower-case. | 
|  "MY\$1UDT"  | MY\$1UDT | With double-quotes, Amazon Keyspaces respects the upper-case characters, and removes the double-quotes from the formatted name. | 
|  "1234"  | 1234 | With double-quotes, the name can begin with a number, and Amazon Keyspaces removes the double-quotes from the formatted name. | 
|  "Special\$1Ch@r@cters<>\$1\$1"  | Special\$1Ch@r@cters<>\$1\$1 | With double-quotes, the name can contain special characters, and Amazon Keyspaces removes the double-quotes from the formatted name. | 
|  "nested""""""quotes"  | nested"""quotes | Amazon Keyspaces removes the outer double-quotes and the escape double-quotes from the formatted name. | 
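Applying these rules, the following statement is a minimal sketch (the keyspace, type, and field names are hypothetical) of creating a UDT with a case-sensitive, double-quoted name:

```
CREATE TYPE my_keyspace."MY$1UDT" (
    field1 text
);
```

Because the name is declared in double quotes, Amazon Keyspaces preserves the uppercase characters and stores the formatted name `MY$1UDT`, which is the name that operations like `GetType` expect.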

------
#### [ Console ]

**Create a user-defined type (UDT) with the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**, and then choose a keyspace from the list.

1. Choose the **UDTs** tab.

1. Choose **Create UDT**.

1. Under **UDT details**, enter a name for the UDT. Under **UDT fields**, define the schema of the UDT.

1. To finish, choose **Create UDT**.

------
#### [ Cassandra Query Language (CQL) ]

**Create a user-defined type (UDT) with CQL**

In this example, we create a new version of the book awards table used in [Create a table in Amazon Keyspaces](getting-started.tables.md). In this table, we store all the awards an author receives for a given book. We create two nested UDTs that contain information about the book that received an award.

1. Create a keyspace with the name `catalog`. 

   ```
   CREATE KEYSPACE catalog WITH REPLICATION = {'class': 'SingleRegionStrategy'};
   ```

1. Create the first type. This type stores *BISAC* codes, which are used to define the genre of books. A BISAC code consists of an alphanumeric code and up to four subject matter areas.

   ```
   CREATE TYPE catalog.bisac (
       bisac_code text,
       subject1 text,
       subject2 text,
       subject3 text,
       subject4 text
   );
   ```

1. Create a second type for book awards that uses the first UDT. The nested UDT has to be frozen.

   ```
   CREATE TYPE catalog.book (
       award_title text,
       book_title text,
       publication_date date,
       page_count int,
       ISBN text,
       genre FROZEN <bisac> 
   );
   ```

1. Create a table with a column for the author's name and a list column for the book awards. Note that the UDT used in the list has to be frozen.

   ```
   CREATE TABLE catalog.authors (
       author_name text PRIMARY KEY,
       awards list <FROZEN <book>>
   );
   ```

1. Insert one row of data into the new table.

   ```
   CONSISTENCY LOCAL_QUORUM;
   ```

   ```
   INSERT INTO catalog.authors (author_name, awards) VALUES (
   'John Stiles' , 
   [{
         award_title: 'Wolf',
         book_title: 'Yesterday',
         publication_date: '2020-10-10',
         page_count: 345,
         ISBN: '026204630X',
         genre: { bisac_code:'FIC014090', subject1: 'FICTION', subject2: 'Historical', subject3: '20th Century', subject4: 'Post-World War II'}
         },
         {award_title: 'Richard Roe',
         book_title: 'Who ate the cake?',
         publication_date: '2019-05-13',
         page_count: 193,
         ISBN: '9780262046305',
         genre: { bisac_code:'FIC022130', subject1: 'FICTION', subject2: 'Mystery & Detective', subject3: 'Cozy', subject4: 'Culinary'}
         }]
   );
   ```

1. Read the data from the table.

   ```
   SELECT * FROM catalog.authors;
   ```

   The output of the command should look like this.

   ```
    author_name | awards
   -------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    John Stiles | [{award_title: 'Wolf', book_title: 'Yesterday', publication_date: 2020-10-10, page_count: 345, isbn: '026204630X', genre: {bisac_code: 'FIC014090', subject1: 'FICTION', subject2: 'Historical', subject3: '20th Century', subject4: 'Post-World War II'}}, {award_title: 'Richard Roe', book_title: 'Who ate the cake?', publication_date: 2019-05-13, page_count: 193, isbn: '9780262046305', genre: {bisac_code: 'FIC022130', subject1: 'FICTION', subject2: 'Mystery & Detective', subject3: 'Cozy', subject4: 'Culinary'}}]
   
   (1 rows)
   ```

   For more information about CQL syntax, see [CREATE TYPE](cql.ddl.type.md#cql.ddl.type.create).

------
#### [ CLI ]

**Create a user-defined type (UDT) with the Amazon CLI**

1. To create a type you can use the following syntax.

   ```
   aws keyspaces create-type
   --keyspace-name 'my_keyspace'
   --type-name 'my_udt'
   --field-definitions
       '[
           {"name" : "field1", "type" : "int"},
           {"name" : "field2", "type" : "text"}
       ]'
   ```

1. The output of that command looks similar to this example. Note that `typeName` returns the formatted name of the UDT.

   ```
   {
       "keyspaceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/",
       "typeName": "my_udt"
   }
   ```

------

# View user-defined types (UDTs) in Amazon Keyspaces
<a name="keyspaces-view-udt"></a>

To view or list all UDTs in a single-Region keyspace, you can query the `system_schema_mcs.types` table in the system keyspace using a CQL statement, use the `get-type` and `list-types` commands with the Amazon CLI, or use the console.

For either option, the IAM principal needs read permissions to the system keyspace. For more information, see [Configure permissions to work with user-defined types (UDTs) in Amazon Keyspaces](configure-udt-permissions.md).

------
#### [ Console ]

**View user-defined types (UDTs) with the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**, and then choose a keyspace from the list.

1. Choose the **UDTs** tab to review the list of all UDTs in the keyspace.

1. To review one UDT in detail, choose a **UDT** from the list.

1. On the **Schema** tab, you can review the schema. On the **Used in** tab, you can see if this UDT is used in tables or other UDTs. Note that you can only delete UDTs that are not in use by either tables or other UDTs.

------
#### [ Cassandra Query Language (CQL) ]

**View the user-defined types (UDTs) of a single-Region keyspace with CQL**

1. To see the types that are available in a given keyspace, you can use the following statement.

   ```
   SELECT type_name
   FROM system_schema_mcs.types
   WHERE keyspace_name = 'my_keyspace';
   ```

1. To view the details about a specific type, you can use the following statement.

   ```
   SELECT 
       keyspace_name,
       type_name,
       field_names,
       field_types,
       max_nesting_depth,
       last_modified_timestamp,
       status,
       direct_referring_tables,
       direct_parent_types
   FROM system_schema_mcs.types
   WHERE keyspace_name = 'my_keyspace' AND type_name = 'my_udt';
   ```

1. You can list all UDTs that exist in the account using `DESC TYPES`. 

   ```
   DESC TYPES;
                               
    Keyspace my_keyspace
    ---------------------------
    my_udt1  my_udt2
                               
    Keyspace my_keyspace2
    ---------------------------
    my_udt1
   ```

1. You can list all UDTs in the currently selected keyspace using `DESC TYPES`.

   ```
   USE my_keyspace;
   DESC TYPES;
                               
   my_udt1  my_udt2
   ```

1. To list all UDTs in a multi-Region keyspace, you can query the system table `types` in the `system_multiregion_info` keyspace. The following query is an example of this.

   ```
   SELECT keyspace_name, type_name, region, status FROM system_multiregion_info.types WHERE keyspace_name = 'mykeyspace' AND type_name = 'my_udt';
   ```

   The output of this command looks similar to this.

   ```
   keyspace_name     | type_name          | region                 | status
   mykeyspace        | my_udt             | us-east-1              | ACTIVE
   mykeyspace        | my_udt             | ap-southeast-1         | ACTIVE
   mykeyspace        | my_udt             | eu-west-1              | ACTIVE
   ```

------
#### [ CLI ]

**View user-defined types (UDTs) with the Amazon CLI**

1. To list the types available in a keyspace, you can use the `list-types` command.

   ```
   aws keyspaces list-types
   --keyspace-name 'my_keyspace'
   ```

   The output of that command looks similar to this example.

   ```
   {
       "types": [
           "my_udt",
           "parent_udt"
       ]
   }
   ```

1. To view the details about a given type you can use the `get-type` command.

   ```
   aws keyspaces get-type
   --type-name 'my_udt'
   --keyspace-name 'my_keyspace'
   ```

   The output of this command looks similar to this example.

   ```
   {
       "keyspaceName": "my_keyspace",
       "typeName": "my_udt",
       "fieldDefinitions": [
           {
               "name": "a",
               "type": "int"
           },
           {
               "name": "b",
               "type": "text"
           }
       ],
       "lastModifiedTimestamp": 1721328225776,
       "maxNestingDepth": 3,
       "status": "ACTIVE",
       "directReferringTables": [],
       "directParentTypes": [
           "parent_udt"
       ],
       "keyspaceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/"
   }
   ```

------

# Delete a user-defined type (UDT) in Amazon Keyspaces
<a name="keyspaces-delete-udt"></a>

To delete a UDT in a keyspace, you can use the `DROP TYPE` statement in CQL, the `delete-type` command with the Amazon CLI, or the console.

------
#### [ Console ]

**Delete a user-defined type (UDT) with the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**, and then choose a keyspace from the list.

1. Choose the **UDTs** tab.

1. Choose the UDT that you want to delete. On the **Used in** tab, you can confirm that the type you want to delete isn't currently used by a table or another UDT.

1. Choose **Delete** above the **Summary**. 

1. Type `Delete` in the dialog that appears, and choose **Delete UDT**.

------
#### [ Cassandra Query Language (CQL) ]

**Delete a user-defined type (UDT) with CQL**
+ To delete a type, you can use the following statement.

  ```
  DROP TYPE my_keyspace.my_udt;
  ```

  For more information about CQL syntax, see [DROP TYPE](cql.ddl.type.md#cql.ddl.type.drop).

------
#### [ CLI ]

**Delete a user-defined type (UDT) with the Amazon CLI**

1. To delete a type, you can use the following command.

   ```
   aws keyspaces delete-type \
   --keyspace-name 'my_keyspace' \
   --type-name 'my_udt'
   ```

1. The output of the command looks similar to this example.

   ```
   {
       "keyspaceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/",
       "typeName": "my_udt"
   }
   ```

------

# Working with CQL queries in Amazon Keyspaces
<a name="working-with-queries"></a>

This section gives an introduction to working with queries in Amazon Keyspaces (for Apache Cassandra). The CQL statements available to query, transform, and manage data are `SELECT`, `INSERT`, `UPDATE`, and `DELETE`. The following topics outline some of the more complex options available when working with queries. For the complete language syntax with examples, see [DML statements (data manipulation language) in Amazon Keyspaces](cql.dml.md).

**Topics**
+ [Use the `IN` operator with the `SELECT` statement in a query in Amazon Keyspaces](in.select.md)
+ [Use batch statements in Amazon Keyspaces](batchStatements.md)
+ [Order results with `ORDER BY` in Amazon Keyspaces](ordering-results.md)
+ [Paginate results in Amazon Keyspaces](paginating-results.md)

# Use the `IN` operator with the `SELECT` statement in a query in Amazon Keyspaces
<a name="in.select"></a>

**SELECT IN**

You can query data from tables using the `SELECT` statement, which reads one or more columns for one or more rows in a table and returns a result-set containing the rows matching the request. A `SELECT` statement contains a `select_clause` that determines which columns to read and to return in the result-set. The clause can contain instructions to transform the data before returning it. The optional `WHERE` clause specifies which rows must be queried and is composed of relations on the columns that are part of the primary key. Amazon Keyspaces supports the `IN` keyword in the `WHERE` clause. This section uses examples to show how Amazon Keyspaces processes `SELECT` statements with the `IN` keyword.

This example demonstrates how Amazon Keyspaces breaks down a `SELECT` statement with the `IN` keyword into *subqueries*. The example uses a table with the name `my_keyspace.customers`. The table has one partition key column `department_id`, two clustering columns `sales_region_id` and `sales_representative_id`, and a `customer_name` column that contains the name of the customer.

```
SELECT * FROM my_keyspace.customers;

         department_id | sales_region_id | sales_representative_id | customer_name
        ---------------+-----------------+-------------------------+--------------
          0            |        0        |            0            |    a
          0            |        0        |            1            |    b
          0            |        1        |            0            |    c
          0            |        1        |            1            |    d
          1            |        0        |            0            |    e
          1            |        0        |            1            |    f
          1            |        1        |            0            |    g
          1            |        1        |            1            |    h
```

Using this table, you can run the following `SELECT` statement to find the customers in the departments and sales regions that you are interested in with the `IN` keyword in the `WHERE` clause. The following statement is an example of this.

```
SELECT * FROM my_keyspace.customers WHERE department_id IN (0, 1) AND sales_region_id IN (0, 1);
```

Amazon Keyspaces divides this statement into four subqueries as shown in the following output.

```
SELECT * FROM my_keyspace.customers WHERE department_id = 0 AND sales_region_id = 0;

 department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
  0            |        0        |           0             |    a
  0            |        0        |           1             |    b

SELECT * FROM my_keyspace.customers WHERE department_id = 0 AND sales_region_id = 1;

 department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
  0            |        1        |          0              |    c
  0            |        1        |          1              |    d

SELECT * FROM my_keyspace.customers WHERE department_id = 1 AND sales_region_id = 0;

 department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
  1            |        0        |          0              |    e
  1            |        0        |          1              |    f

SELECT * FROM my_keyspace.customers WHERE department_id = 1 AND sales_region_id = 1;

 department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
  1            |        1        |           0             |    g
  1            |        1        |           1             |    h
```
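The decomposition above can be sketched in Python: the subqueries are the equality combinations in the Cartesian product of the `IN` lists. This is an illustrative model of the behavior, not an Amazon Keyspaces API.

```python
from itertools import product

def expand_in_subqueries(in_terms):
    """Enumerate the equality subqueries that an IN query expands to.

    `in_terms` maps each key column to the tuple of values in its IN list;
    the number of subqueries is the cardinality of the Cartesian product.
    """
    columns = list(in_terms)
    return [dict(zip(columns, combo)) for combo in product(*in_terms.values())]

# WHERE department_id IN (0, 1) AND sales_region_id IN (0, 1)
# expands to four equality subqueries.
subqueries = expand_in_subqueries({"department_id": (0, 1), "sales_region_id": (0, 1)})
```

Each dictionary in `subqueries` corresponds to one of the four `WHERE department_id = x AND sales_region_id = y` statements shown above.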

When the `IN` keyword is used, Amazon Keyspaces automatically paginates the results in any of the following cases:
+ After every 10th subquery is processed.
+ After processing 1 MB of logical IO.
+ If you configured a `PAGE SIZE`, Amazon Keyspaces paginates after reading the number of rows specified by the `PAGE SIZE`.
+ When you use the `LIMIT` keyword to reduce the number of rows returned, Amazon Keyspaces paginates after reading the number of rows specified by the `LIMIT`.

For more information about pagination, see [Paginate results in Amazon Keyspaces](paginating-results.md).

The following table illustrates this with an example.

```
SELECT * FROM my_keyspace.customers;

         department_id | sales_region_id | sales_representative_id | customer_name
        ---------------+-----------------+-------------------------+--------------
          2            |        0        |          0              |    g
          2            |        1        |          1              |    h
          2            |        2        |          2              |    i
          0            |        0        |          0              |    a
          0            |        1        |          1              |    b
          0            |        2        |          2              |    c
          1            |        0        |          0              |    d
          1            |        1        |          1              |    e
          1            |        2        |          2              |    f
          3            |        0        |          0              |    j
          3            |        1        |          1              |    k
          3            |        2        |          2              |    l
```

You can run the following statement on this table to see how pagination works.

```
SELECT * FROM my_keyspace.customers WHERE department_id IN (0, 1, 2, 3) AND sales_region_id IN (0, 1, 2) AND sales_representative_id IN (0, 1);
```

Amazon Keyspaces processes this statement as 24 subqueries, because the cardinality of the Cartesian product of all the `IN` terms contained in this query is 24.

```
 department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
  0            |        0        |          0              |    a
  0            |        1        |          1              |    b
  1            |        0        |          0              |    d
  1            |        1        |          1              |    e

---MORE---
 department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
  2            |        0        |          0              |    g
  2            |        1        |          1              |    h
  3            |        0        |          0              |    j

---MORE---
 department_id | sales_region_id | sales_representative_id | customer_name
---------------+-----------------+-------------------------+--------------
  3            |        1        |          1              |    k
```
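The subquery count and the page boundaries in this example can be sketched as follows. This is an illustrative model of the documented behavior (24 subqueries, a page boundary after every 10th subquery), not an Amazon Keyspaces API.

```python
from itertools import islice, product
from math import prod

in_lists = {
    "department_id": (0, 1, 2, 3),
    "sales_region_id": (0, 1, 2),
    "sales_representative_id": (0, 1),
}

# Number of subqueries = cardinality of the Cartesian product of the IN terms.
subquery_count = prod(len(values) for values in in_lists.values())  # 4 * 3 * 2 = 24

def pages_of_subqueries(terms, per_page=10):
    """Group subqueries into pages, modeling the boundary after every 10th subquery."""
    combos = product(*terms.values())
    while page := list(islice(combos, per_page)):
        yield page

page_sizes = [len(page) for page in pages_of_subqueries(in_lists)]  # [10, 10, 4]
```

With no other limits configured, the 24 subqueries are fetched in three pages, which matches the two `---MORE---` breaks in the output above.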

This example shows how you can use the `ORDER BY` clause in a `SELECT` statement with the `IN` keyword.

```
SELECT * FROM my_keyspace.customers WHERE department_id IN (3, 2, 1) ORDER BY sales_region_id DESC;
        
         department_id | sales_region_id | sales_representative_id | customer_name
        ---------------+-----------------+-------------------------+--------------
          3            |        2        |          2              |    l
          3            |        1        |          1              |    k
          3            |        0        |          0              |    j
          2            |        2        |          2              |    i
          2            |        1        |          1              |    h
          2            |        0        |          0              |    g
          1            |        2        |          2              |    f
          1            |        1        |          1              |    e
          1            |        0        |          0              |    d
```

Subqueries are processed in the order in which the partition key and clustering key columns are presented in the query. In the following example, subqueries for partition key value "2" are processed first, followed by subqueries for partition key values "3" and "1". Results of a given subquery are ordered according to the query's ordering clause, if present, or the table's clustering order defined during table creation. 

```
SELECT * FROM my_keyspace.customers WHERE department_id IN (2, 3, 1) ORDER BY sales_region_id DESC;

         department_id | sales_region_id | sales_representative_id | customer_name
        ---------------+-----------------+-------------------------+--------------
          2            |        2        |          2              |    i
          2            |        1        |          1              |    h
          2            |        0        |          0              |    g
          3            |        2        |          2              |    l
          3            |        1        |          1              |    k
          3            |        0        |          0              |    j
          1            |        2        |          2              |    f
          1            |        1        |          1              |    e
          1            |        0        |          0              |    d
```
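The ordering rule above — partitions in `IN`-list order, rows within each partition sorted by the ordering clause — can be sketched with a small in-memory model of the example table (illustrative only; `rows` and `run_in_query` are hypothetical names):

```python
# Rows of the example table, keyed by department_id;
# each row is (sales_region_id, customer_name).
rows = {
    1: [(0, "d"), (1, "e"), (2, "f")],
    2: [(0, "g"), (1, "h"), (2, "i")],
    3: [(0, "j"), (1, "k"), (2, "l")],
}

def run_in_query(partition_order, descending=True):
    """Emit rows partition by partition, in the order the IN list presents the
    partition key values, sorting each partition by the ordering clause."""
    result = []
    for department_id in partition_order:
        result.extend(sorted(rows[department_id], reverse=descending))
    return result

# IN (2, 3, 1) ... ORDER BY sales_region_id DESC
result = run_in_query((2, 3, 1))
```

The first three rows come from partition 2 in descending `sales_region_id` order, then partition 3, then partition 1 — matching the output above.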

# Use batch statements in Amazon Keyspaces
<a name="batchStatements"></a>

You can combine multiple `INSERT`, `UPDATE`, and `DELETE` operations into a `BATCH` statement. `LOGGED` batches are the default.

```
batch_statement ::=     BEGIN [ UNLOGGED ] BATCH
                        [ USING update_parameter( AND update_parameter)* ]
                        modification_statement ( ';' modification_statement )*
                        APPLY BATCH
modification_statement ::= insert_statement | update_statement | delete_statement
```

When you run a batch statement, the driver combines all statements in the batch into a single batch operation.

To decide which type of batch operation to use, you can consider the following guidelines.

Use logged batches when:  
+ You need atomic transaction guarantees.
+ Slightly higher latencies are an acceptable trade-off.

Use unlogged batches when:  
+ You need to optimize single-partition operations.
+ You want to reduce network overhead.
+ You have high-throughput requirements.

For information about batch statement quotas, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md). 

## Unlogged batches
<a name="batchStatements-unlogged"></a>

With **unlogged batches**, Amazon Keyspaces processes multiple operations as a single request without maintaining a batch log. With an unlogged batch operation, it's possible that some of the actions succeed while others fail. Unlogged batches are useful when you want to:
+ Optimize operations within a single partition.
+ Reduce network traffic by grouping related requests.

The syntax for an unlogged batch is similar to that of a logged batch, with the addition of the `UNLOGGED` keyword.

```
BEGIN UNLOGGED BATCH
    INSERT INTO users (id, firstname, lastname) VALUES (1, 'John', 'Doe');
    INSERT INTO users (id, firstname, lastname) VALUES (2, 'Jane', 'Smith');
APPLY BATCH;
```

## Logged batches
<a name="batchStatements-logged"></a>

A **logged** batch combines multiple write actions into a single atomic operation. When you run a logged batch:
+ All actions either succeed together or fail together.
+ The operation is synchronous and idempotent.
+ You can write to multiple Amazon Keyspaces tables, as long as they are in the same Amazon account and Amazon Web Services Region.

Logged batches may have slightly higher latencies. For high-throughput applications, consider using unlogged batches.

There is no additional cost to use logged batches in Amazon Keyspaces. You pay only for the writes that are part of your batch operations. Amazon Keyspaces performs two underlying writes of every row in the batch: one to prepare the row for the batch and one to commit the batch. When planning capacity for tables that use logged batches, remember that each row in a batch requires twice the capacity of a standard write operation. For example, if your application runs one logged batch per second with three 1KB rows, you need to provision six write capacity units (WCUs) compared to only three WCUs for individual writes or unlogged batches. 
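The capacity math in the previous paragraph can be sketched as a small helper. This is an illustrative calculation under the stated assumptions (1 WCU per 1 KB of row data, rounded up per row, and two underlying writes per row in a logged batch); it is not an Amazon Keyspaces API.

```python
from math import ceil

def batch_wcus(row_sizes_kb, logged=True):
    """Estimate WCUs for one batch: each row costs ceil(size / 1 KB) WCUs,
    and a logged batch writes every row twice (prepare + commit)."""
    per_row_wcus = sum(ceil(size) for size in row_sizes_kb)
    return (2 if logged else 1) * per_row_wcus

# One batch per second with three 1 KB rows:
# 6 WCUs for a logged batch, 3 WCUs for individual writes or an unlogged batch.
```

For example, `batch_wcus([1, 1, 1])` returns 6, while `batch_wcus([1, 1, 1], logged=False)` returns 3, matching the provisioning example above.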

For information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).

### Best practices for batch operations
<a name="batchStatements-best-practice"></a>

Consider the following recommended practices when using Amazon Keyspaces batch operations.
+ Enable automatic scaling so that you have sufficient throughput capacity for your tables to handle batch operations and the additional throughput requirements of logged batches.
+ Use individual operations or unlogged batches when operations can run independently without affecting application correctness.
+ Design your application to minimize concurrent updates to the same rows, as simultaneous batch operations can conflict and fail.
+ For high-throughput bulk data ingestion without atomicity requirements, use individual write operations or unlogged batches.

### Consistency and concurrency
<a name="batchStatements-consistency"></a>

Amazon Keyspaces enforces the following consistency and concurrency rules for logged batches:
+ All batch operations use the `LOCAL_QUORUM` consistency level.
+ Concurrent batches affecting different rows can execute simultaneously.
+ Concurrent `INSERT`, `UPDATE`, or `DELETE` operations on rows involved in an ongoing batch fail with a conflict.

### Supported operators and conditions
<a name="batchStatements-operators"></a>

Supported `WHERE` clause operators:  
+ Equality (=)

Unsupported operators:  
+ Range operators (>, <, >=, <=)
+ `IN` operator
+ `LIKE` operator
+ `BETWEEN` operator

Not supported in logged batches:  
+ Multiple statements affecting the same row
+ Counter operations
+ Range deletes

### Failure conditions of logged batch statements
<a name="batchStatement-failures"></a>

A logged batch operation may fail in any of the following cases:
+ Condition expressions (like `IF NOT EXISTS` or `IF`) evaluate to false.
+ One or more operations contain invalid parameters.
+ The request conflicts with another batch operation running on the same rows.
+ The table lacks sufficient provisioned capacity.
+ A row exceeds the maximum size limit.
+ The input data format is invalid.

### Batch statements and multi-Region replication
<a name="batchStatements-multiregion"></a>

In multi-Region deployments:
+ Source Region operations are synchronous and atomic.
+ Destination Region operations are asynchronous.
+ All batch operations replicate to destination Regions, but may not maintain isolation during application.

### Monitor batch operations
<a name="batchStatements-monitoring"></a>

You can monitor batch operations using Amazon CloudWatch metrics to track performance, errors, and usage patterns. Amazon Keyspaces provides the following CloudWatch metrics for monitoring batch operations per table:
+ `SuccessfulRequestCount` – Track successful batch operations.
+ `Latency` – Measure batch operation performance.
+ `ConsumedWriteCapacityUnits` – Monitor capacity consumption of batch operations.

For more information, see [Amazon Keyspaces metrics](metrics-dimensions.md#keyspaces-metrics-dimensions).

In addition to CloudWatch metrics, you can use Amazon CloudTrail to log all Amazon Keyspaces API actions. Each API action in the batch is logged in CloudTrail, making it easier to track and audit batch operations in your Amazon Keyspaces tables.

### Batch operation examples
<a name="batchStatements-examples"></a>

The following is an example of a basic logged batch statement.

```
BEGIN BATCH
    INSERT INTO users (id, firstname, lastname) VALUES (1, 'John', 'Doe');
    INSERT INTO users (id, firstname, lastname) VALUES (2, 'Jane', 'Smith');
APPLY BATCH;
```

This is an example of a batch that includes `INSERT`, `UPDATE`, and `DELETE` statements.

```
BEGIN BATCH
    INSERT INTO users (id, firstname, lastname) VALUES (1, 'John', 'Doe');
    UPDATE users SET firstname = 'Johnny' WHERE id = 2;
    DELETE FROM users WHERE id = 3;
APPLY BATCH;
```

This is an example of a batch using client-side timestamps.

```
BEGIN BATCH
    INSERT INTO users (id, firstname, lastname) VALUES (1, 'John', 'Stiles') USING TIMESTAMP 1669069624;
    INSERT INTO users (id, firstname, lastname) VALUES (2, 'Jane', 'Doe') USING TIMESTAMP 1669069624;
APPLY BATCH;

BEGIN BATCH
    UPDATE users USING TIMESTAMP 1669069624 SET firstname = 'Carlos' WHERE id = 1;
    UPDATE users USING TIMESTAMP 1669069624 SET firstname = 'Diego' WHERE id = 2;
APPLY BATCH;
```

This is an example of a conditional batch.

```
BEGIN BATCH
    INSERT INTO users (id, firstname, lastname) VALUES (1, 'Jane', 'Doe') IF NOT EXISTS;
    INSERT INTO users (id, firstname, lastname) VALUES (2, 'John', 'Doe') IF NOT EXISTS;
APPLY BATCH;


BEGIN BATCH
    UPDATE users SET lastname = 'Stiles' WHERE id = 1 IF lastname = 'Doe';
    UPDATE users SET lastname = 'Stiles' WHERE id = 2 IF lastname = 'Doe';
APPLY BATCH;
```

This is an example of a batch using Time to Live (TTL).

```
BEGIN BATCH
    INSERT INTO users (id, firstname, lastname) VALUES (1, 'John', 'Doe') USING TTL 3600;
    INSERT INTO users (id, firstname, lastname) VALUES (2, 'Jane', 'Smith') USING TTL 7200;
APPLY BATCH;
```

This is an example of a batch statement that updates multiple tables.

```
BEGIN BATCH
    INSERT INTO users (id, firstname) VALUES (1, 'John');
    INSERT INTO user_emails (user_id, email) VALUES (1, 'john@example.com');
APPLY BATCH;
```

This is an example of a batch operation using user-defined types (UDTs). The example assumes that the UDT `address` exists.

```
BEGIN BATCH
    INSERT INTO users (id, firstname, address)
    VALUES (1, 'John', {street: '123 Main St', city: 'NYC', zip: '10001'});
    INSERT INTO users (id, firstname, address)
    VALUES (2, 'Jane', {street: '456 Oak Ave', city: 'LA', zip: '90210'});
APPLY BATCH;

BEGIN BATCH
    UPDATE users SET address.zip = '10002' WHERE id = 1;
    UPDATE users SET address.city = 'Boston' WHERE id = 2;
APPLY BATCH;
```

# Order results with `ORDER BY` in Amazon Keyspaces
<a name="ordering-results"></a>

The `ORDER BY` clause specifies the sort order of the results returned in a `SELECT` statement. The statement takes a list of column names as arguments, and for each column you can specify the sort order for the data. You can only specify clustering columns in ordering clauses; non-clustering columns are not allowed.

The two available sort order options for the returned results are `ASC` for ascending and `DESC` for descending sort order. 

```
SELECT * FROM my_keyspace.my_table ORDER BY (col1 ASC, col2 DESC, col3 ASC);

         col1 | col2 | col3  
        ------+------+------
          0   |  6   |  a   
          1   |  5   |  b   
          2   |  4   |  c   
          3   |  3   |  d   
          4   |  2   |  e   
          5   |  1   |  f   
          6   |  0   |  g
```

```
SELECT * FROM my_keyspace.my_table ORDER BY (col1 DESC, col2 ASC, col3 DESC);

         col1 | col2 | col3  
        ------+------+------
          6   |  0   |  g   
          5   |  1   |  f   
          4   |  2   |  e   
          3   |  3   |  d   
          2   |  4   |  c   
          1   |  5   |  b   
          0   |  6   |  a
```

If you don't specify the sort order in the query statement, the default ordering of the clustering column is used. 

The possible sort orders you can use in an ordering clause depend on the sort order assigned to each clustering column at table creation. Query results can only be sorted in the order defined for all clustering columns at table creation or the inverse of the defined sort order. Other possible combinations are not allowed.

For example, if the table's `CLUSTERING ORDER` is (col1 ASC, col2 DESC, col3 ASC), then the valid parameters for `ORDER BY` are either (col1 ASC, col2 DESC, col3 ASC) or (col1 DESC, col2 ASC, col3 DESC). For more information on `CLUSTERING ORDER`, see `table_options` under [CREATE TABLE](cql.ddl.table.md#cql.ddl.table.create).
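The validity rule above can be sketched as a check: a requested `ORDER BY` is valid only if it names the clustering columns in their defined order and either keeps or inverts the direction of every column. This is an illustrative model; `valid_order_by` is a hypothetical helper, not a CQL or API feature.

```python
def valid_order_by(clustering_order, requested):
    """Return True if `requested` matches the table's CLUSTERING ORDER
    exactly, or inverts the direction of every clustering column.

    Both arguments are lists of (column_name, "ASC" | "DESC") pairs.
    """
    if [col for col, _ in clustering_order] != [col for col, _ in requested]:
        return False
    flip = {"ASC": "DESC", "DESC": "ASC"}
    same = all(d1 == d2 for (_, d1), (_, d2) in zip(clustering_order, requested))
    inverse = all(flip[d1] == d2 for (_, d1), (_, d2) in zip(clustering_order, requested))
    return same or inverse

table_order = [("col1", "ASC"), ("col2", "DESC"), ("col3", "ASC")]
```

Here `valid_order_by(table_order, table_order)` and the fully inverted variant return `True`; any mixed combination such as all-`ASC` returns `False`.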

# Paginate results in Amazon Keyspaces
<a name="paginating-results"></a>

Amazon Keyspaces automatically *paginates* the results from `SELECT` statements when the data read to process the `SELECT` statement exceeds 1 MB. With pagination, the `SELECT` statement results are divided into "pages" of data that are 1 MB in size (or less). An application can process the first page of results, then the second page, and so on. Clients should always check for pagination tokens when processing `SELECT` queries that return multiple rows.

 If a client supplies a `PAGE SIZE` that requires reading more than 1 MB of data, Amazon Keyspaces breaks up the results automatically into multiple pages based on the 1 MB data-read increments.

For example, if the average size of a row is 100 KB and you specify a `PAGE SIZE` of 20, Amazon Keyspaces paginates data automatically after it reads 10 rows (1000 KB of data read). 

Because Amazon Keyspaces paginates results based on the number of rows that it reads to process a request and not the number of rows returned in the result set, some pages may not contain any rows if you are running filtered queries. 

For example, if you set `PAGE SIZE` to 10 and Amazon Keyspaces evaluates 30 rows to process your `SELECT` query, Amazon Keyspaces returns three pages. If only a subset of the rows matches your query, some pages may have fewer than 10 rows. For an example of how the `PAGE SIZE` of `LIMIT` queries can affect read capacity, see [Estimate the read capacity consumption of limit queries](limit_queries.md).
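The two examples above can be sketched as simple arithmetic (an illustrative model under the stated assumptions — 1 MB read increments approximated as 1000 KB, and pages based on rows read rather than rows returned):

```python
from math import ceil, floor

def rows_per_page(avg_row_kb, page_size):
    """Rows read before a page boundary: the smaller of the configured
    PAGE SIZE and how many rows fit in ~1 MB (1000 KB) of data read."""
    return min(page_size, floor(1000 / avg_row_kb))

def page_count(rows_evaluated, page_size):
    """Pages are based on rows *read* to process the query, so filtered
    queries can return sparse or even empty pages."""
    return ceil(rows_evaluated / page_size)

# 100 KB rows with PAGE SIZE 20 -> boundary after 10 rows.
# 30 rows evaluated with PAGE SIZE 10 -> 3 pages.
```

For instance, `rows_per_page(100, 20)` returns 10 and `page_count(30, 10)` returns 3, matching the examples in this section.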

For a comparison with Apache Cassandra pagination, see [Pagination](functional-differences.md#functional-differences.paging).

# Working with change data capture (CDC) streams in Amazon Keyspaces
<a name="cdc"></a>

Amazon Keyspaces change data capture (CDC) records row-level change events from an Amazon Keyspaces table in near-real time. 

Amazon Keyspaces CDC enables event-driven use cases such as industrial IoT and fraud detection as well as data processing use cases like full-text search and data archival. The change events that Amazon Keyspaces CDC captures in streams can be consumed by downstream applications that perform business-critical functions such as data analytics, text search, ML training/inference, and continuous data backups for archival. For example, you can transfer stream data to Amazon analytics and storage services like Amazon OpenSearch Service, Amazon Redshift, and Amazon S3 for further processing.

Amazon Keyspaces CDC offers time-ordered and de-duplicated change records for tables, with automatic scaling of data throughput and retention time of up to 24 hours. 

Amazon Keyspaces CDC streams are completely serverless, and you don't need to manage the data infrastructure for capturing change events. In addition, Amazon Keyspaces CDC doesn't consume any table capacity for either compute or storage. For more information, see [How change data capture (CDC) streams work in Amazon Keyspaces](cdc_how-it-works.md).

You can use the Amazon Keyspaces Streams API to build applications that consume Amazon Keyspaces CDC streams and take action based on the contents. For available endpoints, see [How to access CDC stream endpoints in Amazon Keyspaces](CDC_access-endpoints.md).

For a complete listing of all operations available for Amazon Keyspaces in the Streams API, see [https://docs.amazonaws.cn/keyspaces/latest/StreamsAPIReference/Welcome.html](https://docs.amazonaws.cn/keyspaces/latest/StreamsAPIReference/Welcome.html).

**Topics**
+ [How change data capture (CDC) streams work in Amazon Keyspaces](cdc_how-it-works.md)
+ [How to use change data capture (CDC) streams in Amazon Keyspaces](cdc_how-to-use.md)

# How change data capture (CDC) streams work in Amazon Keyspaces
<a name="cdc_how-it-works"></a>

This section provides an overview of how change data capture (CDC) streams work in Amazon Keyspaces. 

Amazon Keyspaces change data capture (CDC) records an ordered sequence of row-level modifications in Amazon Keyspaces tables and stores this information in a log called a *stream* for up to 24 hours. Every row-level modification generates a new CDC record that holds the primary key column information as well as the “before” and “after” states of the row, including all the columns. Applications can access the stream and view the mutations in near-real time.

When you enable CDC on your table, Amazon Keyspaces creates a new CDC stream and starts to capture information about every modification in the table. The CDC stream has an Amazon Resource Name (ARN) with the following format: 

```
arn:${Partition}:cassandra:${Region}:${Account}:/keyspace/${keyspaceName}/table/${tableName}/stream/${streamLabel}
```
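A stream ARN in the format above can be assembled as follows. All of the argument values here are placeholders for illustration, including the stream label.

```python
def cdc_stream_arn(partition, region, account, keyspace, table, stream_label):
    """Build a CDC stream ARN in the documented format.
    Every argument value below is a placeholder."""
    return (
        f"arn:{partition}:cassandra:{region}:{account}"
        f":/keyspace/{keyspace}/table/{table}/stream/{stream_label}"
    )

arn = cdc_stream_arn("aws-cn", "us-east-1", "111122223333",
                     "my_keyspace", "my_table", "my_stream_label")
```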

You can select the type of information or the *view type* that the CDC stream collects for each record when you first enable the CDC stream. You can't change the view type of the stream afterward. Amazon Keyspaces supports the following view types:
+ `NEW_AND_OLD_IMAGES` – Captures the versions of the row before as well as after the mutation. This is the default.
+ `NEW_IMAGE` – Captures the version of the row after the mutation.
+ `OLD_IMAGE` – Captures the version of the row before the mutation.
+ `KEYS_ONLY` – Captures the partition and clustering keys of the row that was mutated.
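The view types above can be summarized as a lookup of which row images each record carries. This is a summary table for illustration, not the actual CDC record structure or field names.

```python
# Which row images a change record carries per view type (summary only;
# the key names here are illustrative, not the CDC record schema).
VIEW_TYPES = {
    "NEW_AND_OLD_IMAGES": {"keys", "old_image", "new_image"},
    "NEW_IMAGE": {"keys", "new_image"},
    "OLD_IMAGE": {"keys", "old_image"},
    "KEYS_ONLY": {"keys"},
}

def images_in_record(view_type):
    """Return the set of images present in a record for the given view type."""
    return VIEW_TYPES[view_type]
```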

Every CDC stream consists of records. Each record represents a single row modification in an Amazon Keyspaces table. Records are logically organized into groups known as *shards*. These groups are divided by ranges of the primary key (a combination of partition key and clustering key ranges) and are an internal construct of Amazon Keyspaces. Each shard acts as a container for multiple records, and contains information required for accessing and iterating through these records.

![\[An Amazon Keyspaces CDC stream consists of shards that represent a CDC record of a collection of row mutations.\]](http://docs.amazonaws.cn/en_us/keyspaces/latest/devguide/images/keyspaces_cdc.png)


Each CDC record is assigned a sequence number, reflecting the order in which the record was published within the shard. The sequence number is guaranteed to be increasing and unique within each shard.

Amazon Keyspaces creates and deletes shards automatically. Based on traffic loads Amazon Keyspaces can also split or merge shards over time. For example, Amazon Keyspaces can split one shard into multiple new shards or merge shards into a new single shard. Amazon Keyspaces APIs publish the shard and CDC stream information to allow consuming applications to process records in the right order by accessing the entire lineage graph of a shard. 

Amazon Keyspaces CDC is based on the following principles that you can rely on when building your application:
+ Each row-level mutation record appears exactly once in the CDC stream.
+ When you consume shards in order of lineage, each row-level mutation record appears in the same sequence as the actual mutation order on the primary key.
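Because sequence numbers are increasing and unique within a shard, a consumer that restarts from a checkpoint can safely skip anything it has already processed. The sketch below is a hypothetical consumer-side helper, not part of the Amazon Keyspaces Streams API.

```python
def replay_from_checkpoint(records, checkpoint=None):
    """Yield payloads from a shard's records, skipping any record whose
    sequence number is at or below the last checkpointed one.

    `records` is an iterable of (sequence_number, payload) in shard order.
    """
    last_seen = checkpoint
    for sequence_number, payload in records:
        if last_seen is None or sequence_number > last_seen:
            last_seen = sequence_number
            yield payload
```

For example, replaying `[(1, "a"), (2, "b"), (3, "c")]` with `checkpoint=1` yields only `"b"` and `"c"`.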

**Topics**
+ [Data retention](#CDC_how-it-works-data-retention)
+ [TTL data expiration](#CDC_how-it-works-ttl)
+ [Batch operations](#CDC_how-it-works-batch-operations)
+ [Static columns](#CDC_how-it-works-static)
+ [Encryption at rest](#CDC_how-it-works-encryption)
+ [Multi-Region replication](#CDC_how-it-works-mrr)
+ [Integration with Amazon services](#howitworks_integration)

## How data retention works for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-data-retention"></a>

Amazon Keyspaces retains the records in the CDC stream for a period of 24 hours. You can't change the retention period. If you disable CDC on a table, the data in the stream continues to be readable for 24 hours. After this time, the data expires and the records are automatically deleted. 

## How Time to Live (TTL) data expiration works with CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-ttl"></a>

Amazon Keyspaces shows the expiration time at the column/cell level as well as the row level in a metadata field called `expirationTime` in the CDC change records. When Amazon Keyspaces TTL detects expiration of a cell, CDC creates a new change record that shows TTL as the origin of the change. For more information about TTL, see [Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)](TTL.md).

## How batch operations work for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-batch-operations"></a>

Batch operations are internally divided into individual row-level modifications. Amazon Keyspaces retains all records within CDC streams at the row level, even if the modification occurred in a batch operation. Amazon Keyspaces maintains the order of records within the CDC stream in the same sequence as the mutation order that occurred at the row level, or on the primary key.

## How static columns work in CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-static"></a>

Static column values are shared among all rows in a partition in Cassandra. Due to this behavior, Amazon Keyspaces captures any updates to a static column as a separate record in the CDC stream. The following examples summarize the behavior of static column mutations: 
+ When only the static column is updated, the CDC stream contains a row-modification for the static column as the only column in the row.
+ When a row is updated without any change to the static column, the CDC stream contains a row-modification that contains all columns except the static column.
+ When a row is updated along with the static column, the CDC stream contains two separate row-modifications, one for the static column and the other for the rest of the row. 

## How encryption at rest works for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-encryption"></a>

To encrypt the data at rest in the CDC ordered log, Amazon Keyspaces uses the same encryption key that is already used for the table. For more information about encryption at rest, see [Encryption at rest in Amazon Keyspaces](EncryptionAtRest.md).

## How multi-Region replication works for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-mrr"></a>

You can enable and disable CDC streams for individual replicas of a multi-Region table by using either the `update-table` API or the `ALTER TABLE` CQL command. Due to asynchronous replication and conflict resolution, CDC streams for multi-Region tables are not consistent across Amazon Web Services Regions. Therefore, the records that Amazon Keyspaces captures in the stream might appear in a different order in different Regions.

For more information about multi-Region replication, see [Multi-Region replication for Amazon Keyspaces (for Apache Cassandra)](multiRegion-replication.md).

## CDC streams and integration with Amazon services
<a name="howitworks_integration"></a>

### How to work with VPC endpoints for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-vpc"></a>

You can use VPC endpoints to access Amazon Keyspaces CDC streams. For information about how to create and access VPC endpoints for streams, see [Using Amazon Keyspaces CDC streams with interface VPC endpoints](vpc-endpoints-streams.md).

### How monitoring with CloudWatch works for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-monitoring"></a>

You can use Amazon CloudWatch to monitor API calls made to the Amazon Keyspaces CDC endpoint. For more information about the available metrics, see [Metrics for Amazon Keyspaces change data capture (CDC)](metrics-dimensions.md#keyspaces-cdc-metrics).

### How logging with CloudTrail works for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-logging"></a>

Amazon Keyspaces CDC is integrated with Amazon CloudTrail, a service that provides a record of actions taken by a user, role, or an Amazon service in Amazon Keyspaces. CloudTrail captures Data Definition Language (DDL) API calls and Data Manipulation Language (DML) API calls for Amazon Keyspaces as events. The calls that are captured include calls from the Amazon Keyspaces console and programmatic calls to the Amazon Keyspaces API operations.

For more information about the CDC events captured by CloudTrail, see [Logging Amazon Keyspaces API calls with Amazon CloudTrail](logging-using-cloudtrail.md).

### How tagging works for CDC streams in Amazon Keyspaces
<a name="CDC_how-it-works-tagging"></a>

Amazon Keyspaces CDC streams are a taggable resource. You can tag a stream when you create a table programmatically using CQL, the Amazon SDK, or the Amazon CLI. You can also tag existing streams, delete tags, or view tags of a stream. For more information, see [Tag keyspaces, tables, and streams in Amazon Keyspaces](Tagging.Operations.md).

# How to use change data capture (CDC) streams in Amazon Keyspaces
<a name="cdc_how-to-use"></a>

**Topics**
+ [Configure permissions](configure-cdc-permissions.md)
+ [Access CDC stream endpoints](CDC_access-endpoints.md)
+ [Enable a CDC stream for a new table](keyspaces-enable-cdc-new-table.md)
+ [Enable a CDC stream for an existing table](keyspaces-enable-cdc-alter-table.md)
+ [Disable a CDC stream](keyspaces-delete-cdc.md)
+ [View CDC streams](keyspaces-view-cdc.md)
+ [Access CDC streams](keyspaces-records-cdc.md)
+ [Use KCL for processing streams](cdc_how-to-use-kcl.md)

# Configure permissions to work with CDC streams in Amazon Keyspaces
<a name="configure-cdc-permissions"></a>

To enable CDC streams, the principal, for example an IAM user or role, needs the following permissions.

For more information about Amazon Identity and Access Management, see [Amazon Identity and Access Management for Amazon Keyspaces](security-iam.md).

## Permissions to enable a CDC stream for a table
<a name="cdc-permissions-enable"></a>

To enable a CDC stream for an Amazon Keyspaces table, the principal needs permissions to create or alter the table, and permissions to create the service-linked role [AWSServiceRoleForAmazonKeyspacesCDC](using-service-linked-roles-CDC-streams.md#service-linked-role-permissions-CDC-streams). Amazon Keyspaces uses the service-linked role to publish CloudWatch metrics into your account on your behalf.

The following IAM policy is an example of this.

```
{
    "Version":"2012-10-17",
    "Statement":[
        {
            "Effect":"Allow",
            "Action":[
                "cassandra:Create",
                "cassandra:CreateMultiRegionResource",
                "cassandra:Alter",
                "cassandra:AlterMultiRegionResource"
            ],
            "Resource":[
                "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/*",
                "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/system*"
            ]
        },
        {
            "Sid": "KeyspacesCDCServiceLinkedRole",
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws-cn:iam::*:role/aws-service-role/cassandra-streams.amazonaws.com/AWSServiceRoleForAmazonKeyspacesCDC",
            "Condition": {
              "StringLike": {
                "iam:AWSServiceName": "cassandra-streams.amazonaws.com"
              }
            }
        }
    ]
}
```

To disable a stream, only `ALTER TABLE` permissions are required.

## Permissions to view a CDC stream
<a name="cdc-permissions-view"></a>

To view or list CDC streams, the principal needs read permissions for the system keyspace. For more information, see [`system_schema_mcs`](working-with-keyspaces.md#keyspace_system_schema_mcs).

The following IAM policy is an example of this.

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"cassandra:Select",
         "Resource":[
             "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/system*"
         ]
      }
   ]
}
```

To view or list CDC streams with the Amazon CLI or the Amazon Keyspaces API, the principal needs additional permissions for the actions `cassandra:ListStreams` and `cassandra:GetStream`.

The following IAM policy is an example of this.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cassandra:Select",
        "cassandra:ListStreams",
        "cassandra:GetStream"
      ],
      "Resource": "*"
    }
  ]
}
```

## Permissions to read a CDC stream
<a name="cdc-permissions-read"></a>

To read CDC streams, the principal needs the following permissions.

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "cassandra:GetStream",
            "cassandra:GetShardIterator",
            "cassandra:GetRecords"
         ],
         "Resource":[
            "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/stream_label"
         ]
      }
   ]
}
```

## Permissions to process Amazon Keyspaces CDC streams with the Kinesis Client Library (KCL)
<a name="cdc-permissions-kcl"></a>

To process Amazon Keyspaces CDC streams with KCL, the IAM principal needs the following permissions. 
+ **Amazon Keyspaces** – Read-only access to the specified Amazon Keyspaces CDC stream.
+ **DynamoDB** – Permissions to create the shard-lease tables, read and write access to those tables, and read access to their indexes, as required for KCL stream processing.
+ **CloudWatch** – Permissions to publish metric data from Amazon Keyspaces CDC stream processing with KCL into the namespace of your KCL client application in your CloudWatch account. For more information about monitoring, see [Monitor the Kinesis Client Library with Amazon CloudWatch](https://docs.amazonaws.cn/streams/latest/dev/monitoring-with-kcl.html).

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "cassandra:GetStream",
            "cassandra:GetShardIterator",
            "cassandra:GetRecords"
         ],
         "Resource":[
            "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/stream_label"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:CreateTable",
            "dynamodb:DescribeTable",
            "dynamodb:UpdateTable",
            "dynamodb:GetItem",
            "dynamodb:UpdateItem",
            "dynamodb:PutItem",
            "dynamodb:DeleteItem",
            "dynamodb:Scan"
         ],
         "Resource":[
            "arn:aws-cn:dynamodb:us-east-1:111122223333:table/KCL_APPLICATION_NAME"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:CreateTable",
            "dynamodb:DescribeTable",
            "dynamodb:GetItem",
            "dynamodb:UpdateItem",
            "dynamodb:PutItem",
            "dynamodb:DeleteItem",
            "dynamodb:Scan"
         ],
         "Resource":[
            "arn:aws-cn:dynamodb:us-east-1:111122223333:table/KCL_APPLICATION_NAME-WorkerMetricStats",
            "arn:aws-cn:dynamodb:us-east-1:111122223333:table/KCL_APPLICATION_NAME-CoordinatorState"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "dynamodb:Query"
         ],
         "Resource":[
            "arn:aws-cn:dynamodb:us-east-1:111122223333:table/KCL_APPLICATION_NAME/index/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "cloudwatch:PutMetricData"
         ],
         "Resource":"*"
      }
   ]
}
```

# How to access CDC stream endpoints in Amazon Keyspaces
<a name="CDC_access-endpoints"></a>

Amazon Keyspaces maintains separate endpoints for keyspaces and tables and for CDC streams in each Amazon Web Services Region where Amazon Keyspaces is available. To access a CDC stream, choose the Region of the table and replace the `cassandra` prefix with `cassandra-streams` in the endpoint name, as shown in the following example:

[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/keyspaces/latest/devguide/CDC_access-endpoints.html)
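
The prefix substitution can be expressed in a few lines. The following sketch (the helper name is illustrative, and the endpoints shown are examples, so verify the available endpoints for your Region) derives the streams endpoint from a table endpoint:

```python
def to_streams_endpoint(table_endpoint: str) -> str:
    """Derive the CDC streams endpoint for a Region by replacing the
    'cassandra' prefix of the table endpoint with 'cassandra-streams'."""
    prefix, _, rest = table_endpoint.partition(".")
    if prefix != "cassandra" or not rest:
        raise ValueError("not an Amazon Keyspaces table endpoint")
    return f"cassandra-streams.{rest}"

print(to_streams_endpoint("cassandra.us-east-1.amazonaws.com"))
# cassandra-streams.us-east-1.amazonaws.com
```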

The following table contains a complete list of available public endpoints for Amazon Keyspaces change data capture streams. Amazon Keyspaces CDC streams support both IPv4 and IPv6. All public endpoints, for example `cassandra-streams.us-east-1.api.aws`, are dual-stack endpoints that can be used over IPv4 and IPv6. 

[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/keyspaces/latest/devguide/CDC_access-endpoints.html)

# Enable a CDC stream when creating a new table in Amazon Keyspaces
<a name="keyspaces-enable-cdc-new-table"></a>

To enable a CDC stream when you create a table, you can use the `CREATE TABLE` statement in CQL or the `create-table` command with the Amazon CLI. 

For each changed row in the table, Amazon Keyspaces can capture the following changes based on the `view_type` of the `cdc_specification` you select:
+ `NEW_AND_OLD_IMAGES` – both versions of the row, before and after the change. This is the default.
+ `NEW_IMAGE` – the version of the row after the change.
+ `OLD_IMAGE` – the version of the row before the change.
+ `KEYS_ONLY` – the partition and clustering keys of the row that was changed.
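
To make the view types concrete, the following sketch (a hypothetical helper, not an Amazon SDK call) shows which images a stream record carries for each `view_type`, given the row before and after a change:

```python
def record_images(view_type, old_row, new_row, key_columns):
    """Return the images a CDC record carries for a given view type."""
    if view_type == "NEW_AND_OLD_IMAGES":
        return {"old": old_row, "new": new_row}
    if view_type == "NEW_IMAGE":
        return {"new": new_row}
    if view_type == "OLD_IMAGE":
        return {"old": old_row}
    if view_type == "KEYS_ONLY":
        # Only the partition and clustering key columns are captured.
        return {"keys": {k: new_row[k] for k in key_columns}}
    raise ValueError(f"unknown view type: {view_type}")

old = {"a": "pk1", "b": "before"}
new = {"a": "pk1", "b": "after"}
print(record_images("KEYS_ONLY", old, new, ["a"]))
# {'keys': {'a': 'pk1'}}
```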

For information about how to tag a stream, see [Add tags to a new stream when creating a table](Tagging.Operations.new.table.stream.md).

**Note**  
Amazon Keyspaces CDC requires a service-linked role (`AWSServiceRoleForAmazonKeyspacesCDC`) that publishes metric data from Amazon Keyspaces CDC streams into the `AWS/Cassandra` CloudWatch namespace in your account on your behalf. This role is created automatically for you. For more information, see [Using roles for Amazon Keyspaces CDC streams](using-service-linked-roles-CDC-streams.md).

------
#### [ Cassandra Query Language (CQL) ]

**Enable a CDC stream when you create a table with CQL**

1. To enable a CDC stream with your chosen view type when you create the table, you can use the following statement.

   ```
   CREATE TABLE mykeyspace.mytable (a text, b text, PRIMARY KEY(a)) 
   WITH CUSTOM_PROPERTIES={'cdc_specification': {'view_type': 'NEW_IMAGE'}} AND CDC = TRUE;
   ```

1. To confirm the stream settings, you can use the following statement.

   ```
   SELECT keyspace_name, table_name, cdc, custom_properties FROM system_schema_mcs.tables WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
   ```

   The output of that statement should look similar to this.

   ```
    keyspace_name | table_name | cdc  | custom_properties
   ---------------+------------+------+------------------------------------------------------------------------------
       mykeyspace |    mytable | True | {'capacity_mode': {'last_update_to_pay_per_request_timestamp': '1741383893782', 'throughput_mode': 'PAY_PER_REQUEST'}, 'cdc_specification': {'latest_stream_arn': 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable/stream/2025-03-07T21:44:53.783', 'status': 'ENABLED', 'view_type': 'NEW_IMAGE'}, 'encryption_specification': {'encryption_type': 'AWS_OWNED_KMS_KEY'}, 'point_in_time_recovery': {'status': 'disabled'}}
   ```

------
#### [ CLI ]

**Enable a CDC stream when you create a table with the Amazon CLI**

1. To create a stream you can use the following syntax. 

   ```
   aws keyspaces create-table \
   --keyspace-name 'mykeyspace' \
   --table-name 'mytable' \
   --schema-definition 'allColumns=[{name=a,type=text},{name=b,type=text}],partitionKeys=[{name=a}]' \
   --cdc-specification status=ENABLED,viewType=NEW_IMAGE
   ```

1. The output of that command shows the standard `create-table` response and looks similar to this example. 

   ```
   { "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable" }
   ```

------

# Enable a CDC stream for an existing table in Amazon Keyspaces
<a name="keyspaces-enable-cdc-alter-table"></a>

To enable a CDC stream for an existing table, you can use the `ALTER TABLE` statement in CQL, the `update-table` command with the Amazon CLI, or you can use the console.

For each changed row in the table, Amazon Keyspaces can capture the following changes based on the `view_type` of the `cdc_specification` you select:
+ `NEW_AND_OLD_IMAGES` – both versions of the row, before and after the change. This is the default.
+ `NEW_IMAGE` – the version of the row after the change.
+ `OLD_IMAGE` – the version of the row before the change.
+ `KEYS_ONLY` – the partition and clustering keys of the row that was changed.

For information about how to tag a stream, see [Add new tags to a stream](Tagging.Operations.existing.stream.md).

**Note**  
Amazon Keyspaces CDC requires a service-linked role (`AWSServiceRoleForAmazonKeyspacesCDC`) that publishes metric data from Amazon Keyspaces CDC streams into the `AWS/Cassandra` CloudWatch namespace in your account on your behalf. This role is created automatically for you. For more information, see [Using roles for Amazon Keyspaces CDC streams](using-service-linked-roles-CDC-streams.md).

------
#### [ Cassandra Query Language (CQL) ]

**Enable a stream (CDC stream) with CQL**

You can use `ALTER TABLE` to enable a stream for an existing table.

1. The following example enables a stream that captures only the partition and clustering keys of changed rows.

   ```
   ALTER TABLE mykeyspace.mytable
   WITH cdc = TRUE
   AND CUSTOM_PROPERTIES={'cdc_specification': {'view_type': 'KEYS_ONLY'}};
   ```

1. To verify the stream settings, you can use the following statement.

   ```
   SELECT keyspace_name, table_name, cdc, custom_properties FROM system_schema_mcs.tables WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
   ```

   The output of the statement looks similar to this.

   ```
    keyspace_name | table_name | cdc  | custom_properties
   ---------------+------------+------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
       mykeyspace |    mytable | True | {'capacity_mode': {'last_update_to_pay_per_request_timestamp': '1741385897045', 'throughput_mode': 'PAY_PER_REQUEST'}, 'cdc_specification': {'latest_stream_arn': 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable/stream/2025-03-07T22:20:10.454', 'status': 'ENABLED', 'view_type': 'KEYS_ONLY'}, 'encryption_specification': {'encryption_type': 'AWS_OWNED_KMS_KEY'}, 'point_in_time_recovery': {'status': 'disabled'}}
   ```

------
#### [ CLI ]

**Create a CDC stream with the Amazon CLI**

1. To create a stream for an existing table you can use the following syntax.

   ```
   aws keyspaces update-table \
   --keyspace-name 'mykeyspace' \
   --table-name 'mytable' \
   --cdc-specification status=ENABLED,viewType=NEW_AND_OLD_IMAGES
   ```

1. The output of that command shows the standard `update-table` response and looks similar to this example.

   ```
   { "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable" }
   ```

------
#### [ Console ]

**Enable a CDC stream with the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose a table from the list.

1. Choose the **Streams** tab.

1. Choose **Edit** to enable a stream.

1. Select **Turn on streams**.

1. Choose the **View type** of the stream. The following options are available. Note that you can't change the view type of a stream after it's been created.
   + **New and old images** – Amazon Keyspaces captures both versions of the row, before and after the change. This is the default.
   + **New image** – Amazon Keyspaces captures only the version of the row after the change.
   + **Old image** – Amazon Keyspaces captures only the version of the row before the change.
   + **Primary key only** – Amazon Keyspaces captures only the partition and clustering key columns of the row that was changed.

1. To finish, choose **Save changes**.

------

# Disable a CDC stream in Amazon Keyspaces
<a name="keyspaces-delete-cdc"></a>

To disable a CDC stream for a table, you can use the `ALTER TABLE` statement in CQL, the `update-table` command with the Amazon CLI, or the console.

------
#### [ Cassandra Query Language (CQL) ]

**Disable a stream (CDC stream) with CQL**

1. To disable a stream, you can use the following statement.

   ```
   ALTER TABLE mykeyspace.mytable
   WITH cdc = FALSE;
   ```

1. To confirm that the stream is disabled, you can use the following statement.

   ```
   SELECT keyspace_name, table_name, cdc, custom_properties FROM system_schema_mcs.tables WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
   ```

   The output of that statement looks similar to this.

   ```
    keyspace_name | table_name | cdc   | custom_properties
   ---------------+------------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      mykeyspace  |   mytable  | False | {'capacity_mode': {'last_update_to_pay_per_request_timestamp': '1741385668642', 'throughput_mode': 'PAY_PER_REQUEST'}, 'encryption_specification': {'encryption_type': 'AWS_OWNED_KMS_KEY'}, 'point_in_time_recovery': {'status': 'disabled'}}
   ```

------
#### [ CLI ]

**Disable a stream (CDC stream) with the Amazon CLI**

1. To disable a stream, you can use the following command.

   ```
   aws keyspaces update-table \
   --keyspace-name 'mykeyspace' \
   --table-name 'mytable' \
   --cdc-specification status=DISABLED
   ```

1. The output of the command shows the standard `update-table` response and looks similar to this example.

   ```
   { "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable" }
   ```

------
#### [ Console ]

**Disable a stream (CDC stream) with the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose a table from the list.

1. Choose the **Streams** tab.

1. Choose **Edit**.

1. Clear **Turn on streams**. 

1. Choose **Save changes** to disable the stream.

------

# View CDC streams in Amazon Keyspaces
<a name="keyspaces-view-cdc"></a>

To view or list all streams in a keyspace, you can query the table `system_schema_mcs.tables` in the system keyspace with a CQL statement, use the `get-stream` and `list-streams` commands with the Amazon CLI, or use the console.

For the required permissions, see [Configure permissions to work with CDC streams in Amazon Keyspaces](configure-cdc-permissions.md).

------
#### [ Cassandra Query Language (CQL) ]

**View CDC streams with CQL**
+ To monitor the CDC status of your table, you can use the following statement.

  ```
  SELECT custom_properties
  FROM system_schema_mcs.tables 
  WHERE keyspace_name='my_keyspace' and table_name='my_table';
  ```

  The output of the command looks similar to this.

  ```
  ...
  custom_properties
  ----------------------------------------------------------------------------------
  {'cdc_specification':{'status': 'Enabled', 'view_type': 'NEW_IMAGE', 'latest_stream_arn': 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/stream_label'}}
  ...
  ```

------
#### [ CLI ]

**View CDC streams with the Amazon CLI**

1. This example shows how to see the stream information for a table.

   ```
   aws keyspaces get-table \
   --keyspace-name 'my_keyspace' \
   --table-name 'my_table'
   ```

   The output of the command looks like this.

   ```
   {
       "keyspaceName": "my_keyspace",
       "tableName": "my_table",
       ... Other fields ...,
       "latestStreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/stream_label",
       "cdcSpecification": {
           "status": "ENABLED",
           "viewType": "NEW_AND_OLD_IMAGES"    
       }
   }
   ```

1. You can list all streams in your account in a specified Amazon Web Services Region. The following command is an example of this.

   ```
   aws keyspacesstreams list-streams --region us-east-1
   ```

   The output of the command could look similar to this.

   ```
    {
        "Streams": [
            {
                "StreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/ks_1/table/t1/stream/2023-05-11T21:21:33.291",
                "StreamLabel": "2023-05-11T21:21:33.291",
                "KeyspaceName": "ks_1",
                "TableName": "t1"
            },
            {
                "StreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/ks_1/table/t2/stream/2023-05-11T21:21:33.291",
                "StreamLabel": "2023-05-11T21:21:33.291",
                "KeyspaceName": "ks_1",
                "TableName": "t2"
            },
            {
                "StreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/ks_2/table/t1/stream/2023-05-11T21:21:33.291",
                "StreamLabel": "2023-05-11T21:21:33.291",
                "KeyspaceName": "ks_2",
                "TableName": "t1"
            }
        ]
    }
   ```

1. You can also list the CDC streams for a given keyspace using the following parameters. 

   ```
   aws keyspacesstreams list-streams --keyspace-name ks_1 --region us-east-1
   ```

   The output of the command looks similar to this.

   ```
   {
       "Streams": [
            {
                "StreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/ks_1/table/t1/stream/2023-05-11T21:21:33.291",
                "StreamLabel": "2023-05-11T21:21:33.291",
                "KeyspaceName": "ks_1",
                "TableName": "t1"
            },
            {
                "StreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/ks_1/table/t2/stream/2023-05-11T21:21:33.291",
                "StreamLabel": "2023-05-11T21:21:33.291",
                "KeyspaceName": "ks_1",
                "TableName": "t2"
            }
       ]
   }
   ```

1. You can also list the CDC streams for a given table using the following parameters. 

   ```
   aws keyspacesstreams list-streams --keyspace-name ks_1 --table-name t2 --region us-east-1
   ```

   The output of the command looks similar to this.

   ```
   {
       "Streams": [
            {
                "StreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/ks_1/table/t2/stream/2023-05-11T21:21:33.291",
                "StreamLabel": "2023-05-11T21:21:33.291",
                "KeyspaceName": "ks_1",
                "TableName": "t2"
            }
       ]
   }
   ```

------
#### [ Console ]

**View CDC streams in the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose a table from the list.

1. Choose the **Streams** tab to review the stream details.

------

# Access records in CDC streams in Amazon Keyspaces
<a name="keyspaces-records-cdc"></a>

To access the records in a stream, you use the [Amazon Keyspaces Streams API](https://docs.amazonaws.cn/keyspaces/latest/StreamsAPIReference/Welcome.html). The following section contains examples of how to access records using the Amazon CLI.

For the required permissions, see [Configure permissions to work with CDC streams in Amazon Keyspaces](configure-cdc-permissions.md).
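
Stream ARNs follow the pattern `arn:<partition>:cassandra:<region>:<account>:/keyspace/<keyspace>/table/<table>/stream/<stream_label>`. When your application routes records or leases by table, a small parser for this layout is handy. The following sketch (the helper name and returned field names are illustrative, not part of any Amazon SDK) splits an ARN into its components:

```python
def parse_stream_arn(stream_arn: str) -> dict:
    """Split an Amazon Keyspaces stream ARN into keyspace, table,
    and stream label."""
    # The resource part starts after ':/' and looks like
    # keyspace/<name>/table/<name>/stream/<label>
    resource = stream_arn.split(":/", 1)[1]
    parts = resource.split("/")
    if parts[0] != "keyspace" or parts[2] != "table" or parts[4] != "stream":
        raise ValueError("unexpected stream ARN layout")
    return {"keyspace": parts[1], "table": parts[3], "stream_label": parts[5]}
```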

**Access records in a stream using the Amazon CLI**

1. You can use the Amazon Keyspaces Streams API to access the change records of the stream. For more information, see [https://docs.amazonaws.cn/keyspaces/latest/StreamsAPIReference/Welcome.html](https://docs.amazonaws.cn/keyspaces/latest/StreamsAPIReference/Welcome.html). To retrieve the shards within the stream, you can use the `get-stream` API as shown in the following example.

   ```
   aws keyspacesstreams get-stream \
   --stream-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable/stream/STREAM_LABEL'
   ```

   The following is an example of the output.

   ```
   {
      "StreamArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable/stream/2023-05-11T21:21:33.291",
      "StreamStatus": "ENABLED",
      "StreamViewType": "NEW_AND_OLD_IMAGES",
      "CreationRequestDateTime": "<CREATION_TIME>",
      "KeyspaceName": "mykeyspace",
      "TableName": "mytable",
      "StreamLabel": "2023-05-11T21:21:33.291",
   "Shards": [
       {
           "SequenceNumberRange": {
               "EndingSequenceNumber": "<END_SEQUENCE_NUMBER>",
               "StartingSequenceNumber": "<START_SEQUENCE_NUMBER>"
           },
           "ShardId": "<SHARD_ID>"
       }
   ]
   }
   ```

1. To retrieve records from the stream, you start by getting a shard iterator, which provides the starting point for accessing records. To do this, you use one of the shards that the `get-stream` call returned in the previous step. To get the iterator, you can use the `get-shard-iterator` API. This example uses an iterator of type `TRIM_HORIZON`, which starts reading at the last untrimmed record (the oldest available data) in the shard.

   ```
   aws keyspacesstreams get-shard-iterator \
   --stream-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable/stream/STREAM_LABEL' \
   --shard-id 'SHARD_ID' \
   --shard-iterator-type 'TRIM_HORIZON'
   ```

   The output of the command looks like the following example.

   ```
   {
       "ShardIterator": "<SHARD_ITERATOR>" 
   }
   ```

1. To retrieve the CDC records using the `get-records` API, you can use the iterator returned in the last step. The following command is an example of this.

   ```
   aws keyspacesstreams get-records \
   --shard-iterator 'SHARD_ITERATOR' \
   --limit 100
   ```
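
The three calls above combine into a standard read loop: get a shard iterator, then call `get-records` repeatedly with the next iterator that each response returns, until no iterator comes back. The following sketch shows the loop against any client object that exposes these two operations; the method and field names mirror the CLI examples loosely and are illustrative, so check the Amazon SDK documentation for the exact names in your language:

```python
def read_shard(client, stream_arn, shard_id, iterator_type="TRIM_HORIZON"):
    """Drain one shard, returning all of its records in order."""
    response = client.get_shard_iterator(
        StreamArn=stream_arn, ShardId=shard_id, ShardIteratorType=iterator_type
    )
    iterator = response["ShardIterator"]
    records = []
    while iterator is not None:
        page = client.get_records(ShardIterator=iterator, MaxResults=100)
        records.extend(page.get("Records", []))
        # No next iterator means the shard has been fully consumed.
        iterator = page.get("NextShardIterator")
    return records
```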

# Use the Kinesis Client Library (KCL) to process Amazon Keyspaces streams
<a name="cdc_how-to-use-kcl"></a>

This topic describes how to use the Kinesis Client Library (KCL) to consume and process data from Amazon Keyspaces change data capture (CDC) streams.

Working with the Kinesis Client Library (KCL) instead of directly with the Amazon Keyspaces Streams API provides many benefits, for example:
+ Built-in shard lineage tracking and iterator handling. 
+ Automatic load balancing across workers.
+ Fault tolerance and recovery from worker failures.
+ Checkpointing to track processing progress.
+ Adaptation to changes in stream capacity.
+ Simplified distributed computing for processing CDC records.

The following section outlines why and how to use the Kinesis Client Library (KCL) to process streams and provides an example for processing an Amazon Keyspaces CDC stream with the KCL.

For information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).

## What is the Kinesis Client Library?
<a name="cdc-kcl-what-is"></a>

The Kinesis Client Library (KCL) is a standalone Java software library designed to simplify the process of consuming and processing data from streams. KCL handles many of the complex tasks associated with distributed computing, letting you focus on implementing your business logic when processing stream data. KCL manages activities such as load balancing across multiple workers, responding to worker failures, checkpointing processed records, and responding to changes in the number of shards in the stream.

To process Amazon Keyspaces CDC streams, you can use the design patterns found in the KCL for working with stream shards and stream records. The KCL simplifies coding by providing useful abstractions above the low-level Kinesis Data Streams API. For more information about the KCL, see [Develop consumers with KCL](https://docs.amazonaws.cn/kinesis/latest/dev/develop-kcl-consumers.html) in the *Amazon Kinesis Data Streams Developer Guide*.

To write applications using the KCL, you use the Amazon Keyspaces Streams Kinesis Adapter. The Kinesis Adapter implements the Kinesis Data Streams interface so that you can use the KCL to consume and process records from Amazon Keyspaces streams. For instructions on how to set up and install the Amazon Keyspaces Streams Kinesis Adapter, see the [GitHub](https://github.com/aws/keyspaces-streams-kinesis-adapter) repository.

The following diagram shows how these libraries interact with each other.

![\[Interaction between a client application and Kinesis Data Streams, KCL, the Amazon Keyspaces Streams Kinesis Adapter, and Amazon Keyspaces APIs when processing Amazon Keyspaces CDC stream records.\]](http://docs.amazonaws.cn/en_us/keyspaces/latest/devguide/images/keyspaces-streams-kinesis-adapter.png)

KCL is frequently updated to incorporate newer versions of underlying libraries, security improvements, and bug fixes. We recommend that you use the latest version of KCL to avoid known issues and benefit from all latest improvements. To find the latest KCL version, see [KCL GitHub repository](https://github.com/awslabs/amazon-kinesis-client).

## KCL concepts
<a name="cdc-kcl-concepts"></a>

Before you implement a consumer application using KCL, you should understand the following concepts:

**KCL consumer application**  
A KCL consumer application is a program that processes data from an Amazon Keyspaces CDC stream. The KCL acts as an intermediary between your consumer application code and the Amazon Keyspaces CDC stream.

**Worker**  
A worker is an execution unit of your KCL consumer application that processes data from the Amazon Keyspaces CDC stream. Your application can run multiple workers distributed across multiple instances.

**Record processor**  
A record processor is the logic in your application that processes data from a shard in the Amazon Keyspaces CDC stream. A record processor is instantiated by a worker for each shard it manages.

**Lease**  
A lease represents the processing responsibility for a shard. Workers use leases to coordinate which worker is processing which shard. KCL stores lease data in a table in Amazon DynamoDB.

**Checkpoint**  
A checkpoint is a record of the position in the shard up to which the record processor has successfully processed records. Checkpointing enables your application to resume processing from where it left off if a worker fails.

With the Amazon Keyspaces Kinesis adapter in place, you can begin developing against the KCL interface, with the API calls seamlessly directed at the Amazon Keyspaces stream endpoint. For a list of available endpoints, see [How to access CDC stream endpoints in Amazon Keyspaces](CDC_access-endpoints.md).

When your application starts, it calls the KCL to instantiate a worker. You must provide the worker with configuration information for the application, such as the stream descriptor and Amazon credentials, and the name of a record processor class that you provide. As it runs the code in the record processor, the worker performs the following tasks:
+ Connects to the stream
+ Enumerates the shards within the stream
+ Coordinates shard associations with other workers (if any)
+ Instantiates a record processor for every shard it manages
+ Pulls records from the stream
+ Pushes the records to the corresponding record processor
+ Checkpoints processed records
+ Balances shard-worker associations when the worker instance count changes
+ Balances shard-worker associations when shards are split

# Implement a KCL consumer application for Amazon Keyspaces CDC streams
<a name="cdc-kcl-implementation"></a>

This topic provides a step-by-step guide to implementing a KCL consumer application to process Amazon Keyspaces CDC streams.

1. Prerequisites: Before you begin, ensure you have:
   + An Amazon Keyspaces table with a CDC stream
   + Required IAM permissions for the IAM principal to access the Amazon Keyspaces CDC stream, create and access DynamoDB tables for KCL stream processing, and permissions to publish metrics to CloudWatch. For more information and a policy example, see [Permissions to process Amazon Keyspaces CDC streams with the Kinesis Client Library (KCL)](configure-cdc-permissions.md#cdc-permissions-kcl).
   + Valid Amazon credentials set up in your local configuration. For more information, see [Store access keys for programmatic access](aws.credentials.manage.md).
   + Java Development Kit (JDK) 8 or later
   + Requirements listed in the [Readme](https://github.com/aws/keyspaces-streams-kinesis-adapter) on GitHub.

1. <a name="cdc-kcl-add-dependencies"></a>In this step, you add the KCL and Amazon Keyspaces Streams Kinesis Adapter dependencies to your project. For Maven, add the following to your `pom.xml`:

   ```
   <dependencies>
       <dependency>
           <groupId>software.amazon.kinesis</groupId>
           <artifactId>amazon-kinesis-client</artifactId>
           <version>3.1.0</version>
       </dependency>
       <dependency>
           <groupId>software.amazon.keyspaces</groupId>
           <artifactId>keyspaces-streams-kinesis-adapter</artifactId>
           <version>1.0.0</version>
       </dependency>
   </dependencies>
   ```
**Note**  
Always check for the latest version of KCL at the [KCL GitHub repository](https://github.com/awslabs/amazon-kinesis-client).

1. <a name="cdc-kcl-factory"></a>Create a record processor class that implements `KeyspacesStreamsShardRecordProcessor`, as shown in the following example:

   ```
   import software.amazon.awssdk.services.keyspacesstreams.model.Record;
   import software.amazon.keyspaces.streamsadapter.adapter.KeyspacesStreamsClientRecord;
   import software.amazon.keyspaces.streamsadapter.model.KeyspacesStreamsProcessRecordsInput;
   import software.amazon.keyspaces.streamsadapter.processor.KeyspacesStreamsShardRecordProcessor;
   import software.amazon.kinesis.lifecycle.events.InitializationInput;
   import software.amazon.kinesis.lifecycle.events.LeaseLostInput;
   import software.amazon.kinesis.lifecycle.events.ShardEndedInput;
   import software.amazon.kinesis.lifecycle.events.ShutdownRequestedInput;
   import software.amazon.kinesis.processor.RecordProcessorCheckpointer;
   
   public class RecordProcessor implements KeyspacesStreamsShardRecordProcessor {
       private String shardId;
   
       @Override
       public void initialize(InitializationInput initializationInput) {
           this.shardId = initializationInput.shardId();
           System.out.println("Initializing record processor for shard: " + shardId);
       }
   
       @Override
       public void processRecords(KeyspacesStreamsProcessRecordsInput processRecordsInput) {
           try {
               for (KeyspacesStreamsClientRecord record : processRecordsInput.records()) {
                   Record keyspacesRecord = record.getRecord();
                   System.out.println("Received record: " + keyspacesRecord);
               }
   
               if (!processRecordsInput.records().isEmpty()) {
                   RecordProcessorCheckpointer checkpointer = processRecordsInput.checkpointer();
                   try {
                       checkpointer.checkpoint();
                       System.out.println("Checkpoint successful for shard: " + shardId);
                   } catch (Exception e) {
                       System.out.println("Error while checkpointing for shard: " + shardId + " " + e);
                   }
               }
           } catch (Exception e) {
               System.out.println("Error processing records for shard: " + shardId + " " + e);
           }
       }
   
       @Override
       public void leaseLost(LeaseLostInput leaseLostInput) {
           System.out.println("Lease lost for shard: " + shardId);
       }
   
       @Override
       public void shardEnded(ShardEndedInput shardEndedInput) {
           System.out.println("Shard ended: " + shardId);
           try {
               // This is required. Checkpoint at the end of the shard
               shardEndedInput.checkpointer().checkpoint();
               System.out.println("Final checkpoint successful for shard: " + shardId);
           } catch (Exception e) {
               System.out.println("Error while final checkpointing for shard: " + shardId + " " + e);
               throw new RuntimeException("Error while final checkpointing", e);
           }
       }
   
       @Override
       public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
           System.out.println("Shutdown requested for shard " + shardId);
           try {
               shutdownRequestedInput.checkpointer().checkpoint();
           } catch (Exception e) {
               System.out.println("Error while checkpointing on shutdown for shard: " + shardId + " " + e);
           }
       }
   }
   ```

1. <a name="cdc-kcl-record-factory"></a>Create a record processor factory class that produces record processor instances, as shown in the following example.

   ```
   import software.amazon.kinesis.processor.ShardRecordProcessor;
   import software.amazon.kinesis.processor.ShardRecordProcessorFactory;
   
   public class RecordProcessorFactory implements ShardRecordProcessorFactory {
       @Override
       public ShardRecordProcessor shardRecordProcessor() {
           System.out.println("Creating new RecordProcessor");
           return new RecordProcessor();
       }
   }
   ```

1. <a name="cdc-kcl-consumer"></a>In this step, you create the base class that configures KCL 3.x and the Amazon Keyspaces Streams Kinesis Adapter.

   ```
   import com.example.KCLExample.utils.RecordProcessorFactory;
   import software.amazon.keyspaces.streamsadapter.AmazonKeyspacesStreamsAdapterClient;
   import software.amazon.keyspaces.streamsadapter.StreamsSchedulerFactory;
   import java.util.Arrays;
   import java.util.List;
   import java.util.concurrent.ExecutionException;
   
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
   import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
   import software.amazon.awssdk.services.dynamodb.model.DeleteTableRequest;
   import software.amazon.awssdk.services.dynamodb.model.DeleteTableResponse;
   import software.amazon.awssdk.services.keyspacesstreams.KeyspacesStreamsClient;
   import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
   import software.amazon.kinesis.common.ConfigsBuilder;
   import software.amazon.kinesis.common.InitialPositionInStream;
   import software.amazon.kinesis.common.InitialPositionInStreamExtended;
   import software.amazon.kinesis.coordinator.CoordinatorConfig;
   import software.amazon.kinesis.coordinator.Scheduler;
   import software.amazon.kinesis.leases.LeaseManagementConfig;
   import software.amazon.kinesis.processor.ProcessorConfig;
   import software.amazon.kinesis.processor.StreamTracker;
   import software.amazon.kinesis.retrieval.polling.PollingConfig;
   
   public class KCLTestBase {
   
       protected KeyspacesStreamsClient streamsClient;
       protected KinesisAsyncClient adapterClient;
       protected DynamoDbAsyncClient dynamoDbAsyncClient;
       protected CloudWatchAsyncClient cloudWatchClient;
       protected Region region;
       protected RecordProcessorFactory recordProcessorFactory;
       protected Scheduler scheduler;
       protected Thread schedulerThread;
   
       public void baseSetUp() {
           recordProcessorFactory = new RecordProcessorFactory();
           setupKCLBase();
       }
   
       protected void setupKCLBase() {
           region = Region.US_EAST_1;
   
           streamsClient = KeyspacesStreamsClient.builder()
                   .region(region)
                   .build();
           adapterClient = new AmazonKeyspacesStreamsAdapterClient(
                   streamsClient,
                   region);
           dynamoDbAsyncClient = DynamoDbAsyncClient.builder()
                   .region(region)
                   .build();
           cloudWatchClient = CloudWatchAsyncClient.builder()
                   .region(region)
                   .build();
       }
   
       protected void startScheduler(Scheduler scheduler) {
           this.scheduler = scheduler;
           schedulerThread = new Thread(() -> scheduler.run());
           schedulerThread.start();
       }
   
       protected void shutdownScheduler() {
           if (scheduler != null) {
               scheduler.shutdown();
               try {
                   schedulerThread.join(30000);
               } catch (InterruptedException e) {
                   System.out.println("Error while shutting down scheduler " + e);
               }
           }
       }
   
       protected Scheduler createScheduler(String streamArn, String leaseTableName) {
           String workerId = "worker-" + System.currentTimeMillis();
   
           // Create ConfigsBuilder
           ConfigsBuilder configsBuilder = createConfigsBuilder(streamArn, workerId, leaseTableName);
   
           // Configure retrieval config for polling
           PollingConfig pollingConfig = new PollingConfig(streamArn, adapterClient);
   
           // Create the Scheduler
           return StreamsSchedulerFactory.createScheduler(
                   configsBuilder.checkpointConfig(),
                   configsBuilder.coordinatorConfig(),
                   configsBuilder.leaseManagementConfig(),
                   configsBuilder.lifecycleConfig(),
                   configsBuilder.metricsConfig(),
                   configsBuilder.processorConfig(),
                   configsBuilder.retrievalConfig().retrievalSpecificConfig(pollingConfig),
                   streamsClient,
                   region
           );
       }
   
       private ConfigsBuilder createConfigsBuilder(String streamArn, String workerId, String leaseTableName) {
           ConfigsBuilder configsBuilder = new ConfigsBuilder(
                   streamArn,
                   leaseTableName,
                   adapterClient,
                   dynamoDbAsyncClient,
                   cloudWatchClient,
                   workerId,
                   recordProcessorFactory);
   
           configureCoordinator(configsBuilder.coordinatorConfig());
           configureLeaseManagement(configsBuilder.leaseManagementConfig());
           configureProcessor(configsBuilder.processorConfig());
           configureStreamTracker(configsBuilder, streamArn);
   
           return configsBuilder;
       }
   
       private void configureCoordinator(CoordinatorConfig config) {
           config.skipShardSyncAtWorkerInitializationIfLeasesExist(true)
                   .parentShardPollIntervalMillis(1000)
                   .shardConsumerDispatchPollIntervalMillis(500);
       }
   
       private void configureLeaseManagement(LeaseManagementConfig config) {
           config.shardSyncIntervalMillis(0)
                   .leasesRecoveryAuditorInconsistencyConfidenceThreshold(0)
                   .leasesRecoveryAuditorExecutionFrequencyMillis(5000)
                   .leaseAssignmentIntervalMillis(1000L);
       }
   
       private void configureProcessor(ProcessorConfig config) {
           config.callProcessRecordsEvenForEmptyRecordList(true);
       }
   
       private void configureStreamTracker(ConfigsBuilder configsBuilder, String streamArn) {
           StreamTracker streamTracker = StreamsSchedulerFactory.createSingleStreamTracker(
                   streamArn,
                   InitialPositionInStreamExtended.newInitialPosition(InitialPositionInStream.TRIM_HORIZON)
           );
           configsBuilder.streamTracker(streamTracker);
       }
   
       public void deleteAllDdbTables(String baseTableName) {
           List<String> tablesToDelete = Arrays.asList(
                   baseTableName,
                   baseTableName + "-CoordinatorState",
                   baseTableName + "-WorkerMetricStats"
           );
   
           for (String tableName : tablesToDelete) {
               deleteTable(tableName);
           }
       }
   
       private void deleteTable(String tableName) {
           DeleteTableRequest deleteTableRequest = DeleteTableRequest.builder()
                   .tableName(tableName)
                   .build();
   
           try {
               DeleteTableResponse response = dynamoDbAsyncClient.deleteTable(deleteTableRequest).get();
               System.out.println("Table deletion response " + response);
           } catch (InterruptedException | ExecutionException e) {
               System.out.println("Error deleting table: " + tableName + " " + e);
           }
       }
   }
   ```

1. <a name="cdc-kcl-record-processor"></a>In this step, you implement the main application class that creates the scheduler and starts processing change events.

   ```
   import software.amazon.kinesis.coordinator.Scheduler;
   
   public class KCLTest {
   
       private static final int APP_RUNTIME_SECONDS = 1800;
       private static final int SLEEP_INTERNAL_MS = 60*1000;
   
       public static void main(String[] args) {
           KCLTestBase kclTestBase;
   
           kclTestBase = new KCLTestBase();
           kclTestBase.baseSetUp();
   
           // Create and start scheduler
           String leaseTableName = generateUniqueApplicationName();
   
           // Update below to your Stream ARN
           String streamArn = "arn:aws:cassandra:us-east-1:759151643516:/keyspace/cdc_sample_test/table/test_kcl_bool/stream/2025-07-01T15:52:57.529";
           Scheduler scheduler = kclTestBase.createScheduler(streamArn, leaseTableName);
           kclTestBase.startScheduler(scheduler);
   
           // Wait for specified time before shutting down - KCL applications are designed to run forever, however in this
           // example we will shut it down after APP_RUNTIME_SECONDS
           long startTime = System.currentTimeMillis();
           long endTime = startTime + (APP_RUNTIME_SECONDS * 1000);
           while (System.currentTimeMillis() < endTime) {
               try {
                   // Print and sleep every minute
                   Thread.sleep(SLEEP_INTERNAL_MS);
                   System.out.println("Application is running");
               } catch (InterruptedException e) {
                   System.out.println("Interrupted while waiting for records");
                   Thread.currentThread().interrupt();
                   break;
               }
           }
   
           // Stop the scheduler
           kclTestBase.shutdownScheduler();
           kclTestBase.deleteAllDdbTables(leaseTableName);
       }
   
       public static String generateUniqueApplicationName() {
           String timestamp = String.valueOf(System.currentTimeMillis());
           String randomString = java.util.UUID.randomUUID().toString().substring(0, 8);
           return String.format("KCL-App-%s-%s", timestamp, randomString);
       }
   }
   ```

## Best practices
<a name="cdc-kcl-best-practices"></a>

Follow these best practices when using KCL with Amazon Keyspaces CDC streams:

**Error handling**  
Implement robust error handling in your record processor to handle exceptions gracefully. Consider implementing retry logic for transient failures.

**Checkpointing frequency**  
Balance checkpointing frequency to minimize duplicate processing while ensuring reasonable progress tracking. Too frequent checkpointing can impact performance, while too infrequent checkpointing can lead to more reprocessing if a worker fails.
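One common way to strike this balance is to checkpoint on a time interval rather than after every batch. The following sketch (plain Java, separate from the KCL classes shown earlier; the class and method names are illustrative) shows the decision logic you might wrap around a call to `checkpointer.checkpoint()` in your record processor:

```java
// Illustrative helper: decides when to checkpoint based on elapsed time.
// Call shouldCheckpoint() after each processed batch, and checkpoint only
// when it returns true, so checkpoint writes happen at most once per interval.
public class CheckpointThrottle {
    private final long intervalMillis;
    private long lastCheckpointMillis;

    public CheckpointThrottle(long intervalMillis, long nowMillis) {
        this.intervalMillis = intervalMillis;
        this.lastCheckpointMillis = nowMillis;
    }

    /** Returns true when at least intervalMillis have passed since the last checkpoint. */
    public boolean shouldCheckpoint(long nowMillis) {
        if (nowMillis - lastCheckpointMillis >= intervalMillis) {
            lastCheckpointMillis = nowMillis;
            return true;
        }
        return false;
    }
}
```

A shorter interval reduces reprocessing after a worker failure; a longer interval reduces write load on the lease table.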

**Worker scaling**  
Scale the number of workers based on the number of shards in your CDC stream. A good starting point is to have one worker per shard, but you may need to adjust based on your processing requirements.

**Monitoring**  
Use CloudWatch metrics provided by KCL to monitor the health and performance of your consumer application. Key metrics include processing latency, checkpoint age, and lease counts.

**Testing**  
Test your consumer application thoroughly, including scenarios like worker failures, stream resharding, and varying load conditions.

## Using KCL with non-Java languages
<a name="cdc-kcl-non-java"></a>

While KCL is primarily a Java library, you can use it with other programming languages through the MultiLangDaemon. The MultiLangDaemon is a Java-based daemon that manages the interaction between your non-Java record processor and the KCL.

KCL provides support for the following languages:
+ Python
+ Ruby
+ Node.js
+ .NET

For more information about using KCL with non-Java languages, see the [KCL MultiLangDaemon documentation](https://github.com/awslabs/amazon-kinesis-client/tree/master/amazon-kinesis-client-multilang).

## Troubleshooting
<a name="cdc-kcl-troubleshooting"></a>

This section provides solutions to common issues you might encounter when using KCL with Amazon Keyspaces CDC streams.

**Slow processing**  
If your consumer application is processing records slowly, consider:  
+ Increasing the number of worker instances
+ Optimizing your record processing logic
+ Checking for bottlenecks in downstream systems

**Duplicate processing**  
If you're seeing duplicate processing of records, check your checkpointing logic. Ensure you're checkpointing after successfully processing records.

**Worker failures**  
If workers are failing frequently, check:  
+ Resource constraints (CPU, memory)
+ Network connectivity issues
+ Permissions issues

**Lease table issues**  
If you're experiencing issues with the KCL lease table:  
+ Check that your application has appropriate permissions to access the DynamoDB lease table
+ Verify that the table has sufficient provisioned throughput

# Working with partitioners in Amazon Keyspaces
<a name="working-with-partitioners"></a>

In Apache Cassandra, partitioners control which nodes data is stored on in the cluster. Partitioners create a numeric token using a hashed value of the partition key. Cassandra uses this token to distribute data across nodes. Clients can also use these tokens in `SELECT` operations and `WHERE` clauses to optimize read and write operations. For example, clients can efficiently perform parallel queries on large tables by specifying distinct token ranges to query in each parallel job. 
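
For example, a parallel full-table scan can divide the signed 64-bit Murmur3 token range into contiguous subranges, one per job, and each job scans only rows whose partition-key token falls in its subrange (`WHERE token(pk) >= ? AND token(pk) <= ?`). The following sketch (plain Java; the class name is illustrative and not part of any Amazon Keyspaces library) computes those subrange boundaries:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper: splits the full Murmur3 token range
// [Long.MIN_VALUE, Long.MAX_VALUE] into n contiguous subranges for parallel scans.
public class TokenRangeSplitter {
    public static List<long[]> split(int n) {
        List<long[]> ranges = new ArrayList<>();
        // The total span (2^64 tokens) overflows long, so compute the step with BigInteger.
        java.math.BigInteger min = java.math.BigInteger.valueOf(Long.MIN_VALUE);
        java.math.BigInteger max = java.math.BigInteger.valueOf(Long.MAX_VALUE);
        java.math.BigInteger step = max.subtract(min).add(java.math.BigInteger.ONE)
                .divide(java.math.BigInteger.valueOf(n));
        java.math.BigInteger start = min;
        for (int i = 0; i < n; i++) {
            // The last range absorbs any remainder so the full token space is covered.
            java.math.BigInteger end = (i == n - 1)
                    ? max
                    : start.add(step).subtract(java.math.BigInteger.ONE);
            ranges.add(new long[] { start.longValueExact(), end.longValueExact() });
            start = end.add(java.math.BigInteger.ONE);
        }
        return ranges;
    }
}
```

Each `long[]` pair can then be bound to the two placeholders of a `token(...)` range query in a separate worker.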

Amazon Keyspaces provides three different partitioners.

**Murmur3Partitioner (Default)**  
Apache Cassandra-compatible `Murmur3Partitioner`. The `Murmur3Partitioner` is the default Cassandra partitioner in Amazon Keyspaces and in Cassandra 1.2 and later versions.

**RandomPartitioner**  
Apache Cassandra-compatible `RandomPartitioner`. The `RandomPartitioner` is the default Cassandra partitioner for versions earlier than Cassandra 1.2.

**Keyspaces Default Partitioner**  
The `DefaultPartitioner` returns the same `token` function results as the `RandomPartitioner`.

The partitioner setting is applied per Region at the account level. For example, if you change the partitioner in US East (N. Virginia), the change is applied to all tables in the same account in this Region. You can safely change your partitioner at any time. Note that the configuration change takes approximately 10 minutes to complete. You do not need to reload your Amazon Keyspaces data when you change the partitioner setting. Clients will automatically use the new partitioner setting the next time they connect. 

# How to change the partitioner in Amazon Keyspaces
<a name="working-with-partitioners-change"></a>

You can change the partitioner by using the Amazon Web Services Management Console or Cassandra Query Language (CQL).

------
#### [ Amazon Web Services Management Console ]

**To change the partitioner using the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Configuration**.

1. On the **Configuration** page, go to **Edit partitioner**.

1. Select the partitioner compatible with your version of Cassandra. The partitioner change takes approximately 10 minutes to apply.
**Note**  
After the configuration change is complete, you have to disconnect and reconnect to Amazon Keyspaces for requests to use the new partitioner.

------
#### [ Cassandra Query Language (CQL) ]

1. To see which partitioner is configured for the account, you can use the following query.

   ```
   SELECT partitioner from system.local;
   ```

   If the partitioner hasn't been changed, the query has the following output.

   ```
   partitioner
   --------------------------------------------
   com.amazonaws.cassandra.DefaultPartitioner
   ```

1. To update the partitioner to the `Murmur3` partitioner, you can use the following statement.

   ```
   UPDATE system.local set partitioner='org.apache.cassandra.dht.Murmur3Partitioner' where key='local';
   ```

1. Note that this configuration change takes approximately 10 minutes to complete. To confirm that the partitioner has been set, you can run the `SELECT` query again. Note that due to eventual read consistency, the response might not reflect the results of the recently completed partitioner change yet. If you repeat the `SELECT` operation again after a short time, the response should return the latest data.

   ```
   SELECT partitioner from system.local;
   ```
**Note**  
You have to disconnect and reconnect to Amazon Keyspaces so that requests use the new partitioner.

------

# Client-side timestamps in Amazon Keyspaces
<a name="client-side-timestamps"></a>

In Amazon Keyspaces, client-side timestamps are Cassandra-compatible timestamps that are persisted for each cell in your table. You can use client-side timestamps for conflict resolution by letting your client applications determine the order of writes. For example, when clients of a globally distributed application make updates to the same data, client-side timestamps persist the order in which the updates were made on the clients. Amazon Keyspaces uses these timestamps to process the writes. 

Amazon Keyspaces client-side timestamps are fully managed. You don’t have to manage low-level system settings such as clean-up and compaction strategies. 

When you delete data, the rows are marked for deletion with a tombstone. Amazon Keyspaces removes tombstoned data automatically (typically within 10 days) without impacting your application performance or availability. Tombstoned data isn't available for data manipulation language (DML) statements. As you continue to perform reads and writes on rows that contain tombstoned data, the tombstoned data continues to count towards storage, read capacity units (RCUs), and write capacity units (WCUs) until it's deleted from storage. 

After client-side timestamps have been turned on for a table, you can specify a timestamp with the `USING TIMESTAMP` clause in your Data Manipulation Language (DML) CQL query. For more information, see [Use client-side timestamps in queries in Amazon Keyspaces](client-side-timestamps-how-to-queries.md). If you do not specify a timestamp in your CQL query, Amazon Keyspaces uses the timestamp passed by your client driver. If the client driver doesn’t supply timestamps, Amazon Keyspaces assigns a cell-level timestamp automatically, because timestamps can't be `NULL`. To query for timestamps, you can use the `WRITETIME` function in your DML statement. 
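
For example, after client-side timestamps are turned on, a write can set an explicit timestamp and a later read can retrieve it with `WRITETIME`. This sketch uses the `my_keyspace.my_table` schema shown later in this chapter; the timestamp value is illustrative (microseconds since the epoch):

```
INSERT INTO my_keyspace.my_table (userid, time, subject)
VALUES (uuid(), now(), 'hello')
USING TIMESTAMP 1669069624000000;

SELECT subject, WRITETIME(subject) FROM my_keyspace.my_table;
```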

Amazon Keyspaces doesn't charge extra to turn on client-side timestamps. However, with client-side timestamps you store and write additional data for each value in your row. This can lead to additional storage usage and in some cases additional throughput usage. For more information about Amazon Keyspaces pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).

When client-side timestamps are turned on in Amazon Keyspaces, every column of every row stores a timestamp. These timestamps take up approximately 20–40 bytes (depending on your data), and contribute to the storage and throughput cost for the row. These metadata bytes also count towards your 1-MB row size quota. To determine the overall increase in storage space (to ensure that the row size stays under 1 MB), consider the number of columns in your table and the number of collection elements in each row. For example, if a table has 20 columns, with each column storing 40 bytes of data, the size of the row increases from 800 bytes to 1200 bytes. For more information on how to estimate the size of a row, see [Estimate row size in Amazon Keyspaces](calculating-row-size.md). In addition to the extra 400 bytes for storage, in this example, the number of write capacity units (WCUs) consumed per write increases from 1 WCU to 2 WCUs. For more information on how to calculate read and write capacity, see [Configure read/write capacity modes in Amazon Keyspaces](ReadWriteCapacityMode.md).
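
The arithmetic from this example can be sketched as follows (plain Java; the 20-byte-per-column timestamp overhead is an assumption at the low end of the 20–40 byte range, and the class is illustrative):

```java
// Illustrative estimate of the row-size and write-capacity impact of
// client-side timestamps, using the figures from the example above.
public class TimestampOverheadEstimate {
    // Assumed per-column timestamp overhead (low end of the 20-40 byte range).
    static final int TIMESTAMP_BYTES_PER_COLUMN = 20;

    /** Row size in bytes once a timestamp is stored for every column. */
    public static int rowSizeWithTimestamps(int columns, int dataBytesPerColumn) {
        return columns * (dataBytesPerColumn + TIMESTAMP_BYTES_PER_COLUMN);
    }

    /** WCUs consumed per write: 1 WCU per 1 KB of row size, rounded up. */
    public static int wcusPerWrite(int rowSizeBytes) {
        return (rowSizeBytes + 1023) / 1024;
    }
}
```

With 20 columns of 40 bytes each, the estimated row size grows from 800 to 1,200 bytes, which pushes each write from 1 WCU to 2 WCUs.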

After client-side timestamps have been turned on for a table, you can't turn them off. 

To learn more about how to use client-side timestamps in queries, see [Use client-side timestamps in queries in Amazon Keyspaces](client-side-timestamps-how-to-queries.md).

**Topics**
+ [How Amazon Keyspaces client-side timestamps integrate with Amazon services](#client-side-timestamps_integration)
+ [Create a new table with client-side timestamps in Amazon Keyspaces](client-side-timestamps-create-new-table.md)
+ [Configure client-side timestamps for a table in Amazon Keyspaces](client-side-timestamps-existing-table.md)
+ [Use client-side timestamps in queries in Amazon Keyspaces](client-side-timestamps-how-to-queries.md)

## How Amazon Keyspaces client-side timestamps integrate with Amazon services
<a name="client-side-timestamps_integration"></a>

The following client-side timestamps metric is available in Amazon CloudWatch to enable continuous monitoring.
+ `SystemReconciliationDeletes` – The number of delete operations required to remove tombstoned data.

For more information about how to monitor CloudWatch metrics, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).

When you use Amazon CloudFormation, you can enable client-side timestamps when creating an Amazon Keyspaces table. For more information, see the [Amazon CloudFormation User Guide](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-cassandra-table.html). 

# Create a new table with client-side timestamps in Amazon Keyspaces
<a name="client-side-timestamps-create-new-table"></a>

Follow these examples to create a new Amazon Keyspaces table with client-side timestamps enabled using the Amazon Web Services Management Console, Cassandra Query Language (CQL), or the Amazon Command Line Interface (Amazon CLI).

------
#### [ Console ]

**Create a new table with client-side timestamps (console)**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose **Create table**.

1. On the **Create table** page in the **Table details** section, select a keyspace and provide a name for the new table.

1. In the **Schema** section, create the schema for your table.

1. In the **Table settings** section, choose **Customize settings**.

1. Continue to **Client-side timestamps**.

   Choose **Turn on client-side timestamps** to turn on client-side timestamps for the table. 

1. Choose **Create table**. Your table is created with client-side timestamps turned on.

------
#### [ Cassandra Query Language (CQL) ]

**Create a new table using CQL**

1. To create a new table with client-side timestamps enabled using CQL, you can use the following example.

   ```
   CREATE TABLE my_keyspace.my_table (
      userid uuid,
      time timeuuid,
      subject text,
      body text,
      user inet,
      PRIMARY KEY (userid, time)
   ) WITH CUSTOM_PROPERTIES = {'client_side_timestamps': {'status': 'enabled'}};
   ```

1. To confirm the client-side timestamps settings for the new table, use a `SELECT` statement to review the `custom_properties` as shown in the following example. 

   ```
   SELECT custom_properties from system_schema_mcs.tables where keyspace_name = 'my_keyspace' and table_name = 'my_table';
   ```

   The output of this statement shows the status for client-side timestamps.

   ```
   'client_side_timestamps': {'status': 'enabled'}
   ```

------
#### [ Amazon CLI ]

**Create a new table using the Amazon CLI**

1. To create a new table with client-side timestamps enabled, you can use the following example.

   ```
   ./aws keyspaces create-table \
   --keyspace-name my_keyspace \
   --table-name my_table \
   --client-side-timestamps 'status=ENABLED' \
   --schema-definition 'allColumns=[{name=id,type=int},{name=date,type=timestamp},{name=name,type=text}],partitionKeys=[{name=id}]'
   ```

1. To confirm that client-side timestamps are turned on for the new table, run the following code.

   ```
   ./aws keyspaces get-table \
   --keyspace-name my_keyspace \
   --table-name my_table
   ```

   The output should look similar to this example.

   ```
   {
       "keyspaceName": "my_keyspace",
       "tableName": "my_table",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table",
       "creationTimestamp": 1662681206.032,
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "id",
                   "type": "int"
               },
               {
                   "name": "date",
                   "type": "timestamp"
               },
               {
                   "name": "name",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "id"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": 1662681206.032
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "clientSideTimestamps": {
           "status": "ENABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       }
   }
   ```

------

# Configure client-side timestamps for a table in Amazon Keyspaces
<a name="client-side-timestamps-existing-table"></a>

Follow these examples to turn on client-side timestamps for existing tables using the Amazon Web Services Management Console, Cassandra Query Language (CQL), or the Amazon Command Line Interface.

------
#### [ Console ]

**To turn on client-side timestamps for an existing table (console)**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. Choose the table that you want to update, and then choose the **Additional settings** tab.

1. On the **Additional settings** tab, go to **Modify client-side timestamps** and select **Turn on client-side timestamps**.

1. Choose **Save changes** to change the settings of the table.

------
#### [ Cassandra Query Language (CQL) ]

**Using a CQL statement**

1. Turn on client-side timestamps for an existing table with the `ALTER TABLE` CQL statement.

   ```
   ALTER TABLE my_keyspace.my_table WITH custom_properties = {'client_side_timestamps': {'status': 'enabled'}};
   ```

1. To confirm the client-side timestamp settings for the table, use a `SELECT` statement to review the `custom_properties` as shown in the following example. 

   ```
   SELECT custom_properties from system_schema_mcs.tables where keyspace_name = 'my_keyspace' and table_name = 'my_table';
   ```

   The output of this statement shows the status for client-side timestamps.

   ```
   'client_side_timestamps': {'status': 'enabled'}
   ```

------
#### [ Amazon CLI ]

**Using the Amazon CLI**

1. You can turn on client-side timestamps for an existing table with the Amazon CLI, as shown in the following example.

   ```
   ./aws keyspaces update-table \
   --keyspace-name my_keyspace \
   --table-name my_table \
   --client-side-timestamps 'status=ENABLED'
   ```

1. To confirm that client-side timestamps are turned on for the table, run the following code.

   ```
   ./aws keyspaces get-table \
   --keyspace-name my_keyspace \
   --table-name my_table
   ```

   The output should look similar to this example and state the status for client-side timestamps as `ENABLED`.

   ```
   {
       "keyspaceName": "my_keyspace",
       "tableName": "my_table",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table",
       "creationTimestamp": 1662681312.906,
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "id",
                   "type": "int"
               },
               {
                   "name": "date",
                   "type": "timestamp"
               },
               {
                   "name": "name",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "id"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": 1662681312.906
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "clientSideTimestamps": {
           "status": "ENABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       }
   }
   ```

------

# Use client-side timestamps in queries in Amazon Keyspaces
<a name="client-side-timestamps-how-to-queries"></a>

After you have turned on client-side timestamps, you can pass the timestamp in your `INSERT`, `UPDATE`, and `DELETE` statements with the `USING TIMESTAMP` clause. 

The timestamp value is a `bigint` representing the number of microseconds since the standard base time known as the epoch: January 1, 1970, at 00:00:00 GMT. A timestamp supplied by the client must fall within the range of 2 days in the past to 5 minutes in the future of the current wall clock time.
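
You can check this window client-side before issuing a write. The following Python sketch is illustrative only; `is_valid_client_timestamp` and `current_timestamp_us` are hypothetical helpers that apply the documented bounds, not part of Amazon Keyspaces or any driver.

```python
import time

# Documented bounds: a client-supplied timestamp must fall between
# 2 days in the past and 5 minutes in the future of the current
# wall clock time, expressed in microseconds since the epoch.
PAST_WINDOW_US = 2 * 24 * 60 * 60 * 1_000_000
FUTURE_WINDOW_US = 5 * 60 * 1_000_000

def current_timestamp_us() -> int:
    """Current wall clock time in microseconds since the epoch."""
    return time.time_ns() // 1_000

def is_valid_client_timestamp(ts_us: int, now_us: int) -> bool:
    """Return True if ts_us falls inside the accepted window around now_us."""
    return now_us - PAST_WINDOW_US <= ts_us <= now_us + FUTURE_WINDOW_US

now = current_timestamp_us()
print(is_valid_client_timestamp(now - 3_600 * 1_000_000, now))       # True: one hour ago
print(is_valid_client_timestamp(now - 7 * 86_400 * 1_000_000, now))  # False: one week ago
```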

Amazon Keyspaces keeps timestamp metadata for the life of the data. You can use the `WRITETIME` function to look up timestamps that occurred years in the past. For more information about CQL syntax, see [DML statements (data manipulation language) in Amazon Keyspaces](cql.dml.md).

The following CQL statement is an example of how to use a timestamp as an `update_parameter`. 

```
INSERT INTO catalog.book_awards (year, award, rank, category, book_title, author, publisher)
   VALUES (2022, 'Wolf', 4, 'Non-Fiction', 'Science Update', 'Ana Carolina Silva', 'SomePublisher') 
   USING TIMESTAMP 1669069624000000;
```

If you do not specify a timestamp in your CQL query, Amazon Keyspaces uses the timestamp passed by your client driver. If no timestamp is supplied by the client driver, Amazon Keyspaces assigns a server-side timestamp for your write operation. 

To see the timestamp value that is stored for a specific column, you can use the `WRITETIME` function in a `SELECT` statement as shown in the following example. 

```
SELECT year, award, rank, category, book_title, author, publisher, WRITETIME(year), WRITETIME(award), WRITETIME(rank),
  WRITETIME(category), WRITETIME(book_title), WRITETIME(author), WRITETIME(publisher) from catalog.book_awards;
```
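
Because `WRITETIME` returns the cell timestamp as microseconds since the epoch, it can be converted into a readable date for inspection. The following Python sketch is an illustrative helper, not part of Amazon Keyspaces or a driver.

```python
from datetime import datetime, timezone

def writetime_to_datetime(writetime_us: int) -> datetime:
    """Convert a WRITETIME value (microseconds since the epoch) to a UTC datetime."""
    return datetime.fromtimestamp(writetime_us / 1_000_000, tz=timezone.utc)

# A WRITETIME value for a cell written in November 2022.
print(writetime_to_datetime(1_669_069_624_000_000).isoformat())
# 2022-11-21T22:27:04+00:00
```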

# Multi-Region replication for Amazon Keyspaces (for Apache Cassandra)
<a name="multiRegion-replication"></a>

You can use Amazon Keyspaces multi-Region replication to replicate your data with automated, fully managed, *active-active* replication across the Amazon Web Services Regions of your choice. With active-active replication, each Region is able to perform reads and writes in isolation. You can improve both availability and resiliency from Regional degradation, while also benefiting from low-latency local reads and writes for global applications. 

With multi-Region replication, Amazon Keyspaces asynchronously replicates data between Regions, and data is typically propagated across Regions within a second. Also, with multi-Region replication, you no longer have the difficult work of resolving conflicts and correcting data divergence issues, so you can focus on your application. 

By default, Amazon Keyspaces replicates data across three [Availability Zones](http://www.amazonaws.cn/about-aws/global-infrastructure/regions_az/) within the same Amazon Web Services Region for durability and high availability. With multi-Region replication, you can create multi-Region keyspaces that replicate your tables in different geographic Amazon Web Services Regions of your choice.

**Topics**
+ [Benefits of using multi-Region replication](#mrr-benefits)
+ [Capacity modes and pricing](#mrr-pricing)
+ [How multi-Region replication works in Amazon Keyspaces](multiRegion-replication_how-it-works.md)
+ [Amazon Keyspaces multi-Region replication usage notes](multiRegion-replication_usage-notes.md)
+ [Configure multi-Region replication for Amazon Keyspaces (for Apache Cassandra)](multiRegion-replication-configure.md)

## Benefits of using multi-Region replication
<a name="mrr-benefits"></a>

Multi-Region replication provides the following benefits.
+ **Global reads and writes with single-digit millisecond latency** – In Amazon Keyspaces, replication is active-active. You can serve both reads and writes locally from the Regions closest to your customers with single-digit millisecond latency at any scale. You can use Amazon Keyspaces multi-Region tables for global applications that need a fast response time anywhere in the world.
+ **Improved business continuity and protection from single-Region degradation** – With multi-Region replication, you can recover from degradation in a single Amazon Web Services Region by redirecting your application to a different Region in your multi-Region keyspace. Because Amazon Keyspaces offers active-active replication, there is no impact to your reads and writes. 

  Amazon Keyspaces keeps track of any writes that have been performed on your multi-Region keyspace but haven't been propagated to all replica Regions. After the Region comes back online, Amazon Keyspaces automatically syncs any missing changes so that you can recover without any application impact.
+ **High-speed replication across Regions** – Multi-Region replication uses fast, storage-based physical replication of data across Regions, with a replication lag that is typically less than 1 second. 

  Replication in Amazon Keyspaces has little to no impact on your database queries because it doesn’t share compute resources with your application. This means that you can address high-write throughput use cases or use cases with sudden spikes or bursts in throughput without any application impact. 
+ **Consistency and conflict resolution** – Any changes made to data in any Region are replicated to the other Regions in a multi-Region keyspace. If applications update the same data in different Regions at the same time, conflicts can arise. 

  To help provide eventual consistency, Amazon Keyspaces uses cell-level timestamps and a *last writer wins* reconciliation between concurrent updates. Conflict resolution is fully managed and happens in the background without any application impact.

For more information about supported configurations and features, see [Amazon Keyspaces multi-Region replication usage notes](multiRegion-replication_usage-notes.md).

## Capacity modes and pricing
<a name="mrr-pricing"></a>

For a multi-Region keyspace, you can either use *on-demand capacity mode* or *provisioned capacity mode*. For more information, see [Configure read/write capacity modes in Amazon Keyspaces](ReadWriteCapacityMode.md).

For on-demand mode, you're billed 1 write request unit (WRU) to write up to 1 KB of data per row, the same way as for single-Region tables. But you're billed for writes in each Region of your multi-Region keyspace. For example, writing a row of 3 KB of data in a multi-Region keyspace with two Regions requires 6 WRUs: 3 WRUs × 2 Regions = 6 WRUs. Additionally, writes that include both static and non-static data require additional write operations. 

For provisioned mode, you're billed 1 write capacity unit (WCU) to write up to 1 KB of data per row, the same way as for single-Region tables. But you're billed for writes in each Region of your multi-Region keyspace. For example, writing a row of 3 KB of data per second in a multi-Region keyspace with two Regions requires 6 WCUs: 3 WCUs × 2 Regions = 6 WCUs. Additionally, writes that include both static and non-static data require additional write operations. 
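
The per-row billing math above can be sketched in a few lines of Python. This is an illustrative calculation only; `multi_region_write_units` is a hypothetical helper that applies the documented rule of 1 unit per 1 KB per Region, rounded up, and doesn't account for the additional operations required by static columns.

```python
import math

def multi_region_write_units(row_size_kb: float, region_count: int) -> int:
    """Write units billed for one row written to a multi-Region table.

    Each Region bills one unit per 1 KB of row data, rounded up, so the
    total is the per-Region units multiplied by the number of Regions.
    """
    return math.ceil(row_size_kb) * region_count

# The documented example: a 3 KB row in a two-Region keyspace costs 6 units.
print(multi_region_write_units(3, 2))    # 6
print(multi_region_write_units(3.5, 2))  # 8 (each Region rounds up to 4 KB)
```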

For more information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).

# How multi-Region replication works in Amazon Keyspaces
<a name="multiRegion-replication_how-it-works"></a>

This section provides an overview of how Amazon Keyspaces multi-Region replication works. For more information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).

**Topics**
+ [How multi-Region replication works in Amazon Keyspaces](#multiRegion-replication_how-it-works-overview)
+ [Multi-Region replication conflict resolution](#multiRegion-replication_how-it-works-conflict-resolution)
+ [Multi-Region replication disaster recovery](#howitworks_disaster_recovery)
+ [Multi-Region replication in Amazon Web Services Regions disabled by default](#howitworks_mrr_opt_in)
+ [Multi-Region replication and integration with point-in-time recovery (PITR)](#howitworks_mrr_pitr)
+ [Multi-Region replication and integration with Amazon services](#howitworks_integration)

## How multi-Region replication works in Amazon Keyspaces
<a name="multiRegion-replication_how-it-works-overview"></a>

Amazon Keyspaces multi-Region replication implements a data resiliency architecture that distributes your data across independent and geographically distributed Amazon Web Services Regions. It uses *active-active replication*, which provides local low latency with each Region being able to perform reads and writes in isolation.

When you create an Amazon Keyspaces multi-Region keyspace, you select the additional Regions that your data is replicated to. Each table you create in a multi-Region keyspace consists of multiple replica tables (one per Region) that Amazon Keyspaces treats as a single unit. 

Every replica has the same table name and the same primary key schema. When an application writes data to a local table in one Region, the data is durably written using the `LOCAL_QUORUM` consistency level. Amazon Keyspaces automatically replicates the data asynchronously to the other replication Regions. The replication lag across Regions is typically less than one second and doesn't impact your application’s performance or throughput. 

After the data is written, you can read it from the multi-Region table in another replication Region with the `LOCAL_ONE` or `LOCAL_QUORUM` consistency levels. For more information about supported configurations and features, see [Amazon Keyspaces multi-Region replication usage notes](multiRegion-replication_usage-notes.md). 

## Multi-Region replication conflict resolution
<a name="multiRegion-replication_how-it-works-conflict-resolution"></a>

Amazon Keyspaces multi-Region replication is fully managed, which means that you don't have to perform replication tasks such as regularly running repair operations to clean up data synchronization issues. Amazon Keyspaces monitors data consistency between tables in different Amazon Web Services Regions by detecting and repairing conflicts, and synchronizes replicas automatically. 

Amazon Keyspaces uses the *last writer wins* method of data reconciliation. With this conflict resolution mechanism, all of the Regions in a multi-Region keyspace agree on the latest update and converge toward a state in which they all have identical data. The reconciliation process has no impact on application performance. To support conflict resolution, client-side timestamps are automatically turned on for multi-Region tables and can't be turned off. For more information, see [Client-side timestamps in Amazon Keyspaces](client-side-timestamps.md). 
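
The *last writer wins* rule can be illustrated with a minimal sketch. The following Python function is a simplification of the managed reconciliation, which is internal to Amazon Keyspaces; it assumes each cell version carries a microsecond timestamp, and the value-comparison tie-break is an illustrative assumption that keeps the rule deterministic across replicas.

```python
def last_writer_wins(local: tuple[int, str], incoming: tuple[int, str]) -> tuple[int, str]:
    """Reconcile two versions of a cell, each a (timestamp_us, value) pair.

    The version with the higher write timestamp wins. On equal timestamps,
    the greater value wins (an illustrative tie-break), so every replica
    reaches the same result regardless of the order updates arrive in.
    """
    if incoming[0] > local[0] or (incoming[0] == local[0] and incoming[1] > local[1]):
        return incoming
    return local

# Two Regions update the same cell concurrently; every replica converges
# on the version with the later timestamp, regardless of arrival order.
us_east = (1_700_000_000_000_000, "draft")
eu_west = (1_700_000_000_000_500, "published")
print(last_writer_wins(us_east, eu_west))  # (1700000000000500, 'published')
print(last_writer_wins(eu_west, us_east))  # (1700000000000500, 'published')
```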

## Multi-Region replication disaster recovery
<a name="howitworks_disaster_recovery"></a>

With Amazon Keyspaces multi-Region replication, writes are replicated asynchronously across each Region. In the rare event of a single Region degradation or failure, multi-Region replication helps you to recover from disaster with little to no impact to your application. Recovery from disaster is typically measured using values for Recovery time objective (RTO) and Recovery point objective (RPO).

**Recovery time objective (RTO)** – The time it takes a system to return to a working state after a disaster. RTO measures the amount of downtime your workload can tolerate. For disaster recovery plans that use multi-Region replication to fail over to an unaffected Region, the RTO can be nearly zero. The RTO is limited by how quickly your application can detect the failure condition and redirect traffic to another Region.

**Recovery point objective (RPO)** – The amount of data that can be lost, measured in time. For disaster recovery plans that use multi-Region replication to fail over to an unaffected Region, the RPO is typically single-digit seconds. The RPO is limited by replication latency to the failover target replica.

In the event of a Regional failure or degradation, you don't need to promote a secondary Region or perform database failover procedures because replication in Amazon Keyspaces is active-active. Instead, you can use Amazon Route 53 to route your application to the nearest healthy Region. To learn more about Route 53, see [What is Amazon Route 53?](https://docs.amazonaws.cn/Route53/latest/DeveloperGuide/Welcome.html).

If a single Amazon Web Services Region becomes isolated or degraded, your application can redirect traffic to a different Region using Route 53 to perform reads and writes against a different replica table. You can also apply custom business logic to determine when to redirect requests to other Regions. An example of this is making your application aware of the multiple endpoints that are available.

When the Region comes back online, Amazon Keyspaces resumes propagating any pending writes from that Region to the replica tables in other Regions. It also resumes propagating writes from other replica tables to the Region that is now back online.

## Multi-Region replication in Amazon Web Services Regions disabled by default
<a name="howitworks_mrr_opt_in"></a>

Amazon Keyspaces multi-Region replication is supported in the following Amazon Web Services Regions that are disabled by default:
+ Africa (Cape Town) Region
+ Middle East (UAE) Region
+ Asia Pacific (Hong Kong) Region
+ Middle East (Bahrain) Region

Before you can use a Region that's disabled by default with Amazon Keyspaces multi-Region replication, you first have to enable the Region. For more information, see [Enable or disable Amazon Web Services Regions in your account](https://docs.amazonaws.cn/general/latest/gr/rande-manage.html#rande-manage-enable) in the [Amazon Web Services Organizations User Guide](https://docs.amazonaws.cn/organizations/latest/userguide/).

After you've enabled a Region, you can create new Amazon Keyspaces resources in the Region and add the Region to a multi-Region keyspace.

When you disable a Region that is used by Amazon Keyspaces multi-Region replication, Amazon Keyspaces initiates a 24-hour grace period. During this time window, you can expect the following behavior:
+ Amazon Keyspaces continues to perform data manipulation language (DML) operations in enabled Regions.
+ Amazon Keyspaces pauses replicating data updates from enabled Regions to the disabled Region.
+ Amazon Keyspaces blocks all data definition language (DDL) requests in the disabled Region.

If you disabled the Region in error, you can re-enable it within 24 hours. If you re-enable the Region during the 24-hour grace period, Amazon Keyspaces takes the following actions:
+ Automatically resume all replications to the re-enabled Region.
+ Replicate any data updates that took place in enabled Regions while the Region was disabled to ensure data consistency.
+ Continue all additional multi-Region replication operations automatically.

If the Region remains disabled after the 24-hour grace period ends, Amazon Keyspaces takes the following actions to permanently remove the Region from multi-Region replication:
+ Remove the disabled Region from all multi-Region replication keyspaces.
+ Convert multi-Region replication table replicas in the disabled Region into single-Region keyspaces and tables.
+ Amazon Keyspaces doesn't delete any resources from the disabled Region.

After Amazon Keyspaces has permanently removed the disabled Region from the multi-Region keyspace, you can't add the disabled Region back.

## Multi-Region replication and integration with point-in-time recovery (PITR)
<a name="howitworks_mrr_pitr"></a>

Point-in-time recovery is supported for multi-Region tables. To successfully restore a multi-Region table with PITR, the following conditions have to be met.
+ The source and the target table must be configured as multi-Region tables.
+ The replication Regions for the keyspace of the source table and for the keyspace of the target table must be the same.
+ PITR has to be enabled on all replicas of the source table.

You can run the restore statement from any of the Regions that the source table is available in. Amazon Keyspaces automatically restores the target table in each Region. For more information about PITR, see [How point-in-time recovery works in Amazon Keyspaces](PointInTimeRecovery_HowItWorks.md).

When you create a multi-Region table, the PITR settings that you define during the creation process are automatically applied to all tables in all Regions. When you change PITR settings using `ALTER TABLE`, Amazon Keyspaces applies the update only to the local table and not to the replicas in other Regions. To enable PITR for an existing multi-Region table, you have to repeat the `ALTER TABLE` statement for all replicas.
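
For example, the following `ALTER TABLE` statement is a sketch of that per-replica step, assuming a table named `my_keyspace.my_table`; you would run it once in each replication Region.

```
ALTER TABLE my_keyspace.my_table WITH custom_properties = {'point_in_time_recovery': {'status': 'enabled'}};
```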

## Multi-Region replication and integration with Amazon services
<a name="howitworks_integration"></a>

You can monitor replication performance between tables in different Amazon Web Services Regions by using Amazon CloudWatch metrics. The following metric provides continuous monitoring of multi-Region keyspaces.
+ `ReplicationLatency` – This metric measures the time it took to replicate `updates`, `inserts`, or `deletes` from one replica table to another replica table in a multi-Region keyspace.

For more information about how to monitor CloudWatch metrics, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).

# Amazon Keyspaces multi-Region replication usage notes
<a name="multiRegion-replication_usage-notes"></a>

Consider the following when you're using multi-Region replication with Amazon Keyspaces.
+ You can select any of the public Amazon Web Services Regions where Amazon Keyspaces is available. For more information about Amazon Web Services Regions [that are disabled by default](https://docs.amazonaws.cn/general/latest/gr/rande-manage.html#rande-manage-enable), see [Multi-Region replication in Amazon Web Services Regions disabled by default](multiRegion-replication_how-it-works.md#howitworks_mrr_opt_in).
+ Amazon GovCloud (US) Regions and China Regions are not supported.
+ Consider the following workarounds until the features become available:
  + Configure Time to Live (TTL) when creating the multi-Region table. You won't be able to enable or disable TTL, or adjust the TTL value, later. For more information, see [Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)](TTL.md).
  + For encryption at rest, use an Amazon owned key. Customer managed keys are currently not supported for multi-Region tables. For more information, see [Encryption at rest: How it works in Amazon Keyspaces](encryption.howitworks.md).
+ You can use `ALTER KEYSPACE` to add a Region to a single-Region or a multi-Region keyspace. For more information, see [Add an Amazon Web Services Region to a keyspace in Amazon Keyspaces](keyspaces-multi-region-add-replica.md).
  + Before adding a Region to a single-Region keyspace, ensure that no tables under the keyspace are configured with customer managed keys.
  + Any existing tags configured for keyspaces or tables are not replicated to the new Region.
+ When you're using provisioned capacity management with Amazon Keyspaces auto scaling, make sure to use the Amazon Keyspaces API operations to create and configure your multi-Region tables. The underlying Application Auto Scaling API operations that Amazon Keyspaces calls on your behalf don't have multi-Region capabilities. 

  For more information, see [Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-autoscaling.md). For more information on how to estimate the write capacity throughput of provisioned multi-Region tables, see [Estimate and provision capacity for a multi-Region table in Amazon Keyspaces](tables-multi-region-capacity.md).
+ Although data is automatically replicated across the selected Regions of a multi-Region table, when a client connects to an endpoint in one Region and queries the `system.peers` table, the query returns only local information. The query result appears like a single data center cluster to the client.
+ Amazon Keyspaces multi-Region replication is asynchronous, and it supports `LOCAL_QUORUM` consistency for writes. `LOCAL_QUORUM` consistency requires that an update to a row is durably persisted on two replicas in the local Region before returning success to the client. The propagation of writes to the replicated Region (or Regions) is then performed asynchronously. 

  Amazon Keyspaces multi-Region replication doesn't support synchronous replication or `QUORUM` consistency.
+ When you create a multi-Region keyspace or table, any tags that you define during the creation process are automatically applied to all keyspaces and tables in all Regions. When you change the existing tags using `ALTER KEYSPACE` or `ALTER TABLE`, the update is only applied to the keyspace or table in the Region where you're making the change. 
+ Amazon CloudWatch provides a `ReplicationLatency` metric for each replicated Region. It calculates this metric by tracking arriving rows, comparing their arrival time with their initial write time, and computing an average. Timings are stored within CloudWatch in the source Region. For more information, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).

  It can be useful to view the average and maximum timings to determine the average and worst-case replication lag. There is no SLA on this latency.
+ When using a multi-Region table in on-demand mode, you may observe an increase in latency for asynchronous replication of writes if a table replica experiences a new traffic peak. Similar to how Amazon Keyspaces automatically adapts the capacity of a single-Region on-demand table to the application traffic it receives, Amazon Keyspaces automatically adapts the capacity of a multi-Region on-demand table replica to the traffic that it receives. The increase in replication latency is transient because Amazon Keyspaces automatically allocates more capacity as your traffic volume increases. Once all replicas have adapted to your traffic volume, replication latency should return back to normal. For more information, see [Peak traffic and scaling properties](ReadWriteCapacityMode.OnDemand.md#ReadWriteCapacityMode.PeakTraffic).
+ When using a multi-Region table in provisioned mode, if your application exceeds your provisioned throughput capacity, you may observe insufficient capacity errors and an increase in replication latency. To ensure that there's always enough read and write capacity for all table replicas in all Amazon Web Services Regions of a multi-Region table, we recommend that you configure Amazon Keyspaces auto scaling. Amazon Keyspaces auto scaling helps you provision throughput capacity efficiently for variable workloads by adjusting throughput capacity automatically in response to actual application traffic. For more information, see [How auto scaling works for multi-Region tables](autoscaling.md#autoscaling.multi-region).

# Configure multi-Region replication for Amazon Keyspaces (for Apache Cassandra)
<a name="multiRegion-replication-configure"></a>

You can use the console, Cassandra Query Language (CQL), or the Amazon Command Line Interface to create and manage multi-Region keyspaces and tables in Amazon Keyspaces. 

This section provides examples of how to create and manage multi-Region keyspaces and tables. All tables that you create in a multi-Region keyspace automatically inherit the multi-Region settings from the keyspace. 

For more information about supported configurations and features, see [Amazon Keyspaces multi-Region replication usage notes](multiRegion-replication_usage-notes.md).

**Topics**
+ [Configure the IAM permissions required to create multi-Region keyspaces and tables](howitworks_replication_permissions.md)
+ [Configure the IAM permissions required to add an Amazon Web Services Region to a keyspace](howitworks_replication_permissions_addReplica.md)
+ [Create a multi-Region keyspace in Amazon Keyspaces](keyspaces-mrr-create.md)
+ [Add an Amazon Web Services Region to a keyspace in Amazon Keyspaces](keyspaces-multi-region-add-replica.md)
+ [Check the replication progress when adding a new Region to a keyspace](keyspaces-multi-region-replica-status.md)
+ [Create a multi-Region table with default settings in Amazon Keyspaces](tables-mrr-create-default.md)
+ [Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces](tables-mrr-create-provisioned.md)
+ [Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-autoscaling.md)
+ [View the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces](tables-mrr-view.md)
+ [Turn off auto scaling for a table in Amazon Keyspaces](tables-mrr-autoscaling-off.md)
+ [Set the provisioned capacity of a multi-Region table manually in Amazon Keyspaces](tables-mrr-capacity-manually.md)

# Configure the IAM permissions required to create multi-Region keyspaces and tables
<a name="howitworks_replication_permissions"></a>

To successfully create multi-Region keyspaces and tables, the IAM principal needs to be able to create a service-linked role. This service-linked role is a unique type of IAM role that is predefined by Amazon Keyspaces. It includes all the permissions that Amazon Keyspaces requires to perform actions on your behalf. For more information about the service-linked role, see [Using roles for Amazon Keyspaces Multi-Region Replication](using-service-linked-roles-multi-region-replication.md).

To create the service-linked role required by multi-Region replication, the policy for the IAM principal requires the following elements:
+ `iam:CreateServiceLinkedRole` – The **action** the principal can perform.
+ `arn:aws-cn:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication` – The **resource** that the action can be performed on. 
+ `iam:AWSServiceName` set to `replication.cassandra.amazonaws.com` – The **condition** that ensures the only Amazon service this role can be attached to is Amazon Keyspaces.

The following is an example of the policy that grants the minimum required permissions to a principal to create multi-Region keyspaces and tables.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws-cn:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication",
            "Condition": {"StringLike": {"iam:AWSServiceName": "replication.cassandra.amazonaws.com"}}
        }
    ]
}
```

For additional IAM permissions for multi-Region keyspaces and tables, see the [Actions, resources, and condition keys for Amazon Keyspaces (for Apache Cassandra)](https://docs.amazonaws.cn/service-authorization/latest/reference/list_amazonkeyspacesforapachecassandra.html) in the *Service Authorization Reference*.

# Configure the IAM permissions required to add an Amazon Web Services Region to a keyspace
<a name="howitworks_replication_permissions_addReplica"></a>

To add a Region to a keyspace, the IAM principal needs the following permissions:
+ `cassandra:Alter`
+ `cassandra:AlterMultiRegionResource`
+ `cassandra:Create`
+ `cassandra:CreateMultiRegionResource`
+ `cassandra:Select`
+ `cassandra:SelectMultiRegionResource`
+ `cassandra:Modify`
+ `cassandra:ModifyMultiRegionResource`

If the table is configured in provisioned mode with auto scaling enabled, the following additional permissions are needed.
+ `application-autoscaling:RegisterScalableTarget`
+ `application-autoscaling:DeregisterScalableTarget`
+ `application-autoscaling:DescribeScalableTargets`
+ `application-autoscaling:PutScalingPolicy`
+ `application-autoscaling:DescribeScalingPolicies`
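
Taken together, the permissions above could be expressed in a policy similar to the following sketch. The `Resource` values are illustrative placeholders; in practice, scope the `cassandra` actions to your own account ID and keyspace ARNs.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cassandra:Alter",
                "cassandra:AlterMultiRegionResource",
                "cassandra:Create",
                "cassandra:CreateMultiRegionResource",
                "cassandra:Select",
                "cassandra:SelectMultiRegionResource",
                "cassandra:Modify",
                "cassandra:ModifyMultiRegionResource"
            ],
            "Resource": "arn:aws-cn:cassandra:*:111122223333:/keyspace/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "application-autoscaling:RegisterScalableTarget",
                "application-autoscaling:DeregisterScalableTarget",
                "application-autoscaling:DescribeScalableTargets",
                "application-autoscaling:PutScalingPolicy",
                "application-autoscaling:DescribeScalingPolicies"
            ],
            "Resource": "*"
        }
    ]
}
```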

To successfully add a Region to a single-Region keyspace, the IAM principal also needs to be able to create a service-linked role. This service-linked role is a unique type of IAM role that is predefined by Amazon Keyspaces. It includes all the permissions that Amazon Keyspaces requires to perform actions on your behalf. For more information about the service-linked role, see [Using roles for Amazon Keyspaces Multi-Region Replication](using-service-linked-roles-multi-region-replication.md).

To create the service-linked role required by multi-Region replication, the policy for the IAM principal requires the following elements:
+ `iam:CreateServiceLinkedRole` – The **action** the principal can perform.
+ `arn:aws-cn:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication` – The **resource** that the action can be performed on. 
+ `iam:AWSServiceName: replication.cassandra.amazonaws.com` – The **condition** that limits the role. The only Amazon service that this role can be attached to is Amazon Keyspaces.

The following is an example of the policy that grants the minimum required permissions to a principal to add a Region to a keyspace.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:CreateServiceLinkedRole",
            "Resource": "arn:aws-cn:iam::*:role/aws-service-role/replication.cassandra.amazonaws.com/AWSServiceRoleForKeyspacesReplication",
            "Condition": {"StringLike": {"iam:AWSServiceName": "replication.cassandra.amazonaws.com"}}
        }
    ]
}
```

# Create a multi-Region keyspace in Amazon Keyspaces
<a name="keyspaces-mrr-create"></a>

This section provides examples of how to create a multi-Region keyspace using the Amazon Keyspaces console, CQL, or the Amazon CLI. All tables that you create in a multi-Region keyspace automatically inherit the multi-Region settings from the keyspace.

**Note**  
When creating a multi-Region keyspace, Amazon Keyspaces creates a service-linked role with the name `AWSServiceRoleForAmazonKeyspacesReplication` in your account. This role allows Amazon Keyspaces to replicate writes to all replicas of a multi-Region table on your behalf. To learn more, see [Using roles for Amazon Keyspaces Multi-Region Replication](using-service-linked-roles-multi-region-replication.md).

------
#### [ Console ]

**Create a multi-Region keyspace (console)**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**, and then choose **Create keyspace**.

1. For **Keyspace name**, enter the name for the keyspace.

1. In the **Multi-Region replication** section, choose the additional Regions from the list of available Regions.

1. To finish, choose **Create keyspace**.

------
#### [ Cassandra Query Language (CQL) ]

**Create a multi-Region keyspace using CQL**

1. To create a multi-Region keyspace, use `NetworkTopologyStrategy` to specify the Amazon Web Services Regions that the keyspace is going to be replicated in. You must include your current Region and at least one additional Region. 

   All tables in the keyspace inherit the replication strategy from the keyspace. You can't change the replication strategy at the table level.

   `NetworkTopologyStrategy` – The replication factor for each Region is three because Amazon Keyspaces replicates data across three [Availability Zones](http://www.amazonaws.cn/about-aws/global-infrastructure/regions_az/) within the same Amazon Web Services Region, by default. 

   The following CQL statement is an example of this.

   ```
   CREATE KEYSPACE mykeyspace
   WITH REPLICATION = {'class':'NetworkTopologyStrategy', 'us-east-1':'3', 'ap-southeast-1':'3','eu-west-1':'3' };
   ```

1. You can use a CQL statement to query the `tables` table in the `system_multiregion_info` keyspace to programmatically list the Regions and the status of the multi-Region table that you specify. The following code is an example of this.

   ```
   SELECT * from system_multiregion_info.tables WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
   ```

   The output of the statement looks like the following:

   ```
    keyspace_name  | table_name     | region         | status
   ----------------+----------------+----------------+--------
    mykeyspace     | mytable        | us-east-1      | ACTIVE
    mykeyspace     | mytable        | ap-southeast-1 | ACTIVE
    mykeyspace     | mytable        | eu-west-1      | ACTIVE
   ```

------
#### [ CLI ]

**Create a new multi-Region keyspace using the Amazon CLI**
+ To create a multi-Region keyspace, you can use the following CLI statement. Specify your current Region and at least one additional Region in the `regionList`.

  ```
  aws keyspaces create-keyspace --keyspace-name mykeyspace \
  --replication-specification replicationStrategy=MULTI_REGION,regionList=us-east-1,eu-west-1
  ```

------

To create a multi-Region table, see [Create a multi-Region table with default settings in Amazon Keyspaces](tables-mrr-create-default.md) and [Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces](tables-mrr-create-provisioned.md).
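
Both the CQL and CLI examples above require the Region list to include your current Region and at least one additional Region. You can check this client-side before calling the API. The following Python sketch is illustrative and not part of any Amazon SDK:

```
def validate_region_list(current_region, region_list):
    """Check a proposed multi-Region keyspace Region list.

    The list must contain the caller's current Region and at
    least one additional Region; duplicates aren't allowed.
    """
    if len(set(region_list)) != len(region_list):
        raise ValueError("regionList contains duplicate Regions")
    if current_region not in region_list:
        raise ValueError(
            f"regionList must include the current Region {current_region}")
    if len(region_list) < 2:
        raise ValueError("regionList needs at least one additional Region")
    return True
```

For example, `validate_region_list("us-east-1", ["us-east-1", "eu-west-1"])` passes, while a list that omits the current Region raises an error before the API call is made.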

# Add an Amazon Web Services Region to a keyspace in Amazon Keyspaces
<a name="keyspaces-multi-region-add-replica"></a>

You can add a new Amazon Web Services Region to either a single-Region or a multi-Region keyspace. The new replica Region is applied to all tables in the keyspace. 

To change a single-Region to a multi-Region keyspace, you have to enable client-side timestamps for all tables in the keyspace. For more information, see [Client-side timestamps in Amazon Keyspaces](client-side-timestamps.md).

If you're adding an additional Region to a multi-Region keyspace, Amazon Keyspaces has to replicate the existing tables into the new Region using a one-time cross-Region restore for each existing table. The restore charges for each table are billed per GB. For more information, see [Backup and restore](http://www.amazonaws.cn/keyspaces/pricing/#:~:text=per%20GB-month-,Restoring%20a%20table,-Restoring%20a%20table) on the Amazon Keyspaces (for Apache Cassandra) pricing page. There's no charge for data transfer across Regions for this restore operation. In addition to data, all table properties except tags are replicated to the new Region.

You can use the `ALTER KEYSPACE` statement in CQL, the `update-keyspace` command with the Amazon CLI, or the console to add a new Region to a single-Region or multi-Region keyspace in Amazon Keyspaces. To run the statement successfully, the account you're using has to be located in one of the Regions where the keyspace is already available. While the replica is being added, you can't perform any other data definition language (DDL) operations on the resources that are being updated and replicated.

For more information about the permissions required to add a Region, see [Configure the IAM permissions required to add an Amazon Web Services Region to a keyspace](howitworks_replication_permissions_addReplica.md).

**Note**  
When adding an additional Region to a single-Region keyspace, Amazon Keyspaces creates a service-linked role with the name `AWSServiceRoleForAmazonKeyspacesReplication` in your account. This role allows Amazon Keyspaces to replicate tables to new Regions and to replicate writes from one table to all replicas of a multi-Region table on your behalf. To learn more, see [Using roles for Amazon Keyspaces Multi-Region Replication](using-service-linked-roles-multi-region-replication.md).

------
#### [ Console ]

Follow these steps to add a Region to a keyspace using the Amazon Keyspaces console.

**Add a Region to a keyspace (console)**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**, and then choose a keyspace from the list.

1. Choose the **Amazon Web Services Regions** tab.

1. On the **Amazon Web Services Regions** tab, choose **Add Region**.

1. In the **Add Region** dialog, choose the additional Region that you want to add to the keyspace.

1. To finish, choose **Add**.

------
#### [ Cassandra Query Language (CQL) ]

**Add a Region to a keyspace using CQL**
+ To add a new Region to a keyspace, you can use the following statement. In this example, the keyspace is already available in the US East (N. Virginia) and US West (Oregon) Regions, and the CQL statement adds the US West (N. California) Region. 

  ```
  ALTER KEYSPACE my_keyspace
  WITH REPLICATION = {
      'class': 'NetworkTopologyStrategy',
      'us-east-1': '3',
      'us-west-2': '3',
      'us-west-1': '3'
  } AND CLIENT_SIDE_TIMESTAMPS = {'status': 'ENABLED'};
  ```

------
#### [ CLI ]

**Add a Region to a keyspace using the Amazon CLI**
+ To add a new Region to a keyspace using the CLI, you can use the following example. Note that the default value for `client-side-timestamps` is `DISABLED`. With the `update-keyspace` command, you must change the value to `ENABLED`.

  ```
  aws keyspaces update-keyspace \
  --keyspace-name my_keyspace \
  --replication-specification '{"replicationStrategy": "MULTI_REGION", "regionList": ["us-east-1", "eu-west-1", "eu-west-3"] }' \
  --client-side-timestamps '{"status": "ENABLED"}'
  ```

------

# Check the replication progress when adding a new Region to a keyspace
<a name="keyspaces-multi-region-replica-status"></a>

Adding a new Region to an Amazon Keyspaces keyspace is a long-running operation. To track its progress, you can use the queries shown in this section.

------
#### [ Cassandra Query Language (CQL) ]

**Using CQL to verify the add Region progress**
+  To verify the progress of the creation of the new table replicas in a given keyspace, you can query the `system_multiregion_info.keyspaces` table. The following CQL statement is an example of this.

  ```
  SELECT keyspace_name, region, status, tables_replication_progress
  FROM system_multiregion_info.keyspaces
  WHERE keyspace_name = 'my_keyspace';
  ```

  While a replication operation is in progress, the status shows the progress of table creation in the new Region. This is an example where 5 out of 10 tables have been replicated to the new Region.

  ```
   keyspace_name | region    | status    | tables_replication_progress
  ---------------+-----------+-----------+-------------------------
     my_keyspace | us-east-1 | Updating  | 
     my_keyspace | us-west-2 | Updating  | 
     my_keyspace | eu-west-1 | Creating  | 50%
  ```

  After the replication process has completed successfully, the output should look like this example.

  ```
   keyspace_name | region    | status
  ---------------+-----------+-----------
     my_keyspace | us-east-1 | Active
     my_keyspace | us-west-2 | Active
     my_keyspace | eu-west-1 | Active
  ```

------
#### [ CLI ]

**Using the Amazon CLI to verify the add Region progress**
+ To confirm the status of table replica creation for a given keyspace, you can use the following example.

  ```
  aws keyspaces get-keyspace \
  --keyspace-name my_keyspace
  ```

  The output should look similar to this example.

  ```
  {
      "keyspaceName": "my_keyspace",
      "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/",
      "replicationStrategy": "MULTI_REGION",
      "replicationRegions": [
          "us-east-1",
          "eu-west-1"
      ],
      "replicationGroupStatus": [
          {
              "RegionName": "us-east-1",
              "KeyspaceStatus": "Active"
          },
          {
              "RegionName": "eu-west-1",
              "KeyspaceStatus": "Creating",
              "TablesReplicationProgress": "50.0%"
          }
      ]
  }
  ```

------
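
When scripting this check, it's convenient to reduce the `get-keyspace` response to just the Regions that are still replicating. The following Python sketch is illustrative (the helper name is not part of any SDK); the response shape matches the CLI output above:

```
def pending_replicas(response):
    """Return {region: progress} for replicas that aren't Active yet.

    `response` is the parsed JSON from `aws keyspaces get-keyspace`
    (or the equivalent SDK call). Regions whose status is already
    "Active" are omitted from the result.
    """
    pending = {}
    for replica in response.get("replicationGroupStatus", []):
        if replica["KeyspaceStatus"] != "Active":
            pending[replica["RegionName"]] = replica.get(
                "TablesReplicationProgress", "unknown")
    return pending
```

Applied to the example response above, the helper returns `{"eu-west-1": "50.0%"}`; an empty result means all replicas are active.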

# Create a multi-Region table with default settings in Amazon Keyspaces
<a name="tables-mrr-create-default"></a>

This section provides examples of how to create a multi-Region table in on-demand mode with all default settings using the Amazon Keyspaces console, CQL, or the Amazon CLI. All tables that you create in a multi-Region keyspace automatically inherit the multi-Region settings from the keyspace.

To create a multi-Region keyspace, see [Create a multi-Region keyspace in Amazon Keyspaces](keyspaces-mrr-create.md).

------
#### [ Console ]

**Create a multi-Region table with default settings (console)**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. Choose a multi-Region keyspace.

1. On the **Tables** tab, choose **Create table**.

1. For **Table name**, enter the name for the table. The Amazon Web Services Regions that this table is being replicated in are shown in the info box.

1. Continue with the table schema.

1. Under **Table settings**, continue with the **Default settings** option. Note the following default settings for multi-Region tables.
   + **Capacity mode** – The default capacity mode is **On-demand**. For more information about configuring **provisioned** mode, see [Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces](tables-mrr-create-provisioned.md).
   + **Encryption key management** – Only the **Amazon owned key** option is supported.
   + **Client-side timestamps** – This feature is required for multi-Region tables.
   + Choose **Customize settings** if you need to turn on Time to Live (TTL) for the table and all its replicas.
**Note**  
You won't be able to change TTL settings on an existing multi-Region table.

1. To finish, choose **Create table**.

------
#### [ Cassandra Query Language (CQL) ]

**Create a multi-Region table in on-demand mode with default settings**
+ To create a multi-Region table with default settings, you can use the following CQL statement.

  ```
  CREATE TABLE mykeyspace.mytable(pk int, ck int, PRIMARY KEY (pk, ck))
      WITH CUSTOM_PROPERTIES = {
  	'capacity_mode':{
  		'throughput_mode':'PAY_PER_REQUEST'
  	},
  	'point_in_time_recovery':{
  		'status':'enabled'
  	},
  	'encryption_specification':{
  		'encryption_type':'AWS_OWNED_KMS_KEY'
  	},
  	'client_side_timestamps':{
  		'status':'enabled'
  	}
  };
  ```

------
#### [ CLI ]

**Using the Amazon CLI**

1. To create a multi-Region table with default settings, you only need to specify the schema. You can use the following example.

   ```
   aws keyspaces create-table --keyspace-name mykeyspace --table-name mytable \
   --schema-definition 'allColumns=[{name=pk,type=int}],partitionKeys=[{name=pk}]'
   ```

   The output of the command is:

   ```
   {
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable"
   }
   ```

1. To confirm the table's settings, you can use the following statement.

   ```
   aws keyspaces get-table --keyspace-name mykeyspace --table-name mytable
   ```

   The output shows all default settings of a multi-Region table. 

   ```
   {
       "keyspaceName": "mykeyspace",
       "tableName": "mytable",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable",
       "creationTimestamp": "2023-12-19T16:50:37.639000+00:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "pk",
                   "type": "int"
               }
           ],
           "partitionKeys": [
               {
                   "name": "pk"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2023-12-19T16:50:37.639000+00:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       },
       "clientSideTimestamps": {
           "status": "ENABLED"
       },
       "replicaSpecifications": [
           {
               "region": "us-east-1",
               "status": "ACTIVE",
               "capacitySpecification": {
                   "throughputMode": "PAY_PER_REQUEST",
                   "lastUpdateToPayPerRequestTimestamp": 1702895811.469
               }
           },
           {
               "region": "eu-north-1",
               "status": "ACTIVE",
               "capacitySpecification": {
                   "throughputMode": "PAY_PER_REQUEST",
                   "lastUpdateToPayPerRequestTimestamp": 1702895811.121
               }
           }
       ]
   }
   ```

------

# Create a multi-Region table in provisioned mode with auto scaling in Amazon Keyspaces
<a name="tables-mrr-create-provisioned"></a>

This section provides examples of how to create a multi-Region table in provisioned mode with auto scaling using the Amazon Keyspaces console, CQL, or the Amazon CLI. 

For more information about supported configurations and multi-Region replication features, see [Amazon Keyspaces multi-Region replication usage notes](multiRegion-replication_usage-notes.md).

To create a multi-Region keyspace, see [Create a multi-Region keyspace in Amazon Keyspaces](keyspaces-mrr-create.md).

When you create a new multi-Region table in provisioned mode with auto scaling settings, you can specify the general settings for the table that are valid for all Amazon Web Services Regions that the table is replicated in. You can then overwrite read capacity settings and read auto scaling settings for each replica. The write capacity, however, remains synchronized between all replicas to ensure that there's enough capacity to replicate writes across all Regions. 
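
Because write capacity is synchronized across replicas, each Region must absorb not only its local writes but also the replicated writes from every other Region. The following back-of-the-envelope Python sketch (illustrative only) captures that estimate:

```
def required_wcus_per_region(local_writes_per_second):
    """Estimate the provisioned WCUs needed in *each* Region.

    `local_writes_per_second` maps Region name -> application writes
    originating there. For rows up to 1 KB, one write per second is
    roughly one WCU. Every write is replicated to every Region, so
    each replica must be provisioned for the total across Regions.
    """
    return sum(local_writes_per_second.values())


# Example: 100 writes/s in us-east-1 plus 50 writes/s in eu-west-1
# means each replica should be provisioned for about 150 WCUs.
```

Rows larger than 1 KB consume proportionally more WCUs, so scale the per-Region write rates accordingly before summing.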

**Note**  
Amazon Keyspaces automatic scaling requires the presence of a service-linked role (`AWSServiceRoleForApplicationAutoScaling_CassandraTable`) that performs automatic scaling actions on your behalf. This role is created automatically for you. For more information, see [Using service-linked roles for Amazon Keyspaces](using-service-linked-roles.md).

------
#### [ Console ]

**Create a new multi-Region table with automatic scaling enabled**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. Choose a multi-Region keyspace.

1. On the **Tables** tab, choose **Create table**.

1. On the **Create table** page in the **Table details** section, select a keyspace and provide a name for the new table.

1. In the **Columns** section, create the schema for your table.

1. In the **Primary key** section, define the primary key of the table and select optional clustering columns.

1. In the **Table settings** section, choose **Customize settings**.

1. Continue to **Read/write capacity settings**.

1. For **Capacity mode**, choose **Provisioned**.

1. In the **Read capacity** section, confirm that **Scale automatically** is selected.

   You can choose to configure the same read capacity units for all Amazon Web Services Regions that the table is replicated in. Alternatively, you can clear the check box and configure the read capacity for each Region differently.

   If you choose to configure each Region differently, you select the minimum and maximum read capacity units for each table replica, as well as the target utilization.
   + **Minimum capacity units** – Enter the value for the minimum level of throughput that the table should always be ready to support. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
   + **Maximum capacity units** – Enter the maximum amount of throughput that you want to provision for the table. The value must be between 1 and the maximum throughput per second quota for your account (40,000 by default).
   + **Target utilization** – Enter a target utilization rate between 20% and 90%. When traffic exceeds the defined target utilization rate, capacity is automatically scaled up. When traffic falls below the defined target, it is automatically scaled down again.
   + Clear the **Scale automatically** check box if you want to provision the table's read capacity manually. This setting applies to all replicas of the table.
**Note**  
To ensure that there's enough read capacity for all replicas, we recommend Amazon Keyspaces automatic scaling for provisioned multi-Region tables.
**Note**  
To learn more about default quotas for your account and how to increase them, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md).

1. In the **Write capacity** section, confirm that **Scale automatically** is selected. Then configure the capacity units for the table. The write capacity units stay synced across all Amazon Web Services Regions to ensure that there is enough capacity to replicate write events across the Regions.
   + Clear **Scale automatically** if you want to provision the table's write capacity manually. This setting applies to all replicas of the table.
**Note**  
To ensure that there's enough write capacity for all replicas, we recommend Amazon Keyspaces automatic scaling for provisioned multi-Region tables.

1. Choose **Create table**. Your table is created with the specified automatic scaling parameters.

------
#### [ Cassandra Query Language (CQL) ]

**Create a multi-Region table with provisioned capacity mode and auto scaling using CQL**
+ To create a multi-Region table in provisioned mode with auto scaling, you must first specify the capacity mode by defining `CUSTOM_PROPERTIES` for the table. After specifying provisioned capacity mode, you can configure the auto scaling settings for the table using `AUTOSCALING_SETTINGS`. 

  For detailed information about auto scaling settings, the target tracking policy, target value, and optional settings, see [Create a new table with automatic scaling](autoscaling.createTable.md).

  To define the read capacity for a table replica in a specific Region, you can configure the following parameters as part of the table's `replica_updates`:
  + The Region
  + The provisioned read capacity units (optional)
  + Auto scaling settings for read capacity (optional)

  The following example shows a `CREATE TABLE` statement for a multi-Region table in provisioned mode. The general write and read capacity auto scaling settings are the same. However, the read auto scaling settings specify additional cooldown periods of 60 seconds before scaling the table's read capacity up or down. In addition, the read capacity auto scaling settings for the Region US East (N. Virginia) are higher than those for other replicas. Also, the target value is set to 70% instead of 50%.

  ```
  CREATE TABLE mykeyspace.mytable(pk int, ck int, PRIMARY KEY (pk, ck))
  WITH CUSTOM_PROPERTIES = {  
      'capacity_mode': {  
          'throughput_mode': 'PROVISIONED',  
          'read_capacity_units': 5,  
          'write_capacity_units': 5  
      }
  } AND AUTOSCALING_SETTINGS = {
      'provisioned_write_capacity_autoscaling_update': {
          'maximum_units': 10,  
          'minimum_units': 5,  
          'scaling_policy': {
              'target_tracking_scaling_policy_configuration': {
                  'target_value': 50
              }  
          }  
      },
      'provisioned_read_capacity_autoscaling_update': {  
          'maximum_units': 10,  
          'minimum_units': 5,  
          'scaling_policy': {  
              'target_tracking_scaling_policy_configuration': {  
                  'target_value': 50,
                  'scale_in_cooldown': 60,  
                  'scale_out_cooldown': 60
              }  
          }  
      },
      'replica_updates': {
          'us-east-1': {
              'provisioned_read_capacity_autoscaling_update': {
                  'maximum_units': 20,
                  'minimum_units': 5,
                  'scaling_policy': {
                      'target_tracking_scaling_policy_configuration': {
                          'target_value': 70
                      } 
                  }
              }
          }
      }
  };
  ```

------
#### [ CLI ]

**Create a new multi-Region table in provisioned mode with auto scaling using the Amazon CLI**
+ To create a multi-Region table in provisioned mode with auto scaling configuration, you can use the Amazon CLI. Note that you must use the Amazon Keyspaces CLI `create-table` command to configure multi-Region auto scaling settings. This is because Application Auto Scaling, the service that Amazon Keyspaces uses to perform auto scaling on your behalf, doesn't support multiple Regions. 

  For more information about auto scaling settings, the target tracking policy, target value, and optional settings, see [Create a new table with automatic scaling](autoscaling.createTable.md).

  To define the read capacity for a table replica in a specific Region, you can configure the following parameters as part of the table's `replicaSpecifications`:
  + The Region
  + The provisioned read capacity units (optional)
  + Auto scaling settings for read capacity (optional)

  When you're creating provisioned multi-Region tables with complex auto scaling settings and different configurations for table replicas, it's helpful to load the table's auto scaling settings and replica configurations from JSON files. 

  To use the following code example, you can download the example JSON files from [auto-scaling.zip](samples/auto-scaling.zip), and extract `auto-scaling.json` and `replication.json`. Take note of the path to the files. 

  In this example, the JSON files are located in the current directory. For different file path options, see [ How to load parameters from a file](https://docs.amazonaws.cn/cli/latest/userguide/cli-usage-parameters-file.html#cli-usage-parameters-file-how).

  ```
  aws keyspaces create-table --keyspace-name mykeyspace --table-name mytable \
  --schema-definition 'allColumns=[{name=pk,type=int},{name=ck,type=int}],partitionKeys=[{name=pk},{name=ck}]' \
  --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
  --auto-scaling-specification file://auto-scaling.json \
  --replica-specifications file://replication.json
  ```
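
  If you prefer to generate the parameter files in code, the following Python sketch writes files in the shape that the `create-table` command expects. The key names follow the Amazon Keyspaces API's `autoScalingSpecification` and `replicaSpecifications` structures; treat the exact capacity values as illustrative.

  ```
  import json

  # Illustrative auto scaling settings, shaped like the Amazon
  # Keyspaces API's autoScalingSpecification structure.
  auto_scaling = {
      "writeCapacityAutoScaling": {
          "autoScalingDisabled": False,
          "minimumUnits": 5,
          "maximumUnits": 10,
          "scalingPolicy": {
              "targetTrackingScalingPolicyConfiguration": {"targetValue": 50.0}
          },
      },
      "readCapacityAutoScaling": {
          "autoScalingDisabled": False,
          "minimumUnits": 5,
          "maximumUnits": 10,
          "scalingPolicy": {
              "targetTrackingScalingPolicyConfiguration": {
                  "targetValue": 70.0,
                  "scaleInCooldown": 60,
                  "scaleOutCooldown": 60,
              }
          },
      },
  }

  # Per-replica read overrides (replicaSpecifications); here the
  # us-east-1 replica is allowed to scale higher than the default.
  replicas = [
      {
          "region": "us-east-1",
          "readCapacityAutoScaling": {
              "autoScalingDisabled": False,
              "minimumUnits": 5,
              "maximumUnits": 20,
              "scalingPolicy": {
                  "targetTrackingScalingPolicyConfiguration": {"targetValue": 70.0}
              },
          },
      }
  ]

  with open("auto-scaling.json", "w") as f:
      json.dump(auto_scaling, f, indent=4)
  with open("replication.json", "w") as f:
      json.dump(replicas, f, indent=4)
  ```

  You can then pass the generated files to the `--auto-scaling-specification` and `--replica-specifications` parameters with the `file://` prefix, as shown above.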

------

# Update the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces
<a name="tables-mrr-autoscaling"></a>

This section includes examples of how to use the console, CQL, and the Amazon CLI to manage the Amazon Keyspaces auto scaling settings of provisioned multi-Region tables. For more information about general auto scaling configuration options and how they work, see [Manage throughput capacity automatically with Amazon Keyspaces auto scaling](autoscaling.md). 

Note that if you're using provisioned capacity mode for multi-Region tables, you must always use Amazon Keyspaces API calls to configure auto scaling. This is because the underlying Application Auto Scaling API operations are not Region-aware.

For more information on how to estimate write capacity throughput of provisioned multi-Region tables, see [Estimate and provision capacity for a multi-Region table in Amazon Keyspaces](tables-multi-region-capacity.md).

For more information about the Amazon Keyspaces API, see [https://docs.amazonaws.cn/keyspaces/latest/APIReference/Welcome.html](https://docs.amazonaws.cn/keyspaces/latest/APIReference/Welcome.html).

When you update the provisioned mode or auto scaling settings of a multi-Region table, you can update read capacity settings and the read auto scaling configuration for each replica of the table. 

The write capacity, however, remains synchronized between all replicas to ensure that there's enough capacity to replicate writes across all Regions.

------
#### [ Cassandra Query Language (CQL) ]

**Update the provisioned capacity and auto scaling settings of a multi-Region table using CQL**
+  You can use `ALTER TABLE` to update the capacity mode and auto scaling settings of an existing table. If you're updating a table that is currently in on-demand capacity mode, `capacity_mode` is required. If your table is already in provisioned capacity mode, this field can be omitted. 

  For detailed information about auto scaling settings, the target tracking policy, target value, and optional settings, see [Create a new table with automatic scaling](autoscaling.createTable.md). 

  In the same statement, you can also update the read capacity and auto scaling settings of table replicas in specific Regions by updating the table's `replica_updates` property. The following statement is an example of this.

  ```
  ALTER TABLE mykeyspace.mytable
  WITH CUSTOM_PROPERTIES = {  
      'capacity_mode': {  
          'throughput_mode': 'PROVISIONED',  
          'read_capacity_units': 1,  
          'write_capacity_units': 1  
      }
  } AND AUTOSCALING_SETTINGS = {
      'provisioned_write_capacity_autoscaling_update': {
          'maximum_units': 10,  
          'minimum_units': 5,  
          'scaling_policy': {
              'target_tracking_scaling_policy_configuration': {
                  'target_value': 50
              }  
          }  
      },
      'provisioned_read_capacity_autoscaling_update': {  
          'maximum_units': 10,  
          'minimum_units': 5,  
          'scaling_policy': {  
              'target_tracking_scaling_policy_configuration': {  
                  'target_value': 50,
                  'scale_in_cooldown': 60,  
                  'scale_out_cooldown': 60
              }  
          }  
      },
      'replica_updates': {
          'us-east-1': {
              'provisioned_read_capacity_autoscaling_update': {
                  'maximum_units': 20,
                  'minimum_units': 5,
                  'scaling_policy': {
                      'target_tracking_scaling_policy_configuration': {
                          'target_value': 70
                      } 
                  }
              }
          }
      }
  };
  ```

------
#### [ CLI ]

**Update the provisioned capacity and auto scaling settings of a multi-Region table using the Amazon CLI**
+ To update the provisioned mode and auto scaling configuration of an existing table, you can use the Amazon CLI `update-table` command. 

  Note that you must use the Amazon Keyspaces CLI commands to create or modify multi-Region auto scaling settings. This is because Application Auto Scaling, the service that Amazon Keyspaces uses to perform auto scaling of table capacity on your behalf, doesn't support multiple Amazon Web Services Regions. 

   To update the read capacity for a table replica in a specific Region, you can change one of the following optional parameters of the table's `replicaSpecifications`:
  + The provisioned read capacity units (optional)
  + Auto scaling settings for read capacity (optional)

  When you're updating multi-Region tables with complex auto scaling settings and different configurations for table replicas, it's helpful to load the table's auto scaling settings and replica configurations from JSON files. 

  To use the following code example, you can download the example JSON files from [auto-scaling.zip](samples/auto-scaling.zip), and extract `auto-scaling.json` and `replication.json`. Take note of the path to the files. 

  In this example, the JSON files are located in the current directory. For different file path options, see [ How to load parameters from a file](https://docs.amazonaws.cn/cli/latest/userguide/cli-usage-parameters-file.html#cli-usage-parameters-file-how).

  ```
  aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
  --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
  --auto-scaling-specification file://auto-scaling.json \
  --replica-specifications file://replication.json
  ```
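If you want to see what such parameter files contain before downloading them, the following Python sketch writes two illustrative files. The field names mirror the `autoScalingSpecification` and `replicaSpecifications` structures that `get-table-auto-scaling-settings` returns, but treat the exact shape as an assumption and verify it against the files extracted from `auto-scaling.zip`.

```python
import json

# Hedged sketch of the table-level auto scaling settings. The structure
# mirrors the autoScalingSpecification returned by the
# get-table-auto-scaling-settings command; verify against the
# downloaded auto-scaling.json before use.
auto_scaling = {
    "writeCapacityAutoScaling": {
        "autoScalingDisabled": False,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {"targetValue": 50.0}
        },
    },
    "readCapacityAutoScaling": {
        "autoScalingDisabled": False,
        "minimumUnits": 5,
        "maximumUnits": 10,
        "scalingPolicy": {
            "targetTrackingScalingPolicyConfiguration": {
                "targetValue": 50.0,
                "scaleInCooldown": 60,
                "scaleOutCooldown": 60,
            }
        },
    },
}

# Per-Region overrides for one replica, keyed by the same fields used in
# the --replica-specifications example shown later in this topic.
replication = [
    {
        "region": "us-east-1",
        "readCapacityUnits": 1,
        "readCapacityAutoScaling": {
            "autoScalingDisabled": False,
            "minimumUnits": 5,
            "maximumUnits": 20,
            "scalingPolicy": {
                "targetTrackingScalingPolicyConfiguration": {"targetValue": 70.0}
            },
        },
    }
]

with open("auto-scaling.json", "w") as f:
    json.dump(auto_scaling, f, indent=4)
with open("replication.json", "w") as f:
    json.dump(replication, f, indent=4)
```

You can then pass the generated files to `update-table` with the `file://` prefix, as in the command above.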

------

# View the provisioned capacity and auto scaling settings for a multi-Region table in Amazon Keyspaces
<a name="tables-mrr-view"></a>

You can view a multi-Region table's provisioned capacity and auto scaling settings on the Amazon Keyspaces console, using CQL, or the Amazon CLI. This section provides examples of how to do this using CQL and the Amazon CLI. 

------
#### [ Cassandra Query Language (CQL) ]

**View the provisioned capacity and auto scaling settings of a multi-Region table using CQL**
+ To view the auto scaling configuration of a multi-Region table, use the following command.

  ```
  SELECT * FROM system_multiregion_info.autoscaling WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
  ```

  The output for this command looks like the following:

  ```
   keyspace_name  | table_name | region         | provisioned_read_capacity_autoscaling_update                                                                                                                                                                      | provisioned_write_capacity_autoscaling_update
  ----------------+------------+----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    mykeyspace    |  mytable   | ap-southeast-1 | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}
    mykeyspace    |  mytable   | us-east-1      | {'minimum_units': 5, 'maximum_units': 20, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 70, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}
    mykeyspace    |  mytable   | eu-west-1      | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 60, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 60}}} | {'minimum_units': 5, 'maximum_units': 10, 'scaling_policy': {'target_tracking_scaling_policy_configuration': {'scale_out_cooldown': 0, 'disable_scale_in': false, 'target_value': 50, 'scale_in_cooldown': 0}}}
  ```

------
#### [ CLI ]

**View the provisioned capacity and auto scaling settings of a multi-Region table using the Amazon CLI**
+ To view the auto scaling configuration of a multi-Region table, you can use the `get-table-auto-scaling-settings` operation. The following CLI command is an example of this.

  ```
  aws keyspaces get-table-auto-scaling-settings --keyspace-name mykeyspace --table-name mytable
  ```

  You should see the following output.

  ```
  {
      "keyspaceName": "mykeyspace",
      "tableName": "mytable",
      "resourceArn": "arn:aws-cn:cassandra:us-east-1:777788889999:/keyspace/mykeyspace/table/mytable",
      "autoScalingSpecification": {
          "writeCapacityAutoScaling": {
              "autoScalingDisabled": false,
              "minimumUnits": 5,
              "maximumUnits": 10,
              "scalingPolicy": {
                  "targetTrackingScalingPolicyConfiguration": {
                      "disableScaleIn": false,
                      "scaleInCooldown": 0,
                      "scaleOutCooldown": 0,
                      "targetValue": 50.0
                  }
              }
          },
          "readCapacityAutoScaling": {
              "autoScalingDisabled": false,
              "minimumUnits": 5,
              "maximumUnits": 20,
              "scalingPolicy": {
                  "targetTrackingScalingPolicyConfiguration": {
                      "disableScaleIn": false,
                      "scaleInCooldown": 60,
                      "scaleOutCooldown": 60,
                      "targetValue": 70.0
                  }
              }
          }
      },
      "replicaSpecifications": [
          {
              "region": "us-east-1",
              "autoScalingSpecification": {
                  "writeCapacityAutoScaling": {
                      "autoScalingDisabled": false,
                      "minimumUnits": 5,
                      "maximumUnits": 10,
                      "scalingPolicy": {
                          "targetTrackingScalingPolicyConfiguration": {
                              "disableScaleIn": false,
                              "scaleInCooldown": 0,
                              "scaleOutCooldown": 0,
                              "targetValue": 50.0
                          }
                      }
                  },
                  "readCapacityAutoScaling": {
                      "autoScalingDisabled": false,
                      "minimumUnits": 5,
                      "maximumUnits": 20,
                      "scalingPolicy": {
                          "targetTrackingScalingPolicyConfiguration": {
                              "disableScaleIn": false,
                              "scaleInCooldown": 60,
                              "scaleOutCooldown": 60,
                              "targetValue": 70.0
                          }
                      }
                  }
              }
          },
          {
              "region": "eu-north-1",
              "autoScalingSpecification": {
                  "writeCapacityAutoScaling": {
                      "autoScalingDisabled": false,
                      "minimumUnits": 5,
                      "maximumUnits": 10,
                      "scalingPolicy": {
                          "targetTrackingScalingPolicyConfiguration": {
                              "disableScaleIn": false,
                              "scaleInCooldown": 0,
                              "scaleOutCooldown": 0,
                              "targetValue": 50.0
                          }
                      }
                  },
                  "readCapacityAutoScaling": {
                      "autoScalingDisabled": false,
                      "minimumUnits": 5,
                      "maximumUnits": 10,
                      "scalingPolicy": {
                          "targetTrackingScalingPolicyConfiguration": {
                              "disableScaleIn": false,
                              "scaleInCooldown": 60,
                              "scaleOutCooldown": 60,
                              "targetValue": 50.0
                          }
                      }
                  }
              }
          }
      ]
  }
  ```

------

# Turn off auto scaling for a table in Amazon Keyspaces
<a name="tables-mrr-autoscaling-off"></a>

This section provides examples of how to turn off auto scaling for a multi-Region table in provisioned capacity mode. You can do this on the Amazon Keyspaces console, using CQL or the Amazon CLI. 

**Important**  
We recommend using auto scaling for multi-Region tables that use provisioned capacity mode. For more information, see [Estimate and provision capacity for a multi-Region table in Amazon Keyspaces](tables-multi-region-capacity.md).

**Note**  
To delete the service-linked role that Application Auto Scaling uses, you must disable automatic scaling on all tables in the account across all Amazon Web Services Regions.

------
#### [ Console ]

**Turn off Amazon Keyspaces automatic scaling for an existing multi-Region table on the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. Choose the table that you want to work with and choose the **Capacity** tab.

1. In the **Capacity settings** section, choose **Edit**.

1. To disable Amazon Keyspaces automatic scaling, clear the **Scale automatically** check box. Disabling automatic scaling deregisters the table as a scalable target with Application Auto Scaling. To delete the service-linked role that Application Auto Scaling uses to access your Amazon Keyspaces table, follow the steps in [Deleting a service-linked role for Amazon Keyspaces](using-service-linked-roles-app-auto-scaling.md#delete-service-linked-role-app-auto-scaling). 

1. After you have updated the settings, choose **Save**.

------
#### [ Cassandra Query Language (CQL) ]

**Turn off auto scaling for a multi-Region table using CQL**
+  You can use `ALTER TABLE` to turn off auto scaling for an existing table. Note that you can't turn off auto scaling for an individual table replica.

  In the following example, auto scaling is turned off for the table's read capacity.

  ```
  ALTER TABLE mykeyspace.mytable
  WITH AUTOSCALING_SETTINGS = {
      'provisioned_read_capacity_autoscaling_update': {
          'autoscaling_disabled': true
      }
  };
  ```

------
#### [ CLI ]

**Turn off auto scaling for a multi-Region table using the Amazon CLI**
+  You can use the Amazon CLI `update-table` command to turn off auto scaling for an existing table. Note that you can't turn off auto scaling for an individual table replica. 

  In the following example, auto scaling is turned off for the table's read capacity.

  ```
  aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
  --auto-scaling-specification readCapacityAutoScaling={autoScalingDisabled=true}
  ```

------

# Set the provisioned capacity of a multi-Region table manually in Amazon Keyspaces
<a name="tables-mrr-capacity-manually"></a>

If you have to turn off auto scaling for a multi-Region table, you can provision the table's read capacity for a replica table manually using CQL or the Amazon CLI. 

**Note**  
We recommend using auto scaling for multi-Region tables that use provisioned capacity mode. For more information, see [Estimate and provision capacity for a multi-Region table in Amazon Keyspaces](tables-multi-region-capacity.md).

------
#### [ Cassandra Query Language (CQL) ]

**Setting the provisioned capacity of a multi-Region table manually using CQL**
+ You can use `ALTER TABLE` to provision the table's read capacity for a replica table manually.

  ```
  ALTER TABLE mykeyspace.mytable
  WITH CUSTOM_PROPERTIES = {  
      'capacity_mode': {  
          'throughput_mode': 'PROVISIONED',  
          'read_capacity_units': 1,  
          'write_capacity_units': 1  
      },
      'replica_updates': {
          'us-east-1': {
              'read_capacity_units': 2
           }
      }
  };
  ```

------
#### [ CLI ]

**Set the provisioned capacity of a multi-Region table manually using the Amazon CLI**
+ If you have to turn off auto scaling for a multi-Region table, you can use `update-table` to provision the table's read capacity for a replica table manually.

  ```
  aws keyspaces update-table --keyspace-name mykeyspace --table-name mytable \
  --capacity-specification throughputMode=PROVISIONED,readCapacityUnits=1,writeCapacityUnits=1 \
  --replica-specifications region="us-east-1",readCapacityUnits=5
  ```

------

# Backup and restore data with point-in-time recovery for Amazon Keyspaces
<a name="PointInTimeRecovery"></a>

Point-in-time recovery (PITR) helps protect your Amazon Keyspaces tables from accidental write or delete operations by providing you continuous backups of your table data. 

For example, suppose that a test script accidentally writes to a production Amazon Keyspaces table. With point-in-time recovery, you can restore that table's data to any second in the last 35 days during which PITR was enabled. If you delete a table with point-in-time recovery enabled, you can query the deleted table's data for 35 days (at no additional cost), and restore it to the state it was in just before the point of deletion. 

You can restore an Amazon Keyspaces table to a point in time by using the console, the Amazon SDK and the Amazon Command Line Interface (Amazon CLI), or Cassandra Query Language (CQL). For more information, see [Use point-in-time recovery in Amazon Keyspaces](PointInTimeRecovery_Tutorial.md).

Point-in-time operations have no performance or availability impact on the base table, and restoring a table doesn't consume additional throughput. 

For information about PITR quotas, see [Quotas for Amazon Keyspaces (for Apache Cassandra)](quotas.md). 

For information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).

**Topics**
+ [How point-in-time recovery works in Amazon Keyspaces](PointInTimeRecovery_HowItWorks.md)
+ [Use point-in-time recovery in Amazon Keyspaces](PointInTimeRecovery_Tutorial.md)

# How point-in-time recovery works in Amazon Keyspaces
<a name="PointInTimeRecovery_HowItWorks"></a>

This section provides an overview of how Amazon Keyspaces point-in-time recovery (PITR) works. For more information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).

**Topics**
+ [Time window for PITR continuous backups](#howitworks_backup_window)
+ [PITR restore settings](#howitworks_backup_settings)
+ [PITR restore of encrypted tables](#howitworks_backup_encryption)
+ [PITR restore of multi-Region tables](#howitworks_backup_multiRegion)
+ [PITR restore of tables with user-defined types (UDTs)](#howitworks_backup_udt)
+ [Table restore time with PITR](#howitworks_restore_time)
+ [Amazon Keyspaces PITR and integration with Amazon services](#howitworks_integration)

## Time window for PITR continuous backups
<a name="howitworks_backup_window"></a>

Amazon Keyspaces PITR uses two timestamps to maintain the time frame for which restorable backups are available for a table.
+ Earliest restorable time – Marks the time of the earliest restorable backup. The earliest restorable backup goes back up to 35 days or when PITR was enabled, whichever is more recent. The maximum backup window of 35 days can't be modified. 
+ Current time – The timestamp for the latest restorable backup is the current time. If no timestamp is provided during a restore, current time is used.

When PITR is enabled, you can restore to any point in time between `EarliestRestorableDateTime` and `CurrentTime`. You can only restore table data to a time when PITR was enabled. 

If you disable PITR and later reenable it, the start time for the first available backup resets to when PITR was reenabled. This means that disabling PITR erases your backup history.
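As a sketch of this window logic (illustrative only — Amazon Keyspaces computes these values for you; the function below is not part of any SDK):

```python
from datetime import datetime, timedelta, timezone

# The maximum backup window is fixed at 35 days and can't be modified.
MAX_BACKUP_WINDOW = timedelta(days=35)

def earliest_restorable_time(pitr_enabled_at: datetime, now: datetime) -> datetime:
    """The earliest restorable backup goes back up to 35 days, or to when
    PITR was last enabled, whichever is more recent."""
    return max(pitr_enabled_at, now - MAX_BACKUP_WINDOW)
```

For example, if PITR was enabled 100 days ago, the earliest restorable time is 35 days ago; if PITR was enabled (or reenabled) 10 days ago, the window starts there.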

**Note**  
Data definition language (DDL) operations on tables, such as schema changes, are performed asynchronously. You can only see completed operations in your restored table data, but you might see additional actions on your source table if they were in progress at the time of the restore. For a list of DDL statements, see [DDL statements (data definition language) in Amazon Keyspaces](cql.ddl.md).

A table doesn't have to be active to be restored. You can also restore a deleted table if PITR was enabled on it before deletion and the deletion occurred within the last 35 days.

**Note**  
If a new table is created with the same qualified name (for example, mykeyspace.mytable) as a previously deleted table, the deleted table will no longer be restorable. If you attempt to do this from the console, a warning is displayed.

## PITR restore settings
<a name="howitworks_backup_settings"></a>

When you restore a table using PITR, Amazon Keyspaces restores your source table's schema and data, as of the selected timestamp (`day:hour:minute:second`), to a new table. PITR doesn't overwrite existing tables.

In addition to the table's schema and data, PITR restores the `custom_properties` from the source table. Unlike the table's data, which is restored based on the selected timestamp between earliest restore time and current time, custom properties are always restored based on the table's settings as of the current time. 

The settings of the restored table match the settings of the source table at the time the restore was initiated. If you want to overwrite these settings during the restore, you can do so using `WITH custom_properties`. Custom properties include the following settings.
+ Read/write capacity mode
+ Provisioned throughput capacity settings
+ PITR settings

If the table is in provisioned capacity mode with auto scaling enabled, the restore operation also restores the table's auto scaling settings. You can overwrite them using the `autoscaling_settings` parameter in CQL or `autoScalingSpecification` with the CLI. For more information on auto scaling settings, see [Manage throughput capacity automatically with Amazon Keyspaces auto scaling](autoscaling.md).

When you do a full table restore, all table settings for the restored table come from the current settings of the source table at the time of the restore. 

For example, suppose that a table's provisioned throughput was recently lowered to 50 read capacity units and 50 write capacity units. You then restore the table's state to three weeks ago. At this time, its provisioned throughput was set to 100 read capacity units and 100 write capacity units. In this case, Amazon Keyspaces restores your table data to that point in time, but uses the current provisioned throughput settings (50 read capacity units and 50 write capacity units).

The following settings are not restored, and you must configure them manually for the new table. 
+ Amazon Keyspaces change data capture (CDC) streams
+ Amazon Identity and Access Management (IAM) policies
+ Amazon CloudWatch metrics and alarms
+ Tags (can be added to the CQL `RESTORE` statement using `WITH TAGS`)

## PITR restore of encrypted tables
<a name="howitworks_backup_encryption"></a>

When you restore a table using PITR, Amazon Keyspaces restores your source table's encryption settings. If the table was encrypted with an Amazon owned key (default), the table is restored with the same setting automatically. If the table you want to restore was encrypted using a customer managed key, the same customer managed key needs to be accessible to Amazon Keyspaces to restore the table data.

You can change the encryption settings of the table at the time of restore. To change from an Amazon owned key to a customer managed key, you need to supply a valid and accessible customer managed key at the time of restore. 

If you want to change from a customer managed key to an Amazon owned key, confirm that Amazon Keyspaces has access to the customer managed key of the source table to restore the table with an Amazon owned key. For more information about encryption at rest settings for tables, see [Encryption at rest: How it works in Amazon Keyspaces](encryption.howitworks.md).

**Note**  
If the table was deleted because Amazon Keyspaces lost access to your customer managed key, you need to ensure the customer managed key is accessible to Amazon Keyspaces before trying to restore the table. A table that was encrypted with a customer managed key can't be restored if Amazon Keyspaces doesn't have access to that key. For more information, see [Troubleshooting key access](https://docs.amazonaws.cn/kms/latest/developerguide/policy-evaluation.html) in the Amazon Key Management Service Developer Guide.

## PITR restore of multi-Region tables
<a name="howitworks_backup_multiRegion"></a>

You can restore a multi-Region table using PITR. For the restore operation to be successful, PITR has to be enabled on all replicas of the source table and both the source and the destination table have to be replicated to the same Amazon Web Services Regions.

Amazon Keyspaces restores the settings of the source table in each of the replicated Regions that are part of the keyspace. You can also override settings during the restore operation. For more information about settings that can be changed during the restore, see [PITR restore settings](#howitworks_backup_settings).

For more information about multi-Region replication, see [How multi-Region replication works in Amazon Keyspaces](multiRegion-replication_how-it-works.md).

## PITR restore of tables with user-defined types (UDTs)
<a name="howitworks_backup_udt"></a>

You can restore a table that uses UDTs. For the restore operation to be successful, the referenced UDTs have to exist and be valid in the keyspace.

If any required UDT is missing when you attempt to restore a table, Amazon Keyspaces tries to restore the UDT schema automatically and then continues to restore the table.

If you deleted and recreated the UDT, the recreated UDT is considered a new UDT, even if its schema is identical to the schema of the deleted UDT. In this case, Amazon Keyspaces restores the table with the new UDT schema and rejects a request to restore the table using the original UDT schema. If you want to restore the table with the old UDT schema, you can restore the table to a new keyspace.

If the UDT is missing and Amazon Keyspaces attempts to restore the UDT, the attempt fails if you have reached the maximum number of UDTs for the account in the Region.

For more information about UDT quotas and default values, see [Quotas and default values for user-defined types (UDTs) in Amazon Keyspaces](quotas.md#quotas-udts). For more information about working with UDTs, see [User-defined types (UDTs) in Amazon Keyspaces](udts.md).

## Table restore time with PITR
<a name="howitworks_restore_time"></a>

The time it takes you to restore a table is based on multiple factors and isn't always correlated directly to the size of the table. 

The following are some considerations for restore times.
+ You restore backups to a new table. It can take up to 20 minutes (even if the table is empty) to perform all the actions to create the new table and initiate the restore process.
+ Restore times for large tables with well-distributed data models can be several hours or longer.
+ If your source table contains data that is significantly skewed, the time to restore might increase. For example, if your table’s primary key is using the month of the year as a partition key, and all your data is from the month of December, you have skewed data.
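To make the skew in the last example concrete, the following sketch (hypothetical data, not an Amazon Keyspaces API) measures how unevenly rows spread across partitions:

```python
from collections import Counter

# Hypothetical data model: month of the year as the partition key.
# Nearly all rows land in one month, so the data is skewed, which can
# lengthen PITR restore times.
rows = [("december", i) for i in range(998)] + [("june", 1), ("july", 2)]
partition_sizes = Counter(month for month, _ in rows)

avg_size = sum(partition_sizes.values()) / len(partition_sizes)
# A ratio of 1.0 means rows are spread evenly; here the hot partition
# holds roughly 3x the average.
skew_ratio = max(partition_sizes.values()) / avg_size
```

A more granular partition key (for example, month plus day, or month plus a bucket suffix) spreads rows across more partitions and keeps this ratio closer to 1.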

A best practice when planning for disaster recovery is to regularly document average restore completion times and establish how these times affect your overall Recovery Time Objective.

## Amazon Keyspaces PITR and integration with Amazon services
<a name="howitworks_integration"></a>

The following PITR operations are logged using Amazon CloudTrail to enable continuous monitoring and auditing.
+ Create a new table with PITR enabled or disabled.
+ Enable or disable PITR on an existing table.
+ Restore an active or a deleted table.

For more information, see [Logging Amazon Keyspaces API calls with Amazon CloudTrail](logging-using-cloudtrail.md). 

You can perform the following PITR actions using Amazon CloudFormation. 
+ Create a new table with PITR enabled or disabled.
+ Enable or disable PITR on an existing table.

For more information, see the [Cassandra Resource Type Reference](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/AWS_Cassandra.html) in the [Amazon CloudFormation User Guide](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/).

# Use point-in-time recovery in Amazon Keyspaces
<a name="PointInTimeRecovery_Tutorial"></a>

With Amazon Keyspaces (for Apache Cassandra), you can restore tables to a specific point in time using point-in-time recovery (PITR). PITR enables you to restore a table to a prior state within the last 35 days, providing data protection and recovery capabilities. This feature is valuable in cases such as accidental data deletion, application errors, or testing. You can recover data quickly and efficiently, minimizing downtime and data loss. The following sections guide you through the process of restoring tables using PITR in Amazon Keyspaces. 

**Topics**
+ [Configure restore table IAM permissions for Amazon Keyspaces PITR](howitworks_restore_permissions.md)
+ [Configure PITR for a table in Amazon Keyspaces](configure_PITR.md)
+ [Turn off PITR for an Amazon Keyspaces table](disable_PITR.md)
+ [Restore a table from backup to a specified point in time in Amazon Keyspaces](restoretabletopointintime.md)
+ [Restore a deleted table using Amazon Keyspaces PITR](restoredeleted.md)

# Configure restore table IAM permissions for Amazon Keyspaces PITR
<a name="howitworks_restore_permissions"></a>

This section summarizes how to configure permissions for an Amazon Identity and Access Management (IAM) principal to restore Amazon Keyspaces tables. In IAM, the Amazon managed policy `AmazonKeyspacesFullAccess` includes the permissions to restore Amazon Keyspaces tables. To implement a custom policy with minimum required permissions, consider the requirements outlined in the next section.

To successfully restore a table, the IAM principal needs the following minimum permissions:
+ `cassandra:Restore` – The restore action is required for the target table to be restored.
+ `cassandra:Select` – The select action is required to read from the source table.
+ `cassandra:TagResource` – The tag action is only required if the restore operation adds tags to the restored table.

This is an example of a policy that grants minimum required permissions to a user to restore tables in keyspace `mykeyspace`.

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "cassandra:Restore",
            "cassandra:Select"
         ],
         "Resource":[
            "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/*",
            "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/system*"
         ]
      }
   ]
}
```

Additional permissions to restore a table might be required based on other selected features. For example, if the source table is encrypted at rest with a customer managed key, Amazon Keyspaces must have permissions to access the customer managed key of the source table to successfully restore the table. For more information, see [PITR restore of encrypted tables](PointInTimeRecovery_HowItWorks.md#howitworks_backup_encryption). 

If you are using IAM policies with [condition keys](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_condition-keys.html) to restrict incoming traffic to specific sources, you must ensure that Amazon Keyspaces has permission to perform a restore operation on your principal's behalf. You must add an `aws:ViaAWSService` condition key to your IAM policy if your policy restricts incoming traffic to any of the following:
+ VPC endpoints with `aws:SourceVpce`
+ IP ranges with `aws:SourceIp`
+ VPCs with `aws:SourceVpc`

The `aws:ViaAWSService` condition key allows access when any Amazon service makes a request using the principal's credentials. For more information, see [IAM JSON policy elements: Condition key](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*. 

The following is an example of a policy that restricts source traffic to a specific IP address and allows Amazon Keyspaces to restore a table on the principal's behalf.

```
{
   "Version":"2012-10-17",		 	 	 
   "Statement":[
      {
         "Sid":"CassandraAccessForCustomIp",
         "Effect":"Allow",
         "Action":"cassandra:*",
         "Resource":"*",
         "Condition":{
            "Bool":{
               "aws:ViaAWSService":"false"
            },
            "ForAnyValue:IpAddress":{
               "aws:SourceIp":[
                  "123.45.167.89"
               ]
            }
         }
      },
      {
         "Sid":"CassandraAccessForAwsService",
         "Effect":"Allow",
         "Action":"cassandra:*",
         "Resource":"*",
         "Condition":{
            "Bool":{
               "aws:ViaAWSService":"true"
            }
         }
      }
   ]
}
```

 For an example policy using the `aws:ViaAWSService` global condition key, see [VPC endpoint policies and Amazon Keyspaces point-in-time recovery (PITR)](vpc-endpoints.md#VPC_PITR_restore).

# Configure PITR for a table in Amazon Keyspaces
<a name="configure_PITR"></a>

You can configure a table in Amazon Keyspaces for backup and restore operations using PITR with the console, CQL, and the Amazon CLI.

When you create a new table using CQL or the Amazon CLI, you must explicitly enable PITR in the create table statement. When you create a new table using the console, PITR is enabled by default.

To learn how to restore a table, see [Restore a table from backup to a specified point in time in Amazon Keyspaces](restoretabletopointintime.md).

------
#### [ Console ]

**Configure PITR for a table using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables** and select the table you want to edit.

1. On the **Backups** tab, choose **Edit**.

1. In the **Edit point-in-time recovery settings** section, select **Enable Point-in-time recovery**.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Configure PITR for a table using CQL**

1. You can manage PITR settings for tables by using the `point_in_time_recovery` custom property.

   To enable PITR when you're creating a new table, you must set the status of `point_in_time_recovery` to `enabled`. You can use the following CQL command as an example.

   ```
   CREATE TABLE "my_keyspace1"."my_table1"(
   	"id" int,
   	"name" ascii,
   	"date" timestamp,
   	PRIMARY KEY("id"))
   WITH CUSTOM_PROPERTIES = {
   	'capacity_mode':{'throughput_mode':'PAY_PER_REQUEST'}, 
   	'point_in_time_recovery':{'status':'enabled'}
   };
   ```
**Note**  
If no point-in-time recovery custom property is specified, point-in-time recovery is disabled by default.

1. To enable PITR for an existing table using CQL, run the following CQL command.

   ```
   ALTER TABLE mykeyspace.mytable
   WITH custom_properties = {'point_in_time_recovery': {'status': 'enabled'}};
   ```

------
#### [ CLI ]

**Configure PITR for a table using the Amazon CLI**

1. You can manage PITR settings for tables by using the `UpdateTable` API.

   To enable PITR when you're creating a new table, you must include `--point-in-time-recovery 'status=ENABLED'` in the create table command. You can use the following Amazon CLI command as an example. The command has been broken into separate lines to improve readability.

   ```
   aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' 
               --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' 
               --point-in-time-recovery 'status=ENABLED'
   ```
**Note**  
If no point-in-time recovery value is specified, point-in-time recovery is disabled by default.

1. To confirm the point-in-time recovery setting for a table, you can use the following Amazon CLI command.

   ```
   aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'
   ```

1. To enable PITR for an existing table using the Amazon CLI, run the following command.

   ```
   aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' --point-in-time-recovery 'status=ENABLED'
   ```

------

# Turn off PITR for an Amazon Keyspaces table
<a name="disable_PITR"></a>

You can turn off PITR for an Amazon Keyspaces table at any time using the console, CQL, or the Amazon CLI. 

**Important**  
Disabling PITR deletes your backup history immediately, even if you reenable PITR on the table within 35 days.

To learn how to restore a table, see [Restore a table from backup to a specified point in time in Amazon Keyspaces](restoretabletopointintime.md).

------
#### [ Console ]

**Disable PITR for a table using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables** and select the table you want to edit.

1. On the **Backups** tab, choose **Edit**.

1. In the **Edit point-in-time recovery settings** section, clear the **Enable Point-in-time recovery** check box.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Disable PITR for a table using CQL**
+ To disable PITR for an existing table, run the following CQL command.

  ```
  ALTER TABLE mykeyspace.mytable
  WITH custom_properties = {'point_in_time_recovery': {'status': 'disabled'}};
  ```

------
#### [ CLI ]

**Disable PITR for a table using the Amazon CLI**
+ To disable PITR for an existing table, run the following Amazon CLI command.

  ```
  aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' --point-in-time-recovery 'status=DISABLED'
  ```

------

# Restore a table from backup to a specified point in time in Amazon Keyspaces
<a name="restoretabletopointintime"></a>

The following section demonstrates how to restore an existing Amazon Keyspaces table to a specified point in time. 

**Note**  
This procedure assumes that the table you're using has been configured with point-in-time recovery. To enable PITR for a table, see [Configure PITR for a table in Amazon Keyspaces](configure_PITR.md). 

**Important**  
 While a restore is in progress, don't modify or delete the Amazon Identity and Access Management (IAM) policies that grant the IAM principal (for example, user, group, or role) permission to perform the restore. Otherwise, unexpected behavior can result. For example, if you remove write permissions for a table while that table is being restored, the underlying `RestoreTableToPointInTime` operation can't write any of the restored data to the table.   
You can modify or delete permissions only after the restore operation is complete.
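
You can restore to any point between the table's earliest restorable timestamp, which reaches back up to 35 days, and the current time. The following Python sketch illustrates that window check when planning a restore; `is_valid_restore_timestamp` is a hypothetical helper, not an Amazon Keyspaces API.

```python
from datetime import datetime, timedelta, timezone

def is_valid_restore_timestamp(candidate, earliest_restorable, now=None):
    """Return True if candidate lies inside the PITR restore window,
    i.e. between the earliest restorable timestamp and the current time."""
    now = now or datetime.now(timezone.utc)
    return earliest_restorable <= candidate <= now

# Sample values: a table whose backup history reaches back the full 35 days.
now = datetime(2020, 7, 15, 12, 0, 0, tzinfo=timezone.utc)
earliest = now - timedelta(days=35)

print(is_valid_restore_timestamp(earliest, earliest, now))                  # True
print(is_valid_restore_timestamp(now - timedelta(days=36), earliest, now))  # False
```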

------
#### [ Console ]

**Restore a table to a specified point in time using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane on the left side of the console, choose **Tables**.

1. In the list of tables, choose the table you want to restore. 

1. On the **Backups** tab of the table, in the **Point-in-time recovery** section, choose **Restore**.

1. For the new table name, enter a new name for the restored table, for example **mytable_restored**.

1. To define the point in time for the restore operation, you can choose between two options:
   + Select the preconfigured **Earliest** time.
   + Select **Specify date and time** and enter the date and time you want to restore the new table to.
**Note**  
You can restore to any point in time between the **Earliest** time and the current time. Amazon Keyspaces restores your table data to the state it was in at the selected date and time (day:hour:minute:second). 

1. Choose **Restore** to start the restore process. 

   The table that is being restored is shown with the status **Restoring**. After the restore process is finished, the status of the restored table changes to **Active**.

------
#### [ Cassandra Query Language (CQL) ]

**Restore a table to a point in time using CQL**

1. You can restore an active table to a point in time between `earliest_restorable_timestamp` and the current time. The current time is the default.

   To confirm that point-in-time recovery is enabled for the table, query the `system_schema_mcs.tables` table as shown in this example.

   ```
   SELECT custom_properties
   FROM system_schema_mcs.tables
   WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable';
   ```

   Point-in-time recovery is enabled as shown in the following sample output. 

   ```
   custom_properties
   -----------------
   {
     ...,
     "point_in_time_recovery": {
       "earliest_restorable_timestamp":"2020-06-30T19:19:21.175Z",
       "status":"enabled"
     }
   }
   ```

1. 
   + Restore the table to the current time. When you omit the `WITH restore_timestamp = ...` clause, the current timestamp is used. 

     ```
     RESTORE TABLE mykeyspace.mytable_restored
     FROM TABLE mykeyspace.mytable;
     ```
   + You can also restore to a specific point in time, defined by a `restore_timestamp` in ISO 8601 format. You can specify any point in time during the last 35 days. For example, the following command restores the table to the `EarliestRestorableDateTime`. 

     ```
     RESTORE TABLE mykeyspace.mytable_restored
     FROM TABLE mykeyspace.mytable
     WITH restore_timestamp = '2020-06-30T19:19:21.175Z';
     ```

     For a full syntax description, see [RESTORE TABLE](cql.ddl.table.md#cql.ddl.table.restore) in the language reference.

1. To verify that the restore of the table was successful, query the `system_schema_mcs.tables` table to confirm the status of the table.

   ```
   SELECT status
   FROM system_schema_mcs.tables
   WHERE keyspace_name = 'mykeyspace' AND table_name = 'mytable_restored';
   ```

   The query shows the following output.

   ```
   status
   ------
   RESTORING
   ```

   The table that is being restored is shown with the status **Restoring**. After the restore process is finished, the status of the table changes to **Active**.

------
#### [ CLI ]

**Restore a table to a point in time using the Amazon CLI**

1. Create a simple table named `myTable` that has PITR enabled. The command has been broken up into separate lines for readability.

   ```
   aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' 
               --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' 
               --point-in-time-recovery 'status=ENABLED'
   ```

1. Confirm the properties of the new table and review the `earliestRestorableTimestamp` for PITR.

   ```
   aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'
   ```

   The output of this command returns the following.

   ```
   {
       "keyspaceName": "myKeyspace",
       "tableName": "myTable",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable",
       "creationTimestamp": "2022-06-20T14:34:57.049000-07:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "id",
                   "type": "int"
               },
               {
                   "name": "date",
                   "type": "timestamp"
               },
               {
                   "name": "name",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "id"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2022-06-20T14:34:57.049000-07:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "ENABLED",
           "earliestRestorableTimestamp": "2022-06-20T14:35:13.693000-07:00"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       }
   }
   ```

1. 
   + To restore a table to a point in time, specify a `restore_timestamp` in ISO 8601 format. You can choose any point in time during the last 35 days, in one-second intervals. For example, the following command restores the table to the `EarliestRestorableDateTime`. 

     ```
     aws keyspaces restore-table --source-keyspace-name 'myKeyspace' --source-table-name 'myTable' --target-keyspace-name 'myKeyspace' --target-table-name 'myTable_restored' --restore-timestamp "2022-06-20 21:35:14.693"
     ```

     The output of this command returns the ARN of the restored table.

     ```
     {
         "restoredTableARN": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable_restored"
     }
     ```
   + To restore the table to the current time, you can omit the `--restore-timestamp` parameter.

     ```
     aws keyspaces restore-table --source-keyspace-name 'myKeyspace' --source-table-name 'myTable' --target-keyspace-name 'myKeyspace' --target-table-name 'myTable_restored1'
     ```
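
The `--restore-timestamp` value in the restore example above is the table's `earliestRestorableTimestamp` converted to UTC and truncated to milliseconds. A minimal Python sketch of that conversion follows; `to_restore_timestamp` is an illustrative helper, not part of the Amazon CLI.

```python
from datetime import datetime, timezone

def to_restore_timestamp(ts):
    """Format an aware datetime as a UTC 'YYYY-MM-DD HH:MM:SS.mmm' string,
    matching the --restore-timestamp value shown in the example above."""
    utc = ts.astimezone(timezone.utc)
    return utc.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]  # keep milliseconds only

# The earliestRestorableTimestamp from the sample get-table output (-07:00 offset).
earliest = datetime.fromisoformat("2022-06-20T14:35:13.693000-07:00")
print(to_restore_timestamp(earliest))  # 2022-06-20 21:35:13.693
```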

------

# Restore a deleted table using Amazon Keyspaces PITR
<a name="restoredeleted"></a>

The following procedure shows how to restore a deleted table from backup to the time of deletion. You can do this using CQL or the Amazon CLI. 

**Note**  
This procedure assumes that PITR was enabled on the deleted table.



------
#### [ Cassandra Query Language (CQL) ]

**Restore a deleted table using CQL**

1. To confirm that point-in-time recovery is enabled for a deleted table, query the system table. Only tables with point-in-time recovery enabled are shown.

   ```
   SELECT custom_properties
   FROM system_schema_mcs.tables_history 
   WHERE keyspace_name = 'mykeyspace' AND table_name = 'my_table';
   ```

   The query shows the following output.

   ```
   custom_properties
   ------------------
   {
       ...,
      "point_in_time_recovery":{
         "restorable_until_time":"2020-08-04T00:48:58.381Z",
         "status":"enabled"
      }
   }
   ```

1. Restore the table to the time of deletion with the following sample statement.

   ```
   RESTORE TABLE mykeyspace.mytable_restored
   FROM TABLE mykeyspace.mytable;
   ```

------
#### [ CLI ]

**Restore a deleted table using the Amazon CLI**

1. Delete a table that you created previously that has PITR enabled. The following command is an example.

   ```
   aws keyspaces delete-table --keyspace-name 'myKeyspace' --table-name 'myTable'
   ```

1. Restore the deleted table to the time of deletion with the following command.

   ```
   aws keyspaces restore-table --source-keyspace-name 'myKeyspace' --source-table-name 'myTable' --target-keyspace-name 'myKeyspace' --target-table-name 'myTable_restored2'
   ```

   The output of this command returns the ARN of the restored table.

   ```
   {
       "restoredTableARN": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable_restored2"
   }
   ```

------

# Expire data with Time to Live (TTL) for Amazon Keyspaces (for Apache Cassandra)
<a name="TTL"></a>

Amazon Keyspaces (for Apache Cassandra) Time to Live (TTL) helps you simplify your application logic and optimize the price of storage by expiring data from tables automatically. Data that you no longer need is automatically deleted from your table based on the Time to Live value that you set. 

This makes it easier to comply with data retention policies based on business, industry, or regulatory requirements that define how long data needs to be retained or specify when data must be deleted. 

For example, you can use TTL in an AdTech application to schedule when data for specific ads expires and is no longer visible to clients. You can also use TTL to retire older data automatically and save on your storage costs. 

You can set a default TTL value for the entire table, and overwrite that value for individual rows and columns. TTL operations don't impact your application's performance. Also, the number of rows and columns marked to expire with TTL doesn't affect your table's availability.

Amazon Keyspaces automatically filters out expired data so that expired data isn't returned in query results or available for use in data manipulation language (DML) statements. Amazon Keyspaces typically deletes expired data from storage within 10 days of the expiration date. 

In rare cases, Amazon Keyspaces may not be able to delete data within 10 days if there is sustained activity on the underlying storage partition to protect availability. In these cases, Amazon Keyspaces continues to attempt to delete the expired data once traffic on the partition decreases. 

After the data is permanently deleted from storage, you stop incurring storage fees. 
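
Conceptually, each row carries an expiration timestamp of its write time plus its TTL value, and queries filter out rows whose timestamp has passed even before the data is physically removed. The following Python sketch illustrates that filtering model; it is illustrative only, not how Amazon Keyspaces is implemented.

```python
import time

def is_expired(write_time_epoch, ttl_seconds, now=None):
    """A TTL of 0 means the row never expires; otherwise the row
    expires ttl_seconds after it was written."""
    if ttl_seconds == 0:
        return False
    now = time.time() if now is None else now
    return now >= write_time_epoch + ttl_seconds

# Three rows written at the given epoch seconds, with per-row TTLs.
rows = [("a", 1000.0, 60), ("b", 1000.0, 0), ("c", 500.0, 100)]
visible = [rid for rid, wt, ttl in rows if not is_expired(wt, ttl, now=1050.0)]
print(visible)  # ['a', 'b'] -- row 'c' expired at epoch 600.0
```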

You can set, modify, or disable default TTL settings for new and existing tables by using the console, Cassandra Query Language (CQL), or the Amazon CLI. 

On tables with default TTL configured, you can use CQL statements to override the default TTL settings of the table and apply custom TTL values to rows and columns. For more information, see [Use the `INSERT` statement to set custom Time to Live (TTL) values for new rows](TTL-how-to-insert-cql.md) and [Use the `UPDATE` statement to edit custom Time to Live (TTL) settings for rows and columns](TTL-how-to-update-cql.md).

TTL pricing is based on the size of the rows being deleted or updated by using Time to Live. TTL operations are metered in units of `TTL deletes`. One TTL delete is consumed per KB of data per row that is deleted or updated. 

For example, updating a row that stores 2.5 KB of data and deleting one or more columns within the row at the same time requires three TTL deletes. Deleting an entire row that contains 3.5 KB of data requires four TTL deletes. 

For more information about pricing, see [Amazon Keyspaces (for Apache Cassandra) pricing](http://www.amazonaws.cn/keyspaces/pricing).
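
The examples above follow a round-up rule: the row's size is rounded up to the next whole KB to get the number of TTL deletes. A small sketch of that calculation (an illustration of the metering rule described here, not an official pricing formula):

```python
import math

def ttl_deletes(row_size_kb):
    """TTL deletes consumed when TTL deletes or updates a row:
    one delete per KB of data in the row, rounded up."""
    return math.ceil(row_size_kb)

print(ttl_deletes(2.5))  # 3 -- updating a row that stores 2.5 KB
print(ttl_deletes(3.5))  # 4 -- deleting a row that contains 3.5 KB
```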

**Topics**
+ [Amazon Keyspaces Time to Live and integration with Amazon services](#ttl-howitworks_integration)
+ [Create a new table with default Time to Live (TTL) settings](TTL-how-to-create-table.md)
+ [Update the default Time to Live (TTL) value of a table](TTL-how-to-update-default.md)
+ [Create table with custom Time to Live (TTL) settings enabled](TTL-how-to-enable-custom-new.md)
+ [Update table with custom Time to Live (TTL)](TTL-how-to-enable-custom-alter.md)
+ [Use the `INSERT` statement to set custom Time to Live (TTL) values for new rows](TTL-how-to-insert-cql.md)
+ [Use the `UPDATE` statement to edit custom Time to Live (TTL) settings for rows and columns](TTL-how-to-update-cql.md)

## Amazon Keyspaces Time to Live and integration with Amazon services
<a name="ttl-howitworks_integration"></a>

The following TTL metric is available in Amazon CloudWatch to enable continuous monitoring.
+ `TTLDeletes` – The units consumed to delete or update data in a row by using Time to Live (TTL).

For more information about how to monitor CloudWatch metrics, see [Monitoring Amazon Keyspaces with Amazon CloudWatch](monitoring-cloudwatch.md).

When you use Amazon CloudFormation, you can turn on TTL when creating an Amazon Keyspaces table. For more information, see the [Amazon CloudFormation User Guide](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-cassandra-table.html). 

# Create a new table with default Time to Live (TTL) settings
<a name="TTL-how-to-create-table"></a>

In Amazon Keyspaces, you can set a default TTL value for all rows in a table when the table is created. 

The default TTL value for a table is zero, which means that data doesn't expire automatically. If the default TTL value for a table is greater than zero, an expiration timestamp is added to each row.

TTL values are set in seconds, and the maximum configurable value is 630,720,000 seconds, which is the equivalent of 20 years. 
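
The 630,720,000-second maximum corresponds to 20 years of 365 days. Converting a retention period to the seconds value that TTL settings expect can be sketched as follows; `ttl_seconds` is an illustrative helper:

```python
def ttl_seconds(days=0, years=0):
    """Convert a retention period (using 365-day years) to seconds,
    the unit that Amazon Keyspaces TTL settings use."""
    return (days + years * 365) * 24 * 60 * 60

print(ttl_seconds(years=20))  # 630720000 -- the maximum configurable TTL
print(ttl_seconds(days=35))   # 3024000
print(ttl_seconds(years=1))   # 31536000
```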

After table creation, you can overwrite the table's default TTL setting for specific rows or columns with CQL DML statements. For more information, see [Use the `INSERT` statement to set custom Time to Live (TTL) values for new rows](TTL-how-to-insert-cql.md) and [Use the `UPDATE` statement to edit custom Time to Live (TTL) settings for rows and columns](TTL-how-to-update-cql.md).

When you enable TTL on a table, Amazon Keyspaces begins to store additional TTL-related metadata for each row. In addition, TTL uses expiration timestamps to track when rows or columns expire. The timestamps are stored as row metadata and contribute to the storage cost for the row. 

 After the TTL feature is enabled, you can't disable it for a table. Setting the table’s `default_time_to_live` to 0 disables default expiration times for new data, but it doesn't deactivate the TTL feature or revert the table back to the original Amazon Keyspaces storage metadata or write behavior. 

The following examples show how to create a new table with a default TTL value.

------
#### [ Console ]

**Create a new table with a Time to Live default value using the console.**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose **Create table**.

1. On the **Create table** page in the **Table details** section, select a keyspace and provide a name for the new table.

1. In the **Schema** section, create the schema for your table.

1. In the **Table settings** section, choose **Customize settings**.

1. Continue to **Time to Live (TTL)**.

   In this step, you select the default TTL settings for the table. 

   For the **Default TTL period**, enter the expiration time and choose its unit of time, for example seconds, days, or years. Amazon Keyspaces stores the value in seconds.

1. Choose **Create table**. Your table is created with the specified default TTL value.

------
#### [ Cassandra Query Language (CQL) ]

**Create a new table with a default TTL value using CQL**

1. The following statement creates a new table with the default TTL value set to 3,024,000 seconds, which represents 35 days.

   ```
   CREATE TABLE my_table (
                   userid uuid,
                   time timeuuid,
                   subject text,
                   body text,
                   user inet,
                   PRIMARY KEY (userid, time)
                   ) WITH default_time_to_live = 3024000;
   ```

1. To confirm the TTL settings for the new table, use the `cqlsh` `DESCRIBE` statement as shown in the following example. The output shows the default TTL setting for the table as `default_time_to_live`.

   ```
   DESC TABLE my_table;
   ```

   ```
   CREATE TABLE my_keyspace.my_table (
       userid uuid,
       time timeuuid,
       body text,
       subject text,
       user inet,
       PRIMARY KEY (userid, time)
   ) WITH CLUSTERING ORDER BY (time ASC)
       AND bloom_filter_fp_chance = 0.01
       AND caching = {'class': 'com.amazonaws.cassandra.DefaultCaching'}
       AND comment = ''
       AND compaction = {'class': 'com.amazonaws.cassandra.DefaultCompaction'}
       AND compression = {'class': 'com.amazonaws.cassandra.DefaultCompression'}
       AND crc_check_chance = 1.0
       AND dclocal_read_repair_chance = 0.0
       AND default_time_to_live = 3024000
       AND gc_grace_seconds = 7776000
       AND max_index_interval = 2048
       AND memtable_flush_period_in_ms = 3600000
       AND min_index_interval = 128
       AND read_repair_chance = 0.0
       AND speculative_retry = '99PERCENTILE';
   ```

------
#### [ CLI ]

**Create a new table with a default TTL value using the Amazon CLI**

1. You can use the following command to create a new table with the default TTL value set to one year.

   ```
   aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
               --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' \
               --default-time-to-live '31536000'
   ```

1. To confirm the TTL status of the table, you can use the following command.

   ```
   aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'
   ```

   The output of the command looks like the following example.

   ```
   {
       "keyspaceName": "myKeyspace",
       "tableName": "myTable",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable",
       "creationTimestamp": "2024-09-02T10:52:22.190000+00:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "id",
                   "type": "int"
               },
               {
                   "name": "date",
                   "type": "timestamp"
               },
               {
                   "name": "name",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "id"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2024-09-02T10:52:22.190000+00:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 31536000,
       "comment": {
           "message": ""
       },
       "replicaSpecifications": []
   }
   ```

------

# Update the default Time to Live (TTL) value of a table
<a name="TTL-how-to-update-default"></a>

You can update an existing table with a new default TTL value. TTL values are set in seconds, and the maximum configurable value is 630,720,000 seconds, which is the equivalent of 20 years.

When you enable TTL on a table, Amazon Keyspaces begins to store additional TTL-related metadata for each row. In addition, TTL uses expiration timestamps to track when rows or columns expire. The timestamps are stored as row metadata and contribute to the storage cost for the row. 

After TTL has been enabled for a table, you can overwrite the table's default TTL setting for specific rows or columns with CQL DML statements. For more information, see [Use the `INSERT` statement to set custom Time to Live (TTL) values for new rows](TTL-how-to-insert-cql.md) and [Use the `UPDATE` statement to edit custom Time to Live (TTL) settings for rows and columns](TTL-how-to-update-cql.md).

 After the TTL feature is enabled, you can't disable it for a table. Setting the table’s `default_time_to_live` to 0 disables default expiration times for new data, but it doesn't deactivate the TTL feature or revert the table back to the original Amazon Keyspaces storage metadata or write behavior. 

Follow these steps to update default Time to Live settings for existing tables using the console, CQL, or the Amazon CLI.

------
#### [ Console ]

**Update the default TTL value of a table using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. Choose the table that you want to update, and then choose the **Additional settings** tab.

1. Continue to **Time to Live (TTL)** and choose **Edit**.

1. For the **Default TTL period**, enter the expiration time and choose its unit of time, for example seconds, days, or years. Amazon Keyspaces stores the value in seconds. This doesn't change the TTL value of existing rows. 

1. When the TTL settings are defined, choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Update the default TTL value of a table using CQL**

1. You can use `ALTER TABLE` to edit the default Time to Live (TTL) settings of a table. To update the default TTL setting of the table to 2,592,000 seconds, which represents 30 days, you can use the following statement.

   ```
   ALTER TABLE my_table WITH default_time_to_live = 2592000;
   ```

1. To confirm the TTL settings for the updated table, use the `cqlsh` `DESCRIBE` statement as shown in the following example. The output shows the default TTL setting for the table as `default_time_to_live`.

   ```
   DESC TABLE my_table;
   ```

   The output of the statement should look similar to this example.

   ```
   CREATE TABLE my_keyspace.my_table (
       id int PRIMARY KEY,
       date timestamp,
       name text
   ) WITH bloom_filter_fp_chance = 0.01
       AND caching = {'class': 'com.amazonaws.cassandra.DefaultCaching'}
       AND comment = ''
       AND compaction = {'class': 'com.amazonaws.cassandra.DefaultCompaction'}
       AND compression = {'class': 'com.amazonaws.cassandra.DefaultCompression'}
       AND crc_check_chance = 1.0
       AND dclocal_read_repair_chance = 0.0
       AND default_time_to_live = 2592000
       AND gc_grace_seconds = 7776000
       AND max_index_interval = 2048
       AND memtable_flush_period_in_ms = 3600000
       AND min_index_interval = 128
       AND read_repair_chance = 0.0
       AND speculative_retry = '99PERCENTILE';
   ```

------
#### [ CLI ]

**Update the default TTL value of a table using the Amazon CLI**

1. You can use `update-table` to edit the default TTL value of a table. To update the default TTL setting of the table to 2,592,000 seconds, which represents 30 days, you can use the following command.

   ```
   aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' --default-time-to-live '2592000'
   ```

1. To confirm the updated default TTL value, you can use the following command.

   ```
   aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'
   ```

   The output of the command should look like the following example.

   ```
   {
       "keyspaceName": "myKeyspace",
       "tableName": "myTable",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable",
       "creationTimestamp": "2024-09-02T10:52:22.190000+00:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "id",
                   "type": "int"
               },
               {
                   "name": "date",
                   "type": "timestamp"
               },
               {
                   "name": "name",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "id"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2024-09-02T10:52:22.190000+00:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 2592000,
       "comment": {
           "message": ""
       },
       "replicaSpecifications": []
   }
   ```

------

# Create table with custom Time to Live (TTL) settings enabled
<a name="TTL-how-to-enable-custom-new"></a>

To create a new table with Time to Live custom settings that can be applied to rows and columns without enabling TTL default settings for the entire table, you can use the following commands.

**Note**  
If a table is created with `ttl` custom settings enabled, you can't disable the setting later.

------
#### [ Cassandra Query Language (CQL) ]

**Create a new table with custom TTL setting using CQL**
+ 

  ```
  CREATE TABLE my_keyspace.my_table (id int primary key) WITH CUSTOM_PROPERTIES={'ttl':{'status': 'enabled'}};
  ```

------
#### [ CLI ]

**Create a new table with custom TTL setting using the Amazon CLI**

1. You can use the following command to create a new table with TTL enabled.

   ```
   aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
                                   --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text}, {name=date,type=timestamp}],partitionKeys=[{name=id}]' \
                                   --ttl 'status=ENABLED'
   ```

1. To confirm that TTL is enabled for the table, you can use the following command.

   ```
   aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'
   ```

   The output of the command should look like the following example.

   ```
   {
       "keyspaceName": "myKeyspace",
       "tableName": "myTable",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable",
       "creationTimestamp": "2024-09-02T10:52:22.190000+00:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
                {
                   "name": "id",
                   "type": "int"
               },
               {
                   "name": "date",
                   "type": "timestamp"
               },
               {
                   "name": "name",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "id"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2024-09-02T11:18:55.796000+00:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       },
       "replicaSpecifications": []
   }
   ```
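If you only need the TTL status from this response, you can parse the JSON output, as in the following sketch (the response excerpt is shortened to the relevant fields).

```python
import json

# Excerpt of an `aws keyspaces get-table` response; the full response
# contains more fields, as shown in the example output above.
response_json = '{"tableName": "myTable", "ttl": {"status": "ENABLED"}, "defaultTimeToLive": 0}'

response = json.loads(response_json)
ttl_status = response["ttl"]["status"]
print(ttl_status)  # ENABLED
```

Alternatively, the Amazon CLI can return the value directly if you add `--query 'ttl.status' --output text` to the `get-table` command.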

------

# Update table with custom Time to Live (TTL)
<a name="TTL-how-to-enable-custom-alter"></a>

To enable Time to Live custom settings for a table so that TTL values can be applied to individual rows and columns without setting a TTL default value for the entire table, you can use the following commands.

**Note**  
After `ttl` is enabled, you can't disable it for the table.

------
#### [ Cassandra Query Language (CQL) ]

**Enable custom TTL settings for a table using CQL**
+ 

  ```
  ALTER TABLE my_table WITH CUSTOM_PROPERTIES={'ttl':{'status': 'enabled'}};
  ```

------
#### [ CLI ]

**Enable custom TTL settings for a table using the Amazon CLI**

1. You can use the following command to update the custom TTL setting of a table.

   ```
   aws keyspaces update-table --keyspace-name 'myKeyspace' --table-name 'myTable' --ttl 'status=ENABLED'
   ```

1. To confirm that TTL is now enabled for the table, you can use the following statement.

   ```
   aws keyspaces get-table --keyspace-name 'myKeyspace' --table-name 'myTable'
   ```

   The output of the statement should look similar to the following example.

   ```
   {
       "keyspaceName": "myKeyspace",
       "tableName": "myTable",
       "resourceArn": "arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable",
       "creationTimestamp": "2024-09-02T11:32:27.349000+00:00",
       "status": "ACTIVE",
       "schemaDefinition": {
           "allColumns": [
               {
                   "name": "id",
                   "type": "int"
               },
               {
                   "name": "date",
                   "type": "timestamp"
               },
               {
                   "name": "name",
                   "type": "text"
               }
           ],
           "partitionKeys": [
               {
                   "name": "id"
               }
           ],
           "clusteringKeys": [],
           "staticColumns": []
       },
       "capacitySpecification": {
           "throughputMode": "PAY_PER_REQUEST",
           "lastUpdateToPayPerRequestTimestamp": "2024-09-02T11:32:27.349000+00:00"
       },
       "encryptionSpecification": {
           "type": "AWS_OWNED_KMS_KEY"
       },
       "pointInTimeRecovery": {
           "status": "DISABLED"
       },
       "ttl": {
           "status": "ENABLED"
       },
       "defaultTimeToLive": 0,
       "comment": {
           "message": ""
       },
       "replicaSpecifications": []
   }
   ```

------

# Use the `INSERT` statement to set custom Time to Live (TTL) values for new rows
<a name="TTL-how-to-insert-cql"></a>

**Note**  
Before you can set custom TTL values for rows using the `INSERT` statement, you must first enable custom TTL on the table. For more information, see [Update table with custom Time to Live (TTL)](TTL-how-to-enable-custom-alter.md).

To overwrite a table's default TTL value by setting expiration dates for individual rows, you can use the `INSERT` statement:
+ `INSERT` – Insert a new row of data with a TTL value set.

Setting TTL values for new rows using the `INSERT` statement takes precedence over the default TTL setting of the table. 

The following CQL statement inserts a row of data into the table and sets the TTL for the row to 259,200 seconds (equivalent to 3 days), overriding the table's default TTL setting.

```
INSERT INTO my_table (userid, time, subject, body, user)
        VALUES (B79CB3BA-745E-5D9A-8903-4A02327A7E09, 96a29100-5e25-11ec-90d7-b5d91eceda0a, 'Message', 'Hello','205.212.123.123')
        USING TTL 259200;
```
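TTL values in CQL `USING TTL` clauses are specified in seconds. When converting from days or hours, it can help to compute the value explicitly, for example (a quick sketch in Python):

```python
# TTL values in CQL `USING TTL` clauses are expressed in seconds.
SECONDS_PER_DAY = 24 * 60 * 60

ttl_three_days = 3 * SECONDS_PER_DAY
print(ttl_three_days)  # 259200
```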

To confirm the TTL settings for the inserted row, use the following statement.

```
SELECT TTL(subject) FROM my_table;
```

# Use the `UPDATE` statement to edit custom Time to Live (TTL) settings for rows and columns
<a name="TTL-how-to-update-cql"></a>

**Note**  
Before you can set custom TTL values for rows and columns, you must enable TTL on the table first. For more information, see [Update table with custom Time to Live (TTL)](TTL-how-to-enable-custom-alter.md).

You can use the `UPDATE` statement to overwrite a table's default TTL value by setting the expiration date for individual rows and columns:
+ Rows – You can update an existing row of data with a custom TTL value.
+ Columns – You can update a subset of columns within existing rows with a custom TTL value.

Setting TTL values for rows and columns takes precedence over the default TTL setting for the table. 

To change the TTL settings of the 'subject' column inserted earlier from 259,200 seconds (3 days) to 86,400 seconds (one day), use the following statement.

```
UPDATE my_table USING TTL 86400 set subject = 'Updated Message' WHERE userid = B79CB3BA-745E-5D9A-8903-4A02327A7E09 and time = 96a29100-5e25-11ec-90d7-b5d91eceda0a;
```

You can run a simple select query to see the updated record before the expiration time.

```
SELECT * from my_table;
```

The query shows the following output.

```
userid                               | time                                 | body  | subject         | user
--------------------------------------+--------------------------------------+-------+-----------------+-----------------
b79cb3ba-745e-5d9a-8903-4a02327a7e09  | 96a29100-5e25-11ec-90d7-b5d91eceda0a | Hello | Updated Message | 205.212.123.123
50554d6e-29bb-11e5-b345-feff819cdc9f  | cf03fb21-59b5-11ec-b371-dff626ab9620 | Hello |         Message | 205.212.123.123
```

To confirm that the expiration was successful, run the same query again after the configured expiration time.

```
SELECT * from my_table;
```

The query shows the following output after the 'subject' column has expired.

```
userid                               | time                                 | body  | subject | user
--------------------------------------+--------------------------------------+-------+---------+-----------------
b79cb3ba-745e-5d9a-8903-4a02327a7e09  | 96a29100-5e25-11ec-90d7-b5d91eceda0a | Hello |    null | 205.212.123.123
50554d6e-29bb-11e5-b345-feff819cdc9f  | cf03fb21-59b5-11ec-b371-dff626ab9620 | Hello | Message | 205.212.123.123
```

# Using this service with an Amazon SDK
<a name="sdk-general-information-section"></a>

Amazon software development kits (SDKs) are available for many popular programming languages. Each SDK provides an API, code examples, and documentation that make it easier for developers to build applications in their preferred language.


| SDK documentation | 
| --- | 
| [Amazon CLI](https://docs.amazonaws.cn/cli) | 
| [Amazon SDK for Java](https://docs.amazonaws.cn/sdk-for-java) | 
| [Amazon SDK for JavaScript](https://docs.amazonaws.cn/sdk-for-javascript) | 
| [Amazon SDK for .NET](https://docs.amazonaws.cn/sdk-for-net) | 
| [Amazon SDK for PHP](https://docs.amazonaws.cn/sdk-for-php) | 
| [Amazon Tools for PowerShell](https://docs.amazonaws.cn/powershell) | 
| [Amazon SDK for Python (Boto3)](https://docs.amazonaws.cn/pythonsdk) | 
| [Amazon SDK for Ruby](https://docs.amazonaws.cn/sdk-for-ruby) | 
| [Amazon SDK for SAP ABAP](https://docs.amazonaws.cn/sdk-for-sapabap) | 
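For example, the following sketch uses the Amazon SDK for Python (Boto3) to list the tags of an Amazon Keyspaces table. The Region, account ID, and resource names are placeholder values, and the helper function names are illustrative:

```python
def table_arn(region: str, account_id: str, keyspace: str, table: str) -> str:
    """Build the resource ARN that the Amazon Keyspaces tagging APIs expect."""
    return f"arn:aws-cn:cassandra:{region}:{account_id}:/keyspace/{keyspace}/table/{table}"

def list_table_tags(arn: str, region: str = "us-east-1"):
    """Return the tags of a table; requires valid Amazon credentials."""
    import boto3  # Amazon SDK for Python

    client = boto3.client("keyspaces", region_name=region)
    return client.list_tags_for_resource(resourceArn=arn)["tags"]

arn = table_arn("us-east-1", "111122223333", "myKeyspace", "myTable")
print(arn)
# With credentials configured, you could then call:
# for tag in list_table_tags(arn):
#     print(tag["key"], "=", tag["value"])
```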

# Working with tags and labels for Amazon Keyspaces resources
<a name="tagging-keyspaces"></a>

 You can label Amazon Keyspaces (for Apache Cassandra) resources using *tags*. Tags let you categorize your resources in different ways—for example, by purpose, owner, environment, or other criteria. Tags can help you do the following: 
+  Quickly identify a resource based on the tags that you assigned to it. 
+  See Amazon bills broken down by tags. 
+ Control access to Amazon Keyspaces resources based on tags. For IAM policy examples using tags, see [Authorization based on Amazon Keyspaces tags](security_iam_service-with-iam.md#security_iam_service-with-iam-tags).

 Tagging is supported by Amazon services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon Keyspaces, and more. Efficient tagging can provide cost insights by enabling you to create reports across services that carry a specific tag. 

 To get started with tagging, do the following: 

1.  Understand [Restrictions for using tags to label resources in Amazon Keyspaces](TaggingRestrictions.md). 

1.  Create tags by using [Tag keyspaces, tables, and streams in Amazon Keyspaces](Tagging.Operations.md). 

1.  Use [Create cost allocation reports using tags for Amazon Keyspaces](CostAllocationReports.md) to track your Amazon costs per active tag. 

 Finally, it is good practice to follow optimal tagging strategies. For information, see [Amazon tagging strategies](https://d0.awsstatic.com/aws-answers/AWS_Tagging_Strategies.pdf). 

# Restrictions for using tags to label resources in Amazon Keyspaces
<a name="TaggingRestrictions"></a>

 Each tag consists of a key and a value, both of which you define. The following restrictions apply: 
+  Each Amazon Keyspaces keyspace, table, or stream can have only one tag with the same key. If you try to add an existing tag (same key), the existing tag value is updated to the new value. 
+ Tags applied to a keyspace don't automatically apply to tables within that keyspace. To apply the same tag to a keyspace and all its tables, each resource must be individually tagged.
+ Tags applied to a table don't automatically apply to the stream of that table. To apply the same tags to a table and the stream during table creation, you can use the `PropagateTagsOnEnable` flag when you create the table. Using this flag, Amazon Keyspaces applies the tags of the table to the stream during stream creation. When the stream is active, changes to the table tags don't apply to the stream.
+ When you create a multi-Region keyspace or table, any tags that you define during the creation process are automatically applied to all keyspaces and tables in all Regions. When you change existing tags using `ALTER KEYSPACE` or `ALTER TABLE`, the update is only applied to the keyspace or table in the Region where you're making the change.
+ A value acts as a descriptor within a tag category (key). In Amazon Keyspaces the value cannot be empty or null.
+  Tag keys and values are case sensitive. 
+  The maximum key length is 128 Unicode characters. 
+ The maximum value length is 256 Unicode characters. 
+  The allowed characters are letters, white space, and numbers, plus the following special characters: `+ - = . _ : /` 
+  The maximum number of tags per resource is 50.
+ Tag names and values that Amazon assigns automatically have the `aws:` prefix, which you can't use for your own tags. Amazon-assigned tag names don't count toward the tag limit of 50. User-assigned tag names have the prefix `user:` in the cost allocation report. 
+  You can't backdate the application of a tag. 
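Because a request with an invalid tag fails, it can be useful to check tags client-side before calling the tagging APIs. The following helper is a sketch based on the restrictions listed above (the function name is illustrative, and the character class approximates the allowed set for ASCII input):

```python
import re

# Letters, numbers, white space, and + - = . _ : / per the restrictions above.
_ALLOWED = re.compile(r"^[A-Za-z0-9\s+\-=._:/]+$")

def validate_tag(key: str, value: str) -> None:
    """Raise ValueError if a tag violates the documented restrictions."""
    if not key or len(key) > 128:
        raise ValueError("tag key must be 1-128 characters")
    if not value or len(value) > 256:
        raise ValueError("tag value must be 1-256 characters and cannot be empty")
    for text in (key, value):
        if not _ALLOWED.match(text):
            raise ValueError(f"tag contains disallowed characters: {text!r}")
    if key.startswith("aws:"):
        raise ValueError("the aws: prefix is reserved for Amazon-assigned tags")

validate_tag("environment", "production")  # passes silently
```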

# Tag keyspaces, tables, and streams in Amazon Keyspaces
<a name="Tagging.Operations"></a>

You can add, list, edit, or delete tags for keyspaces, tables, and streams using the Amazon Keyspaces console, the Amazon CLI, or Cassandra Query Language (CQL). You can then activate these user-defined tags so that they appear on the Amazon Billing and Cost Management console for cost allocation tracking. For more information, see [Create cost allocation reports using tags for Amazon Keyspaces](CostAllocationReports.md). 

 For bulk editing, you can also use Tag Editor on the console. For more information, see [Working with Tag Editor](https://docs.amazonaws.cn/awsconsolehelpdocs/latest/gsg/tag-editor.html) in the *Amazon Resource Groups User Guide*. 

For information about tag structure, see [Restrictions for using tags to label resources in Amazon Keyspaces](TaggingRestrictions.md). 

**Topics**
+ [Add tags when creating a new keyspace](Tagging.Operations.new.keyspace.md)
+ [Add tags to a keyspace](Tagging.Operations.existing.keyspace.md)
+ [Delete tags from a keyspace](Tagging.Operations.existing.keyspace.drop.md)
+ [View the tags of a keyspace](Tagging.Operations.view.keyspace.md)
+ [Add tags when creating a new table](Tagging.Operations.new.table.md)
+ [Add tags to a table](Tagging.Operations.existing.table.md)
+ [Delete tags from a table](Tagging.Operations.existing.table.drop.md)
+ [View the tags of a table](Tagging.Operations.view.table.md)
+ [Add tags to a new stream when creating a table](Tagging.Operations.new.table.stream.md)
+ [Add tags to a new stream for an existing table](Tagging.Operations.new.stream.md)
+ [Add new tags to a stream](Tagging.Operations.existing.stream.md)
+ [Delete tags from a stream](Tagging.Operations.existing.stream.drop.md)
+ [View the tags of a stream](Tagging.Operations.view.stream.md)

# Add tags when creating a new keyspace
<a name="Tagging.Operations.new.keyspace"></a>

You can use the Amazon Keyspaces console, CQL, or the Amazon CLI to add tags when you create a new keyspace. 

------
#### [ Console ]

**Set a tag when creating a new keyspace using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**, and then choose **Create keyspace**.

1. On the **Create keyspace** page, provide a name for the keyspace. 

1. Under **Tags** choose **Add new tag** and enter a key and a value.

1. Choose **Create keyspace**.

------
#### [ Cassandra Query Language (CQL) ]

**Set a tag when creating a new keyspace using CQL**
+ The following example creates a new keyspace with tags.

  ```
  CREATE KEYSPACE mykeyspace WITH TAGS = {'key1':'val1', 'key2':'val2'};
  ```

------
#### [ CLI ]

**Set a tag when creating a new keyspace using the Amazon CLI**
+ The following statement creates a new keyspace with tags.

  ```
  aws keyspaces create-keyspace --keyspace-name 'myKeyspace' --tags 'key=key1,value=val1' 'key=key2,value=val2'
  ```

------

# Add tags to a keyspace
<a name="Tagging.Operations.existing.keyspace"></a>

The following examples show how to add tags to a keyspace in Amazon Keyspaces.

------
#### [ Console ]

**Add a tag to an existing keyspace using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**.

1. Choose a keyspace from the list. Then choose the **Tags** tab where you can view the tags of the keyspace.

1. Choose **Manage tags** to add, edit, or delete tags.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Add a tag to an existing keyspace using CQL**
+ 

  ```
  ALTER KEYSPACE mykeyspace ADD TAGS {'key1':'val1', 'key2':'val2'};
  ```

------
#### [ CLI ]

**Add a tag to an existing keyspace using the Amazon CLI**
+ The following example shows how to add new tags to an existing keyspace.

  ```
  aws keyspaces tag-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/' --tags 'key=key3,value=val3' 'key=key4,value=val4'
  ```

------

# Delete tags from a keyspace
<a name="Tagging.Operations.existing.keyspace.drop"></a>

------
#### [ Console ]

**Delete a tag from an existing keyspace using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**.

1. Choose a keyspace from the list. Then choose the **Tags** tab where you can view the tags of the keyspace.

1. Choose **Manage tags** and delete the tags you don't need anymore.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Delete a tag from an existing keyspace using CQL**
+ 

  ```
  ALTER KEYSPACE mykeyspace DROP TAGS {'key1':'val1', 'key2':'val2'};
  ```

------
#### [ CLI ]

**Delete a tag from an existing keyspace using the Amazon CLI**
+ The following statement removes the specified tags from a keyspace.

  ```
  aws keyspaces untag-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/' --tags 'key=key3,value=val3' 'key=key4,value=val4'
  ```

------

# View the tags of a keyspace
<a name="Tagging.Operations.view.keyspace"></a>

The following examples show how to read tags using the console, CQL, or the Amazon CLI.

------
#### [ Console ]

**View the tags of a keyspace using the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Keyspaces**.

1. Choose a keyspace from the list. Then choose the **Tags** tab where you can view the tags of the keyspace.

------
#### [ Cassandra Query Language (CQL) ]

**View the tags of a keyspace using CQL**

To read the tags attached to a keyspace, use the following CQL statement.

```
SELECT * FROM system_schema_mcs.tags WHERE valid_where_clause;
```

The `WHERE` clause is required, and must use one of the following formats:
+ `keyspace_name = 'mykeyspace' AND resource_type = 'keyspace'`
+ `resource_id = arn`
+ The following query returns the tags of the specified keyspace.

  ```
  SELECT * FROM system_schema_mcs.tags WHERE keyspace_name = 'mykeyspace' AND resource_type = 'keyspace';
  ```

  The output of the query looks like the following.

  ```
  resource_id                                                       | keyspace_name | resource_name | resource_type | tags
  ------------------------------------------------------------------+---------------+---------------+---------------+----------------------------------
  arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/ | mykeyspace    | mykeyspace    | keyspace      | {'key1': 'val1', 'key2': 'val2'}
  ```

------
#### [ CLI ]

**View the tags of a keyspace using the Amazon CLI**
+ This example shows how to list the tags of the specified resource.

  ```
  aws keyspaces list-tags-for-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/'
  ```

  The output of the last command looks like this.

  ```
  {
      "tags": [
          {
              "key": "key1",
              "value": "val1"
          },
          {
              "key": "key2",
              "value": "val2"
          },
          {
              "key": "key3",
              "value": "val3"
          },
          {
              "key": "key4",
              "value": "val4"
          }
      ]
  }
  ```
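The `tags` field is returned as a list of key/value objects. When you process the response programmatically, converting the list to a dictionary can be convenient, for example:

```python
# Shortened `list-tags-for-resource` response, as shown above.
response = {
    "tags": [
        {"key": "key1", "value": "val1"},
        {"key": "key2", "value": "val2"},
    ]
}

tags = {tag["key"]: tag["value"] for tag in response["tags"]}
print(tags)  # {'key1': 'val1', 'key2': 'val2'}
```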

------

# Add tags when creating a new table
<a name="Tagging.Operations.new.table"></a>

You can use the Amazon Keyspaces console, CQL, or the Amazon CLI to add tags to new tables when you create them. 

------
#### [ Console ]

**Add a tag when creating a new table using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose **Create table**.

1. On the **Create table** page in the **Table details** section, select a keyspace and provide a name for the table.

1. In the **Schema** section, create the schema for your table.

1. In the **Table settings** section, choose **Customize settings**.

1. Continue to the **Table tags – *optional*** section, and choose **Add new tag** to create new tags.

1. Choose **Create table**.

------
#### [ Cassandra Query Language (CQL) ]

**Add tags when creating a new table using CQL**
+ The following example creates a new table with tags.

  ```
  CREATE TABLE mytable(...) WITH TAGS = {'key1':'val1', 'key2':'val2'};
  ```

------
#### [ CLI ]

**Add tags when creating a new table using the Amazon CLI**
+ The following example shows how to create a new table with tags. The command creates a table *myTable* in an already existing keyspace *myKeyspace*. Note that the command has been broken up into different lines to help with readability.

  ```
   aws keyspaces create-table --keyspace-name 'myKeyspace' --table-name 'myTable' \
               --schema-definition 'allColumns=[{name=id,type=int},{name=name,type=text},{name=date,type=timestamp}],partitionKeys=[{name=id}]' \
               --tags 'key=key1,value=val1' 'key=key2,value=val2'
  ```

------

# Add tags to a table
<a name="Tagging.Operations.existing.table"></a>

You can add tags to an existing table in Amazon Keyspaces using the console, CQL, or the Amazon CLI.

------
#### [ Console ]

**Add tags to a table using the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**.

1. Choose a table from the list and choose the **Tags** tab. 

1. Choose **Manage tags** to add tags to the table.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Add tags to a table using CQL**
+ The following statement shows how to add tags to an existing table.

  ```
  ALTER TABLE mykeyspace.mytable ADD TAGS {'key1':'val1', 'key2':'val2'};
  ```

------
#### [ CLI ]

**Add tags to a table using the Amazon CLI**
+ The following example shows how to add new tags to an existing table.

  ```
  aws keyspaces tag-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable' --tags 'key=key3,value=val3' 'key=key4,value=val4'
  ```

------

# Delete tags from a table
<a name="Tagging.Operations.existing.table.drop"></a>

------
#### [ Console ]

**Delete tags from a table using the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**.

1. Choose a table from the list and choose the **Tags** tab. 

1. Choose **Manage tags** to delete tags from the table.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Delete tags from a table using CQL**
+ The following statement shows how to delete tags from an existing table.

  ```
  ALTER TABLE mytable DROP TAGS {'key3':'val3', 'key4':'val4'};
  ```

------
#### [ CLI ]

**Delete tags from a table using the Amazon CLI**
+ The following statement removes the specified tags from a table.

  ```
  aws keyspaces untag-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable' --tags 'key=key3,value=val3' 'key=key4,value=val4'
  ```

------

# View the tags of a table
<a name="Tagging.Operations.view.table"></a>

The following examples show how to view the tags of a table in Amazon Keyspaces using the console, CQL, or the Amazon CLI.

------
#### [ Console ]

**View the tags of a table using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**.

1. Choose a table from the list and choose the **Tags** tab. 

------
#### [ Cassandra Query Language (CQL) ]

**View the tags of a table using CQL**

To read the tags attached to a table, use the following CQL statement.

```
SELECT * FROM system_schema_mcs.tags WHERE valid_where_clause;
```

The `WHERE` clause is required, and must use one of the following formats:
+ `keyspace_name = 'mykeyspace' AND resource_name = 'mytable'`
+ `resource_id = arn`
+ The following query returns the tags of the specified table.

  ```
  SELECT * FROM system_schema_mcs.tags WHERE keyspace_name = 'mykeyspace' AND resource_name = 'mytable';
  ```

  The output of that query looks like the following.

  ```
  resource_id                                                                    | keyspace_name | resource_name | resource_type | tags
  -------------------------------------------------------------------------------+---------------+---------------+---------------+----------------------------------
  arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/mykeyspace/table/mytable | mykeyspace    | mytable       | table         | {'key1': 'val1', 'key2': 'val2'}
  ```

------
#### [ CLI ]

**View the tags of a table using the Amazon CLI**
+ This example shows how to list the tags of the specified resource.

  ```
  aws keyspaces list-tags-for-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/myKeyspace/table/myTable'
  ```

  The output of the last command looks like this.

  ```
  {
      "tags": [
          {
              "key": "key1",
              "value": "val1"
          },
          {
              "key": "key2",
              "value": "val2"
          },
          {
              "key": "key3",
              "value": "val3"
          },
          {
              "key": "key4",
              "value": "val4"
          }
      ]
  }
  ```

------

# Add tags to a new stream when creating a table
<a name="Tagging.Operations.new.table.stream"></a>

You can use CQL or the Amazon CLI to add tags to a stream when you create a new table with a stream.

**Note**  
Amazon Keyspaces CDC requires the presence of a service-linked role (`AWSServiceRoleForAmazonKeyspacesCDC`) that publishes metric data from Amazon Keyspaces CDC streams to the `AWS/Cassandra` namespace in CloudWatch on your behalf. This role is created automatically for you. For more information, see [Using roles for Amazon Keyspaces CDC streams](using-service-linked-roles-CDC-streams.md).

------
#### [ Cassandra Query Language (CQL) ]

**Add tags to a stream when creating a new table using CQL**

1. To create a new table with a stream and apply the table tags automatically to the stream, you can use the `'propagate_tags': 'TABLE'` flag. The following statement is an example of this.

   ```
   CREATE TABLE mytable (pk int, ck text, PRIMARY KEY(pk))
   WITH TAGS={'key1':'val1', 'key2':'val2'}
   AND cdc = TRUE
   AND CUSTOM_PROPERTIES={
       'cdc_specification': {
           'view_type': 'NEW_IMAGE',
           'propagate_tags': 'TABLE'
       }
   };
   ```

1. To apply new tags to the stream, you can use the following example.

   ```
   CREATE TABLE mytable (pk int, ck text, PRIMARY KEY(pk))
   WITH TAGS={'key1':'val1', 'key2':'val2'}
   AND cdc = TRUE
   AND CUSTOM_PROPERTIES={
       'cdc_specification': {
           'view_type': 'NEW_IMAGE',
           'tags': { 'key': 'string', 'value': 'string' }
       }
   };
   ```

------
#### [ CLI ]

**Add tags to a stream when creating a new table using the Amazon CLI**

1. To create a table with a stream and apply the table tags automatically to the stream, you can use the `propagateTags=Table` flag. The following code is an example of this.

   ```
   aws keyspaces create-table \
   --keyspace-name 'my_keyspace' \
   --table-name 'my_table' \
   --schema-definition 'allColumns=[{name=pk,type=int},{name=ck,type=text}],clusteringKeys=[{name=ck,orderBy=ASC}],partitionKeys=[{name=pk}]' \
   --tags key=tag_key,value=tag_value \
   --cdc-specification propagateTags=TABLE,status=ENABLED,viewType=NEW_IMAGE
   ```

1. To apply different tags to the stream, you can use the following example.

   ```
   aws keyspaces create-table \
   --keyspace-name 'my_keyspace' \
   --table-name 'my_table' \
   --schema-definition 'allColumns=[{name=pk,type=int},{name=ck,type=text}],clusteringKeys=[{name=ck,orderBy=ASC}],partitionKeys=[{name=pk}]' \
   --tags key=tag_key,value=tag_value \
   --cdc-specification 'status=ENABLED,viewType=NEW_IMAGE,tags=[{key=tag_key, value=tag_value}]'
   ```

------

# Add tags to a new stream for an existing table
<a name="Tagging.Operations.new.stream"></a>

You can add tags when you create a new stream for an existing table. You can either use the `PropagateTags` flag to apply the table tags to the stream or specify new tags for the stream. You can use CQL or the Amazon CLI to tag a new stream.

**Note**  
Amazon Keyspaces CDC requires the presence of a service-linked role (`AWSServiceRoleForAmazonKeyspacesCDC`) that publishes metric data from Amazon Keyspaces CDC streams to the `AWS/Cassandra` namespace in CloudWatch on your behalf. This role is created automatically for you. For more information, see [Using roles for Amazon Keyspaces CDC streams](using-service-linked-roles-CDC-streams.md).

------
#### [ Console ]

**Add tags when creating a new stream using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose the table you want to add a stream for.

1. Choose the **Streams** tab.

1. In the **Stream details** section, choose **Edit**.

1. Select **Turn on streams**.

1. Select the **View type** and continue to **Tags** to create tags for the stream.

1. You can select one of the following options:
   + **No tags** – Use this option if you don't want to create any tags for the stream.
   + **Copy tags from table** – Use this option if you want to copy the tags from the table to the stream. After copying the tags, you can edit them for the stream. Note that this option is only available if the table has tags.
   + **Add new tags** – You can add up to 50 tags for the stream by choosing **Add new tag**.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Add tags when creating a new stream**

1. To create a new stream for an existing table and apply the table's tags to the stream, you can use the `'propagate_tags': 'TABLE'` flag. The following statement is an example of this.

   ```
   ALTER TABLE mytable WITH cdc = TRUE AND CUSTOM_PROPERTIES={ 'cdc_specification': { 'view_type': 'NEW_IMAGE', 'propagate_tags': 'TABLE' } };
   ```

1. To create a new stream for an existing table and specify new tags, you can use the following example.

   ```
   ALTER TABLE mytable WITH cdc = TRUE AND CUSTOM_PROPERTIES={ 'cdc_specification': { 'view_type': 'NEW_IMAGE', 'tags': { 'key': 'string', 'value': 'string' }} };
   ```

------
#### [ CLI ]

**Add tags when creating a new stream using the Amazon CLI**

1. To create a new stream with tags, you can use the `propagateTags=TABLE` flag to apply the table's tags automatically to the stream. The following code is an example of this.

   ```
   aws keyspaces update-table \
   --keyspace-name 'my_keyspace' \
   --table-name 'my_table' \
   --cdc-specification propagateTags=TABLE,status=ENABLED,viewType=NEW_IMAGE
   ```

1. To create a new stream for an existing table and specify new tags, you can use the following example.

   ```
   aws keyspaces update-table \
   --keyspace-name 'my_keyspace' \
   --table-name 'my_table' \
   --cdc-specification 'status=ENABLED,viewType=NEW_IMAGE,tags=[{key=tag_key, value=tag_value}]'
   ```

------

# Add new tags to a stream
<a name="Tagging.Operations.existing.stream"></a>

You can add new tags to an existing stream in Amazon Keyspaces using CQL or the Amazon CLI. You can only add tags to the latest stream.

------
#### [ Console ]

**Add tags to an existing stream (console)**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**, and then choose the table with the stream that you want to tag.

1. Choose the **Streams** tab.

1. In the **Tags** section, choose **Manage tags**.

1. Choose **Add new tag** to add a new tag. Repeat this step to add up to 50 tags.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Add tags to a stream using CQL**
+ The following statement shows how to add tags to an existing stream.

  ```
  ALTER TABLE mykeyspace.mytable ADD TAGS_FOR_CDC {'key1':'val1', 'key2':'val2'};
  ```

------
#### [ CLI ]

**Add tags to a stream using the Amazon CLI**
+ The following example shows how to add new tags to an existing stream.

  ```
  aws keyspaces tag-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/2025-05-11T21:21:33.291' --tags 'key=key3,value=val3' 'key=key4,value=val4'
  ```

------

# Delete tags from a stream
<a name="Tagging.Operations.existing.stream.drop"></a>

To delete tags from a stream, you can use CQL or the Amazon CLI. You can only delete the tags for the latest stream. 

------
#### [ Console ]

**Delete tags from a stream using the Amazon Keyspaces console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**.

1. Choose a table from the list and choose the **Streams** tab. 

1. In the **Tags** section, choose **Manage tags** to delete tags from the stream.

1. Next to the tag that you want to delete, choose **Remove**.

1. Choose **Save changes**.

------
#### [ Cassandra Query Language (CQL) ]

**Delete tags from a stream using CQL**
+ The following statement shows how to delete tags from an existing stream.

  ```
  ALTER TABLE mytable DROP TAGS_FOR_CDC {'key3':'val3', 'key4':'val4'};
  ```

------
#### [ CLI ]

**Delete tags from a stream using the Amazon CLI**
+ The following statement removes the specified tags from a stream.

  ```
  aws keyspaces untag-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/2025-05-11T21:21:33.291' --tags 'key=key3,value=val3' 'key=key4,value=val4'
  ```

------

# View the tags of a stream
<a name="Tagging.Operations.view.stream"></a>

The following examples show how to view the tags of a stream in Amazon Keyspaces using CQL or the Amazon CLI.

------
#### [ Console ]

**View the tags of a stream using the console**

1. Sign in to the Amazon Web Services Management Console, and open the Amazon Keyspaces console at [https://console.amazonaws.cn/keyspaces/home](https://console.amazonaws.cn/keyspaces/home).

1. In the navigation pane, choose **Tables**.

1. Choose a table from the list and choose the **Streams** tab. 

1. You can view the tags of the stream in the **Tags** section.

------
#### [ Cassandra Query Language (CQL) ]

**View the tags of a stream using CQL**

To read the tags attached to a stream, you must specify the resource ARN of the stream in the `WHERE` clause. The following CQL syntax is an example of this.

```
SELECT * FROM system_schema_mcs.tags WHERE resource_id = stream_arn;
```
+ The following query returns the tags for the specified stream.

  ```
  SELECT tags FROM system_schema_mcs.tags WHERE resource_id = 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/2025-05-06T17:17:39.800';
  ```

  The output of that query looks like the following.

  ```
   resource_id                                                                                                       | keyspace_name | resource_name           | resource_type | tags
  ---------------------------------------------------------------------------------------------------------------------+---------------+-------------------------+---------------+----------------------
   arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/2025-05-06T17:17:39.800     |   my_keyspace | 2025-05-06T17:17:39.800 |        stream | {'tagkey': 'tagval'}
  ```

------
#### [ CLI ]

**View the tags of a stream using the Amazon CLI**
+ This example shows how to list the tags of the specified stream.

  ```
  aws keyspaces list-tags-for-resource --resource-arn 'arn:aws-cn:cassandra:us-east-1:111122223333:/keyspace/my_keyspace/table/my_table/stream/2025-05-11T21:21:33.291'
  ```

  The output of the last command looks like this.

  ```
  {
      "tags": [
          {
              "key": "key1",
              "value": "val1"
          },
          {
              "key": "key2",
              "value": "val2"
          },
          {
              "key": "key3",
              "value": "val3"
          },
          {
              "key": "key4",
              "value": "val4"
          }
      ]
  }
  ```
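The `tags` array in the JSON output pairs each tag key with its value. If you post-process the CLI output in a script, one way to flatten it into a dictionary is sketched below (this helper is an illustration, not part of the Amazon CLI or SDK):

```python
import json

# Sample output in the shape returned by `aws keyspaces list-tags-for-resource`,
# as shown above.
cli_output = '''{"tags": [{"key": "key1", "value": "val1"},
                          {"key": "key2", "value": "val2"}]}'''

def tags_to_dict(output: str) -> dict:
    """Flatten the tags array from list-tags-for-resource into a dict."""
    return {t["key"]: t["value"] for t in json.loads(output)["tags"]}

print(tags_to_dict(cli_output))  # → {'key1': 'val1', 'key2': 'val2'}
```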

------

# Create cost allocation reports using tags for Amazon Keyspaces
<a name="CostAllocationReports"></a>

Amazon uses tags to organize resource costs on your cost allocation report. Amazon provides two types of cost allocation tags:
+ An Amazon-generated tag. Amazon defines, creates, and applies this tag for you.
+ User-defined tags. You define, create, and apply these tags.

You must activate both types of tags separately before they can appear in Cost Explorer or on a cost allocation report. 

 To activate Amazon-generated tags: 

1. Sign in to the Amazon Web Services Management Console and open the Billing and Cost Management console at [https://console.amazonaws.cn/billing/home#/](https://console.amazonaws.cn/billing/home#/).

1.  In the navigation pane, choose **Cost Allocation Tags**. 

1.  Under **Amazon-Generated Cost Allocation Tags**, choose **Activate**. 

 To activate user-defined tags: 

1. Sign in to the Amazon Web Services Management Console and open the Billing and Cost Management console at [https://console.amazonaws.cn/billing/home#/](https://console.amazonaws.cn/billing/home#/).

1.  In the navigation pane, choose **Cost Allocation Tags**. 

1.  Under **User-Defined Cost Allocation Tags**, choose **Activate**. 

 After you create and activate tags, Amazon generates a cost allocation report with your usage and costs grouped by your active tags. The cost allocation report includes all of your Amazon costs for each billing period. The report includes both tagged and untagged resources, so that you can clearly organize the charges for resources. 

**Note**  
 Currently, any data transferred out from Amazon Keyspaces won't be broken down by tags on cost allocation reports. 

 For more information, see [Using cost allocation tags](https://docs.amazonaws.cn/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html). 

# Create Amazon Keyspaces resources with Amazon CloudFormation
<a name="creating-resources-with-cloudformation"></a>

Amazon Keyspaces is integrated with Amazon CloudFormation, a service that helps you model and set up your Amazon Keyspaces keyspaces and tables so that you can spend less time creating and managing your resources and infrastructure. You create a template that describes the keyspaces and tables that you want, and Amazon CloudFormation takes care of provisioning and configuring those resources for you. 

When you use Amazon CloudFormation, you can reuse your template to set up your Amazon Keyspaces resources consistently and repeatedly. Just describe your resources once, and then provision the same resources over and over in multiple Amazon Web Services accounts and Regions. 

## Amazon Keyspaces and Amazon CloudFormation templates
<a name="working-with-templates"></a>

To provision and configure resources for Amazon Keyspaces, you must understand [Amazon CloudFormation templates](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/template-guide.html). Templates are formatted text files in JSON or YAML. These templates describe the resources that you want to provision in your Amazon CloudFormation stacks. If you're unfamiliar with JSON or YAML, you can use Amazon CloudFormation Designer to help you get started with Amazon CloudFormation templates. For more information, see [What is Amazon CloudFormation Designer?](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/working-with-templates-cfn-designer.html) in the *Amazon CloudFormation User Guide*.

Amazon Keyspaces supports creating keyspaces and tables in Amazon CloudFormation. For the tables you create using Amazon CloudFormation templates, you can specify the schema, read/write mode, provisioned throughput settings, and other supported features. For more information, including examples of JSON and YAML templates for keyspaces and tables, see [Amazon Keyspaces (for Apache Cassandra) resource type reference](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/AWS_Cassandra.html) in the *Amazon CloudFormation Template Reference*.
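As a minimal illustration, a YAML template along these lines creates a keyspace and a table with a simple schema. The resource names and the property subset shown here are examples; consult the resource type reference linked above for the authoritative property schema.

```yaml
# Minimal sketch of a CloudFormation template for Amazon Keyspaces.
# Resource and column names are example values.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  MyKeyspace:
    Type: AWS::Cassandra::Keyspace
    Properties:
      KeyspaceName: my_keyspace
  MyTable:
    Type: AWS::Cassandra::Table
    Properties:
      KeyspaceName: !Ref MyKeyspace   # Ref returns the keyspace name
      TableName: my_table
      PartitionKeyColumns:
        - ColumnName: id
          ColumnType: uuid
      RegularColumns:
        - ColumnName: name
          ColumnType: text
```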

## Learn more about Amazon CloudFormation
<a name="learn-more-cloudformation"></a>

To learn more about Amazon CloudFormation, see the following resources:
+ [Amazon CloudFormation](https://www.amazonaws.cn/cloudformation/)
+ [Amazon CloudFormation User Guide](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/Welcome.html)
+ [Amazon CloudFormation command line interface User Guide](https://docs.amazonaws.cn/cloudformation-cli/latest/userguide/what-is-cloudformation-cli.html)