

Amazon Redshift will no longer support the creation of new Python UDFs starting with Patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://amazonaws-china.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Logging and monitoring in Amazon Redshift

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon Redshift and your Amazon solutions. You can collect monitoring data from all of the parts of your Amazon solution so that you can more easily debug a multi-point failure if one occurs. Amazon provides several tools for monitoring your Amazon Redshift resources and responding to potential incidents:

**Amazon CloudWatch Alarms**  
Using Amazon CloudWatch alarms, you can watch a single metric over a time period that you specify. If the metric exceeds a given threshold, a notification is sent to an Amazon SNS topic or an Amazon Auto Scaling policy. CloudWatch alarms don't invoke actions simply because they are in a particular state; the state must have changed and been maintained for a specified number of periods. For more information, see [Creating an alarm](performance-metrics-alarms.md). For a list of metrics, see [Performance data in Amazon Redshift](metrics-listing.md).

**Amazon CloudTrail Logs**  
CloudTrail provides a record of API operations taken by a user, an IAM role, or an Amazon service in Amazon Redshift. Using the information collected by CloudTrail, you can determine the request that was made to Amazon Redshift, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see [Logging with CloudTrail](logging-with-cloudtrail.md).

# Database audit logging

Amazon Redshift logs information about connections and user activities in your database. These logs help you to monitor the database for security and troubleshooting purposes, a process called *database auditing*. The logs can be stored in:
+ *Amazon S3 buckets* - This provides access with data-security features for users who are responsible for monitoring activities in the database.
+ *Amazon CloudWatch* - You can view audit-logging data using the features built into CloudWatch, such as visualization features and setting actions.

**Note**  
[SYS\_CONNECTION\_LOG](https://docs.amazonaws.cn/redshift/latest/dg/SYS_CONNECTION_LOG.html) collects connection log data for Amazon Redshift Serverless. When you collect audit logging data for Amazon Redshift Serverless, it can't be sent to log files, only to CloudWatch.

**Topics**
+ [Amazon Redshift logs](#db-auditing-logs)
+ [Audit logs and Amazon CloudWatch](#db-auditing-cloudwatch-provisioned)
+ [Enabling audit logging](db-auditing-console.md)
+ [Secure logging](db-auditing-secure-logging.md)

## Amazon Redshift logs


Amazon Redshift logs information in the following log files:
+ *Connection log* – Logs authentication attempts, connections, and disconnections.
+ *User log* – Logs information about changes to database user definitions.
+ *User activity log* – Logs each query before it's run on the database.

The connection and user logs are useful primarily for security purposes. You can use the connection log to monitor information about users connecting to the database and related connection information. This information might be their IP address, when they made the request, what type of authentication they used, and so on. You can use the user log to monitor changes to the definitions of database users. 

The user activity log is useful primarily for troubleshooting purposes. It tracks information about the types of queries that both the users and the system perform in the database. 

The connection log and user log both correspond to information that is stored in the system tables in your database. You can use the system tables to obtain the same information, but the log files provide a simpler mechanism for retrieval and review. The log files rely on Amazon S3 permissions rather than database permissions to perform queries against the tables. Additionally, by viewing the information in log files rather than querying the system tables, you reduce any impact of interacting with the database.

**Note**  
Log files are not as current as the system log tables, [STL\_USERLOG](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_USERLOG.html) and [STL\_CONNECTION\_LOG](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_CONNECTION_LOG.html). Records older than, but not including, the latest record are copied to log files.

**Note**  
For Amazon Redshift Serverless, [SYS\_CONNECTION\_LOG](https://docs.amazonaws.cn/redshift/latest/dg/SYS_CONNECTION_LOG.html) collects connection-log data. When you collect audit logging data for Amazon Redshift Serverless, it can't be sent to log files, only to CloudWatch.

### Connection log


Logs authentication attempts, connections, and disconnections. The following table describes the information in the connection log. For more information about these fields, see [STL\_CONNECTION\_LOG](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_CONNECTION_LOG.html) in the *Amazon Redshift Database Developer Guide*. For more information about the connection log data collected for Amazon Redshift Serverless, see [SYS\_CONNECTION\_LOG](https://docs.amazonaws.cn/redshift/latest/dg/SYS_CONNECTION_LOG.html).

[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/redshift/latest/mgmt/db-auditing.html)

### User log


 Records details for the following changes to a database user:
+ Create user
+ Drop user
+ Alter user (rename)
+ Alter user (alter properties)

[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/redshift/latest/mgmt/db-auditing.html)

Query the [SYS\_USERLOG](https://docs.amazonaws.cn/redshift/latest/dg/SYS_USERLOG.html) system view to find additional information about changes to users. This view includes log data from Amazon Redshift Serverless.

### User activity log


Logs each query before it is run on the database.

[\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/redshift/latest/mgmt/db-auditing.html)

## Audit logs and Amazon CloudWatch


Audit logging is not turned on by default in Amazon Redshift. When you turn on logging on your cluster, Amazon Redshift either exports logs to Amazon CloudWatch or creates and uploads log files to Amazon S3. The logs capture data from the time audit logging is enabled to the present time. Each logging update is a continuation of the previous logs.

Audit logging to CloudWatch or to Amazon S3 is an optional process. Logging to system tables is not optional and happens automatically. For more information about logging to system tables, see [System Tables Reference](https://docs.amazonaws.cn/redshift/latest/dg/cm_chap_system-tables.html) in the Amazon Redshift Database Developer Guide. 

The connection log, user log, and user activity log are enabled together by using the Amazon Web Services Management Console, the Amazon Redshift API, or the Amazon Command Line Interface (Amazon CLI). For the user activity log, you must also enable the `enable_user_activity_logging` database parameter. If you enable only the audit logging feature, but not the associated parameter, the database audit logs record information for only the connection log and user log, not for the user activity log. The `enable_user_activity_logging` parameter is disabled (`false`) by default. Set it to `true` to enable the user activity log. For more information, see [Amazon Redshift parameter groups](working-with-parameter-groups.md).
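As an illustrative sketch (not taken from this guide), the parameter change described above could be expressed with boto3. The parameter group name is a hypothetical placeholder, and the `modify_cluster_parameter_group` call is commented out because it requires valid Amazon credentials:

```python
# Hypothetical helper: build the Parameters entry that turns the
# enable_user_activity_logging parameter on or off. The boto3 call is
# commented out; it needs valid credentials and a real parameter group.

def user_activity_logging_update(enabled: bool) -> dict:
    """Payload for one parameter in ModifyClusterParameterGroup."""
    return {
        "ParameterName": "enable_user_activity_logging",
        "ParameterValue": "true" if enabled else "false",
    }

payload = user_activity_logging_update(True)
print(payload["ParameterValue"])
# import boto3
# boto3.client("redshift").modify_cluster_parameter_group(
#     ParameterGroupName="my-parameter-group",  # placeholder name
#     Parameters=[payload],
# )
```

Remember that `enable_user_activity_logging` is a static parameter, so the change takes effect only after the associated clusters are rebooted.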

When you enable logging to CloudWatch, Amazon Redshift exports cluster connection, user, and user-activity log data to an Amazon CloudWatch Logs log group. The schema of the log data doesn't change. CloudWatch is built for monitoring applications, and you can use it to perform real-time analysis or set it to take actions. You can also use Amazon CloudWatch Logs to store your log records in durable storage.

Using CloudWatch to view logs is a recommended alternative to storing log files in Amazon S3. It doesn't require much configuration, and it may suit your monitoring requirements, especially if you use it already to monitor other services and applications.

### Log groups and log events in Amazon CloudWatch


After selecting which Amazon Redshift logs to export, you can monitor log events in Amazon CloudWatch Logs. A new log group is automatically created for your cluster under the following prefix, in which `log_type` represents the log type.

```
/aws/redshift/cluster/<cluster_name>/<log_type>
```

For example, if you choose to export the connection log, log data is stored in the following log group.

```
/aws/redshift/cluster/cluster1/connectionlog
```
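As a minimal sketch, the log group names above compose mechanically from the cluster name and log type. The helper below is illustrative only and assumes the three audit log types described earlier in this section:

```python
# Compose a CloudWatch Logs log group name following the
# /aws/redshift/cluster/<cluster_name>/<log_type> pattern shown above.

LOG_TYPES = ("connectionlog", "userlog", "useractivitylog")

def log_group_name(cluster_name: str, log_type: str) -> str:
    if log_type not in LOG_TYPES:
        raise ValueError(f"unknown log type: {log_type}")
    return f"/aws/redshift/cluster/{cluster_name}/{log_type}"

print(log_group_name("cluster1", "connectionlog"))
# -> /aws/redshift/cluster/cluster1/connectionlog
```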

Log events are exported to a log group using the log stream. To search for information within log events for your cluster, use the Amazon CloudWatch Logs console, the Amazon CLI, or the Amazon CloudWatch Logs API. For information about searching and filtering log data, see [Creating metrics from log events using filters](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/MonitoringLogData.html).

In CloudWatch, you can search your log data with a query syntax that provides for granularity and flexibility. For more information, see [CloudWatch Logs Insights query syntax](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html).

### Migrating to Amazon CloudWatch audit logging


If you are sending logs to Amazon S3 and you change the configuration, for example to send logs to CloudWatch instead, logs that remain in Amazon S3 are unaffected. You can still query the log data in the Amazon S3 buckets where it resides.

### Log files in Amazon S3


The number and size of Amazon Redshift log files in Amazon S3 depends heavily on the activity in your cluster. If you have an active cluster that is generating a large number of logs, Amazon Redshift might generate the log files more frequently. You might have a series of log files for the same type of activity, such as having multiple connection logs within the same hour.

When Amazon Redshift uses Amazon S3 to store logs, you incur charges for the storage that you use in Amazon S3. Before you configure logging to Amazon S3, plan for how long you need to store the log files. As part of this, determine when the log files can either be deleted or archived, based on your auditing needs. The plan that you create depends heavily on the type of data that you store, such as data subject to compliance or regulatory requirements. For more information about Amazon S3 pricing, go to [Amazon Simple Storage Service (S3) Pricing](http://www.amazonaws.cn/s3/pricing/).

#### Limitations when you enable logging to Amazon S3


Audit logging has the following constraints:
+ You can use only Amazon S3-managed keys (SSE-S3) encryption (AES-256).
+ The Amazon S3 buckets must have the S3 Object Lock feature turned off.

#### Bucket permissions for Amazon Redshift audit logging


When you turn on logging to Amazon S3, Amazon Redshift collects logging information and uploads it to log files stored in Amazon S3. You can use an existing bucket or a new bucket. Amazon Redshift requires the following IAM permissions to the bucket: 
+ `s3:GetBucketAcl` – The service requires read permissions to the Amazon S3 bucket so it can identify the bucket owner. 
+ `s3:PutObject` – The service requires put object permissions to upload the logs. Also, the user or IAM role that turns on logging must have `s3:PutObject` permission to the Amazon S3 bucket. Each time logs are uploaded, the service determines whether the current bucket owner matches the bucket owner at the time logging was enabled. If these owners don't match, you receive an error.

If, when you enable audit logging, you select the option to create a new bucket, correct permissions are applied to it. However, if you create your own bucket in Amazon S3, or use an existing bucket, make sure to add a bucket policy that includes the bucket name. Logs are delivered using service-principal credentials. For most Amazon Web Services Regions, you add the Redshift service-principal name, *redshift.amazonaws.com*. 

**Note**  
The ARN format for the China (Beijing) Region uses the `aws-cn` identifier instead of the `aws` identifier, as shown in the following policy example.  


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Put bucket policy needed for audit logging",
            "Effect": "Allow",
            "Principal": {
                "Service": "redshift.amazonaws.com"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetBucketAcl"
            ],
            "Resource": [
                "arn:aws-cn:s3:::BucketName",
                "arn:aws-cn:s3:::BucketName/*"
            ]
        }
    ]
}
```

The bucket policy uses the following format. *ServiceName* and *BucketName* are placeholders for your own values. Also specify the associated actions and resources in the bucket policy.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Put bucket policy needed for audit logging",
            "Effect": "Allow",
            "Principal": {
                "Service": "ServiceName"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetBucketAcl"
            ],
            "Resource": [
                "arn:aws-cn:s3:::BucketName",
                "arn:aws-cn:s3:::BucketName/*"
            ]
        }
    ]
}
```

------

The following example is a bucket policy for the China (Beijing) Region and a bucket named `AuditLogs`.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Put bucket policy needed for audit logging",
            "Effect": "Allow",
            "Principal": {
                "Service": "redshift.amazonaws.com"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetBucketAcl"
            ],
            "Resource": [
                "arn:aws-cn:s3:::AuditLogs",
                "arn:aws-cn:s3:::AuditLogs/*"
            ]
        }
    ]
}
```

------

Regions that aren't enabled by default, also known as "opt-in" Regions, require a Region-specific service principal name. For these, the service-principal name includes the region, in the format `redshift.region.amazonaws.com`. For example, *redshift.ap-east-1.amazonaws.com* for the Asia Pacific (Hong Kong) Region. For a list of the Regions that aren't enabled by default, see [Managing Amazon Web Services Regions](https://docs.amazonaws.cn/general/latest/gr/rande-manage.html) in the *Amazon Web Services General Reference*.
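The service-principal rule above can be sketched as a small helper. Note that the opt-in Region set below is illustrative, not exhaustive; check the *Amazon Web Services General Reference* for the current list:

```python
# Pick the Redshift service-principal name for a bucket policy.
# OPT_IN_REGIONS is an illustrative subset of Regions that aren't
# enabled by default; it is not a complete list.

OPT_IN_REGIONS = {"ap-east-1", "me-south-1", "af-south-1", "eu-south-1"}

def redshift_service_principal(region: str) -> str:
    if region in OPT_IN_REGIONS:
        # Opt-in Regions use a Region-specific principal.
        return f"redshift.{region}.amazonaws.com"
    return "redshift.amazonaws.com"

print(redshift_service_principal("ap-east-1"))
# -> redshift.ap-east-1.amazonaws.com
```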

**Note**  
The Region-specific service-principal name corresponds to the Region where the cluster is located.

##### Best practices for log files


 When Redshift uploads log files to Amazon S3, large files can be uploaded in parts. If a multipart upload isn't successful, it's possible for parts of a file to remain in the Amazon S3 bucket. This can result in additional storage costs, so it's important to understand what occurs when a multipart upload fails. For a detailed explanation about multipart upload for audit logs, see [Uploading and copying objects using multipart upload](https://docs.amazonaws.cn/AmazonS3/latest/userguide/mpuoverview.html) and [Aborting a multipart upload](https://docs.amazonaws.cn/AmazonS3/latest/userguide/abort-mpu.html).

For more information about creating S3 buckets and adding bucket policies, see [Creating a general purpose bucket](https://docs.amazonaws.cn/AmazonS3/latest/userguide/create-bucket-overview.html) and [Bucket policies for Amazon S3](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucket-policies.html) in the *Amazon Simple Storage Service User Guide*. 

#### Bucket structure for Amazon Redshift audit logging


By default, Amazon Redshift organizes the log files in the Amazon S3 bucket by using the following bucket and object structure:

`AWSLogs/AccountID/ServiceName/Region/Year/Month/Day/AccountID_ServiceName_Region_ClusterName_LogType_Timestamp.gz` 

An example is: `AWSLogs/123456789012/redshift/us-east-1/2013/10/29/123456789012_redshift_us-east-1_mycluster_userlog_2013-10-29T18:01.gz`

If you provide an Amazon S3 key prefix, put the prefix at the start of the key.

For example, if you specify a prefix of myprefix: `myprefix/AWSLogs/123456789012/redshift/us-east-1/2013/10/29/123456789012_redshift_us-east-1_mycluster_userlog_2013-10-29T18:01.gz`
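As an illustrative sketch, a log object key following the structure above can be split back into its components. This parser is hypothetical (it is not an AWS utility) and assumes no custom prefix, or that the prefix has already been stripped:

```python
# Split an audit-log object key into its path components and the fields
# encoded in the file name:
# AWSLogs/AccountID/ServiceName/Region/Year/Month/Day/
#     AccountID_ServiceName_Region_ClusterName_LogType_Timestamp.gz

def parse_log_key(key: str) -> dict:
    _awslogs, account_id, _service, region, year, month, day, file_name = (
        key.split("/")
    )
    stem = file_name[: -len(".gz")]
    # maxsplit=5 keeps the timestamp intact even though it contains colons.
    _acct, _svc, _reg, cluster, log_type, stamp = stem.split("_", 5)
    return {
        "account_id": account_id,
        "region": region,
        "date": f"{year}-{month}-{day}",
        "cluster": cluster,
        "log_type": log_type,
        "timestamp": stamp,
    }

key = ("AWSLogs/123456789012/redshift/us-east-1/2013/10/29/"
       "123456789012_redshift_us-east-1_mycluster_userlog_2013-10-29T18:01.gz")
info = parse_log_key(key)
print(info["cluster"], info["log_type"])  # mycluster userlog
```

This works because cluster identifiers and Region names never contain underscores, so splitting the file name on `_` is unambiguous.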

The Amazon S3 key prefix can't exceed 512 characters. It can't contain spaces ( ), double quotation marks ("), single quotation marks ('), or a backslash (\\). There are also a number of special characters and control characters that aren't allowed. The hexadecimal codes for these characters are as follows:
+ x00 to x20
+ x22
+ x27
+ x5c
+ x7f or larger
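The prefix rules above can be sketched as a validation function. This is an illustrative check, not an AWS-provided validator:

```python
# Validate an Amazon S3 key prefix against the documented rules:
# at most 512 characters, and no characters whose codes fall in
# x00-x20, x22 ("), x27 ('), x5c (\), or x7f and larger.

DISALLOWED = {0x22, 0x27, 0x5C}

def valid_s3_key_prefix(prefix: str) -> bool:
    if len(prefix) > 512:
        return False
    for ch in prefix:
        code = ord(ch)
        if code <= 0x20 or code >= 0x7F or code in DISALLOWED:
            return False
    return True

print(valid_s3_key_prefix("myprefix"))    # True
print(valid_s3_key_prefix('bad"prefix'))  # False
```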

### Audit logging in Amazon S3 considerations


Amazon Redshift audit logging can be interrupted for the following reasons:
+ Amazon Redshift does not have permission to upload logs to the Amazon S3 bucket. Verify that the bucket is configured with the correct IAM policy. For more information, see [Bucket permissions for Amazon Redshift audit logging](#db-auditing-bucket-permissions).
+ The bucket owner changed. When Amazon Redshift uploads logs, it verifies that the bucket owner is the same as when logging was enabled. If the bucket owner has changed, Amazon Redshift cannot upload logs until you configure another bucket to use for audit logging.
+ The bucket cannot be found. If the bucket is deleted in Amazon S3, Amazon Redshift cannot upload logs. You must either recreate the bucket or configure Amazon Redshift to upload logs to a different bucket.

### API calls with Amazon CloudTrail


Amazon Redshift is integrated with Amazon CloudTrail, a service that provides a record of actions taken by a user, role, or an Amazon service in Amazon Redshift. CloudTrail captures all API calls for Amazon Redshift as events. For more information about Amazon Redshift integration with Amazon CloudTrail, see [Logging with CloudTrail](https://docs.amazonaws.cn/redshift/latest/mgmt/logging-with-cloudtrail.html).

You can use CloudTrail independently from or in addition to Amazon Redshift database audit logging. 

To learn more about CloudTrail, see the [Amazon CloudTrail User Guide](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/).

# Enabling audit logging


Configure Amazon Redshift to export audit log data. Logs can be exported to CloudWatch, or as files to Amazon S3 buckets.

## Enabling audit logging using the console


### Console steps


**To enable audit logging for a cluster**

1. Sign in to the Amazon Web Services Management Console and open the Amazon Redshift console at [https://console.amazonaws.cn/redshiftv2/](https://console.amazonaws.cn/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster that you want to update. 

1. Choose the **Properties** tab. On the **Database configurations** panel, choose **Edit**, then **Edit audit logging**.

1. On the **Edit audit logging** page, choose **Turn on** and select **S3 bucket** or **CloudWatch**. We recommend using CloudWatch because administration is easy and it has helpful features for data visualization.

1. Choose which logs to export.

1. To save your choices, choose **Save changes**.

# Secure logging


When Amazon Redshift logs a query that references one or more Amazon Glue Data Catalog views, Amazon Redshift automatically masks fields in certain system table and view columns when logging metadata about that query.

Secure log masking applies to all system table and view entries that Amazon Redshift generates while running a query that fits the masking conditions. The following table lists system views and columns that have secure logging applied, masking text with `******` and numbers with `-1`. The number of asterisks used to mask text matches the number of characters in the original text, up to 6 characters. Strings longer than 6 characters still appear as 6 asterisks.
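The masking rule described above can be illustrated with a small sketch. This mimics the documented output format only; it is not the actual Amazon Redshift implementation:

```python
# Illustrative masking per the rule above: text becomes one asterisk per
# character, capped at 6; numbers become -1.

def mask_text(value: str) -> str:
    return "*" * min(len(value), 6)

def mask_number(value: int) -> int:
    return -1

print(mask_text("abc"))              # ***
print(mask_text("select * from t"))  # ******
print(mask_number(1024))             # -1
```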



| System table | Sensitive columns | 
| --- | --- | 
| [SYS\_EXTERNAL\_QUERY\_DETAIL](https://docs.amazonaws.cn/redshift/latest/dg/SYS_EXTERNAL_QUERY_DETAIL.html) | **Columns:** source\_type, total\_partitions, qualified\_partitions, scanned\_files, returned\_rows, returned\_bytes, file\_format, file\_location, external\_query\_text, warning\_message. | 
| [SYS\_EXTERNAL\_QUERY\_ERROR](https://docs.amazonaws.cn/redshift/latest/dg/SYS_EXTERNAL_QUERY_ERROR.html) | **Columns:** file\_location, rowid, column\_name, original\_value, modified\_value, trigger, action, action\_value, error\_code. | 
| [SYS\_QUERY\_DETAIL](https://docs.amazonaws.cn/redshift/latest/dg/SYS_QUERY_DETAIL.html) | **Columns:** step\_id, step\_name, table\_id, table\_name, input\_bytes, input\_rows, output\_bytes, output\_rows, blocks\_read, blocks\_write, local\_read\_IO, remote\_read\_IO, spilled\_block\_local\_disk, spilled\_block\_remote\_disk, step\_attribute. | 
| [SYS\_QUERY\_HISTORY](https://docs.amazonaws.cn/redshift/latest/dg/SYS_QUERY_HISTORY.html) | **Columns:** returned\_rows, returned\_bytes. | 
| [STL\_AGGR](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_AGGR.html) | **Columns:** rows, bytes, tbl, type. | 
| [STL\_BCAST](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_BCAST.html) | **Columns:** rows, bytes, packets. | 
| [STL\_DDLTEXT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_DDLTEXT.html) | **Columns:** label, text. | 
| [STL\_DELETE](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_DELETE.html) | **Columns:** rows, tbl. | 
| [STL\_DIST](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_DIST.html) | **Columns:** rows, bytes, packets. | 
| [STL\_ERROR](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_ERROR.html) | **Columns:** file, linenum, context, error. | 
| [STL\_EXPLAIN](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_EXPLAIN.html) | **Columns:** plannode, info. | 
| [STL\_FILE\_SCAN](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_FILE_SCAN.html) | **Columns:** name, line, bytes. | 
| [STL\_HASH](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_HASH.html) | **Columns:** rows, bytes, tbl, est\_rows. | 
| [STL\_HASHJOIN](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_HASHJOIN.html) | **Columns:** rows, tbl, num\_parts, join\_type. | 
| [STL\_INSERT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_INSERT.html) | **Columns:** rows, tbl. | 
| [STL\_LIMIT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_LIMIT.html) | **Columns:** rows. | 
| [STL\_MERGE](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_MERGE.html) | **Columns:** rows. | 
| [STL\_MERGEJOIN](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_MERGEJOIN.html) | **Columns:** rows, tbl. | 
| [STL\_NESTLOOP](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_NESTLOOP.html) | **Columns:** rows, tbl. | 
| [STL\_PARSE](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_PARSE.html) | **Columns:** rows. | 
| [STL\_PLAN\_INFO](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_PLAN_INFO.html) | **Columns:** startupcost, totalcost, rows, bytes. | 
| [STL\_PROJECT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_PROJECT.html) | **Columns:** rows, tbl. | 
| [STL\_QUERY](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_QUERY.html) | **Columns:** querytxt. | 
| [STL\_QUERY\_METRICS](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_QUERY_METRICS.html) | **Columns:** max\_rows, rows, max\_blocks\_read, blocks\_read, max\_blocks\_to\_disk, blocks\_to\_disk, max\_query\_scan\_size, query\_scan\_size. | 
| [STL\_QUERYTEXT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_QUERYTEXT.html) | **Columns:** text. | 
| [STL\_RETURN](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_RETURN.html) | **Columns:** rows, bytes. | 
| [STL\_S3CLIENT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_S3CLIENT.html) | **Columns:** bucket, key, transfer\_size, data\_size. | 
| [STL\_S3CLIENT\_ERROR](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_S3CLIENT_ERROR.html) | **Columns:** bucket, key, error, transfer\_size. | 
| [STL\_SAVE](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_SAVE.html) | **Columns:** rows, bytes, tbl. | 
| [STL\_SCAN](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_SCAN.html) | **Columns:** rows, bytes, fetches, type, tbl, rows\_pre\_filter, rows\_pre\_user\_filter, perm\_table\_name, scanned\_mega\_value. | 
| [STL\_SORT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_SORT.html) | **Columns:** rows, bytes, tbl. | 
| [STL\_SSHCLIENT\_ERROR](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_SSHCLIENT_ERROR.html) | **Columns:** ssh\_username, endpoint, command, error. | 
| [STL\_TR\_CONFLICT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_TR_CONFLICT.html) | **Columns:** table\_id. | 
| [STL\_UNDONE](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_UNDONE.html) | **Columns:** table\_id. | 
| [STL\_UNIQUE](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_UNIQUE.html) | **Columns:** rows, type, bytes. | 
| [STL\_UTILITYTEXT](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_UTILITYTEXT.html) | **Columns:** label, text. | 
| [STL\_WINDOW](https://docs.amazonaws.cn/redshift/latest/dg/r_STL_WINDOW.html) | **Columns:** rows. | 
| [STV\_BLOCKLIST](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_BLOCKLIST.html) | **Columns:** col, tbl, num\_values, minvalue, maxvalue. | 
| [STV\_EXEC\_STATE](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_EXEC_STATE.html) | **Columns:** rows, bytes, label. | 
| [STV\_INFLIGHT](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_INFLIGHT.html) | **Columns:** label, text. | 
| [STV\_LOCKS](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_LOCKS.html) | **Columns:** table\_id. | 
| [STV\_QUERY\_METRICS](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_QUERY_METRICS.html) | **Columns:** rows, max\_rows, blocks\_read, max\_blocks\_read, max\_blocks\_to\_disk, blocks\_to\_disk, max\_query\_scan\_size, query\_scan\_size. | 
| [STV\_STARTUP\_RECOVERY\_STATE](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_STARTUP_RECOVERY_STATE.html) | **Columns:** table\_id, table\_name. | 
| [STV\_TBL\_PERM](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_TBL_PERM.html) | **Columns:** id, name, rows, sorted\_rows, temp, block\_count, query\_scan\_size. | 
| [STV\_TBL\_TRANS](https://docs.amazonaws.cn/redshift/latest/dg/r_STV_TBL_TRANS.html) | **Columns:** id, rows, size. | 
| [SVCS\_EXPLAIN](https://docs.amazonaws.cn/redshift/latest/dg/r_SVCS_EXPLAIN.html) | **Columns:** plannode, info. | 
| [SVCS\_PLAN\_INFO](https://docs.amazonaws.cn/redshift/latest/dg/r_SVCS_PLAN_INFO.html) | **Columns:** rows, bytes. | 
| [SVCS\_QUERY\_SUMMARY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVCS_QUERY_SUMMARY.html) | **Columns:** step, rows, bytes, rate\_row, rate\_byte, label, rows\_pre\_filter. | 
| [SVCS\_S3LIST](https://docs.amazonaws.cn/redshift/latest/dg/r_SVCS_S3LIST.html) | **Columns:** bucket, prefix, retrieved\_files, max\_file\_size, avg\_file\_size. | 
| [SVCS\_S3LOG](https://docs.amazonaws.cn/redshift/latest/dg/r_SVCS_S3LOG.html) | **Columns:** message. | 
| [SVCS\_S3PARTITION\_SUMMARY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVCS_S3PARTITION_SUMMARY.html) | **Columns:** total\_partitions, qualified\_partitions, min\_assigned\_partitions, max\_assigned\_partitions, avg\_assigned\_partitions. | 
| [SVCS\_S3QUERY\_SUMMARY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVCS_S3QUERY_SUMMARY.html) | **Columns:** external\_table\_name, file\_format, s3\_scanned\_rows, s3\_scanned\_bytes, s3query\_returned\_rows, s3query\_returned\_bytes. | 
| [SVL\_QUERY\_METRICS](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_QUERY_METRICS.html) | **Columns:** step\_label, scan\_row\_count, join\_row\_count, nested\_loop\_join\_row\_count, return\_row\_count, spectrum\_scan\_row\_count, spectrum\_scan\_size\_mb. | 
| [SVL\_QUERY\_METRICS\_SUMMARY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_QUERY_METRICS_SUMMARY.html) | **Columns:** step\_label, scan\_row\_count, join\_row\_count, nested\_loop\_join\_row\_count, return\_row\_count, spectrum\_scan\_row\_count, spectrum\_scan\_size\_mb. | 
| [SVL\_QUERY\_REPORT](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_QUERY_REPORT.html) | **Columns:** rows, bytes, label, rows\_pre\_filter. | 
| [SVL\_QUERY\_SUMMARY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_QUERY_SUMMARY.html) | **Columns:** rows, bytes, rows\_pre\_filter. | 
| [SVL\_S3LIST](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_S3LIST.html) | **Columns:** bucket, prefix, retrieved\_files, max\_file\_size, avg\_file\_size. | 
| [SVL\_S3LOG](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_S3LOG.html) | **Columns:** message. | 
| [SVL\_S3PARTITION](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_S3PARTITION.html) | **Columns:** rows, bytes, label, rows\_pre\_filter. | 
| [SVL\_S3PARTITION\_SUMMARY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_S3PARTITION_SUMMARY.html) | **Columns:** total\_partitions, qualified\_partitions, min\_assigned\_partitions, max\_assigned\_partitions, avg\_assigned\_partitions. | 
| [SVL\_S3QUERY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_S3QUERY.html) | **Columns:** external\_table\_name, file\_format, s3\_scanned\_rows, s3\_scanned\_bytes, s3query\_returned\_rows, s3query\_returned\_bytes, files. | 
| [SVL\_S3QUERY\_SUMMARY](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_S3QUERY_SUMMARY.html) | **Columns:** external\_table\_name, file\_format, s3\_scanned\_rows, s3\_scanned\_bytes, s3query\_returned\_rows, s3query\_returned\_bytes. | 
| [SVL\_S3RETRIES](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_S3RETRIES.html) | **Columns:** file\_size, location, message. | 
| [SVL\_SPECTRUM\_SCAN\_ERROR](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_SPECTRUM_SCAN_ERROR.html) | **Columns:** location, rowid, colname, original\_value, modified\_value, trigger, action, action\_value, error\_code. | 
| [SVL\_STATEMENTTEXT](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_STATEMENTTEXT.html) | **Columns:** type, text. | 
| [SVL\_STORED\_PROC\_CALL](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_STORED_PROC_CALL.html) | **Columns:** querytxt. | 
| [SVL\_STORED\_PROC\_MESSAGES](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_STORED_PROC_MESSAGES.html) | **Columns:** message, linenum, querytext. | 
| [SVL\_UDF\_LOG](https://docs.amazonaws.cn/redshift/latest/dg/r_SVL_UDF_LOG.html) | **Columns:** message, funcname. | 
| [SVV\_DISKUSAGE](https://docs.amazonaws.cn/redshift/latest/dg/r_SVV_DISKUSAGE.html) | **Columns:** name, col, tbl, blocknum, num\_values, minvalue, maxvalue. | 
| [SVV\_QUERY\_STATE](https://docs.amazonaws.cn/redshift/latest/dg/r_SVV_QUERY_STATE.html) | **Columns:** rows, bytes, label. | 
| [SVV\_TABLE\_INFO](https://docs.amazonaws.cn/redshift/latest/dg/r_SVV_TABLE_INFO.html) | **Columns:** table\_id, table. | 
| [SVV\_TRANSACTIONS](https://docs.amazonaws.cn/redshift/latest/dg/r_SVV_TRANSACTIONS.html) | **Columns:** relation. | 

For more information on system tables and views, see [System tables and views reference](https://docs.amazonaws.cn/redshift/latest/dg/cm_chap_system-tables.html) in the *Amazon Redshift Database Developer Guide*. For information on Amazon Redshift’s ability to dynamically mask query results, see [Dynamic data masking](https://docs.amazonaws.cn/redshift/latest/dg/t_ddm.html) in the *Amazon Redshift Database Developer Guide*. For information on creating views in the Amazon Glue Data Catalog using Amazon Redshift, see [Amazon Glue Data Catalog views](https://docs.amazonaws.cn/redshift/latest/dg/data-catalog-views-overview.html) in the *Amazon Redshift Database Developer Guide*.

# Logging with CloudTrail


Amazon Redshift, data sharing, Amazon Redshift Serverless, Amazon Redshift Data API, and query editor v2 are all integrated with Amazon CloudTrail. CloudTrail is a service that provides a record of actions taken by a user, a role, or an Amazon service in Amazon Redshift. CloudTrail captures all API calls for Amazon Redshift as events, including calls from the Redshift console and code calls to the Redshift API operations. 

If you create a CloudTrail trail, you can have continuous delivery of CloudTrail events to an Amazon S3 bucket, including events for Redshift. If you don't configure a trail, you can still view the most recent events in the CloudTrail console in **Event history**. Using the information collected by CloudTrail, you can determine the request that was made to Redshift, the IP address from which the request was made, who made the request, when it was made, and additional details. 

You can use CloudTrail independently from or in addition to Amazon Redshift database audit logging. 

To learn more about CloudTrail, see the [Amazon CloudTrail User Guide](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/).
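When a trail is configured, CloudTrail delivers log files to your S3 bucket as JSON documents whose top-level `Records` array holds the individual events. A minimal Python sketch of filtering one such document down to the Redshift-related event sources covered in this topic (in practice the files arrive gzip-compressed in S3; here a small in-memory stand-in keeps the example self-contained):

```python
import json

# Event sources used by the Redshift integrations described in this topic.
REDSHIFT_SOURCES = {
    "redshift.amazonaws.com",             # provisioned clusters
    "redshift-serverless.amazonaws.com",  # Amazon Redshift Serverless
    "redshift-data.amazonaws.com",        # Data API
    "sqlworkbench.amazonaws.com",         # query editor v2
}

def redshift_events(log_file_text):
    """Parse one CloudTrail log file (a JSON document with a top-level
    'Records' array) and return only the Redshift-related events."""
    records = json.loads(log_file_text).get("Records", [])
    return [r for r in records if r.get("eventSource") in REDSHIFT_SOURCES]

# Tiny in-memory stand-in for a downloaded log file:
sample = json.dumps({"Records": [
    {"eventSource": "redshift.amazonaws.com", "eventName": "AuthorizeDataShare"},
    {"eventSource": "ec2.amazonaws.com", "eventName": "RunInstances"},
]})
print([e["eventName"] for e in redshift_events(sample)])  # ['AuthorizeDataShare']
```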

## Information in CloudTrail


CloudTrail is turned on in your Amazon account when you create the account. When activity occurs, that activity is recorded in a CloudTrail event along with other Amazon service events in **Event history**. You can view, search, and download recent events in your Amazon account. For more information, see [Viewing Events with CloudTrail Event History](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/view-cloudtrail-events.html) in the *Amazon CloudTrail User Guide*. 

For an ongoing record of events in your Amazon account, including events for Redshift, create a trail. CloudTrail uses *trails* to deliver log files to an Amazon S3 bucket. By default, when you create a trail in the console, the trail applies to all Amazon Regions. The trail logs events from all Regions in the Amazon partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can configure other Amazon services to further analyze and act upon the event data collected in CloudTrail logs. For more information, see the following in the *Amazon CloudTrail User Guide*:
+ [Overview for Creating a Trail](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
+ [CloudTrail Supported Services and Integrations](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-aws-service-specific-topics.html#cloudtrail-aws-service-specific-topics-integrations)
+ [Configuring Amazon SNS Notifications for CloudTrail](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/getting_notifications_top_level.html)
+ [Receiving CloudTrail Log Files from Multiple Regions](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html) and [Receiving CloudTrail Log Files from Multiple Accounts](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-receive-logs-from-multiple-accounts.html)

All Amazon Redshift, Amazon Redshift Serverless, Data API, data sharing, and query editor v2 actions are logged by CloudTrail. For example, calls to the `AuthorizeDatashare`, `CreateNamespace`, `ExecuteStatement`, and `CreateConnection` actions generate entries in the CloudTrail log files. 

Every event or log entry contains information about who generated the request. The identity information helps you determine the following: 
+ Whether the request was made with root or user credentials.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another Amazon service.

For more information, see [CloudTrail userIdentity Element](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html) in the *Amazon CloudTrail User Guide*.
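The three questions above can be answered mechanically from the `userIdentity` element of each record. A rough sketch (the `type` values follow the CloudTrail record format; the classification labels are our own):

```python
def classify_caller(user_identity):
    """Classify who made a request, from a CloudTrail userIdentity element."""
    id_type = user_identity.get("type")
    if id_type == "Root":
        return "root credentials"
    if id_type == "IAMUser":
        return "IAM user credentials"
    if id_type == "AssumedRole":
        # Temporary security credentials issued for a role
        # (including federation through a role).
        issuer = user_identity.get("sessionContext", {}).get("sessionIssuer", {})
        return f"temporary credentials for role {issuer.get('userName')}"
    if id_type == "FederatedUser":
        return "temporary credentials for a federated user"
    if id_type == "AWSService":
        return "another Amazon service"
    return f"other ({id_type})"

print(classify_caller({"type": "IAMUser", "userName": "janedoe"}))
```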

## Log file entries


A *trail* is a configuration that allows for delivery of events as log files to an Amazon S3 bucket that you specify. CloudTrail log files contain one or more log entries. An *event* represents a single request from any source and includes information about the requested action, the date and time of the action, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so they don't appear in any specific order. 
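Because log entries don't appear in any specific order, a common first step when reading them is to sort by `eventTime`. The timestamps are ISO 8601 UTC strings, so lexicographic order matches chronological order; a minimal sketch:

```python
def sorted_events(records):
    """Return CloudTrail records ordered chronologically by eventTime.
    eventTime is ISO 8601 UTC (e.g. 2021-08-02T23:40:58Z), so plain
    string order equals time order."""
    return sorted(records, key=lambda r: r.get("eventTime", ""))

events = [
    {"eventName": "DeleteCluster", "eventTime": "2021-08-02T23:41:10Z"},
    {"eventName": "CreateCluster", "eventTime": "2021-08-02T23:40:58Z"},
]
print([e["eventName"] for e in sorted_events(events)])  # ['CreateCluster', 'DeleteCluster']
```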

## Amazon Redshift Datashare example


The following example shows a CloudTrail log entry that illustrates the `AuthorizeDataShare` operation. 

```
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AKIAIOSFODNN7EXAMPLE:janedoe",
        "arn": "arn:aws:sts::111122223333:user/janedoe",
        "accountId": "111122223333",
        "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AKIAIOSFODNN7EXAMPLE:janedoe",
                "arn": "arn:aws:sts::111122223333:user/janedoe",
                "accountId": "111122223333",
                "userName": "janedoe"
            },
            "attributes": {
                "creationDate": "2021-08-02T23:40:45Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2021-08-02T23:40:58Z",
    "eventSource": "redshift.amazonaws.com",
    "eventName": "AuthorizeDataShare",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "3.227.36.75",
    "userAgent":"aws-cli/1.18.118 Python/3.6.10 Linux/4.9.217-0.1.ac.205.84.332.metal1.x86_64 botocore/1.17.41", 
    "requestParameters": {
        "dataShareArn": "arn:aws:redshift:us-east-1:111122223333:datashare:4c64c6ec-73d5-42be-869b-b7f7c43c7a53/testshare",
        "consumerIdentifier": "555555555555"
    },
    "responseElements": {
        "dataShareArn": "arn:aws:redshift:us-east-1:111122223333:datashare:4c64c6ec-73d5-42be-869b-b7f7c43c7a53/testshare",
        "producerNamespaceArn": "arn:aws:redshift:us-east-1:123456789012:namespace:4c64c6ec-73d5-42be-869b-b7f7c43c7a53",
        "producerArn": "arn:aws:redshift:us-east-1:111122223333:namespace:4c64c6ec-73d5-42be-869b-b7f7c43c7a53",
        "allowPubliclyAccessibleConsumers": true,
        "dataShareAssociations": [
            {
                "consumerIdentifier": "555555555555",
                "status": "AUTHORIZED",
                "createdDate": "Aug 2, 2021 11:40:56 PM",
                "statusChangeDate": "Aug 2, 2021 11:40:57 PM"
            }
        ]
    },
    "requestID": "87ee1c99-9e41-42be-a5c4-00495f928422",
    "eventID": "03a3d818-37c8-46a6-aad5-0151803bdb09",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "111122223333",
    "eventCategory": "Management"
}
```
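The `dataShareArn` in this entry encodes the Region, account, producer namespace, and share name. If you need those parts separately, for example to correlate an event with the producer namespace ARN, they can be split out; a sketch, assuming the `arn:aws:redshift:<region>:<account>:datashare:<namespace>/<share>` layout shown in the example:

```python
def parse_datashare_arn(arn):
    """Split a datashare ARN of the form
    arn:aws:redshift:<region>:<account>:datashare:<namespace>/<share-name>."""
    parts = arn.split(":", 6)
    namespace, _, share_name = parts[6].partition("/")
    return {
        "region": parts[3],
        "account": parts[4],
        "namespace": namespace,
        "share_name": share_name,
    }

arn = ("arn:aws:redshift:us-east-1:111122223333:datashare:"
       "4c64c6ec-73d5-42be-869b-b7f7c43c7a53/testshare")
print(parse_datashare_arn(arn)["share_name"])  # testshare
```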

## Amazon Redshift Serverless example


Amazon Redshift Serverless is integrated with Amazon CloudTrail to provide a record of actions taken in Amazon Redshift Serverless. CloudTrail captures all API calls for Amazon Redshift Serverless as events. For more information about Amazon Redshift Serverless features, see [Amazon Redshift Serverless feature overview](https://docs.amazonaws.cn/redshift/latest/mgmt/serverless-considerations.html).

The following example shows a CloudTrail log entry that demonstrates the `CreateNamespace` action.

```
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AAKEOFPINEXAMPLE:admin",
        "arn": "arn:aws:sts::111111111111:assumed-role/admin/admin",
        "accountId": "111111111111",
        "accessKeyId": "AAKEOFPINEXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AAKEOFPINEXAMPLE",
                "arn": "arn:aws:iam::111111111111:role/admin",
                "accountId": "111111111111",
                "userName": "admin"
            },
            "webIdFederationData": {},
            "attributes": {
                "creationDate": "2022-03-21T20:51:58Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2022-03-21T23:15:40Z",
    "eventSource": "redshift-serverless.amazonaws.com",
    "eventName": "CreateNamespace",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "56.23.155.33",
    "userAgent": "aws-cli/2.4.14 Python/3.8.8 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off command/redshift-serverless.create-namespace",
    "requestParameters": {
        "adminUserPassword": "HIDDEN_DUE_TO_SECURITY_REASONS",
        "adminUsername": "HIDDEN_DUE_TO_SECURITY_REASONS",
        "dbName": "dev",
        "namespaceName": "testnamespace"
    },
    "responseElements": {
        "namespace": {
            "adminUsername": "HIDDEN_DUE_TO_SECURITY_REASONS",
            "creationDate": "Mar 21, 2022 11:15:40 PM",
            "defaultIamRoleArn": "",
            "iamRoles": [],
            "logExports": [],
            "namespaceArn": "arn:aws:redshift-serverless:us-east-1:111111111111:namespace/befa5123-16c2-4449-afca-1d27cb40fc99",
            "namespaceId": "8b726a0c-16ca-4799-acca-1d27cb403599",
            "namespaceName": "testnamespace",
            "status": "AVAILABLE"
        }
    },
    "requestID": "ed4bb777-8127-4dae-aea3-bac009999163",
    "eventID": "1dbee944-f889-4beb-b228-7ad0f312464",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "111111111111",
    "eventCategory": "Management",
}
```
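Note that CloudTrail redacts sensitive request parameters such as `adminUserPassword` and `adminUsername`, recording the placeholder `HIDDEN_DUE_TO_SECURITY_REASONS` in place of the real values. When post-processing entries it can be useful to drop those placeholders before reporting; a sketch:

```python
REDACTED = "HIDDEN_DUE_TO_SECURITY_REASONS"

def strip_redacted(params):
    """Remove parameters whose value CloudTrail replaced with its
    redaction placeholder, keeping everything else."""
    return {k: v for k, v in params.items() if v != REDACTED}

request = {
    "adminUserPassword": REDACTED,
    "adminUsername": REDACTED,
    "dbName": "dev",
    "namespaceName": "testnamespace",
}
print(sorted(strip_redacted(request)))  # ['dbName', 'namespaceName']
```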

## Amazon Redshift Data API examples


The following example shows a CloudTrail log entry that demonstrates the `ExecuteStatement` action.

```
{
    "eventVersion":"1.05",
    "userIdentity":{
        "type":"IAMUser",
        "principalId":"AKIAIOSFODNN7EXAMPLE:janedoe",
        "arn":"arn:aws:sts::123456789012:user/janedoe",
        "accountId":"123456789012",
        "accessKeyId":"AKIAI44QH8DHBEXAMPLE",
        "userName": "janedoe"
    },
    "eventTime":"2020-08-19T17:55:59Z",
    "eventSource":"redshift-data.amazonaws.com",
    "eventName":"ExecuteStatement",
    "awsRegion":"us-east-1",
    "sourceIPAddress":"192.0.2.0",
    "userAgent":"aws-cli/1.18.118 Python/3.6.10 Linux/4.9.217-0.1.ac.205.84.332.metal1.x86_64 botocore/1.17.41",
    "requestParameters":{
        "clusterIdentifier":"example-cluster-identifier",
        "database":"example-database-name",
        "dbUser":"example_db_user_name",
        "sql":"***OMITTED***"
    },
    "responseElements":{
        "clusterIdentifier":"example-cluster-identifier",
        "createdAt":"Aug 19, 2020 5:55:58 PM",
        "database":"example-database-name",
        "dbUser":"example_db_user_name",
        "id":"5c52b37b-9e07-40c1-98de-12ccd1419be7"
    },
    "requestID":"00c924d3-652e-4939-8a7a-cd0612eeb8ac",
    "eventID":"c1fb7076-102f-43e5-9ec9-40820bcc1175",
    "readOnly":false,
    "eventType":"AwsApiCall",
    "recipientAccountId":"123456789012"
}
```

The following example shows a CloudTrail log entry that demonstrates the `ExecuteStatement` action showing the `clientToken` used for idempotency.

```
{
    "eventVersion":"1.05",
    "userIdentity":{
        "type":"IAMUser",
        "principalId":"AKIAIOSFODNN7EXAMPLE:janedoe",
        "arn":"arn:aws:sts::123456789012:user/janedoe",
        "accountId":"123456789012",
        "accessKeyId":"AKIAI44QH8DHBEXAMPLE",
        "userName": "janedoe"
    },
    "eventTime":"2020-08-19T17:55:59Z",
    "eventSource":"redshift-data.amazonaws.com",
    "eventName":"ExecuteStatement",
    "awsRegion":"us-east-1",
    "sourceIPAddress":"192.0.2.0",
    "userAgent":"aws-cli/1.18.118 Python/3.6.10 Linux/4.9.217-0.1.ac.205.84.332.metal1.x86_64 botocore/1.17.41",
    "requestParameters":{
        "clusterIdentifier":"example-cluster-identifier",
        "database":"example-database-name",
        "dbUser":"example_db_user_name",
        "sql":"***OMITTED***",
        "clientToken":"32db2e10-69ac-4534-b3fc-a191052616ce"
    },
    "responseElements":{
        "clusterIdentifier":"example-cluster-identifier",
        "createdAt":"Aug 19, 2020 5:55:58 PM",
        "database":"example-database-name",
        "dbUser":"example_db_user_name",
        "id":"5c52b37b-9e07-40c1-98de-12ccd1419be7"
    },
    "requestID":"00c924d3-652e-4939-8a7a-cd0612eeb8ac",
    "eventID":"c1fb7076-102f-43e5-9ec9-40820bcc1175",
    "readOnly":false,
    "eventType":"AwsApiCall",
    "recipientAccountId":"123456789012"
}
```
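The `clientToken` above is an idempotency token: retrying `ExecuteStatement` with the same token identifies the retry as the same request instead of running the SQL a second time. With the boto3 `redshift-data` client you supply it through the `ClientToken` parameter. A sketch of assembling such a request (the cluster and database names are hypothetical, and the actual `execute_statement` call is left commented out so the snippet stays self-contained):

```python
import uuid

def build_execute_statement_request(cluster_id, database, db_user, sql, token=None):
    """Build keyword arguments for the Redshift Data API ExecuteStatement
    call, including a ClientToken so that retries are idempotent."""
    return {
        "ClusterIdentifier": cluster_id,
        "Database": database,
        "DbUser": db_user,
        "Sql": sql,
        # Reuse the same token on retry; generate a fresh one otherwise.
        "ClientToken": token or str(uuid.uuid4()),
    }

kwargs = build_execute_statement_request(
    "example-cluster-identifier", "example-database-name",
    "example_db_user_name", "SELECT 1",
    token="32db2e10-69ac-4534-b3fc-a191052616ce",
)
# import boto3
# response = boto3.client("redshift-data").execute_statement(**kwargs)
print(kwargs["ClientToken"])
```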

## Amazon Redshift query editor v2 example


The following example shows a CloudTrail log entry that demonstrates the `CreateConnection` action.

```
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AAKEOFPINEXAMPLE:session",
        "arn": "arn:aws:sts::123456789012:assumed-role/MyRole/session",
        "accountId": "123456789012",
        "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AAKEOFPINEXAMPLE",
                "arn": "arn:aws:iam::123456789012:role/MyRole",
                "accountId": "123456789012",
                "userName": "MyRole"
            },
            "webIdFederationData": {},
            "attributes": {
                "creationDate": "2022-09-21T17:19:02Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2022-09-21T22:22:05Z",
    "eventSource": "sqlworkbench.amazonaws.com",
    "eventName": "CreateConnection",
    "awsRegion": "ca-central-1",
    "sourceIPAddress": "192.2.0.2",
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Firefox/102.0",
    "requestParameters": {
        "password": "***",
        "databaseName": "***",
        "isServerless": false,
        "name": "***",
        "host": "redshift-cluster-2.c8robpbxvbf9.ca-central-1.redshift.amazonaws.com",
        "authenticationType": "***",
        "clusterId": "redshift-cluster-2",
        "username": "***",
        "tags": {
            "sqlworkbench-resource-owner": "AAKEOFPINEXAMPLE:session"
        }
    },
    "responseElements": {
        "result": true,
        "code": "",
        "data": {
            "id": "arn:aws:sqlworkbench:ca-central-1:123456789012:connection/ce56b1be-dd65-4bfb-8b17-12345123456",
            "name": "***",
            "authenticationType": "***",
            "databaseName": "***",
            "secretArn": "arn:aws:secretsmanager:ca-central-1:123456789012:secret:sqlworkbench!7da333b4-9a07-4917-b1dc-12345123456-qTCoFm",
            "clusterId": "redshift-cluster-2",
            "dbUser": "***",
            "userSettings": "***",
            "recordDate": "2022-09-21 22:22:05",
            "updatedDate": "2022-09-21 22:22:05",
            "accountId": "123456789012",
            "tags": {
                "sqlworkbench-resource-owner": "AAKEOFPINEXAMPLE:session"
            },
            "isServerless": false
        }
    },
    "requestID": "9b82f483-9c03-4cdd-bb49-a7009e7da714",
    "eventID": "a7cdd442-e92f-46a2-bc82-2325588d41c3",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "managementEvent": true,
    "recipientAccountId": "123456789012",
    "eventCategory": "Management"
}
```

## Amazon Redshift account IDs in Amazon CloudTrail logs


When Amazon Redshift calls another Amazon service for you, the call is logged with an account ID that belongs to Amazon Redshift. It isn't logged with your account ID. For example, suppose that Amazon Redshift calls Amazon Key Management Service (Amazon KMS) operations such as `CreateGrant`, `Decrypt`, `Encrypt`, and `RetireGrant` to manage encryption on your cluster. In this case, the calls are logged by Amazon CloudTrail using an Amazon Redshift account ID.

Amazon Redshift uses a set of dedicated service account IDs when calling other Amazon services. For the list of account IDs, see the [AWS documentation website](http://docs.amazonaws.cn/en_us/redshift/latest/mgmt/logging-with-cloudtrail.html).

The following example shows a CloudTrail log entry for the Amazon KMS Decrypt operation that was called by Amazon Redshift.

```
{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAI5QPCMKLTL4VHFCYY:i-0f53e22dbe5df8a89",
        "arn": "arn:aws:sts::790247189693:assumed-role/prod-23264-role-wp/i-0f53e22dbe5df8a89",
        "accountId": "790247189693",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2017-03-03T16:24:54Z"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAI5QPCMKLTL4VHFCYY",
                "arn": "arn:aws:iam::790247189693:role/prod-23264-role-wp",
                "accountId": "790247189693",
                "userName": "prod-23264-role-wp"
            }
        }
    },
    "eventTime": "2017-03-03T17:16:51Z",
    "eventSource": "kms.amazonaws.com",
    "eventName": "Decrypt",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "52.14.143.61",
    "userAgent": "aws-internal/3",
    "requestParameters": {
        "encryptionContext": {
            "aws:redshift:createtime": "20170303T1710Z",
            "aws:redshift:arn": "arn:aws:redshift:us-east-2:123456789012:cluster:my-dw-instance-2"
        }
    },
    "responseElements": null,
    "requestID": "30d2fe51-0035-11e7-ab67-17595a8411c8",
    "eventID": "619bad54-1764-4de4-a786-8898b0a7f40c",
    "readOnly": true,
    "resources": [
        {
            "ARN": "arn:aws:kms:us-east-2:123456789012:key/f8f4f94f-e588-4254-b7e8-078b99270be7",
            "accountId": "123456789012",
            "type": "AWS::KMS::Key"
        }
    ],
    "eventType": "AwsApiCall",
    "recipientAccountId": "123456789012",
    "sharedEventID": "c1daefea-a5c2-4fab-b6f4-d8eaa1e522dc"
}
```
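In the entry above, `userIdentity.accountId` (an Amazon Redshift service account) differs from `recipientAccountId` (your account). Comparing the two is one way to pick out calls that Redshift made on your behalf when scanning KMS events; a rough sketch:

```python
def made_by_service_account(record):
    """Return True when the calling account differs from the recipient
    account, as in calls Amazon Redshift makes to Amazon KMS for you."""
    caller = record.get("userIdentity", {}).get("accountId")
    recipient = record.get("recipientAccountId")
    return caller is not None and caller != recipient

entry = {
    "userIdentity": {"accountId": "790247189693"},
    "recipientAccountId": "123456789012",
}
print(made_by_service_account(entry))  # True
```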