

# CloudFront and edge function logging
<a name="logging"></a>

Amazon CloudFront provides different kinds of logging. You can log the viewer requests that come to your CloudFront distributions, or you can log the CloudFront service activity (API activity) in your Amazon Web Services account. You can also get logs from your CloudFront Functions and Lambda@Edge functions.

## Logging requests
<a name="logging-requests"></a>

CloudFront provides the following ways to log the requests that come to your distributions.

**Access logs (standard logs)**  
CloudFront access logs provide detailed records about every request that's made to a distribution. You can use the logs for scenarios such as security and access audits.   
CloudFront access logs are delivered to the delivery destination that you specify.   
Use access logs when you need:  
+ Historical analysis and reporting
+ Security audits and compliance requirements
+ Cost-effective long-term log retention
For more information, see [Access logs (standard logs)](AccessLogs.md).

**Real-time access logs**  
CloudFront real-time access logs are delivered within seconds of receiving the requests and provide information about requests made to a distribution in real time. You can choose the *sampling rate* for your real-time access logs—that is, the percentage of requests for which you want to receive real-time access log records. You can also choose the specific fields that you want to receive in the log records. Real-time access logs are ideal for live monitoring of content delivery performance.  
CloudFront real-time access logs are delivered to the data stream of your choice in Amazon Kinesis Data Streams. CloudFront charges for real-time access logs, in addition to the charges you incur for using Kinesis Data Streams.  
Use real-time access logs when you need:  
+ Real-time monitoring and alerts
+ Live dashboards and operational insights
For more information, see [Use real-time access logs](real-time-logs.md).

**Connection logs**  
Connection logs provide detailed information about the connection between the server and the client for mTLS-enabled distributions. Connection logs provide visibility into client certificate information, the reasons for mTLS authentication failures, and whether a connection was permitted or refused.  
Like access logs (standard logs), connection logs are delivered to the delivery destination that you specify.  
To enable connection logs, you must first [enable mTLS](mtls-authentication.md) for your distribution.  
Use connection logs when you need:  
+ Reasons for successful or unsuccessful connections during the TLS handshake 
+ Visibility into the client certificate information
For more information, see [Observability using connection logs](connection-logs.md).

## Logging edge functions
<a name="logging-edge-functions"></a>

You can use Amazon CloudWatch Logs to get logs for your edge functions, both Lambda@Edge and CloudFront Functions. You can access the logs using the CloudWatch console or the CloudWatch Logs API. For more information, see [Edge function logs](edge-functions-logs.md).

## Logging service activity
<a name="logging-service-activity"></a>

You can use Amazon CloudTrail to log the CloudFront service activity (API activity) in your Amazon Web Services account. CloudTrail provides a record of API actions taken by a user, role, or Amazon service in CloudFront. Using the information collected by CloudTrail, you can determine the API request that was made to CloudFront, the IP address from which the request was made, who made the request, when it was made, and additional details.

For more information, see [Logging Amazon CloudFront API calls using Amazon CloudTrail](logging_using_cloudtrail.md).

For more information about logging, see the following topics:

**Topics**
+ [Logging requests](#logging-requests)
+ [Logging edge functions](#logging-edge-functions)
+ [Logging service activity](#logging-service-activity)
+ [Access logs (standard logs)](AccessLogs.md)
+ [Use real-time access logs](real-time-logs.md)
+ [Edge function logs](edge-functions-logs.md)
+ [Logging Amazon CloudFront API calls using Amazon CloudTrail](logging_using_cloudtrail.md)

# Access logs (standard logs)
<a name="AccessLogs"></a>

You can configure CloudFront to create log files that contain detailed information about every user (viewer) request that CloudFront receives. These are called *access logs*, also known as *standard logs*. 

Each log contains information such as the time the request was received, the processing time, request paths, and server responses. You can use these access logs to analyze response times and to troubleshoot issues.
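As an illustration of that kind of analysis, a few standard command-line tools can summarize a delivered log file. The following is only a sketch: the file name is hypothetical, and the layout is simplified to a handful of tab-separated columns (real access logs contain many more fields; see the standard logging reference for the actual field list and positions).

```shell
# Build a simplified, hypothetical sample of a tab-separated access log.
# (Real access logs contain many more fields; see the standard logging reference.)
printf '2024-11-14\t21:34:06\tSOF50-P2\t200\t0.045\n' >  sample-log.tsv
printf '2024-11-14\t21:34:07\tSOF50-P2\t404\t0.012\n' >> sample-log.tsv
printf '2024-11-14\t21:34:08\tSOF50-P2\t200\t0.051\n' >> sample-log.tsv

# Count responses per HTTP status code (column 4 in this simplified layout).
awk -F'\t' '{count[$4]++} END {for (s in count) print s, count[s]}' sample-log.tsv | sort
# → 200 2
# → 404 1
```

For real log files, substitute the column number of the field you're interested in (for example, `sc-status` or `time-taken`) based on your delivery configuration.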

The following diagram shows how CloudFront logs information about requests for your objects. In this example, the distributions are configured to send access logs to an Amazon S3 bucket.

![\[Basic flow for access logs\]](http://docs.amazonaws.cn/en_us/AmazonCloudFront/latest/DeveloperGuide/images/Logging.png)


1. In this example, you have two websites, A and B, and two corresponding CloudFront distributions. Users request your objects using URLs that are associated with your distributions.

1. CloudFront routes each request to the appropriate edge location.

1. CloudFront writes data about each request to a log file specific to that distribution. In this example, information about requests related to Distribution A goes into a log file for Distribution A. Information about requests related to Distribution B goes into a log file for Distribution B.

1. CloudFront periodically saves the log file for a distribution in the Amazon S3 bucket that you specified when you enabled logging. CloudFront then starts saving information about subsequent requests in a new log file for the distribution.

   If viewers don't access your content during a given hour, you don't receive any log files for that hour.

**Note**  
We recommend that you use the logs to understand the nature of the requests for your content, not as a complete accounting of all requests. CloudFront delivers access logs on a best-effort basis. The log entry for a particular request might be delivered long after the request was actually processed and, in rare cases, a log entry might not be delivered at all. When a log entry is omitted from access logs, the number of entries in the access logs won't match the usage that appears in the Amazon billing and usage reports.

CloudFront supports two versions of standard logging. Standard logging (legacy) supports sending your access logs to Amazon S3 *only*. Standard logging (v2) supports additional delivery destinations. You can configure either or both logging options for your distribution. For more information, see the following topics:

**Topics**
+ [Configure standard logging (v2)](standard-logging.md)
+ [Configure standard logging (legacy)](standard-logging-legacy-s3.md)
+ [Standard logging reference](standard-logs-reference.md)

**Tip**  
CloudFront also offers real-time access logs, which give you information about requests made to a distribution in real time (logs are delivered within seconds of receiving the requests). You can use real-time access logs to monitor, analyze, and take action based on content delivery performance. For more information, see [Use real-time access logs](real-time-logs.md).

# Configure standard logging (v2)
<a name="standard-logging"></a>

**Note**  
The standard logging (v2) feature isn't available in the China Regions.

You can enable access logs (standard logs) when you create or update a distribution. Standard logging (v2) includes the following features:
+ Send access logs to Amazon CloudWatch Logs, Amazon Data Firehose, and Amazon Simple Storage Service (Amazon S3).
+ Select the log fields that you want. You can also select a [subset of real-time access log fields](#standard-logging-real-time-log-selection).
+ Select additional [output log file formats](#supported-log-file-format).

If you’re using Amazon S3, you have the following optional features:
+ Send logs to opt-in Amazon Web Services Regions.
+ Organize your logs with partitioning.
+ Enable Hive-compatible file names.

For more information, see [Send logs to Amazon S3](#send-logs-s3).
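As a sketch of how the S3 options above might look in a single CLI call (not runnable as-is: the ARNs are placeholders, and the path variables are assumptions based on the `S3DeliveryConfiguration` API reference):

```shell
# Sketch: create a delivery that partitions S3 log objects by distribution ID
# and date, with Hive-compatible paths enabled. Verify parameter names against
# the CloudWatch Logs CreateDelivery and S3DeliveryConfiguration references.
aws logs create-delivery \
    --delivery-source-name cf-delivery \
    --delivery-destination-arn arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination \
    --s3-delivery-configuration '{"suffixPath":"MyLogPrefix/{DistributionId}/{yyyy}/{MM}/{dd}","enableHiveCompatiblePath":true}'
```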

To get started with standard logging, complete the following steps:

1. Set up the required permissions for the Amazon Web Services service that will receive your logs.

1. Configure standard logging from the CloudFront console or the CloudWatch API.

1. View your access logs.

**Note**  
If you enable standard logging (v2), this doesn’t affect or change standard logging (legacy). You can continue to use standard logging (legacy) for your distribution, in addition to using standard logging (v2). For more information, see [Configure standard logging (legacy)](standard-logging-legacy-s3.md).
If you already enabled standard logging (legacy) and you want to enable standard logging (v2) to Amazon S3, we recommend that you specify a *different* Amazon S3 bucket or use a *separate path* in the same bucket (for example, use a log prefix or partitioning). This helps you keep track of which log files are associated with which distribution and prevents log files from overwriting each other.

## Permissions
<a name="permissions-standard-logging"></a>

CloudFront uses CloudWatch vended logs to deliver access logs. To enable log delivery, you need the required permissions for the Amazon Web Services service that will receive your logs.

To see the required permissions for each logging destination, choose from the following topics in the *Amazon CloudWatch Logs User Guide*.
+ [CloudWatch Logs](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-CloudWatchLogs)
+ [Firehose](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-Firehose)
+ [Amazon S3](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-S3)

After you have set up permissions to your logging destination, you can enable standard logging for your distribution.

**Note**  
CloudFront supports sending access logs to a different Amazon Web Services account (cross-account delivery). To enable cross-account delivery, both accounts (your account and the receiving account) must have the required permissions. For more information, see the [Enable standard logging for cross-account delivery](#enable-standard-logging-cross-accounts) section or the [Cross-account delivery example](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#vended-logs-crossaccount-example) in the *Amazon CloudWatch Logs User Guide*. 

## Enable standard logging
<a name="set-up-standard-logging"></a>

To enable standard logging, you can use the CloudFront console or the CloudWatch API.

**Contents**
+ [Enable standard logging (CloudFront console)](#access-logging-console)
+ [Enable standard logging (CloudWatch API)](#enable-access-logging-api)

### Enable standard logging (CloudFront console)
<a name="access-logging-console"></a>

**To enable standard logging for a CloudFront distribution (console)**

1. Use the CloudFront console to [update an existing distribution](HowToUpdateDistribution.md#HowToUpdateDistributionProcedure).

1. Choose the **Logging** tab.

1. Choose **Add**, then select the service to receive your logs:
   + CloudWatch Logs
   + Firehose
   + Amazon S3

1. For the **Destination**, select the resource for your service. If you haven’t already created your resource, you can choose **Create** or see the following documentation.
   + For CloudWatch Logs, enter the **[Log group name](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html)**.
   + For Firehose, enter the **[Firehose delivery stream](https://docs.amazonaws.cn/firehose/latest/dev/basic-create.html)**.
   + For Amazon S3, enter the **[Bucket name](https://docs.amazonaws.cn/AmazonS3/latest/userguide/create-bucket-overview.html)**. 
**Tip**  
To specify a prefix, enter the prefix after the bucket name, such as `amzn-s3-demo-bucket.s3.amazonaws.com/MyLogPrefix`. If you don't specify a prefix, CloudFront will automatically add one for you. For more information, see [Send logs to Amazon S3](#send-logs-s3).

1. For **Additional settings – *optional***, you can specify the following options:

   1. For **Field selection**, select the log field names that you want to deliver to your destination. You can select [access log fields](standard-logs-reference.md#BasicDistributionFileFormat) and a subset of [real-time access log fields](#standard-logging-real-time-log-selection).

   1. (Amazon S3 only) For **Partitioning**, specify the path to partition your log file data. 

   1. (Amazon S3 only) For **Hive-compatible file format**, you can select the checkbox to use Hive-compatible S3 paths. This helps simplify loading new data into your Hive-compatible tools.

   1. For **Output format**, specify your preferred format.
**Note**  
If you choose **Parquet**, this option incurs CloudWatch charges for converting your access logs to Apache Parquet. For more information, see the [Vended Logs section for CloudWatch pricing](https://www.amazonaws.cn/cloudwatch/pricing/).

   1. For **Field delimiter**, specify how to separate log fields. 

1. Complete the steps to update or create your distribution.

1. To add another destination, repeat steps 3–6.

1. From the **Logs** page, verify that the standard logs status is **Enabled** next to the distribution.

1. (Optional) To enable cookie logging, choose **Manage**, **Settings** and turn on **Cookie logging**, then choose **Save changes**.
**Tip**  
Cookie logging is a global setting that applies to *all* standard logging for your distribution. You can’t override this setting for separate delivery destinations.

For more information about the standard logging delivery and log fields, see the [Standard logging reference](standard-logs-reference.md).

### Enable standard logging (CloudWatch API)
<a name="enable-access-logging-api"></a>

You can also use the CloudWatch API to enable standard logging for your distributions. 

**Notes**  
When calling the CloudWatch API to enable standard logging, you must specify the US East (N. Virginia) Region (`us-east-1`), even if you want to enable cross-Region delivery to another destination. For example, if you want to send your access logs to an S3 bucket in the Europe (Ireland) Region (`eu-west-1`), use the CloudWatch API in the `us-east-1` Region.
There is an additional option to include cookies in standard logging. In the CloudFront API, this is the `IncludeCookies` parameter. If you configure access logging by using the CloudWatch API and you specify that you want to include cookies, you must use the CloudFront console or CloudFront API to update your distribution to include cookies. Otherwise, CloudFront can’t send cookies to your log destination. For more information, see [Cookie logging](DownloadDistValuesGeneral.md#DownloadDistValuesCookieLogging).

**To enable standard logging for a distribution (CloudWatch API)**

1. After you create a distribution, get the Amazon Resource Name (ARN). 

   You can find the ARN from the **Distribution** page in the CloudFront console or you can use the [GetDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_GetDistribution.html) API operation. A distribution ARN follows the format: `arn:aws:cloudfront::123456789012:distribution/d111111abcdef8` 

1. Next, use the CloudWatch [PutDeliverySource](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) API operation to create a delivery source for the distribution. 

   1. Enter a name for the delivery source.

   1. Pass the `resourceArn` of the distribution. 

   1. For `logType`, specify `ACCESS_LOGS` as the type of logs that are collected. 

   1. **Example Amazon CLI put-delivery-source command**  

      The following is an example of configuring a delivery source for a distribution.

      ```
      aws logs put-delivery-source --name S3-delivery --resource-arn arn:aws:cloudfront::123456789012:distribution/d111111abcdef8 --log-type ACCESS_LOGS
      ```

      **Output**

      ```
      {
          "deliverySource": {
              "name": "S3-delivery",
              "arn": "arn:aws:logs:us-east-1:123456789012:delivery-source:S3-delivery",
              "resourceArns": [
                  "arn:aws:cloudfront::123456789012:distribution/d111111abcdef8"
              ],
              "service": "cloudfront",
              "logType": "ACCESS_LOGS"
          }
      }
      ```

1. Use the [PutDeliveryDestination](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) API operation to configure where to store your logs. 

   1. For `destinationResourceArn`, specify the ARN of the destination. This can be a CloudWatch Logs log group, a Firehose delivery stream, or an Amazon S3 bucket.

   1. For `outputFormat`, specify the output format for your logs.

   1. **Example Amazon CLI put-delivery-destination command**  

      The following is an example of configuring a delivery destination to an Amazon S3 bucket.

      ```
      aws logs put-delivery-destination --name S3-destination --delivery-destination-configuration destinationResourceArn=arn:aws:s3:::amzn-s3-demo-bucket
      ```

      **Output**

      ```
      {
          "name": "S3-destination",
          "arn": "arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination",
          "deliveryDestinationType": "S3",
          "deliveryDestinationConfiguration": {
              "destinationResourceArn": "arn:aws:s3:::amzn-s3-demo-bucket"
          }
      }
      ```
**Note**  
If you're delivering logs cross-account, you must use the [PutDeliveryDestinationPolicy](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html) API operation to assign an Amazon Identity and Access Management (IAM) policy to the destination account. The IAM policy allows delivery from one account to another account.

1. Use the [CreateDelivery](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) API operation to link the delivery source to the destination that you created in the previous steps. This API operation associates the delivery source with the end destination.

   1. For `deliverySourceName`, specify the source name.

   1. For `deliveryDestinationArn`, specify the ARN for the delivery destination.

   1. For `fieldDelimiter`, specify the string to separate each log field.

   1. For `recordFields`, specify the log fields that you want.

   1. If you’re using S3, specify whether to use `enableHiveCompatiblePath` and `suffixPath`.

   **Example Amazon CLI create-delivery command**  

   The following is an example of creating a delivery. 

   ```
   aws logs create-delivery --delivery-source-name cf-delivery --delivery-destination-arn arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination
   ```

   **Output**

   ```
   {
       "id": "abcNegnBoTR123",
       "arn": "arn:aws:logs:us-east-1:123456789012:delivery:abcNegnBoTR123",
       "deliverySourceName": "cf-delivery",
       "deliveryDestinationArn": "arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination",
       "deliveryDestinationType": "S3",
       "recordFields": [
           "date",
           "time",
           "x-edge-location",
           "sc-bytes",
           "c-ip",
           "cs-method",
           "cs(Host)",
           "cs-uri-stem",
           "sc-status",
           "cs(Referer)",
           "cs(User-Agent)",
           "cs-uri-query",
           "cs(Cookie)",
           "x-edge-result-type",
           "x-edge-request-id",
           "x-host-header",
           "cs-protocol",
           "cs-bytes",
           "time-taken",
           "x-forwarded-for",
           "ssl-protocol",
           "ssl-cipher",
           "x-edge-response-result-type",
           "cs-protocol-version",
           "fle-status",
           "fle-encrypted-fields",
           "c-port",
           "time-to-first-byte",
           "x-edge-detailed-result-type",
           "sc-content-type",
           "sc-content-len",
           "sc-range-start",
           "sc-range-end",
           "c-country",
           "cache-behavior-path-pattern"
       ],
       "fieldDelimiter": ""
   }
   ```

1. From the CloudFront console, on the **Logs** page, verify that the standard logs status is **Enabled** next to the distribution.

   For more information about the standard logging delivery and log fields, see the [Standard logging reference](standard-logs-reference.md).

**Note**  
To enable standard logging (v2) for CloudFront by using Amazon CloudFormation, you can use the following CloudWatch Logs properties:  
[Delivery](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-logs-delivery.html)
[DeliveryDestination](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverydestination.html)
[DeliverySource](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-logs-deliverysource.html)
For `ResourceArn`, specify the CloudFront distribution ARN. `LogType` must be `ACCESS_LOGS`, the only supported log type.
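The CloudFormation wiring might be sketched as follows (a sketch only, with placeholder ARNs; check the property names against the linked resource references before use):

```yaml
# Sketch: standard logging (v2) via the three CloudWatch Logs resources.
CfDeliverySource:
  Type: AWS::Logs::DeliverySource
  Properties:
    Name: cf-delivery-source
    ResourceArn: arn:aws:cloudfront::123456789012:distribution/d111111abcdef8
    LogType: ACCESS_LOGS

CfDeliveryDestination:
  Type: AWS::Logs::DeliveryDestination
  Properties:
    Name: cf-delivery-destination
    DestinationResourceArn: arn:aws:s3:::amzn-s3-demo-bucket

CfDelivery:
  Type: AWS::Logs::Delivery
  Properties:
    DeliverySourceName: !Ref CfDeliverySource
    DeliveryDestinationArn: !GetAtt CfDeliveryDestination.Arn
```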

## Enable standard logging for cross-account delivery
<a name="enable-standard-logging-cross-accounts"></a>

If you enable standard logging for your Amazon Web Services account and you want to deliver your access logs to another account, make sure that you configure the source account and the destination account correctly. The *source account* with the CloudFront distribution sends its access logs to the *destination account*.

In this example procedure, the source account (*111111111111*) sends its access logs to an Amazon S3 bucket in the destination account (*222222222222*). To send access logs to an Amazon S3 bucket in the destination account, use the Amazon CLI. 

### Configure the destination account
<a name="steps-destination-account"></a>

For the destination account, complete the following procedure.

**To configure the destination account**

1. To create the log delivery destination, you can enter the following Amazon CLI command. This example uses the `MyLogPrefix` string to create a prefix for your access logs.

   ```
   aws logs put-delivery-destination --name cloudfront-delivery-destination --delivery-destination-configuration "destinationResourceArn=arn:aws:s3:::amzn-s3-demo-bucket-cloudfront-logs/MyLogPrefix"
   ```

   **Output**

   ```
   {
       "deliveryDestination": {
           "name": "cloudfront-delivery-destination",
           "arn": "arn:aws:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination",
           "deliveryDestinationType": "S3",
           "deliveryDestinationConfiguration": {"destinationResourceArn": "arn:aws:s3:::amzn-s3-demo-bucket-cloudfront-logs/MyLogPrefix"}
       }
   }
   ```
**Note**  
If you specify an S3 bucket *without* a prefix, CloudFront automatically appends `AWSLogs/<account-ID>/CloudFront` as a prefix that appears in the `suffixPath` of the S3 delivery destination. For more information, see [S3DeliveryConfiguration](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_S3DeliveryConfiguration.html).

1. Add the resource policy for the log delivery destination to allow the source account to create a log delivery.

   In the following policy, replace *111111111111* with the source account ID and specify the delivery destination ARN from the output in step 1. 


   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowCreateDelivery",
               "Effect": "Allow",
               "Principal": {"AWS": "111111111111"},
               "Action": ["logs:CreateDelivery"],
               "Resource": "arn:aws-cn:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination"
           }
       ]
   }
   ```


1. Save the policy in a file, such as `deliverypolicy.json`.

1. To attach the previous policy to the delivery destination, enter the following Amazon CLI command.

   ```
   aws logs put-delivery-destination-policy --delivery-destination-name cloudfront-delivery-destination --delivery-destination-policy file://deliverypolicy.json
   ```

1. Add the following statement to the destination Amazon S3 bucket policy, replacing the resource ARN and the source account ID. This policy allows the `delivery.logs.amazonaws.com` service principal to perform the `s3:PutObject` action.

   ```
   {
       "Sid": "AWSLogsDeliveryWrite",
       "Effect": "Allow",
       "Principal": {"Service": "delivery.logs.amazonaws.com"},
       "Action": "s3:PutObject",
       "Resource": "arn:aws:s3:::amzn-s3-demo-bucket-cloudfront-logs/*",
       "Condition": {
           "StringEquals": {
               "s3:x-amz-acl": "bucket-owner-full-control",
               "aws:SourceAccount": "111111111111"
           },
           "ArnLike": {"aws:SourceArn": "arn:aws:logs:us-east-1:111111111111:delivery-source:*"}
       }
   }
   ```

1. If you're using Amazon KMS for your bucket, add the following statement to the KMS key policy to grant permissions to the `delivery.logs.amazonaws.com` service principal.

   ```
   {
       "Sid": "Allow Logs Delivery to use the key",
       "Effect": "Allow",
       "Principal": {"Service": "delivery.logs.amazonaws.com"},
       "Action": [
           "kms:Encrypt",
           "kms:Decrypt",
           "kms:ReEncrypt*",
           "kms:GenerateDataKey*",
           "kms:DescribeKey"
       ],
       "Resource": "*",
       "Condition": {
           "StringEquals": {"aws:SourceAccount": "111111111111"},
           "ArnLike": {"aws:SourceArn": "arn:aws:logs:us-east-1:111111111111:delivery-source:*"}
       }
   }
   ```

### Configure the source account
<a name="steps-source-account"></a>

After you configure the destination account, follow this procedure to create the delivery source and enable logging for the distribution in the source account.

**To configure the source account**

1. Create a delivery source for CloudFront standard logging so that you can send log files to CloudWatch Logs. 

   You can enter the following Amazon CLI command, replacing the name and your distribution ARN.

   ```
   aws logs put-delivery-source --name s3-cf-delivery --resource-arn arn:aws:cloudfront::111111111111:distribution/E1TR1RHV123ABC --log-type ACCESS_LOGS
   ```

   **Output**

   ```
   {
       "deliverySource": {
           "name": "s3-cf-delivery",
           "arn": "arn:aws:logs:us-east-1:111111111111:delivery-source:s3-cf-delivery",
           "resourceArns": ["arn:aws:cloudfront::111111111111:distribution/E1TR1RHV123ABC"],
           "service": "cloudfront",
           "logType": "ACCESS_LOGS"
       }
   }
   ```

1. Create a delivery to map the source account's log delivery source and the destination account's log delivery destination.

   In the following Amazon CLI command, specify the delivery destination ARN from the output in step 1 of [Configure the destination account](#steps-destination-account).

   ```
   aws logs create-delivery --delivery-source-name s3-cf-delivery --delivery-destination-arn arn:aws:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination
   ```

   **Output**

   ```
   {
       "delivery": {
           "id": "OPmOpLahVzhx1234",
           "arn": "arn:aws:logs:us-east-1:111111111111:delivery:OPmOpLahVzhx1234",
           "deliverySourceName": "s3-cf-delivery",
           "deliveryDestinationArn": "arn:aws:logs:us-east-1:222222222222:delivery-destination:cloudfront-delivery-destination",
           "deliveryDestinationType": "S3",
           "recordFields": [
               "date",
               "time",
               "x-edge-location",
               "sc-bytes",
               "c-ip",
               "cs-method",
               "cs(Host)",
               "cs-uri-stem",
               "sc-status",
               "cs(Referer)",
               "cs(User-Agent)",
               "cs-uri-query",
               "cs(Cookie)",
               "x-edge-result-type",
               "x-edge-request-id",
               "x-host-header",
               "cs-protocol",
               "cs-bytes",
               "time-taken",
               "x-forwarded-for",
               "ssl-protocol",
               "ssl-cipher",
               "x-edge-response-result-type",
               "cs-protocol-version",
               "fle-status",
               "fle-encrypted-fields",
               "c-port",
               "time-to-first-byte",
               "x-edge-detailed-result-type",
               "sc-content-type",
               "sc-content-len",
               "sc-range-start",
               "sc-range-end",
               "c-country",
               "cache-behavior-path-pattern"
           ],
           "fieldDelimiter": "\t"
       }
   }
   ```

1. Verify your cross-account delivery is successful.

   1. From the *source* account, sign in to the CloudFront console and choose your distribution. On the **Logging** tab, under **Type**, you will see an entry created for the S3 cross-account log delivery.

   1. From the *destination* account, sign in to the Amazon S3 console and choose your Amazon S3 bucket. You will see the `MyLogPrefix` folder in the bucket, with any delivered access logs inside it. 
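To spot-check the delivery from the command line instead of the console, you might list the delivered objects from the destination account (a sketch reusing this example's bucket name and prefix; it requires the cross-account setup above to be in place):

```shell
# Sketch: from the destination account, list delivered access log objects.
aws s3 ls s3://amzn-s3-demo-bucket-cloudfront-logs/MyLogPrefix/ --recursive
```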

## Output file format
<a name="supported-log-file-format"></a>

Depending on the delivery destination that you choose, you can specify one of the following formats for log files:
+ JSON
+ Plain
+ W3C
+ Raw
+ Parquet (Amazon S3 only)

**Note**  
You can only set the output format when you first create the delivery destination. This can't be updated later. To change the output format, delete the delivery and create another one.

For more information, see [PutDeliveryDestination](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) in the *Amazon CloudWatch Logs API Reference*.
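For example, to switch an existing S3 delivery to Parquet output, you would delete and recreate rather than edit in place. The following is a sketch only (placeholder names and IDs; the delete-then-recreate sequence and `--output-format` value are assumptions based on the linked API reference):

```shell
# Sketch (not runnable as-is): the output format is fixed at creation time,
# so delete the existing delivery, create a destination with the new format,
# and then re-link the delivery source to it.
aws logs delete-delivery --id abcNegnBoTR123
aws logs put-delivery-destination \
    --name S3-destination-parquet \
    --output-format parquet \
    --delivery-destination-configuration destinationResourceArn=arn:aws:s3:::amzn-s3-demo-bucket
aws logs create-delivery \
    --delivery-source-name cf-delivery \
    --delivery-destination-arn arn:aws:logs:us-east-1:123456789012:delivery-destination:S3-destination-parquet
```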

## Edit standard logging settings
<a name="standard-logs-v2-edit-settings"></a>

You can enable or disable logging and update other log settings by using the [CloudFront console](https://console.amazonaws.cn/cloudfront/v4/home) or the CloudWatch API. Your changes to logging settings take effect within 12 hours.

For more information, see the following topics:
+ To update a distribution by using the CloudFront console, see [Update a distribution](HowToUpdateDistribution.md).
+ To update a distribution by using the CloudFront API, see [UpdateDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_UpdateDistribution.html) in the *Amazon CloudFront API Reference*.
+ For more information about CloudWatch Logs API operations, see the [Amazon CloudWatch Logs API Reference](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/Welcome.html).

## Access log fields
<a name="standard-logging-real-time-log-selection"></a>

You can select the same log fields that standard logging (legacy) supports. For more information, see [log file fields](standard-logs-reference.md#BasicDistributionFileFormat).

In addition, you can select the following [real-time access log fields](real-time-logs.md#understand-real-time-log-config).

1. `timestamp(ms)` – Timestamp in milliseconds.

1. `origin-fbl` – The number of seconds of first-byte latency between CloudFront and your origin. 

1. `origin-lbl` – The number of seconds of last-byte latency between CloudFront and your origin. 

1. `asn` – The autonomous system number (ASN) of the viewer. 

1. `c-country` – A country code that represents the viewer's geographic location, as determined by the viewer's IP address. For a list of country codes, see [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2).

1. `cache-behavior-path-pattern` – The path pattern that identifies the cache behavior that matched the viewer request. 
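As a rough illustration, the following Python sketch shows one way to interpret these additional fields in a delivered log record. The record values are taken from the examples later in this topic; CloudFront delivers field values as strings, so numeric fields need conversion. The `record` dictionary itself is hypothetical.

```python
from datetime import datetime, timezone

# Hypothetical log record that uses the additional fields described above.
# CloudFront delivers every field value as a string.
record = {
    "timestamp(ms)": "1731620046814",
    "origin-fbl": "0.251",
    "origin-lbl": "0.251",
    "asn": "16509",
    "c-country": "US",
    "cache-behavior-path-pattern": "/api/*",
}

# Convert the millisecond timestamp to a UTC datetime.
ms = int(record["timestamp(ms)"])
ts = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

# First-byte and last-byte latency between CloudFront and the origin, in seconds.
first_byte = float(record["origin-fbl"])
last_byte = float(record["origin-lbl"])

print(ts.date(), ts.strftime("%H:%M:%S"), first_byte, last_byte)
```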

## Send logs to CloudWatch Logs
<a name="send-logs-cloudwatch-logs"></a>

To send logs to CloudWatch Logs, create or use an existing CloudWatch Logs log group. For more information about configuring a CloudWatch Logs log group, see [Working with Log Groups and Log Streams](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html).

After you create your log group, you must have the required permissions to allow standard logging. For more information about the required permissions, see [Logs sent to CloudWatch Logs](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-CloudWatchLogs) in the *Amazon CloudWatch Logs User Guide*. 

**Notes**  
When you specify the name of the CloudWatch Logs log group, only use the regex pattern `[\w-]`. For more information, see the [PutDeliveryDestination](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html#API_PutDeliveryDestination_RequestSyntax) API operation in the *Amazon CloudWatch Logs API Reference*.
Verify that your log group resource policy doesn't exceed the size limit. See the [Log group resource policy size limit considerations](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-CloudWatchLogs) section in the CloudWatch Logs topic.

### Example access log sent to CloudWatch Logs
<a name="example-access-logs-cwl"></a>

```
{ 
"date": "2024-11-14", 
"time": "21:34:06", 
"x-edge-location": "SOF50-P2", 
"asn": "16509", 
"timestamp(ms)": "1731620046814", 
"origin-fbl": "0.251", 
"origin-lbl": "0.251", 
"x-host-header": "d111111abcdef8.cloudfront.net", 
"cs(Cookie)": "examplecookie=value" 
}
```

## Send logs to Firehose
<a name="send-logs-kinesis"></a>

To send logs to Firehose, create or use an existing Firehose delivery stream, and then specify that delivery stream as the log delivery destination. You must specify a Firehose delivery stream in the US East (N. Virginia) Region (`us-east-1`).

For information about creating your delivery stream, see [Creating an Amazon Data Firehose delivery stream](https://docs.amazonaws.cn/firehose/latest/dev/basic-create.html). 

After you create your delivery stream, you must have the required permissions to allow standard logging. For more information, see [Logs sent to Firehose](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-Firehose) in the *Amazon CloudWatch Logs User Guide*.

**Note**  
When you specify the name of the Firehose stream, only use the regex pattern `[\w-]`. For more information, see the [PutDeliveryDestination](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html#API_PutDeliveryDestination_RequestSyntax) API operation in the *Amazon CloudWatch Logs API Reference*.

### Example access log sent to Firehose
<a name="example-access-logs-firehose"></a>

```
{"date":"2024-11-15","time":"19:45:51","x-edge-location":"SOF50-P2","asn":"16509","timestamp(ms)":"1731699951183","origin-fbl":"0.254","origin-lbl":"0.254","x-host-header":"d111111abcdef8.cloudfront.net","cs(Cookie)":"examplecookie=value"}
{"date":"2024-11-15","time":"19:45:52","x-edge-location":"SOF50-P2","asn":"16509","timestamp(ms)":"1731699952950","origin-fbl":"0.125","origin-lbl":"0.125","x-host-header":"d111111abcdef8.cloudfront.net","cs(Cookie)":"examplecookie=value"}
```
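Because Firehose delivers the records as newline-delimited JSON, each line parses independently. The following sketch (not part of CloudFront, and trimmed to a few fields for brevity) shows how you might parse a batch and aggregate a latency field:

```python
import json

# Two records in the newline-delimited JSON format shown above,
# trimmed to a few fields for readability.
raw = """\
{"date":"2024-11-15","time":"19:45:51","origin-fbl":"0.254","x-host-header":"d111111abcdef8.cloudfront.net"}
{"date":"2024-11-15","time":"19:45:52","origin-fbl":"0.125","x-host-header":"d111111abcdef8.cloudfront.net"}
"""

# Each line is a complete JSON object.
records = [json.loads(line) for line in raw.splitlines() if line]

# Average first-byte latency across the batch, in seconds.
avg_fbl = sum(float(r["origin-fbl"]) for r in records) / len(records)
print(len(records), round(avg_fbl, 4))
```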

## Send logs to Amazon S3
<a name="send-logs-s3"></a>

To send your access logs to Amazon S3, create or use an existing S3 bucket. When you enable logging in CloudFront, specify the bucket name. For information about creating a bucket, see [Create a bucket](https://docs.amazonaws.cn/AmazonS3/latest/userguide/create-bucket-overview.html) in the *Amazon Simple Storage Service User Guide*.

After you create your bucket, you must have the required permissions to allow standard logging. For more information, see [Logs sent to Amazon S3](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html#AWS-logs-infrastructure-V2-S3) in the *Amazon CloudWatch Logs User Guide*.
+ After you enable logging, Amazon automatically adds the required bucket policies for you.
+ You can also use S3 buckets in the [opt-in Amazon Web Services Regions](https://docs.amazonaws.cn/accounts/latest/reference/manage-acct-regions.html).

**Note**  
If you already enabled standard logging (legacy) and you want to enable standard logging (v2) to Amazon S3, we recommend that you specify a *different* Amazon S3 bucket or use a *separate path* in the same bucket (for example, use a log prefix or partitioning). This helps you keep track of which log files are associated with which distribution and prevents log files from overwriting each other.

**Topics**
+ [Specify an S3 bucket](#prefix-s3-buckets)
+ [Partitioning](#partitioning)
+ [Hive-compatible file name format](#hive-compatible-file-name-format)
+ [Example paths to access logs](#bucket-path-examples)
+ [Example access log sent to Amazon S3](#example-access-logs-s3)

### Specify an S3 bucket
<a name="prefix-s3-buckets"></a>

When you specify an S3 bucket as the delivery destination, note the following.

The S3 bucket name can only use the regex pattern `[\w-]`. For more information, see the [PutDeliveryDestination](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html#API_PutDeliveryDestination_RequestSyntax) API operation in the *Amazon CloudWatch Logs API Reference*.

If you specify a prefix for your S3 bucket, your logs appear under that path. If you don't specify a prefix, CloudFront automatically appends the `AWSLogs/{account-id}/CloudFront` prefix for you. 

For more information, see [Example paths to access logs](#bucket-path-examples).

### Partitioning
<a name="partitioning"></a>

You can use partitioning to organize your access logs when CloudFront sends them to your S3 bucket. This helps you organize and locate your access logs based on the path that you want.

You can use the following variables to create a folder path.
+ `{DistributionId}` or `{distributionid}`
+ `{yyyy}`
+ `{MM}`
+ `{dd}`
+ `{HH}`
+ `{accountid}`

You can use any number of variables and specify folder names in your path. CloudFront then uses this path to create a folder structure for you in the S3 bucket.

**Examples**
+ `my_distribution_log_data/{DistributionId}/logs`
+ `/cloudfront/{DistributionId}/my_distribution_log_data/{yyyy}/{MM}/{dd}/{HH}/logs`

**Note**  
You can use either distribution ID variable in the suffix path. However, if you're sending access logs to Amazon Glue, you must use the `{distributionid}` variable, because Amazon Glue expects partition names to be lowercase. Update your existing log configuration in CloudFront to replace `{DistributionId}` with `{distributionid}`. 

### Hive-compatible file name format
<a name="hive-compatible-file-name-format"></a>

You can use this option so that S3 objects that contain delivered access logs use a prefix structure that allows for integration with Apache Hive. For more information, see the [CreateDelivery](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) API operation.

**Example**  

```
/cloudfront/DistributionId={DistributionId}/my_distribution_log_data/year={yyyy}/month={MM}/day={dd}/hour={HH}/logs
```

For more information about partitioning and the Hive-compatible options, see the [S3DeliveryConfiguration](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_S3DeliveryConfiguration.html) element in the *Amazon CloudWatch Logs API Reference*.
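CloudFront performs this expansion for you when it delivers logs. The following Python sketch only illustrates how the partitioning variables map to concrete folder paths in both the plain and Hive-compatible styles; the `expand_suffix_path` helper and the example IDs are hypothetical.

```python
from datetime import datetime, timezone

def expand_suffix_path(template: str, distribution_id: str,
                       account_id: str, when: datetime) -> str:
    """Illustrative expansion of the partitioning variables described above."""
    return (template
            .replace("{DistributionId}", distribution_id)
            .replace("{distributionid}", distribution_id.lower())
            .replace("{accountid}", account_id)
            .replace("{yyyy}", f"{when.year:04d}")
            .replace("{MM}", f"{when.month:02d}")
            .replace("{dd}", f"{when.day:02d}")
            .replace("{HH}", f"{when.hour:02d}"))

when = datetime(2025, 1, 14, 21, tzinfo=timezone.utc)

# Plain partitioning: each variable becomes a folder segment.
plain = expand_suffix_path(
    "my_distribution_log_data/{DistributionId}/{yyyy}/{MM}/{dd}/{HH}/logs",
    "EMLARXS9EXAMPLE", "111122223333", when)

# Hive-compatible style: each partition is a key=value folder.
hive = expand_suffix_path(
    "DistributionId={DistributionId}/year={yyyy}/month={MM}/day={dd}/hour={HH}/logs",
    "EMLARXS9EXAMPLE", "111122223333", when)

print(plain)
print(hive)
```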

### Example paths to access logs
<a name="bucket-path-examples"></a>

When you specify an S3 bucket as the destination, you can use the following options to create the path to your access logs:
+ An Amazon S3 bucket, with or without a prefix
+ Partitioning, by using a CloudFront provided variable or entering your own
+ Enabling the Hive-compatible option

The following tables show how your access logs appear in your bucket, depending on the options that you choose.

#### Amazon S3 bucket with a prefix
<a name="bucket-with-prefix"></a>


| Amazon S3 bucket name | Partition that you specify in the suffix path | Updated suffix path | Hive-compatible enabled? | Access logs are sent to | 
| --- | --- | --- | --- | --- | 
| amzn-s3-demo-bucket/MyLogPrefix | None | None | No | amzn-s3-demo-bucket/MyLogPrefix/ | 
| amzn-s3-demo-bucket/MyLogPrefix | myFolderA/ | myFolderA/ | No | amzn-s3-demo-bucket/MyLogPrefix/myFolderA/ | 
| amzn-s3-demo-bucket/MyLogPrefix | myFolderA/{yyyy} | myFolderA/{yyyy} | Yes | amzn-s3-demo-bucket/MyLogPrefix/myFolderA/year=2025 | 

#### Amazon S3 bucket without a prefix
<a name="bucket-without-prefix"></a>


| Amazon S3 bucket name | Partition that you specify in the suffix path | Updated suffix path | Hive-compatible enabled? | Access logs are sent to | 
| --- | --- | --- | --- | --- | 
| amzn-s3-demo-bucket | None | AWSLogs/{account-id}/CloudFront/ | No | amzn-s3-demo-bucket/AWSLogs/<your-account-ID>/CloudFront/ | 
| amzn-s3-demo-bucket | myFolderA/ | AWSLogs/{account-id}/CloudFront/myFolderA/ | No | amzn-s3-demo-bucket/AWSLogs/<your-account-ID>/CloudFront/myFolderA/ | 
| amzn-s3-demo-bucket | myFolderA/ | AWSLogs/{account-id}/CloudFront/myFolderA/ | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/myFolderA/ | 
| amzn-s3-demo-bucket | myFolderA/{yyyy} | AWSLogs/{account-id}/CloudFront/myFolderA/{yyyy} | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/myFolderA/year=2025 | 

#### Amazon Web Services account ID as a partition
<a name="bucket-account-id-partition"></a>


| Amazon S3 bucket name | Partition that you specify in the suffix path | Updated suffix path | Hive-compatible enabled? | Access logs are sent to | 
| --- | --- | --- | --- | --- | 
| amzn-s3-demo-bucket | None | AWSLogs/{account-id}/CloudFront/ | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/ | 
| amzn-s3-demo-bucket | myFolderA/{accountid} | AWSLogs/{account-id}/CloudFront/myFolderA/{accountid} | Yes | amzn-s3-demo-bucket/AWSLogs/aws-account-id=<your-account-ID>/CloudFront/myFolderA/accountid=<your-account-ID> | 

**Notes**  
The `{account-id}` variable is reserved for CloudFront. CloudFront automatically adds this variable to your suffix path if you specify an Amazon S3 bucket *without* a prefix. If your logs are Hive-compatible, this variable appears as `aws-account-id`.
You can use the `{accountid}` variable so that CloudFront adds your account ID to the suffix path. If your logs are Hive-compatible, this variable appears as `accountid`.
For more information about the suffix path, see [S3DeliveryConfiguration](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_S3DeliveryConfiguration.html).

### Example access log sent to Amazon S3
<a name="example-access-logs-s3"></a>

```
#Fields: date time x-edge-location asn timestamp(ms) x-host-header cs(Cookie)
2024-11-14    22:30:25    SOF50-P2    16509    1731623425421    d111111abcdef8.cloudfront.net    examplecookie=value2
```
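Logs delivered to Amazon S3 use the tab-separated layout shown above, where a `#Fields:` directive names the columns. A minimal Python sketch for turning such a file into dictionaries keyed by field name (the parsing approach is illustrative, not an official tool):

```python
# The example above, with explicit tab separators between values.
log_text = (
    "#Fields: date time x-edge-location asn timestamp(ms) x-host-header cs(Cookie)\n"
    "2024-11-14\t22:30:25\tSOF50-P2\t16509\t1731623425421\t"
    "d111111abcdef8.cloudfront.net\texamplecookie=value2\n"
)

fields = []
records = []
for line in log_text.splitlines():
    if line.startswith("#Fields:"):
        # Field names follow the #Fields: directive, separated by spaces.
        fields = line.split(" ")[1:]
    elif line and not line.startswith("#"):
        # Data lines are tab-separated, in the order the directive lists.
        records.append(dict(zip(fields, line.split("\t"))))

print(records[0]["x-edge-location"], records[0]["cs(Cookie)"])
```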

## Disable standard logging
<a name="delete-standard-log-destination"></a>

You can disable standard logging for your distribution if you no longer need it.

**To disable standard logging**

1. Sign in to the CloudFront console.

1. Choose **Distribution** and then choose your distribution ID. 

1. Choose **Logging** and then under **Access log destinations**, select the destination.

1. Choose **Manage** and then choose **Delete**.

1. Repeat the previous step if you have more than one standard logging destination.

**Note**  
When you delete standard logging from the CloudFront console, this action only deletes the delivery and the delivery destination. It doesn't delete the delivery source from your Amazon Web Services account. To delete a delivery source, specify the delivery source name in the `aws logs delete-delivery-source --name DeliverySourceName` command. For more information, see [DeleteDeliverySource](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_DeleteDeliverySource.html) in the *Amazon CloudWatch Logs API Reference*.

## Troubleshoot
<a name="troubleshooting-access-logs-v2"></a>

Use the following information to fix common issues when you work with CloudFront standard logging (v2).

### Delivery source already exists
<a name="access-logging-resource-already-used"></a>

When you enable standard logging for a distribution, you create a delivery source. You then use that delivery source to create deliveries to the destination types that you want: CloudWatch Logs, Firehose, or Amazon S3. Currently, you can have only one delivery source per distribution. If you try to create another delivery source for the same distribution, the following error message appears.

```
This ResourceId has already been used in another Delivery Source in this account
```

To create another delivery source, delete the existing one first. For more information, see [DeleteDeliverySource](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_DeleteDeliverySource.html) in the *Amazon CloudWatch Logs API Reference*.

### I changed the suffix path and the Amazon S3 bucket can't receive my logs
<a name="access-logging-s3-permission"></a>

If you enabled standard logging (v2) and specified a bucket ARN without a prefix, CloudFront appends the following default suffix path: `AWSLogs/{account-id}/CloudFront`. If you use the CloudFront console or the [UpdateDeliveryConfiguration](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_UpdateDeliveryConfiguration.html) API operation to specify a different suffix path, you must update the Amazon S3 bucket policy to use the same path.

**Example: Updating the suffix path**  

1. Your default suffix path is `AWSLogs/{account-id}/CloudFront` and you replace it with `myFolderA`. 

1. Because your new suffix path is different from the path specified in the Amazon S3 bucket policy, your access logs won't be delivered.

1. Do one of the following:
   + Update the Amazon S3 bucket permission from `amzn-s3-demo-bucket/AWSLogs/<your-account-ID>/CloudFront/*` to `amzn-s3-demo-bucket/myFolderA/*`.
   + Update your logging configuration to use the default suffix path again: `AWSLogs/{account-id}/CloudFront`

For more information, see [Permissions](#permissions-standard-logging).

## Delete log files
<a name="standard-logs-v2-delete"></a>

CloudFront doesn't automatically delete log files from your destination. For information about deleting log files, see the following topics:

**Amazon S3**
+ [Deleting objects](https://docs.amazonaws.cn/AmazonS3/latest/userguide/DeletingObjects.html) in the *Amazon Simple Storage Service Console User Guide*

**CloudWatch Logs**
+ [Working with log groups and log streams](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the *Amazon CloudWatch Logs User Guide*
+ [DeleteLogGroup](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_DeleteLogGroup.html) in the *Amazon CloudWatch Logs API Reference*

**Firehose**
+ [DeleteDeliveryStream](https://docs.amazonaws.cn/firehose/latest/APIReference/API_DeleteDeliveryStream.html) in the *Amazon Data Firehose API Reference*

## Pricing
<a name="pricing-standard-logs"></a>

CloudFront doesn’t charge for enabling standard logs. However, you can incur charges for delivery, ingestion, storage, or access, depending on the log delivery destination that you select. For more information, see [Amazon CloudWatch Logs Pricing](https://www.amazonaws.cn/cloudwatch/pricing/). Under **Paid Tier**, choose the **Logs** tab, and then under **Vended Logs**, see the information for each delivery destination.

For more information about pricing for each Amazon Web Services service, see the following topics:
+ [Amazon CloudWatch Logs Pricing](https://www.amazonaws.cn/cloudwatch/pricing/)
+ [Amazon Data Firehose Pricing](https://www.amazonaws.cn/kinesis/data-firehose/pricing/)
+ [Amazon S3 Pricing](https://www.amazonaws.cn/s3/pricing/)

**Note**  
There are no additional charges for log delivery to Amazon S3, though you incur Amazon S3 charges for storing and accessing the log files. If you enable the **Parquet** option to convert your access logs to Apache Parquet, this option incurs CloudWatch charges. For more information, see the [Vended Logs section for CloudWatch pricing](https://www.amazonaws.cn/cloudwatch/pricing/).

# Configure standard logging (legacy)
<a name="standard-logging-legacy-s3"></a>

**Notes**  
This topic is for the previous version of standard logging. For the latest version, see [Configure standard logging (v2)](standard-logging.md).
If you already enabled standard logging (legacy) and you want to enable standard logging (v2) to Amazon S3, we recommend that you specify a *different* Amazon S3 bucket or use a *separate path* in the same bucket (for example, use a log prefix or partitioning). This helps you keep track of which log files are associated with which distribution and prevents log files from overwriting each other.

To get started with standard logging (legacy), complete the following steps:

1. Choose an Amazon S3 bucket that will receive your logs and add the required permissions.

1. Configure standard logging (legacy) from the CloudFront console or the CloudFront API. You can only choose an Amazon S3 bucket to receive your logs.

1. View your access logs.

## Choose an Amazon S3 bucket for standard logs
<a name="access-logs-choosing-s3-bucket"></a>

When you enable logging for a distribution, you specify the Amazon S3 bucket that you want CloudFront to store log files in. If you're using Amazon S3 as your origin, we recommend that you use a *separate* bucket for your log files.

Specify the Amazon S3 bucket that you want CloudFront to store access logs in, for example, `amzn-s3-demo-bucket.s3.amazonaws.com`.

You can store the log files for multiple distributions in the same bucket. When you enable logging, you can specify an optional prefix for the file names, so you can keep track of which log files are associated with which distributions.

**About choosing an S3 bucket**  
Your bucket must have access control lists (ACLs) enabled. If you choose a bucket without ACLs enabled in the CloudFront console, an error message appears. See [Permissions](#AccessLogsBucketAndFileOwnership).
Don't choose an Amazon S3 bucket with [S3 Object Ownership](https://docs.amazonaws.cn/AmazonS3/latest/userguide/about-object-ownership.html) set to **bucket owner enforced**. That setting disables ACLs for the bucket and the objects in it, which prevents CloudFront from delivering log files to the bucket.
Legacy logging doesn't support Amazon S3 buckets in opt-in Regions. Choose a Region that is enabled by default, or use [standard logging (v2)](standard-logging.md), which supports opt-in Regions and additional features. For a list of default and opt-in Regions, see [Amazon Web Services Regions](https://docs.amazonaws.cn/global-infrastructure/latest/regions/aws-regions.html).

## Permissions
<a name="AccessLogsBucketAndFileOwnership"></a>

**Important**  
Starting in April 2023, you must enable S3 ACLs for new S3 buckets used for CloudFront standard logs. You can enable ACLs when you [create a bucket](https://docs.amazonaws.cn/AmazonS3/latest/userguide/object-ownership-new-bucket.html), or enable ACLs for an [existing bucket](https://docs.amazonaws.cn/AmazonS3/latest/userguide/object-ownership-existing-bucket.html).  
For more information about the changes, see [Default settings for new S3 buckets FAQ](https://docs.amazonaws.cn/AmazonS3/latest/userguide/create-bucket-faq.html) in the *Amazon Simple Storage Service User Guide* and [Heads-Up: Amazon S3 Security Changes Are Coming in April of 2023](https://amazonaws-china.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/) in the *Amazon News Blog*.

Your Amazon Web Services account must have the following permissions for the bucket that you specify for log files:
+ The ACL for the bucket must grant you `FULL_CONTROL`. If you're the bucket owner, your account has this permission by default. If you're not, the bucket owner must update the ACL for the bucket.
+ `s3:GetBucketAcl`
+ `s3:PutBucketAcl`

**ACL for the bucket**  
When you create or update a distribution and enable logging, CloudFront uses these permissions to update the ACL for the bucket to give the `awslogsdelivery` account `FULL_CONTROL` permission. The `awslogsdelivery` account writes log files to the bucket. If your account doesn't have the required permissions to update the ACL, creating or updating the distribution will fail.  
In some circumstances, if you programmatically submit a request to create a bucket but a bucket with the specified name already exists, S3 resets permissions on the bucket to the default value. If you configured CloudFront to save access logs in an S3 bucket and you stop getting logs in that bucket, check permissions on the bucket to ensure that CloudFront has the necessary permissions.

**Restoring the ACL for the bucket**  
If you remove permissions for the `awslogsdelivery` account, CloudFront won't be able to save logs to the S3 bucket. To enable CloudFront to start saving logs for your distribution again, restore the ACL permission by doing one of the following:  
+ Disable logging for your distribution in CloudFront, and then enable it again. For more information, see [Standard logging](DownloadDistValuesGeneral.md#DownloadDistValuesLoggingOnOff).
+ Add the ACL permission for `awslogsdelivery` manually by navigating to the S3 bucket in the Amazon S3 console and adding permission. To add the ACL for `awslogsdelivery`, you must provide the canonical ID for the account, which is the following:

  `a52cb28745c0c06e84ec548334e44bfa7fc2a85c54af20cd59e4969344b7af56`

  For more information about adding ACLs to S3 buckets, see [Configuring ACLs](https://docs.amazonaws.cn/AmazonS3/latest/userguide/managing-acls.html) in the *Amazon Simple Storage Service User Guide*.

**ACL for each log file**  
In addition to the ACL on the bucket, there's an ACL on each log file. The bucket owner has `FULL_CONTROL` permission on each log file, the distribution owner (if different from the bucket owner) has no permission, and the `awslogsdelivery` account has read and write permissions. 

**Disabling logging**  
If you disable logging, CloudFront doesn't delete the ACLs for either the bucket or the log files. You can delete the ACLs if needed.

### Required key policy for SSE-KMS buckets
<a name="AccessLogsKMSPermissions"></a>

If the S3 bucket for your standard logs uses server-side encryption with Amazon KMS keys (SSE-KMS) by using a customer managed key, you must add the following statement to the key policy for your customer managed key. This allows CloudFront to write log files to the bucket. You can't use SSE-KMS with the Amazon managed key because CloudFront won't be able to write log files to the bucket.

```
{
    "Sid": "Allow CloudFront to use the key to deliver logs",
    "Effect": "Allow",
    "Principal": {
        "Service": "delivery.logs.amazonaws.com"
    },
    "Action": "kms:GenerateDataKey*",
    "Resource": "*"
}
```

If the S3 bucket for your standard logs uses SSE-KMS with an [S3 Bucket Key](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucket-key.html), you also need to add the `kms:Decrypt` permission to the policy statement. In that case, the full policy statement looks like the following.

```
{
    "Sid": "Allow CloudFront to use the key to deliver logs",
    "Effect": "Allow",
    "Principal": {
        "Service": "delivery.logs.amazonaws.com"
    },
    "Action": [
        "kms:GenerateDataKey*",
        "kms:Decrypt"
    ],
    "Resource": "*"
}
```

**Note**  
When you enable SSE-KMS for your S3 bucket, specify the complete ARN for the customer managed key. For more information, see [Specifying server-side encryption with Amazon KMS keys (SSE-KMS)](https://docs.amazonaws.cn/AmazonS3/latest/userguide/specifying-kms-encryption.html) in the *Amazon Simple Storage Service User Guide*.

## Enable standard logging (legacy)
<a name="standard-logs-legacy-enable"></a>

To enable standard logs, use the CloudFront console or the CloudFront API.

**Contents**
+ [Enable standard logging (legacy) (CloudFront console)](#standard-logs-legacy-enable-console)
+ [Enable standard logging (legacy) (CloudFront API)](#standard-logs-legacy-enable-api)

### Enable standard logging (legacy) (CloudFront console)
<a name="standard-logs-legacy-enable-console"></a>

**To enable standard logs for a CloudFront distribution (console)**

1. Use the CloudFront console to create a [new distribution](distribution-web-creating-console.md) or [update an existing one](HowToUpdateDistribution.md#HowToUpdateDistributionProcedure).

1. For the **Standard logging** section, for **Log delivery**, choose **On**.

1. (Optional) For **Cookie logging**, choose **On** if you want to include cookies in your logs. For more information, see [Cookie logging](DownloadDistValuesGeneral.md#DownloadDistValuesCookieLogging).
**Tip**  
Cookie logging is a global setting that applies to *all* standard logs for your distribution. You can’t override this setting for separate delivery destinations.

1. For the **Deliver to** section, specify **Amazon S3 (Legacy)**.

1. Specify your Amazon S3 bucket. If you don't have one already, you can choose **Create** or see the documentation to [create a bucket](https://docs.amazonaws.cn/AmazonS3/latest/userguide/create-bucket-overview.html).

1. (Optional) For **Log prefix**, specify the string, if any, that you want CloudFront to prefix to the access log file names for this distribution, for example, `exampleprefix/`. The trailing slash ( / ) is optional but recommended to simplify browsing your log files. For more information, see [Log prefix](DownloadDistValuesGeneral.md#DownloadDistValuesLogPrefix).

1. Complete the steps to update or create your distribution.

1. From the **Logs** page, verify that the standard logs status is **Enabled** next to the distribution.

   For more information about the standard logging delivery and log fields, see the [Standard logging reference](standard-logs-reference.md).

### Enable standard logging (legacy) (CloudFront API)
<a name="standard-logs-legacy-enable-api"></a>

You can also use the CloudFront API to enable standard logs for your distributions. 

**To enable standard logs for a distribution (CloudFront API)**
+ Use the [CreateDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_CreateDistribution.html) or [UpdateDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_UpdateDistribution.html) API operation and configure the [LoggingConfig](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_LoggingConfig.html) object.

## Edit standard logging settings
<a name="ChangeSettings"></a>

You can enable or disable logging, change the Amazon S3 bucket where your logs are stored, and change the prefix for log files by using the [CloudFront console](https://console.amazonaws.cn/cloudfront/v4/home) or the CloudFront API. Your changes to logging settings take effect within 12 hours.

For more information, see the following topics:
+ To update a distribution using the CloudFront console, see [Update a distribution](HowToUpdateDistribution.md).
+ To update a distribution using the CloudFront API, see [UpdateDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_UpdateDistribution.html) in the *Amazon CloudFront API Reference*.

## Send logs to Amazon S3
<a name="standard-logs-in-s3"></a>

When you send your logs to Amazon S3, your logs appear in the following format.

### File name format
<a name="AccessLogsFileNaming"></a>

The name of each log file that CloudFront saves in your Amazon S3 bucket uses the following file name format:

`<optional prefix>/<distribution ID>.YYYY-MM-DD-HH.unique-ID.gz`

The date and time are in Coordinated Universal Time (UTC).

For example, if you use `example-prefix` as the prefix, and your distribution ID is `EMLARXS9EXAMPLE`, your file names look similar to this:

`example-prefix/EMLARXS9EXAMPLE.2019-11-14-20.RT4KCN4SGK9.gz`

When you enable logging for a distribution, you can specify an optional prefix for the file names, so you can keep track of which log files are associated with which distributions. If you include a value for the log file prefix and your prefix doesn't end with a forward slash (`/`), CloudFront appends one automatically. If your prefix does end with a forward slash, CloudFront doesn't add another one.

The `.gz` at the end of the file name indicates that CloudFront has compressed the log file using gzip.
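A minimal Python sketch, assuming the file name format above, that extracts the prefix, distribution ID, and delivery hour from a log file name. The regular expression is illustrative, not an official specification of the format.

```python
import re

# Matches: <optional prefix>/<distribution ID>.YYYY-MM-DD-HH.unique-ID.gz
NAME_RE = re.compile(
    r"^(?:(?P<prefix>.+)/)?"                      # optional prefix before the last slash
    r"(?P<distribution_id>[A-Z0-9]+)\."           # distribution ID
    r"(?P<date>\d{4}-\d{2}-\d{2})-(?P<hour>\d{2})\."  # UTC date and hour
    r"(?P<unique_id>[A-Za-z0-9]+)\.gz$"           # unique ID and gzip extension
)

m = NAME_RE.match("example-prefix/EMLARXS9EXAMPLE.2019-11-14-20.RT4KCN4SGK9.gz")
print(m.group("prefix"), m.group("distribution_id"),
      m.group("date"), m.group("hour"))
```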

## Standard log file format
<a name="LogFileFormat"></a>

Each entry in a log file gives details about a single viewer request. The log files have the following characteristics:
+ Use the [W3C extended log file format](https://www.w3.org/TR/WD-logfile.html).
+ Contain tab-separated values.
+ Contain records that are not necessarily in chronological order.
+ Contain two header lines: one with the file format version, and another that lists the W3C fields included in each record.
+ Contain URL-encoded equivalents for spaces and certain other characters in field values.

  URL-encoded equivalents are used for the following characters:
  + ASCII character codes 0 through 32, inclusive
  + ASCII character codes 127 and higher
  + All characters in the following table

  The URL encoding standard is defined in [RFC 1738](https://tools.ietf.org/html/rfc1738.html).


|  URL-Encoded value  |  Character  | 
| --- | --- | 
|  %3C  |  <  | 
|  %3E  |  >  | 
|  %22  |  "  | 
|  %23  |  #  | 
|  %25  |  %  | 
|  %7B  |  {  | 
|  %7D  |  }  | 
|  %7C  |  \|  | 
|  %5C  |  \  | 
|  %5E  |  ^  | 
|  %7E  |  ~  | 
|  %5B  |  [  | 
|  %5D  |  ]  | 
|  %60  |  `  | 
|  %27  |  '  | 
|  %20  |  space  | 
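Because these are standard URL-encoded equivalents, Python's `urllib.parse.unquote` can decode field values read from a log file. The encoded sample string below is hypothetical, built from characters in the table above:

```python
from urllib.parse import unquote

# Hypothetical field value using encodings from the table above:
# %20 = space, %22 = ", %7B = {, %7D = }, %7E = ~
encoded = "a%20%22quoted%22%20value%20%7Bwith%20braces%7D%20%7E%20done"

decoded = unquote(encoded)
print(decoded)
```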

## Delete log files
<a name="DeletingLogFiles"></a>

CloudFront doesn't automatically delete log files from your Amazon S3 bucket. For information about deleting log files from an Amazon S3 bucket, see [Deleting objects](https://docs.amazonaws.cn/AmazonS3/latest/userguide/DeletingObjects.html) in the *Amazon Simple Storage Service Console User Guide*.

## Pricing
<a name="AccessLogsCharges"></a>

Standard logging is an optional feature of CloudFront. CloudFront doesn’t charge for enabling standard logs. However, you accrue the usual Amazon S3 charges for storing and accessing the files on Amazon S3. You can delete them at any time.

For more information about Amazon S3 pricing, see [Amazon S3 Pricing](http://www.amazonaws.cn/s3/pricing/).

For more information about CloudFront pricing, see [CloudFront Pricing](http://www.amazonaws.cn/cloudfront/pricing/).

# Standard logging reference
<a name="standard-logs-reference"></a>

The following sections apply to both standard logging (v2) and standard logging (legacy).

**Topics**
+ [Timing of log file delivery](#access-logs-timing)
+ [How requests are logged when the request URL or headers exceed the maximum size](#access-logs-request-URL-size)
+ [Log file fields](#BasicDistributionFileFormat)
+ [Analyze logs](#access-logs-analyzing)

## Timing of log file delivery
<a name="access-logs-timing"></a>

CloudFront delivers logs for a distribution up to several times an hour. In general, a log file contains information about the requests that CloudFront received during a given time period. CloudFront usually delivers the log file for that time period to your destination within an hour of the events that appear in the log. Note, however, that some or all log file entries for a time period can sometimes be delayed by up to 24 hours. When log entries are delayed, CloudFront saves them in a log file for which the file name includes the date and time of the period in which the requests *occurred*, not the date and time when the file was delivered.

When creating a log file, CloudFront consolidates information for your distribution from all of the edge locations that received requests for your objects during the time period that the log file covers.

CloudFront can save more than one file for a time period depending on how many requests CloudFront receives for the objects associated with a distribution.

CloudFront begins to reliably deliver access logs about four hours after you enable logging. You might get a few access logs before that time.

**Note**  
If no users request your objects during the time period, you don't receive any log files for that period.

## How requests are logged when the request URL or headers exceed the maximum size
<a name="access-logs-request-URL-size"></a>

If the total size of all request headers, including cookies, exceeds 20 KB, or if the URL exceeds 8,192 bytes, CloudFront can't fully parse the request and doesn't log it. Because the request isn't logged, the HTTP error status code that was returned doesn't appear in the log files.

If the request body exceeds the maximum size, the request is logged, including the HTTP error status code.

## Log file fields
<a name="BasicDistributionFileFormat"></a>

The log file for a distribution contains 35 fields. The following list contains each field name, in order, along with a description of the information in that field.

1. **`date`**

   The date on which the event occurred in the format `YYYY-MM-DD`. For example, `2019-06-30`. The date and time are in Coordinated Universal Time (UTC). For WebSocket connections, this is the date when the connection closed.

1. **`time`**

   The time when the CloudFront server finished responding to the request (in UTC), for example, `01:42:39`. For WebSocket connections, this is the time when the connection is closed.

1. **`x-edge-location`**

   The edge location that served the request. Each edge location is identified by a three-letter code and an arbitrarily assigned number (for example, DFW3). The three-letter code typically corresponds with the International Air Transport Association (IATA) airport code for an airport near the edge location's geographic location. (These abbreviations might change in the future.)

1. **`sc-bytes`**

   The total number of bytes that the server sent to the viewer in response to the request, including headers. For WebSocket and gRPC connections, this is the total number of bytes sent from the server to the client through the connection.

1. **`c-ip`**

   The IP address of the viewer that made the request, for example, `192.0.2.183` or `2001:0db8:85a3::8a2e:0370:7334`. If the viewer used an HTTP proxy or a load balancer to send the request, the value of this field is the IP address of the proxy or load balancer. See also the `x-forwarded-for` field.

1. **`cs-method`**

   The HTTP request method received from the viewer.

1. **`cs(Host)`**

   The domain name of the CloudFront distribution (for example, d111111abcdef8.cloudfront.net).

1. **`cs-uri-stem`**

   The portion of the request URL that identifies the path and object (for example, `/images/cat.jpg`). Question marks (?) in URLs and query strings are not included in the log.

1. **`sc-status`**

   Contains one of the following values:
   + The HTTP status code of the server's response (for example, `200`).
   + `000`, which indicates that the viewer closed the connection before the server could respond to the request. If the viewer closes the connection after the server starts to send the response, this field contains the HTTP status code of the response that the server started to send.

1. **`cs(Referer)`**

   The value of the `Referer` header in the request. This is the name of the domain that originated the request. Common referrers include search engines, other websites that link directly to your objects, and your own website.

1. **`cs(User-Agent)`**

   The value of the `User-Agent` header in the request. The `User-Agent` header identifies the source of the request, such as the type of device and browser that submitted the request or, if the request came from a search engine, which search engine.

1. **`cs-uri-query`**

   The query string portion of the request URL, if any.

   When a URL doesn't contain a query string, this field's value is a hyphen (-). For more information, see [Cache content based on query string parameters](QueryStringParameters.md).

1. **`cs(Cookie)`**

   The `Cookie` header in the request, including name-value pairs and the associated attributes.

   If you enable cookie logging, CloudFront logs the cookies in all requests regardless of which cookies you choose to forward to the origin. When a request doesn't include a cookie header, this field's value is a hyphen (-). For more information about cookies, see [Cache content based on cookies](Cookies.md).

1. **`x-edge-result-type`**

   How the server classified the response after the last byte left the server. In some cases, the result type can change between the time that the server is ready to send the response and the time that it finishes sending the response. See also the `x-edge-response-result-type` field.

   For example, in HTTP streaming, suppose the server finds a segment of the stream in the cache. In that scenario, the value of this field would ordinarily be `Hit`. However, if the viewer closes the connection before the server has delivered the entire segment, the final result type (and the value of this field) is `Error`.

   WebSocket and gRPC connections will have a value of `Miss` for this field because the content is not cacheable and is proxied directly to the origin.

   Possible values include:
   + `Hit` – The server served the object to the viewer from the cache.
   + `RefreshHit` – The server found the object in the cache but the object had expired, so the server contacted the origin to verify that the cache had the latest version of the object.
   + `Miss` – The request could not be satisfied by an object in the cache, so the server forwarded the request to the origin and returned the result to the viewer.
   + `LimitExceeded` – The request was denied because a CloudFront quota (formerly referred to as a limit) was exceeded.
   + `CapacityExceeded` – The server returned an HTTP 503 status code because it didn't have enough capacity at the time of the request to serve the object.
   + `Error` – Typically, this means the request resulted in a client error (the value of the `sc-status` field is in the `4xx` range) or a server error (the value of the `sc-status` field is in the `5xx` range). If the value of the `sc-status` field is `200`, or if the value of this field is `Error` and the value of the `x-edge-response-result-type` field is not `Error`, it means the HTTP request was successful but the client disconnected before receiving all of the bytes.
   + `Redirect` – The server redirected the viewer from HTTP to HTTPS according to the distribution settings.
   + `LambdaExecutionError` – The Lambda@Edge function associated with the distribution didn't complete due to a malformed association, a function timeout, an Amazon dependency issue, or another general availability problem.

1. **`x-edge-request-id`**

   An opaque string that uniquely identifies a request. CloudFront also sends this string in the `x-amz-cf-id` response header.

1. **`x-host-header`**

   The value that the viewer included in the `Host` header of the request. If you're using the CloudFront domain name in your object URLs (such as d111111abcdef8.cloudfront.net), this field contains that domain name. If you're using alternate domain names (CNAMEs) in your object URLs (such as www.example.com), this field contains the alternate domain name.

   If you're using alternate domain names, see `cs(Host)` in field 7 for the domain name that is associated with your distribution.

1. **`cs-protocol`**

   The protocol of the viewer request (`http`, `https`, `grpcs`, `ws`, or `wss`).

1. **`cs-bytes`**

   The total number of bytes of data that the viewer included in the request, including headers. For WebSocket and gRPC connections, this is the total number of bytes sent from the client to the server on the connection.

1. **`time-taken`**

   The number of seconds (to the thousandth of a second, for example, 0.082) from when the server receives the viewer's request to when the server writes the last byte of the response to the output queue, as measured on the server. From the perspective of the viewer, the total time to get the full response will be longer than this value because of network latency and TCP buffering.

1. **`x-forwarded-for`**

   If the viewer used an HTTP proxy or a load balancer to send the request, the value of the `c-ip` field is the IP address of the proxy or load balancer. In that case, this field is the IP address of the viewer that originated the request. This field can contain multiple comma-separated IP addresses. Each IP address can be an IPv4 address (for example, `192.0.2.183`) or an IPv6 address (for example, `2001:0db8:85a3::8a2e:0370:7334`).

   If the viewer did not use an HTTP proxy or a load balancer, the value of this field is a hyphen (-).

1. **`ssl-protocol`**

   When the request used HTTPS, this field contains the SSL/TLS protocol that the viewer and server negotiated for transmitting the request and response. For a list of possible values, see the supported SSL/TLS protocols in [Supported protocols and ciphers between viewers and CloudFront](secure-connections-supported-viewer-protocols-ciphers.md).

   When `cs-protocol` in field 17 is `http`, the value for this field is a hyphen (-).

1. **`ssl-cipher`**

   When the request used HTTPS, this field contains the SSL/TLS cipher that the viewer and server negotiated for encrypting the request and response. For a list of possible values, see the supported SSL/TLS ciphers in [Supported protocols and ciphers between viewers and CloudFront](secure-connections-supported-viewer-protocols-ciphers.md).

   When `cs-protocol` in field 17 is `http`, the value for this field is a hyphen (-).

1. **`x-edge-response-result-type`**

   How the server classified the response just before returning the response to the viewer. See also the `x-edge-result-type` field. Possible values include:
   + `Hit` – The server served the object to the viewer from the cache.
   + `RefreshHit` – The server found the object in the cache but the object had expired, so the server contacted the origin to verify that the cache had the latest version of the object.
   + `Miss` – The request could not be satisfied by an object in the cache, so the server forwarded the request to the origin server and returned the result to the viewer.
   + `LimitExceeded` – The request was denied because a CloudFront quota (formerly referred to as a limit) was exceeded.
   + `CapacityExceeded` – The server returned a 503 error because it didn't have enough capacity at the time of the request to serve the object.
   + `Error` – Typically, this means the request resulted in a client error (the value of the `sc-status` field is in the `4xx` range) or a server error (the value of the `sc-status` field is in the `5xx` range).

     If the value of the `x-edge-result-type` field is `Error` and the value of this field is not `Error`, the client disconnected before finishing the download.
   + `Redirect` – The server redirected the viewer from HTTP to HTTPS according to the distribution settings.
   + `LambdaExecutionError` – The Lambda@Edge function associated with the distribution didn't complete due to a malformed association, a function timeout, an Amazon dependency issue, or another general availability problem.

1. **`cs-protocol-version`**

   The HTTP version that the viewer specified in the request. Possible values include `HTTP/0.9`, `HTTP/1.0`, `HTTP/1.1`, `HTTP/2.0`, and `HTTP/3.0`.

1. **`fle-status`**

   When [field-level encryption](https://docs.amazonaws.cn/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html) is configured for a distribution, this field contains a code that indicates whether the request body was successfully processed. When the server successfully processes the request body, encrypts values in the specified fields, and forwards the request to the origin, the value of this field is `Processed`. The value of `x-edge-result-type` can still indicate a client-side or server-side error in this case.

   Possible values for this field include:
   + `ForwardedByContentType` – The server forwarded the request to the origin without parsing or encryption because no content type was configured.
   + `ForwardedByQueryArgs` – The server forwarded the request to the origin without parsing or encryption because the request contains a query argument that wasn't in the configuration for field-level encryption.
   + `ForwardedDueToNoProfile` – The server forwarded the request to the origin without parsing or encryption because no profile was specified in the configuration for field-level encryption.
   + `MalformedContentTypeClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because the value of the `Content-Type` header was in an invalid format.
   + `MalformedInputClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because the request body was in an invalid format.
   + `MalformedQueryArgsClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because a query argument was empty or in an invalid format.
   + `RejectedByContentType` – The server rejected the request and returned an HTTP 400 status code to the viewer because no content type was specified in the configuration for field-level encryption.
   + `RejectedByQueryArgs` – The server rejected the request and returned an HTTP 400 status code to the viewer because no query argument was specified in the configuration for field-level encryption.
   + `ServerError` – The origin server returned an error.

   If the request exceeds a field-level encryption quota (formerly referred to as a limit), this field contains one of the following error codes, and the server returns HTTP status code 400 to the viewer. For a list of the current quotas on field-level encryption, see [Quotas on field-level encryption](cloudfront-limits.md#limits-field-level-encryption).
   + `FieldLengthLimitClientError` – A field that is configured to be encrypted exceeded the maximum length allowed.
   + `FieldNumberLimitClientError` – A request that the distribution is configured to encrypt contains more than the number of fields allowed.
   + `RequestLengthLimitClientError` – The length of the request body exceeded the maximum length allowed when field-level encryption is configured.

   If field-level encryption is not configured for the distribution, the value of this field is a hyphen (-).

1. **`fle-encrypted-fields`**

   The number of [field-level encryption](field-level-encryption.md) fields that the server encrypted and forwarded to the origin. CloudFront servers stream the processed request to the origin as they encrypt data, so this field can have a value even if the value of `fle-status` is an error.

   If field-level encryption is not configured for the distribution, the value of this field is a hyphen (-).

1. **`c-port`**

   The port number of the request from the viewer.

1. **`time-to-first-byte`**

   The number of seconds between receiving the request and writing the first byte of the response, as measured on the server.

1. **`x-edge-detailed-result-type`**

   This field contains the same value as the `x-edge-result-type` field, except in the following cases:
   + When the object was served to the viewer from the [Origin Shield](origin-shield.md) layer, this field contains `OriginShieldHit`.
   + When the object was not in the CloudFront cache and the response was generated by an [origin request Lambda@Edge function](lambda-at-the-edge.md), this field contains `MissGeneratedResponse`.
   + When the value of the `x-edge-result-type` field is `Error`, this field contains one of the following values with more information about the error:
     + `AbortedOrigin` – The server encountered an issue with the origin.
     + `ClientCommError` – The response to the viewer was interrupted due to a communication problem between the server and the viewer.
     + `ClientGeoBlocked` – The distribution is configured to refuse requests from the viewer's geographic location.
     + `ClientHungUpRequest` – The viewer stopped prematurely while sending the request.
     + `Error` – An error occurred for which the error type doesn't fit any of the other categories. This error type can occur when the server serves an error response from the cache.
     + `InvalidRequest` – The server received an invalid request from the viewer.
     + `InvalidRequestBlocked` – Access to the requested resource is blocked.
     + `InvalidRequestCertificate` – The distribution doesn't match the SSL/TLS certificate for which the HTTPS connection was established.
     + `InvalidRequestHeader` – The request contained an invalid header.
     + `InvalidRequestMethod` – The distribution is not configured to handle the HTTP request method that was used. This can happen when the distribution supports only cacheable requests.
     + `OriginCommError` – The request timed out while connecting to the origin, or reading data from the origin.
     + `OriginConnectError` – The server couldn't connect to the origin.
     + `OriginContentRangeLengthError` – The `Content-Length` header in the origin's response doesn't match the length in the `Content-Range` header.
     + `OriginDnsError` – The server couldn't resolve the origin's domain name.
     + `OriginError` – The origin returned an incorrect response.
     + `OriginHeaderTooBigError` – A header returned by the origin is too big for the edge server to process.
     + `OriginInvalidResponseError` – The origin returned an invalid response.
     + `OriginReadError` – The server couldn't read from the origin.
     + `OriginWriteError` – The server couldn't write to the origin.
     + `OriginZeroSizeObjectError` – A zero size object sent from the origin resulted in an error.
     + `SlowReaderOriginError` – The viewer was slow to read the message that caused the origin error.

1. **`sc-content-type`**

   The value of the HTTP `Content-Type` header of the response.

1. **`sc-content-len`**

   The value of the HTTP `Content-Length` header of the response.

1. **`sc-range-start`**

   When the response contains the HTTP `Content-Range` header, this field contains the range start value.

1. **`sc-range-end`**

   When the response contains the HTTP `Content-Range` header, this field contains the range end value.

1. **`distribution-tenant-id`**

   The ID of the distribution tenant.

1. **`connection-id`**

   A unique identifier for the TLS connection. 

   You must enable mTLS for your distributions before you can get information for this field. For more information, see [Mutual TLS authentication with CloudFront](mtls-authentication.md).

The following is an example log file for a distribution.

```
#Version: 1.0
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields c-port time-to-first-byte x-edge-detailed-result-type sc-content-type sc-content-len sc-range-start sc-range-end
2019-12-04	21:02:31	LAX1	392	192.0.2.100	GET	d111111abcdef8.cloudfront.net	/index.html	200	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Hit	SOX4xwn4XV6Q4rgb7XiVGOHms_BGlTAC4KyHmureZmBNrjGdRLiNIQ==	d111111abcdef8.cloudfront.net	https	23	0.001	-	TLSv1.2	ECDHE-RSA-AES128-GCM-SHA256	Hit	HTTP/2.0	-	-	11040	0.001	Hit	text/html	78	-	-
2019-12-04	21:02:31	LAX1	392	192.0.2.100	GET	d111111abcdef8.cloudfront.net	/index.html	200	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Hit	k6WGMNkEzR5BEM_SaF47gjtX9zBDO2m349OY2an0QPEaUum1ZOLrow==	d111111abcdef8.cloudfront.net	https	23	0.000	-	TLSv1.2	ECDHE-RSA-AES128-GCM-SHA256	Hit	HTTP/2.0	-	-	11040	0.000	Hit	text/html	78	-	-
2019-12-04	21:02:31	LAX1	392	192.0.2.100	GET	d111111abcdef8.cloudfront.net	/index.html	200	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Hit	f37nTMVvnKvV2ZSvEsivup_c2kZ7VXzYdjC-GUQZ5qNs-89BlWazbw==	d111111abcdef8.cloudfront.net	https	23	0.001	-	TLSv1.2	ECDHE-RSA-AES128-GCM-SHA256	Hit	HTTP/2.0	-	-	11040	0.001	Hit	text/html	78	-	-	
2019-12-13	22:36:27	SEA19-C1	900	192.0.2.200	GET	d111111abcdef8.cloudfront.net	/favicon.ico	502	http://www.example.com/	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Error	1pkpNfBQ39sYMnjjUQjmH2w1wdJnbHYTbag21o_3OfcQgPzdL2RSSQ==	www.example.com	http	675	0.102	-	-	-	Error	HTTP/1.1	-	-	25260	0.102	OriginDnsError	text/html	507	-	-
2019-12-13	22:36:26	SEA19-C1	900	192.0.2.200	GET	d111111abcdef8.cloudfront.net	/	502	-	Mozilla/5.0%20(Windows%20NT%2010.0;%20Win64;%20x64)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/78.0.3904.108%20Safari/537.36	-	-	Error	3AqrZGCnF_g0-5KOvfA7c9XLcf4YGvMFSeFdIetR1N_2y8jSis8Zxg==	www.example.com	http	735	0.107	-	-	-	Error	HTTP/1.1	-	-	3802	0.107	OriginDnsError	text/html	507	-	-
2019-12-13	22:37:02	SEA19-C2	900	192.0.2.200	GET	d111111abcdef8.cloudfront.net	/	502	-	curl/7.55.1	-	-	Error	kBkDzGnceVtWHqSCqBUqtA_cEs2T3tFUBbnBNkB9El_uVRhHgcZfcw==	www.example.com	http	387	0.103	-	-	-	Error	HTTP/1.1	-	-	12644	0.103	OriginDnsError	text/html	507	-	-
```
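Tab-separated records like the ones above lend themselves to standard text tools. For example, you can tally responses by `sc-status` (the ninth tab-separated field) while skipping the `#` directive lines. The sketch below builds a small sample file in the same layout (fields abbreviated for brevity):

```shell
# A few records in the same tab-separated layout as the example above;
# sc-status is the 9th field.
printf '#Version: 1.0\n' > access.log
printf '2019-12-04\t21:02:31\tLAX1\t392\t192.0.2.100\tGET\td111111abcdef8.cloudfront.net\t/index.html\t200\n' >> access.log
printf '2019-12-13\t22:36:27\tSEA19-C1\t900\t192.0.2.200\tGET\td111111abcdef8.cloudfront.net\t/favicon.ico\t502\n' >> access.log

# Tally responses by HTTP status code, skipping '#' directive lines.
awk -F'\t' '!/^#/ { count[$9]++ } END { for (s in count) print s, count[s] }' access.log
```

Run against a real (decompressed) log file, the same command gives a quick distribution of status codes before you reach for a heavier tool like Athena.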

## Analyze logs
<a name="access-logs-analyzing"></a>

Because you can receive multiple access logs per hour, we recommend that you combine all the log files you receive for a given time period into one file. You can then analyze the data for that period more accurately and completely.

One way to analyze your access logs is to use [Amazon Athena](http://www.amazonaws.cn/athena/). Athena is an interactive query service that can help you analyze data for Amazon services, including CloudFront. To learn more, see [ Querying Amazon CloudFront Logs](https://docs.amazonaws.cn/athena/latest/ug/cloudfront-logs.html) in the *Amazon Athena User Guide*.

In addition, the following Amazon blog posts discuss some ways to analyze access logs.
+ [ Amazon CloudFront Request Logging](http://www.amazonaws.cn/blogs/aws/amazon-cloudfront-request-logging/) (for content delivered via HTTP)
+ [ Enhanced CloudFront Logs, Now With Query Strings](http://www.amazonaws.cn/blogs/aws/enhanced-cloudfront-logs-now-with-query-strings/)

# Use real-time access logs
<a name="real-time-logs"></a>

With CloudFront real-time access logs, you can get information about requests made to a distribution in real time (logs are delivered within seconds of receiving the requests). You can use real-time access logs to monitor, analyze, and take action based on content delivery performance.

CloudFront real-time access logs are configurable. You can choose:
+ The *sampling rate* for your real-time logs—that is, the percentage of requests for which you want to receive real-time access log records.
+ The specific fields that you want to receive in the log records.
+ The specific cache behaviors (path patterns) that you want to receive real-time logs for.

CloudFront real-time access logs are delivered to the data stream of your choice in Amazon Kinesis Data Streams. You can build your own [Kinesis data stream consumer](https://docs.amazonaws.cn/streams/latest/dev/amazon-kinesis-consumers.html), or use Amazon Data Firehose to send the log data to Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service (OpenSearch Service), or a third-party log processing service.
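If you read the stream directly (for example, with `aws kinesis get-records`), each record's `Data` field comes back base64-encoded; decoding it yields one tab-delimited log line in the field order you configured. A sketch of just the decode step, using a stand-in payload rather than actual CloudFront output:

```shell
# Each Kinesis record's Data field is base64-encoded. The payload here
# is a stand-in for illustration, not a real log record.
payload=$(printf '1576280517.000\t192.0.2.100\t0.001' | base64)
printf '%s\n' "$payload" | base64 --decode
```

A real consumer would loop over the records returned by `GetRecords`, decode each `Data` value like this, and then split the line on tabs into the fields you selected in the log configuration.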

CloudFront charges for real-time access logs, in addition to the charges you incur for using Kinesis Data Streams. For more information about pricing, see [Amazon CloudFront Pricing](http://www.amazonaws.cn/cloudfront/pricing/) and [Amazon Kinesis Data Streams pricing](http://www.amazonaws.cn/kinesis/data-streams/pricing/).

**Important**  
We recommend that you use the logs to understand the nature of the requests for your content, not as a complete accounting of all requests. CloudFront delivers real-time access logs on a best-effort basis. The log entry for a particular request might be delivered long after the request was actually processed and, in rare cases, a log entry might not be delivered at all. When a log entry is omitted from real-time access logs, the number of entries in the real-time access logs won't match the usage that appears in the Amazon billing and usage reports.

**Topics**
+ [Create and use real-time access log configurations](#create-real-time-log-config)
+ [Understand real-time access log configurations](#understand-real-time-log-config)
+ [Create a Kinesis Data Streams consumer](#real-time-log-consumer-guidance)
+ [Troubleshoot real-time access logs](#real-time-log-troubleshooting)

## Create and use real-time access log configurations
<a name="create-real-time-log-config"></a>

To get information about requests made to a distribution in real time, you can use a real-time access log configuration. Logs are delivered within seconds of receiving the requests. You can create a real-time access log configuration in the CloudFront console, with the Amazon Command Line Interface (Amazon CLI), or with the CloudFront API.

To use a real-time access log configuration, you attach it to one or more cache behaviors in a CloudFront distribution.

------
#### [ Console ]

**To create a real-time access log configuration**

1. Sign in to the Amazon Web Services Management Console and open the **Logs** page in the CloudFront console at [https://console.amazonaws.cn/cloudfront/v4/home?#/logs](https://console.amazonaws.cn/cloudfront/v4/home?#/logs).

1. Choose the **Real-time configurations** tab.

1. Choose **Create configuration**.

1. For **Name**, enter a name for the configuration.

1. For **Sampling rate**, enter the percentage of requests for which you want to receive log records.

1. For **Fields**, choose the fields to receive in the real-time access logs.
   + To include all [CMCD fields](#CMCD-real-time-logging-fields) for your logs, choose **CMCD all keys**.

1. For **Endpoint**, choose one or more Kinesis data streams to receive real-time access logs.
**Note**  
CloudFront real-time access logs are delivered to the data stream that you specify in Kinesis Data Streams. To read and analyze your real-time access logs, you can build your own Kinesis data stream consumer. You can also use Firehose to send the log data to Amazon S3, Amazon Redshift, Amazon OpenSearch Service, or a third-party log processing service.

1. For **IAM role**, choose **Create new service role** or choose an existing role. You must have permission to create IAM roles.

1. (Optional) For **Distribution**, choose a CloudFront distribution and cache behavior to attach to the real-time access log configuration.

1. Choose **Create configuration**.

If successful, the console shows the details of the real-time access log configuration that you just created.

For more information, see [Understand real-time access log configurations](#understand-real-time-log-config).

------
#### [ Amazon CLI ]

To create a real-time access log configuration with the Amazon CLI, use the **aws cloudfront create-realtime-log-config** command. You can use an input file to provide the command's input parameters, rather than specifying each individual parameter as command line input.

**To create a real-time access log configuration (CLI with input file)**

1. Use the following command to create a file named `rtl-config.yaml` that contains all of the input parameters for the **create-realtime-log-config** command.

   ```
   aws cloudfront create-realtime-log-config --generate-cli-skeleton yaml-input > rtl-config.yaml
   ```

1. Open the file named `rtl-config.yaml` that you just created. Edit the file to specify the real-time access log configuration settings that you want, then save the file. Note the following:
   + For `StreamType`, the only valid value is `Kinesis`.

   For more information about the real-time log configuration settings, see [Understand real-time access log configurations](#understand-real-time-log-config).

1. Use the following command to create the real-time access log configuration using input parameters from the `rtl-config.yaml` file.

   ```
   aws cloudfront create-realtime-log-config --cli-input-yaml file://rtl-config.yaml
   ```

If successful, the command's output shows the details of the real-time access log configuration that you just created.

**To attach a real-time access log configuration to an existing distribution (CLI with input file)**

1. Use the following command to save the distribution configuration for the CloudFront distribution that you want to update. Replace *distribution_ID* with the distribution's ID.

   ```
   aws cloudfront get-distribution-config --id distribution_ID --output yaml > dist-config.yaml
   ```

1. Open the file named `dist-config.yaml` that you just created. Edit the file, making the following changes to each cache behavior that you are updating to use a real-time access log configuration.
   + In the cache behavior, add a field named `RealtimeLogConfigArn`. For the field's value, use the ARN of the real-time access log configuration that you want to attach to this cache behavior.
   + Rename the `ETag` field to `IfMatch`, but don't change the field's value.

   Save the file when finished.

1. Use the following command to update the distribution to use the real-time access log configuration. Replace *distribution_ID* with the distribution's ID.

   ```
   aws cloudfront update-distribution --id distribution_ID --cli-input-yaml file://dist-config.yaml
   ```

If successful, the command's output shows the details of the distribution that you just updated.

------
#### [ API ]

To create a real-time access log configuration with the CloudFront API, use the [CreateRealtimeLogConfig](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_CreateRealtimeLogConfig.html) API operation. For more information about the parameters that you specify in this API call, see [Understand real-time access log configurations](#understand-real-time-log-config) and the API reference documentation for your Amazon SDK or other API client.

After you create a real-time access log configuration, you can attach it to a cache behavior by using one of the following API operations:
+ To attach it to a cache behavior in an existing distribution, use [UpdateDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_UpdateDistribution.html).
+ To attach it to a cache behavior in a new distribution, use [CreateDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_CreateDistribution.html).

For both of these API operations, provide the ARN of the real-time access log configuration in the `RealtimeLogConfigArn` field, inside a cache behavior. For more information about the other fields that you specify in these API calls, see [All distribution settings reference](distribution-web-values-specify.md) and the API reference documentation for your Amazon SDK or other API client.

------

## Understand real-time access log configurations
<a name="understand-real-time-log-config"></a>

To use CloudFront real-time access logs, you start by creating a real-time access log configuration. The real-time access log configuration contains information about which log fields you want to receive, the *sampling rate* for log records, and the Kinesis data stream where you want to deliver the logs.

Specifically, a real-time access log configuration contains the following settings:

**Contents**
+ [Name](#real-time-logs-name)
+ [Sampling rate](#real-time-logs-sampling-rate)
+ [Fields](#real-time-logs-fields)
+ [Endpoint (Kinesis Data Streams)](#real-time-logs-endpoint)
+ [IAM role](#real-time-logs-IAM)

### Name
<a name="real-time-logs-name"></a>

A name to identify the real-time access log configuration.

### Sampling rate
<a name="real-time-logs-sampling-rate"></a>

The sampling rate is a whole number between 1 and 100 (inclusive) that determines the percentage of viewer requests that are sent to Kinesis Data Streams as real-time access log records. To include every viewer request in your real-time access logs, specify 100 for the sampling rate. You might choose a lower sampling rate to reduce costs while still receiving a representative sample of request data in your real-time access logs.
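Because the sampling rate is a straight percentage, the expected log volume scales linearly with it. The following sketch estimates the record rate that reaches Kinesis Data Streams; the request rate is an assumed example figure.

```python
# Estimate real-time log record volume for a given sampling rate.
# The request rate below is an assumed example figure.

requests_per_second = 10_000  # hypothetical viewer request rate
sampling_rate = 5             # whole number between 1 and 100

# CloudFront sends roughly this many log records per second to the stream.
log_records_per_second = requests_per_second * sampling_rate / 100
print(log_records_per_second)  # → 500.0
```

At a 5% sampling rate, 10,000 requests per second produce about 500 log records per second, which also feeds into the shard estimate described under [Endpoint (Kinesis Data Streams)](#real-time-logs-endpoint).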

### Fields
<a name="real-time-logs-fields"></a>

A list of the fields that are included in each real-time access log record. Each log record can contain up to 40 fields, and you can choose to receive all of the available fields, or only the fields that you need for monitoring and analyzing performance.

The following list contains each field name and a description of the information in that field. The fields are listed in the order in which they appear in the log records that are delivered to Kinesis Data Streams.

Fields 46-63 are [common media client data (CMCD)](#CMCD-real-time-logging-fields) fields that media player clients can send to CDNs with each request. You can use this data to understand details about each request, such as the media type (audio or video), playback rate, and streaming length. These fields appear in your real-time access logs only if the client sends them to CloudFront.

1. **`timestamp`**

   The date and time at which the edge server finished responding to the request.

1. **`c-ip`**

   The IP address of the viewer that made the request, for example, `192.0.2.183` or `2001:0db8:85a3::8a2e:0370:7334`. If the viewer used an HTTP proxy or a load balancer to send the request, the value of this field is the IP address of the proxy or load balancer. See also the `x-forwarded-for` field.

1. **`s-ip`**

   The IP address of the CloudFront server that served the request, for example, `192.0.2.183` or `2001:0db8:85a3::8a2e:0370:7334`.

1. **`time-to-first-byte`**

   The number of seconds between receiving the request and writing the first byte of the response, as measured on the server.

1. **`sc-status`**

   The HTTP status code of the server's response (for example, `200`).

1. **`sc-bytes`**

   The total number of bytes that the server sent to the viewer in response to the request, including headers. For WebSocket and gRPC connections, this is the total number of bytes sent from the server to the client through the connection.

1. **`cs-method`**

   The HTTP request method received from the viewer.

1. **`cs-protocol`**

   The protocol of the viewer request (`http`, `https`, `grpcs`, `ws`, or `wss`).

1. **`cs-host`**

   The value that the viewer included in the `Host` header of the request. If you're using the CloudFront domain name in your object URLs (such as d111111abcdef8.cloudfront.net), this field contains that domain name. If you're using alternate domain names (CNAMEs) in your object URLs (such as www.example.com), this field contains the alternate domain name.

1. **`cs-uri-stem`**

   The entire request URL, including the query string (if one exists), but without the domain name. For example, `/images/cat.jpg?mobile=true`.
**Note**  
In [standard logs](AccessLogs.md), the `cs-uri-stem` value doesn't include the query string.

1. **`cs-bytes`**

   The total number of bytes of data that the viewer included in the request, including headers. For WebSocket and gRPC connections, this is the total number of bytes sent from the client to the server on the connection.

1. **`x-edge-location`**

   The edge location that served the request. Each edge location is identified by a three-letter code and an arbitrarily assigned number (for example, DFW3). The three-letter code typically corresponds with the International Air Transport Association (IATA) airport code for an airport near the edge location's geographic location. (These abbreviations might change in the future.)

1. **`x-edge-request-id`**

   An opaque string that uniquely identifies a request. CloudFront also sends this string in the `x-amz-cf-id` response header.

1. **`x-host-header`**

   The domain name of the CloudFront distribution (for example, d111111abcdef8.cloudfront.net).

1. **`time-taken`**

   The number of seconds (to the thousandth of a second, for example, 0.082) from when the server receives the viewer's request to when the server writes the last byte of the response to the output queue, as measured on the server. From the perspective of the viewer, the total time to get the full response will be longer than this value because of network latency and TCP buffering.

1. **`cs-protocol-version`**

   The HTTP version that the viewer specified in the request. Possible values include `HTTP/0.9`, `HTTP/1.0`, `HTTP/1.1`, `HTTP/2.0`, and `HTTP/3.0`.

1. **`c-ip-version`**

   The IP version of the request (IPv4 or IPv6).

1. **`cs-user-agent`**

   The value of the `User-Agent` header in the request. The `User-Agent` header identifies the source of the request, such as the type of device and browser that submitted the request or, if the request came from a search engine, which search engine.

1. **`cs-referer`**

   The value of the `Referer` header in the request. This is the name of the domain that originated the request. Common referrers include search engines, other websites that link directly to your objects, and your own website.

1. **`cs-cookie`**

   The `Cookie` header in the request, including name-value pairs and the associated attributes.
**Note**  
This field is truncated to 800 bytes.

1. **`cs-uri-query`**

   The query string portion of the request URL, if any.

1. **`x-edge-response-result-type`**

   How the server classified the response just before returning the response to the viewer. See also the `x-edge-result-type` field. Possible values include:
   + `Hit` – The server served the object to the viewer from the cache.
   + `RefreshHit` – The server found the object in the cache but the object had expired, so the server contacted the origin to verify that the cache had the latest version of the object.
   + `Miss` – The request could not be satisfied by an object in the cache, so the server forwarded the request to the origin server and returned the result to the viewer.
   + `LimitExceeded` – The request was denied because a CloudFront quota (formerly referred to as a limit) was exceeded.
   + `CapacityExceeded` – The server returned a 503 error because it didn't have enough capacity at the time of the request to serve the object.
   + `Error` – Typically, this means the request resulted in a client error (the value of the `sc-status` field is in the `4xx` range) or a server error (the value of the `sc-status` field is in the `5xx` range).

     If the value of the `x-edge-result-type` field is `Error` and the value of this field is not `Error`, the client disconnected before finishing the download.
   + `Redirect` – The server redirected the viewer from HTTP to HTTPS according to the distribution settings.
   + `LambdaExecutionError` – The Lambda@Edge function associated with the distribution didn't complete due to a malformed association, a function timeout, an Amazon dependency issue, or another general availability problem.

1. **`x-forwarded-for`**

   If the viewer used an HTTP proxy or a load balancer to send the request, the value of the `c-ip` field is the IP address of the proxy or load balancer. In that case, this field is the IP address of the viewer that originated the request. This field can contain multiple comma-separated IP addresses. Each IP address can be an IPv4 address (for example, `192.0.2.183`) or an IPv6 address (for example, `2001:0db8:85a3::8a2e:0370:7334`).

1. **`ssl-protocol`**

   When the request used HTTPS, this field contains the SSL/TLS protocol that the viewer and server negotiated for transmitting the request and response. For a list of possible values, see the supported SSL/TLS protocols in [Supported protocols and ciphers between viewers and CloudFront](secure-connections-supported-viewer-protocols-ciphers.md).

1. **`ssl-cipher`**

   When the request used HTTPS, this field contains the SSL/TLS cipher that the viewer and server negotiated for encrypting the request and response. For a list of possible values, see the supported SSL/TLS ciphers in [Supported protocols and ciphers between viewers and CloudFront](secure-connections-supported-viewer-protocols-ciphers.md).

1. **`x-edge-result-type`**

   How the server classified the response after the last byte left the server. In some cases, the result type can change between the time that the server is ready to send the response and the time that it finishes sending the response. See also the `x-edge-response-result-type` field.

   For example, in HTTP streaming, suppose the server finds a segment of the stream in the cache. In that scenario, the value of this field would ordinarily be `Hit`. However, if the viewer closes the connection before the server has delivered the entire segment, the final result type (and the value of this field) is `Error`.

   WebSocket and gRPC connections will have a value of `Miss` for this field because the content is not cacheable and is proxied directly to the origin.

   Possible values include:
   + `Hit` – The server served the object to the viewer from the cache.
   + `RefreshHit` – The server found the object in the cache but the object had expired, so the server contacted the origin to verify that the cache had the latest version of the object.
   + `Miss` – The request could not be satisfied by an object in the cache, so the server forwarded the request to the origin and returned the result to the viewer.
   + `LimitExceeded` – The request was denied because a CloudFront quota (formerly referred to as a limit) was exceeded.
   + `CapacityExceeded` – The server returned an HTTP 503 status code because it didn't have enough capacity at the time of the request to serve the object.
   + `Error` – Typically, this means the request resulted in a client error (the value of the `sc-status` field is in the `4xx` range) or a server error (the value of the `sc-status` field is in the `5xx` range). If the value of the `sc-status` field is `200`, or if the value of this field is `Error` and the value of the `x-edge-response-result-type` field is not `Error`, it means the HTTP request was successful but the client disconnected before receiving all of the bytes.
   + `Redirect` – The server redirected the viewer from HTTP to HTTPS according to the distribution settings.
   + `LambdaExecutionError` – The Lambda@Edge function associated with the distribution didn't complete due to a malformed association, a function timeout, an Amazon dependency issue, or another general availability problem.

1. **`fle-encrypted-fields`**

   The number of [field-level encryption](field-level-encryption.md) fields that the server encrypted and forwarded to the origin. CloudFront servers stream the processed request to the origin as they encrypt data, so this field can have a value even if the value of `fle-status` is an error.

1. **`fle-status`**

   When [field-level encryption](https://docs.amazonaws.cn/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html) is configured for a distribution, this field contains a code that indicates whether the request body was successfully processed. When the server successfully processes the request body, encrypts values in the specified fields, and forwards the request to the origin, the value of this field is `Processed`. The value of `x-edge-result-type` can still indicate a client-side or server-side error in this case.

   Possible values for this field include:
   + `ForwardedByContentType` – The server forwarded the request to the origin without parsing or encryption because no content type was configured.
   + `ForwardedByQueryArgs` – The server forwarded the request to the origin without parsing or encryption because the request contains a query argument that wasn't in the configuration for field-level encryption.
   + `ForwardedDueToNoProfile` – The server forwarded the request to the origin without parsing or encryption because no profile was specified in the configuration for field-level encryption.
   + `MalformedContentTypeClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because the value of the `Content-Type` header was in an invalid format.
   + `MalformedInputClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because the request body was in an invalid format.
   + `MalformedQueryArgsClientError` – The server rejected the request and returned an HTTP 400 status code to the viewer because a query argument was empty or in an invalid format.
   + `RejectedByContentType` – The server rejected the request and returned an HTTP 400 status code to the viewer because no content type was specified in the configuration for field-level encryption.
   + `RejectedByQueryArgs` – The server rejected the request and returned an HTTP 400 status code to the viewer because no query argument was specified in the configuration for field-level encryption.
   + `ServerError` – The origin server returned an error.

   If the request exceeds a field-level encryption quota (formerly referred to as a limit), this field contains one of the following error codes, and the server returns HTTP status code 400 to the viewer. For a list of the current quotas on field-level encryption, see [Quotas on field-level encryption](cloudfront-limits.md#limits-field-level-encryption).
   + `FieldLengthLimitClientError` – A field that is configured to be encrypted exceeded the maximum length allowed.
   + `FieldNumberLimitClientError` – A request that the distribution is configured to encrypt contains more than the number of fields allowed.
   + `RequestLengthLimitClientError` – The length of the request body exceeded the maximum length allowed when field-level encryption is configured.

1. **`sc-content-type`**

   The value of the HTTP `Content-Type` header of the response.

1. **`sc-content-len`**

   The value of the HTTP `Content-Length` header of the response.

1. **`sc-range-start`**

   When the response contains the HTTP `Content-Range` header, this field contains the range start value.

1. **`sc-range-end`**

   When the response contains the HTTP `Content-Range` header, this field contains the range end value.

1. **`c-port`**

   The port number of the request from the viewer.

1. **`x-edge-detailed-result-type`**

   This field contains the same value as the `x-edge-result-type` field, except in the following cases:
   + When the object was served to the viewer from the [Origin Shield](origin-shield.md) layer, this field contains `OriginShieldHit`.
   + When the object was not in the CloudFront cache and the response was generated by an [origin request Lambda@Edge function](lambda-at-the-edge.md), this field contains `MissGeneratedResponse`.
   + When the value of the `x-edge-result-type` field is `Error`, this field contains one of the following values with more information about the error:
     + `AbortedOrigin` – The server encountered an issue with the origin.
     + `ClientCommError` – The response to the viewer was interrupted due to a communication problem between the server and the viewer.
     + `ClientGeoBlocked` – The distribution is configured to refuse requests from the viewer's geographic location.
     + `ClientHungUpRequest` – The viewer stopped prematurely while sending the request.
     + `Error` – An error occurred for which the error type doesn't fit any of the other categories. This error type can occur when the server serves an error response from the cache.
     + `InvalidRequest` – The server received an invalid request from the viewer.
     + `InvalidRequestBlocked` – Access to the requested resource is blocked.
     + `InvalidRequestCertificate` – The distribution doesn't match the SSL/TLS certificate for which the HTTPS connection was established.
     + `InvalidRequestHeader` – The request contained an invalid header.
     + `InvalidRequestMethod` – The distribution is not configured to handle the HTTP request method that was used. This can happen when the distribution supports only cacheable requests.
     + `OriginCommError` – The request timed out while connecting to the origin, or reading data from the origin.
     + `OriginConnectError` – The server couldn't connect to the origin.
     + `OriginContentRangeLengthError` – The `Content-Length` header in the origin's response doesn't match the length in the `Content-Range` header.
     + `OriginDnsError` – The server couldn't resolve the origin's domain name.
     + `OriginError` – The origin returned an incorrect response.
     + `OriginHeaderTooBigError` – A header returned by the origin is too big for the edge server to process.
     + `OriginInvalidResponseError` – The origin returned an invalid response.
     + `OriginReadError` – The server couldn't read from the origin.
     + `OriginWriteError` – The server couldn't write to the origin.
     + `OriginZeroSizeObjectError` – A zero size object sent from the origin resulted in an error.
     + `SlowReaderOriginError` – The viewer was slow to read the message that caused the origin error.

1. **`c-country`**

   A country code that represents the viewer's geographic location, as determined by the viewer's IP address. For a list of country codes, see [ISO 3166-1 alpha-2](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2).

1. **`cs-accept-encoding`**

    The value of the `Accept-Encoding` header in the viewer request.

1. **`cs-accept`**

   The value of the `Accept` header in the viewer request.

1. **`cache-behavior-path-pattern`**

   The path pattern that identifies the cache behavior that matched the viewer request.

1. **`cs-headers`**

   The HTTP headers (names and values) in the viewer request.
**Note**  
This field is truncated to 800 bytes.

1. **`cs-header-names`**

   The names of the HTTP headers (not values) in the viewer request.
**Note**  
This field is truncated to 800 bytes.

1. **`cs-headers-count`**

    The number of HTTP headers in the viewer request.

1. **`primary-distribution-id`**

   When continuous deployment is enabled, this is the ID of the primary distribution that's associated with the current distribution.

1. **`primary-distribution-dns-name`**

   When continuous deployment is enabled, this value shows the primary domain name that is related to the current CloudFront distribution (for example, d111111abcdef8.cloudfront.net).

1. **`origin-fbl`**

   The number of seconds of first-byte latency between CloudFront and your origin.

1. **`origin-lbl`**

   The number of seconds of last-byte latency between CloudFront and your origin.

1. **`asn`**

   The autonomous system number (ASN) of the viewer.

1. <a name="CMCD-real-time-logging-fields"></a>
**CMCD fields in real-time access logs**  
For more information about these fields, see the [CTA Specification Web Application Video Ecosystem - Common Media Client Data CTA-5004](https://cdn.cta.tech/cta/media/media/resources/standards/pdfs/cta-5004-final.pdf) document.

1. **`cmcd-encoded-bitrate`**

   The encoded bitrate of the requested audio or video object. 

1. **`cmcd-buffer-length`**

   The buffer length of the requested media object.

1. **`cmcd-buffer-starvation`**

   Whether the buffer was starved at some point between the prior request and the object request. This can cause the player to be in a rebuffering state, which can stall video or audio playback.

1. **`cmcd-content-id`**

   A unique string that identifies the current content.

1. **`cmcd-object-duration`**

   The playback duration of the requested object (in milliseconds). 

1. **`cmcd-deadline`**

   The deadline from the request time that the first sample of this object must be available, so that a buffer underrun state or other playback problems are avoided. 

1. **`cmcd-measured-throughput`**

   The throughput between the client and server, as measured by the client.

1. **`cmcd-next-object-request`**

   The relative path of the next requested object.

1. **`cmcd-next-range-request`**

   If the next request is a partial object request, this string denotes the byte range to be requested.

1. **`cmcd-object-type`**

   The media type of the current object being requested.

1. **`cmcd-playback-rate`**

   1 if real-time, 2 if double-speed, 0 if not playing. 

1. **`cmcd-requested-maximum-throughput`**

   The requested maximum throughput that the client considers sufficient for asset delivery.

1. **`cmcd-streaming-format`**

   The streaming format that defines the current request.

1. **`cmcd-session-id`**

   A GUID identifying the current playback session.

1. **`cmcd-stream-type`**

   Token identifying segment availability. `v` = all segments are available. `l` = segments become available over time.

1. **`cmcd-startup`**

   Key is included without a value if the object is needed urgently during startup, seeking, or recovery after a buffer-empty event.

1. **`cmcd-top-bitrate`**

   The highest bitrate rendition that the client can play.

1. **`cmcd-version`**

   The version of this specification used for interpreting the defined key names and values. If this key is omitted, the client and server *must* interpret the values as being defined by version 1.

1. **`r-host`**

   This field is sent for origin requests and indicates the domain of the origin server that was used to serve the object. In the case of errors, you can use this field to find the last origin attempted, for example: `cd8jhdejh6a.mediapackagev2.us-east-1.amazonaws.com`.

1. **`sr-reason`**

   This field provides a reason why the origin was selected. It's empty when a request to the primary origin succeeds.

   If origin failover occurs, this field contains the HTTP error code that led to the failover, such as `Failover:403` or `Failover:502`. If the retried request also fails and you haven't configured custom error pages, `r-status` indicates the response of the second origin. If you have configured custom error pages along with origin failover, this field contains the response of the second origin when the request failed and a custom error page was returned instead.

   If no origin failover occurs but media quality-aware resilience (MQAR) origin selection occurs, then this will be logged as `MediaQuality`. For more information, see [Media quality-aware resiliency](media-quality-score.md).

1. **`x-edge-mqcs`**

   This field indicates the Media Quality Confidence Score (MQCS) (range: 0 – 100) for media segments that CloudFront retrieved in the CMSD response headers from MediaPackage v2. This field is available for requests matching a cache behavior that has an MQAR-enabled origin group. CloudFront logs this field for media segments that are also served from its cache in addition to origin requests. For more information, see [Media quality-aware resiliency](media-quality-score.md).

1. **`distribution-tenant-id`**

   The ID of the distribution tenant.

1. **`connection-id`**

   A unique identifier for the TLS connection. 

   You must enable mTLS for your distributions before you can get information for this field. For more information, see [Mutual TLS authentication with CloudFront](mtls-authentication.md).

### Endpoint (Kinesis Data Streams)
<a name="real-time-logs-endpoint"></a>

The endpoint contains information about the Kinesis data stream where you want to send real-time logs. You provide the Amazon Resource Name (ARN) of the data stream.

For more information about creating a Kinesis data stream, see the following topics in the *Amazon Kinesis Data Streams Developer Guide*.
+ [Creating and managing streams](https://docs.amazonaws.cn/streams/latest/dev/working-with-streams.html)
+ [Perform basic Kinesis Data Streams operations using the Amazon CLI](https://docs.amazonaws.cn/streams/latest/dev/fundamental-stream.html)
+ [Creating a stream](https://docs.amazonaws.cn/streams/latest/dev/kinesis-using-sdk-java-create-stream.html) (uses the Amazon SDK for Java)

When you create a data stream, you need to specify the number of shards. Use the following information to help you estimate the number of shards you need.

**To estimate the number of shards for your Kinesis data stream**

1. Calculate (or estimate) the number of requests per second that your CloudFront distribution receives.

   You can use the [CloudFront usage reports](https://console.amazonaws.cn/cloudfront/v4/home#/usage) (in the CloudFront console) and the [CloudFront metrics](viewing-cloudfront-metrics.md#monitoring-console.distributions) (in the CloudFront and Amazon CloudWatch consoles) to help you calculate your requests per second.

1. Determine the typical size of a single real-time access log record.

   In general, a single log record is around 500 bytes. A large record that includes all available fields is generally around 1 KB.

   If you're not sure what your log record size is, you can enable real-time logs with a low sampling rate (for example, 1%), and then calculate the average record size using monitoring data in Kinesis Data Streams (total incoming bytes divided by total number of records).

1. On the [Amazon Kinesis Data Streams pricing page](https://www.amazonaws.cn/kinesis/data-streams/pricing/), under Amazon Pricing Calculator, choose **Create your custom estimate now**.
   + In the calculator, enter the number of requests (records) per second.
   + Enter the average record size of a single log record.
   + Choose **Show calculations**.

   The pricing calculator shows you the number of shards you need and the estimated cost.
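The same shard estimate can also be done by hand. This sketch applies the per-shard Kinesis Data Streams write limits (1,000 records per second and 1 MB per second per shard) to assumed example traffic figures; a shard is needed for whichever limit is hit first.

```python
import math

# Estimate the number of Kinesis shards needed for real-time logs,
# using assumed example figures for traffic and record size.

records_per_second = 4_500  # hypothetical log records per second
avg_record_bytes = 1_000    # ~1 KB per record when all fields are enabled

# Per-shard write limits for Kinesis Data Streams.
MAX_RECORDS_PER_SHARD = 1_000    # records per second per shard
MAX_BYTES_PER_SHARD = 1_000_000  # 1 MB per second per shard

shards_by_records = math.ceil(records_per_second / MAX_RECORDS_PER_SHARD)
shards_by_bytes = math.ceil(records_per_second * avg_record_bytes / MAX_BYTES_PER_SHARD)

# The stream needs enough shards to satisfy both limits.
shards_needed = max(shards_by_records, shards_by_bytes)
print(shards_needed)  # → 5
```

Here both limits land on 5 shards; with smaller records, the record-count limit would dominate instead.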

### IAM role
<a name="real-time-logs-IAM"></a>

The Amazon Identity and Access Management (IAM) role that gives CloudFront permission to deliver real-time access logs to your Kinesis data stream.

When you create a real-time access log configuration with the CloudFront console, you can choose **Create new service role** to let the console create the IAM role for you.

When you create a real-time access log configuration with Amazon CloudFormation or the CloudFront API (Amazon CLI or SDK), you must create the IAM role yourself and provide the role ARN. To create the IAM role yourself, use the following policies.

**IAM role trust policy**

To use the following IAM role trust policy, replace *111122223333* with your Amazon Web Services account number. The `Condition` element in this policy helps to prevent the [confused deputy problem](https://docs.amazonaws.cn/IAM/latest/UserGuide/confused-deputy.html) because CloudFront can only assume this role on behalf of a distribution in your Amazon Web Services account.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "111122223333"
                }
            }
        }
    ]
}
```

------

**IAM role permissions policy for an unencrypted data stream**

To use the following policy, replace *arn:aws-cn:kinesis:us-west-2:123456789012:stream/StreamName* with the ARN of your Kinesis data stream.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:DescribeStreamSummary",
                "kinesis:DescribeStream",
                "kinesis:PutRecord",
                "kinesis:PutRecords"
            ],
            "Resource": [
                "arn:aws-cn:kinesis:us-west-2:123456789012:stream/StreamName"
            ]
        }
    ]
}
```

------

**IAM role permissions policy for an encrypted data stream**

To use the following policy, replace *arn:aws-cn:kinesis:us-west-2:123456789012:stream/StreamName* with the ARN of your Kinesis data stream and *arn:aws-cn:kms:us-west-2:123456789012:key/e58a3d0b-fe4f-4047-a495-ae03cc73d486* with the ARN of your Amazon KMS key.

------
#### [ JSON ]

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kinesis:DescribeStreamSummary",
                "kinesis:DescribeStream",
                "kinesis:PutRecord",
                "kinesis:PutRecords"
            ],
            "Resource": [
                "arn:aws-cn:kinesis:us-west-2:123456789012:stream/StreamName"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey"
            ],
            "Resource": [
                "arn:aws-cn:kms:us-west-2:123456789012:key/e58a3d0b-fe4f-4047-a495-ae03cc73d486"
            ]
        }
    ]
}
```

------

## Create a Kinesis Data Streams consumer
<a name="real-time-log-consumer-guidance"></a>

To read and analyze your real-time access logs, you build or use a Kinesis Data Streams *consumer*. When you build a consumer for CloudFront real-time logs, it's important to know that the fields in every real-time access log record are always delivered in the same order, as listed in the [Fields](#real-time-logs-fields) section. Make sure that you build your consumer to accommodate this fixed order.

For example, consider a real-time access log configuration that includes only these three fields: `time-to-first-byte`, `sc-status`, and `c-country`. In this scenario, the last field, `c-country`, is always field number 3 in every log record. However, if you later add fields to the real-time access log configuration, the placement of each field in a record can change.

For example, if you add the fields `sc-bytes` and `time-taken` to the real-time access log configuration, these fields are inserted into each log record according to the order shown in the [Fields](#real-time-logs-fields) section. The resulting order of all five fields is `time-to-first-byte`, `sc-status`, `sc-bytes`, `time-taken`, and `c-country`. The `c-country` field was originally field number 3, but is now field number 5. Make sure that your consumer application can handle fields that change position in a log record, in case you add fields to your real-time access log configuration.
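One way to make a consumer resilient to this is to map each record's values to field names by position, from a single list that mirrors the log configuration. The following sketch assumes a tab-delimited record layout and uses the hypothetical three-field configuration described above; both are illustrative assumptions.

```python
# Map a real-time log record to field names by position.
# Assumes values arrive in the configured field order; the tab
# delimiter and the sample record are illustrative assumptions.

FIELDS = ["time-to-first-byte", "sc-status", "c-country"]  # configured fields, in order

def parse_record(raw: str, fields: list[str]) -> dict[str, str]:
    """Split a raw record and pair each value with its field name."""
    values = raw.rstrip("\n").split("\t")
    if len(values) != len(fields):
        # A mismatch usually means the log configuration changed;
        # update the field list to match the new configuration.
        raise ValueError(f"expected {len(fields)} fields, got {len(values)}")
    return dict(zip(fields, values))

record = parse_record("0.002\t200\tUS\n", FIELDS)
print(record["sc-status"])  # → 200
```

If you add fields to the configuration, updating `FIELDS` to match the order in the [Fields](#real-time-logs-fields) section is the only change the consumer needs; the length check turns a silent misparse into an explicit error.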

## Troubleshoot real-time access logs
<a name="real-time-log-troubleshooting"></a>

After you create a real-time access log configuration, you might find that no records (or not all records) are delivered to Kinesis Data Streams. In this case, first verify that your CloudFront distribution is receiving viewer requests. If it is, check the following settings to continue troubleshooting.

**IAM role permissions**  
To deliver real-time access log records to your Kinesis data stream, CloudFront uses the IAM role in the real-time access log configuration. Make sure that the role trust policy and the role permissions policy match the policies shown in [IAM role](#real-time-logs-IAM).

**Kinesis Data Streams throttling**  
If CloudFront writes real-time access log records to your Kinesis data stream faster than the stream can handle, Kinesis Data Streams might throttle the requests from CloudFront. In this case, you can increase the number of shards in your Kinesis data stream. Each shard can support writes up to 1,000 records per second, up to a maximum data write of 1 MB per second.
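To estimate how many shards you need, divide your expected write rate by the per-shard limits stated above and take the larger result. A minimal sketch of that arithmetic:

```python
import math

# Rough shard sizing for a Kinesis data stream that receives real-time logs.
# Per-shard write limits: 1,000 records per second and 1 MB per second.
RECORDS_PER_SHARD = 1_000
BYTES_PER_SHARD = 1_000_000

def shards_needed(records_per_sec: float, bytes_per_sec: float) -> int:
    """Return the minimum shard count that satisfies both write limits."""
    by_records = math.ceil(records_per_sec / RECORDS_PER_SHARD)
    by_bytes = math.ceil(bytes_per_sec / BYTES_PER_SHARD)
    return max(by_records, by_bytes, 1)

shards_needed(25_000, 8_000_000)  # → 25 (record rate is the binding limit)
```

This is an estimate only; leave headroom for traffic spikes, and remember that your sampling rate directly scales the record rate that reaches the stream.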

# Edge function logs
<a name="edge-functions-logs"></a>

You can use Amazon CloudWatch Logs to get logs for your edge functions, both [Lambda@Edge](lambda-at-the-edge.md) and [CloudFront Functions](cloudfront-functions.md). You can access the logs by using the CloudWatch console or the CloudWatch Logs API.

**Important**  
We recommend that you use the logs to understand the nature of the requests for your content, not as a complete accounting of all requests. CloudFront delivers edge function logs on a best-effort basis. The log entry for a particular request might be delivered long after the request was actually processed and, in rare cases, a log entry might not be delivered at all. When a log entry is omitted from edge function logs, the number of entries in the edge function logs won't match the usage that appears in the Amazon billing and usage reports.

**Topics**
+ [Lambda@Edge logs](#lambda-at-edge-logs)
+ [CloudFront Functions logs](#cloudfront-function-logs)

## Lambda@Edge logs
<a name="lambda-at-edge-logs"></a>

Lambda@Edge automatically sends function logs to CloudWatch Logs, creating log streams in the Amazon Web Services Regions where the functions are invoked. When you create or modify a function in Amazon Lambda, you can either use the default CloudWatch log group name or customize it.
+ The default log group name is `/aws/lambda/<FunctionName>`, where `<FunctionName>` is the name that you specified when you created the function. When sending logs to CloudWatch, Lambda@Edge automatically adds the `us-east-1` prefix to the function name, so that the log group name is `/aws/lambda/us-east-1.<FunctionName>`. This prefix corresponds to the Amazon Web Services Region where the function was created, and it remains part of the log group name even in other Regions where the function is invoked.
+ If you specify a custom log group name, such as `/MyLogGroup`, Lambda@Edge won't add the Region prefix. The log group name remains the same across all other Regions where the function is invoked.

**Note**  
If you create a custom log group and specify the same name as the default `/aws/lambda/<FunctionName>`, Lambda@Edge adds the `us-east-1` prefix to the function name.
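The naming rules above can be summarized in a small helper. This is a hypothetical illustration of the documented behavior, not an AWS API:

```python
# Hypothetical helper that mirrors the Lambda@Edge log group naming rules
# described above (illustration only).
def lambda_edge_log_group(function_name, custom_name=None):
    """Return the CloudWatch Logs log group name that Lambda@Edge uses."""
    default = f"/aws/lambda/{function_name}"
    if custom_name is None or custom_name == default:
        # The default name (or a custom name equal to it) gets the
        # us-east-1 prefix, which persists across all Regions.
        return f"/aws/lambda/us-east-1.{function_name}"
    # Custom names are used verbatim in every Region.
    return custom_name

lambda_edge_log_group("my-function")                 # → "/aws/lambda/us-east-1.my-function"
lambda_edge_log_group("my-function", "/MyLogGroup")  # → "/MyLogGroup"
```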

In addition to customizing the log group name, Lambda@Edge functions support JSON and plain text log formats, and log-level filtering. For more information, see [Configuring advanced logging controls for Lambda functions](https://docs.amazonaws.cn/lambda/latest/dg/monitoring-cloudwatchlogs-advanced.html) in the *Amazon Lambda Developer Guide*.

**Note**  
Lambda@Edge throttles logs based on the request volume and the size of logs.

You must review CloudWatch log files in the correct Region to see your Lambda@Edge function log files. To see the Regions where your Lambda@Edge function is running, view graphs of metrics for the function in the CloudFront console. Metrics are displayed for each Region. On the same page, you can choose a Region and then view log files for that Region to investigate issues.

To learn more about how to use CloudWatch Logs with Lambda@Edge functions, see the following topics:
+ For more information about viewing graphs in the **Monitoring** section of the CloudFront console, see [Monitor CloudFront metrics with Amazon CloudWatch](monitoring-using-cloudwatch.md).
+ For information about the permissions required to send data to CloudWatch Logs, see [Set up IAM permissions and roles for Lambda@Edge](lambda-edge-permissions.md).
+ For information about adding logging to a Lambda@Edge function, see [Amazon Lambda function logging in Node.js](https://docs.amazonaws.cn/lambda/latest/dg/nodejs-logging.html) or [Amazon Lambda function logging in Python](https://docs.amazonaws.cn/lambda/latest/dg/python-logging.html) in the *Amazon Lambda Developer Guide*.
+ For information about CloudWatch Logs quotas (formerly known as limits), see [CloudWatch Logs quotas](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html) in the *Amazon CloudWatch Logs User Guide*.

## CloudFront Functions logs
<a name="cloudfront-function-logs"></a>

If a CloudFront function's code contains `console.log()` statements, CloudFront Functions automatically sends these log lines to CloudWatch Logs. If there are no `console.log()` statements, nothing is sent to CloudWatch Logs.

CloudFront Functions always creates log streams in the US East (N. Virginia) Region (`us-east-1`), no matter which edge location ran the function. The log stream name is in the format `YYYY/M/D/UUID`.

The log group name uses the following format:
+ For CloudFront Functions at the cache behavior level, the format is `/aws/cloudfront/function/<FunctionName>`
+ For CloudFront Functions at the distribution level (Connection Functions), the format is `/aws/cloudfront/connection-function/<FunctionName>`

The `<FunctionName>` is the name that you gave to the function when you created it.

**Example Viewer requests**  
The following shows an example log message sent to CloudWatch Logs. Each line begins with an ID that uniquely identifies a CloudFront request. The message begins with a `START` line that includes the CloudFront distribution ID, and ends with an `END` line. Between the `START` and `END` lines are the log lines generated by `console.log()` statements in the function.  

```
U7b4hR_RaxMADupvKAvr8_m9gsGXvioUggLV5Oyq-vmAtH8HADpjhw== START DistributionID: E3E5D42GADAXZZ
U7b4hR_RaxMADupvKAvr8_m9gsGXvioUggLV5Oyq-vmAtH8HADpjhw== Example function log output
U7b4hR_RaxMADupvKAvr8_m9gsGXvioUggLV5Oyq-vmAtH8HADpjhw== END
```
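A message in this format can be split back into its parts with a short sketch like the following (a hypothetical parser, assuming each line starts with the request ID followed by a single space):

```python
# Sketch: extract the request ID, distribution ID, and console.log() output
# from a CloudFront Functions log message in the START/.../END format above.
def parse_function_log(lines):
    request_id, dist_id, output = None, None, []
    for line in lines:
        rid, _, rest = line.partition(" ")  # request ID precedes first space
        request_id = rid
        if rest.startswith("START"):
            dist_id = rest.split("DistributionID: ")[1]
        elif rest != "END":
            output.append(rest)  # a console.log() line from the function
    return request_id, dist_id, output
```

For the example message above, this returns the request ID, `E3E5D42GADAXZZ`, and the single log line `Example function log output`.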

**Example Connection requests**  
The following shows an example log message sent to CloudWatch Logs. Each line begins with an ID that uniquely identifies a CloudFront request. The message begins with a `START` line that includes the CloudFront distribution ID, and ends with an `END` line. Between the `START` and `END` lines are the log lines generated by `console.log()` statements in the function.  

```
U7b4hR_RaxMADupvKAvr8_m9gsGXvioUggLV5Oyq-vmAtH8HADpjhw== START DistributionID: E3E5D42GADA123
U7b4hR_RaxMADupvKAvr8_m9gsGXvioUggLV5Oyq-vmAtH8HADpjhw== 1.2.3.4
U7b4hR_RaxMADupvKAvr8_m9gsGXvioUggLV5Oyq-vmAtH8HADpjhw== END
```

**Note**  
CloudFront Functions sends logs to CloudWatch only for functions in the `LIVE` stage that run in response to production requests and responses. When you [test a function](test-function.md), CloudFront doesn't send any logs to CloudWatch. The test output contains information about errors, compute utilization, and function logs (`console.log()` statements), but this information isn't sent to CloudWatch.

CloudFront Functions uses an Amazon Identity and Access Management (IAM) [service-linked role](https://docs.amazonaws.cn/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role) to send logs to CloudWatch Logs in your account. A service-linked role is an IAM role that is linked directly to an Amazon Web Services service. Service-linked roles are predefined by the service and include all of the permissions that the service requires to call other Amazon Web Services services for you. CloudFront Functions uses the **AWSServiceRoleForCloudFrontLogger** service-linked role. For more information about this role, see [Service-linked roles for Lambda@Edge](lambda-edge-permissions.md#using-service-linked-roles-lambda-edge) (Lambda@Edge uses the same service-linked role).

When a function fails with a validation error or an execution error, the information is logged in [standard logs](AccessLogs.md) and [real-time access logs](real-time-logs.md). For specific information about the error, see the `x-edge-result-type`, `x-edge-response-result-type`, and `x-edge-detailed-result-type` fields.

# Logging Amazon CloudFront API calls using Amazon CloudTrail
<a name="logging_using_cloudtrail"></a>

CloudFront is integrated with [Amazon CloudTrail](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-user-guide.html), a service that provides a record of actions taken by a user, role, or an Amazon Web Services service. CloudTrail captures all API calls for CloudFront as events. The calls captured include calls from the CloudFront console and code calls to the CloudFront API operations. Using the information collected by CloudTrail, you can determine the request that was made to CloudFront, the IP address from which the request was made, when it was made, and additional details.

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:
+ Whether the request was made with root user or user credentials.
+ Whether the request was made on behalf of an IAM Identity Center user.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another Amazon Web Services service.

CloudTrail is active in your Amazon Web Services account when you create the account and you automatically have access to the CloudTrail **Event history**. The CloudTrail **Event history** provides a viewable, searchable, downloadable, and immutable record of the past 90 days of recorded management events in an Amazon Web Services Region. For more information, see [Working with CloudTrail Event history](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/view-cloudtrail-events.html) in the *Amazon CloudTrail User Guide*. There are no CloudTrail charges for viewing the **Event history**.

For an ongoing record of events in your Amazon Web Services account past 90 days, create a trail or a [CloudTrail Lake](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake.html) event data store.

**CloudTrail trails**  
A *trail* enables CloudTrail to deliver log files to an Amazon S3 bucket. All trails created using the Amazon Web Services Management Console are multi-Region. You can create a single-Region or a multi-Region trail by using the Amazon CLI. Creating a multi-Region trail is recommended because you capture activity in all Amazon Web Services Regions in your account. If you create a single-Region trail, you can view only the events logged in the trail's Amazon Web Services Region. For more information about trails, see [Creating a trail for your Amazon Web Services account](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html) and [Creating a trail for an organization](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/creating-trail-organization.html) in the *Amazon CloudTrail User Guide*.  
You can deliver one copy of your ongoing management events to your Amazon S3 bucket at no charge from CloudTrail by creating a trail, however, there are Amazon S3 storage charges. For more information about CloudTrail pricing, see [Amazon CloudTrail Pricing](https://www.amazonaws.cn/cloudtrail/pricing/). For information about Amazon S3 pricing, see [Amazon S3 Pricing](https://www.amazonaws.cn/s3/pricing/).

**CloudTrail Lake event data stores**  
*CloudTrail Lake* lets you run SQL-based queries on your events. CloudTrail Lake converts existing events in row-based JSON format to [Apache ORC](https://orc.apache.org/) format. ORC is a columnar storage format that is optimized for fast retrieval of data. Events are aggregated into *event data stores*, which are immutable collections of events based on criteria that you select by applying [advanced event selectors](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake-concepts.html#adv-event-selectors). The selectors that you apply to an event data store control which events persist and are available for you to query. For more information about CloudTrail Lake, see [Working with Amazon CloudTrail Lake](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake.html) in the *Amazon CloudTrail User Guide*.  
CloudTrail Lake event data stores and queries incur costs. When you create an event data store, you choose the [pricing option](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake-manage-costs.html#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For more information about CloudTrail pricing, see [Amazon CloudTrail Pricing](https://www.amazonaws.cn/cloudtrail/pricing/).

**Note**  
CloudFront is a global service. CloudTrail records events for CloudFront in the US East (N. Virginia) Region. For more information, see [Global service events](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-global-service-events) in the *Amazon CloudTrail User Guide*.  
If you use temporary security credentials from Amazon Security Token Service, calls to Regional endpoints, such as `us-west-2`, are logged in CloudTrail to their appropriate Region.  
For more information about CloudFront endpoints, see [CloudFront endpoints and quotas](https://docs.amazonaws.cn/general/latest/gr/cf_region.html) in the *Amazon Web Services General Reference*.

## CloudFront data events in CloudTrail
<a name="cloudtrail-data-events"></a>

[Data events](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) provide information about the resource operations performed on or in a resource (for example, reading or writing to a CloudFront distribution). These are also known as data plane operations. Data events are often high-volume activities. By default, CloudTrail doesn’t log data events. The CloudTrail **Event history** doesn't record data events.

Additional charges apply for data events. For more information about CloudTrail pricing, see [Amazon CloudTrail Pricing](https://www.amazonaws.cn/cloudtrail/pricing/).

You can log data events for the CloudFront resource types by using the CloudTrail console, Amazon CLI, or CloudTrail API operations. For more information about how to log data events, see [Logging data events with the Amazon Web Services Management Console](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events-console) and [Logging data events with the Amazon Command Line Interface](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#creating-data-event-selectors-with-the-AWS-CLI) in the *Amazon CloudTrail User Guide*.

The following table lists the CloudFront resource types for which you can log data events. The **Data event type (console)** column shows the value to choose from the **Data event type** list on the CloudTrail console. The **resources.type value** column shows the `resources.type` value, which you would specify when configuring advanced event selectors using the Amazon CLI or CloudTrail APIs. The **Data APIs logged to CloudTrail** column shows the API calls logged to CloudTrail for the resource type. 


| Data event type (console) | resources.type value | Data APIs logged to CloudTrail | 
| --- | --- | --- | 
| CloudFront KeyValueStore |  AWS::CloudFront::KeyValueStore  |  [\[See the AWS documentation website for more details\]](http://docs.amazonaws.cn/en_us/AmazonCloudFront/latest/DeveloperGuide/logging_using_cloudtrail.html)  | 

You can configure advanced event selectors to filter on the `eventName`, `readOnly`, and `resources.ARN` fields to log only those events that are important to you. For more information about these fields, see [https://docs.amazonaws.cn/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html](https://docs.amazonaws.cn/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html) in the *Amazon CloudTrail API Reference*.

## CloudFront management events in CloudTrail
<a name="cloudtrail-management-events"></a>

[Management events](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html#logging-management-events) provide information about management operations that are performed on resources in your Amazon Web Services account. These are also known as control plane operations. By default, CloudTrail logs management events.

Amazon CloudFront logs all CloudFront control plane operations as management events. For a list of the Amazon CloudFront control plane operations that CloudFront logs to CloudTrail, see the [Amazon CloudFront API Reference](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_Operations_Amazon_CloudFront.html).

## CloudFront event examples
<a name="cloudtrail-event-examples"></a>

An event represents a single request from any source and includes information about the requested API operation, the date and time of the operation, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so events don't appear in any specific order.

**Contents**
+ [Example: UpdateDistribution](#example-cloudfront-service-cloudtrail-log)
+ [Example: UpdateKeys](#example-cloudfront-kvs-cloudtrail-log)

### Example: UpdateDistribution
<a name="example-cloudfront-service-cloudtrail-log"></a>

The following example shows a CloudTrail event that demonstrates the [UpdateDistribution](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_UpdateDistribution.html) operation.

For calls to the CloudFront API, the `eventSource` is `cloudfront.amazonaws.com`.

```
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AIDACKCEVSQ6C2EXAMPLE:role-session-name",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/role-session-name",
        "accountId": "111122223333",
        "accessKeyId": "ASIAIOSFODNN7EXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AIDACKCEVSQ6C2EXAMPLE",
                "arn": "arn:aws:iam::111122223333:role/Admin",
                "accountId": "111122223333",
                "userName": "Admin"
            },
            "webIdFederationData": {},
            "attributes": {
                "creationDate": "2024-02-02T19:23:50Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2024-02-02T19:26:01Z",
    "eventSource": "cloudfront.amazonaws.com",
    "eventName": "UpdateDistribution",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "52.94.133.137",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36",
    "requestParameters": {
        "distributionConfig": {
            "defaultRootObject": "",
            "aliases": {
                "quantity": 3,
                "items": [
                    "alejandro_rosalez.awsps.myinstance.com",
                    "cross-testing.alejandro_rosalez.awsps.myinstance.com",
                    "*.alejandro_rosalez.awsps.myinstance.com"
                ]
            },
            "cacheBehaviors": {
                "quantity": 0,
                "items": []
            },
            "httpVersion": "http2and3",
            "originGroups": {
                "quantity": 0,
                "items": []
            },
            "viewerCertificate": {
                "minimumProtocolVersion": "TLSv1.2_2021",
                "cloudFrontDefaultCertificate": false,
                "aCMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
                "sSLSupportMethod": "sni-only"
            },
            "webACLId": "arn:aws:wafv2:us-east-1:111122223333:global/webacl/testing-acl/a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
            "customErrorResponses": {
                "quantity": 0,
                "items": []
            },
            "logging": {
                "includeCookies": false,
                "prefix": "",
                "enabled": false,
                "bucket": ""
            },
            "priceClass": "PriceClass_All",
            "restrictions": {
                "geoRestriction": {
                    "restrictionType": "none",
                    "quantity": 0,
                    "items": []
                }
            },
            "isIPV6Enabled": true,
            "callerReference": "1578329170895",
            "continuousDeploymentPolicyId": "",
            "enabled": true,
            "defaultCacheBehavior": {
                "targetOriginId": "d111111abcdef8",
                "minTTL": 0,
                "compress": false,
                "maxTTL": 31536000,
                "functionAssociations": {
                    "quantity": 0,
                    "items": []
                },
                "trustedKeyGroups": {
                    "quantity": 0,
                    "items": [],
                    "enabled": false
                },
                "smoothStreaming": false,
                "fieldLevelEncryptionId": "",
                "defaultTTL": 86400,
                "lambdaFunctionAssociations": {
                    "quantity": 0,
                    "items": []
                },
                "viewerProtocolPolicy": "redirect-to-https",
                "forwardedValues": {
                    "cookies": {"forward": "none"},
                    "queryStringCacheKeys": {
                        "quantity": 0,
                        "items": []
                    },
                    "queryString": false,
                    "headers": {
                        "quantity": 1,
                        "items": ["*"]
                    }
                },
                "trustedSigners": {
                    "items": [],
                    "enabled": false,
                    "quantity": 0
                },
                "allowedMethods": {
                    "quantity": 2,
                    "items": [
                        "HEAD",
                        "GET"
                    ],
                    "cachedMethods": {
                        "quantity": 2,
                        "items": [
                            "HEAD",
                            "GET"
                        ]
                    }
                }
            },
            "staging": false,
            "origins": {
                "quantity": 1,
                "items": [
                    {
                        "originPath": "",
                        "connectionTimeout": 10,
                        "customOriginConfig": {
                            "originReadTimeout": 30,
                            "hTTPSPort": 443,
                            "originProtocolPolicy": "https-only",
                            "originKeepaliveTimeout": 5,
                            "hTTPPort": 80,
                            "originSslProtocols": {
                                "quantity": 3,
                                "items": [
                                    "TLSv1",
                                    "TLSv1.1",
                                    "TLSv1.2"
                                ]
                            }
                        },
                        "id": "d111111abcdef8",
                        "domainName": "d111111abcdef8.cloudfront.net",
                        "connectionAttempts": 3,
                        "customHeaders": {
                            "quantity": 0,
                            "items": []
                        },
                        "originShield": {"enabled": false},
                        "originAccessControlId": ""
                    }
                ]
            },
            "comment": "HIDDEN_DUE_TO_SECURITY_REASONS"
        },
        "id": "EDFDVBD6EXAMPLE",
        "ifMatch": "E1RTLUR9YES76O"
    },
    "responseElements": {
        "distribution": {
            "activeTrustedSigners": {
                "quantity": 0,
                "enabled": false
            },
            "id": "EDFDVBD6EXAMPLE",
            "domainName": "d111111abcdef8.cloudfront.net",
            "distributionConfig": {
                "defaultRootObject": "",
                "aliases": {
                    "quantity": 3,
                    "items": [
                        "alejandro_rosalez.awsps.myinstance.com",
                        "cross-testing.alejandro_rosalez.awsps.myinstance.com",
                        "*.alejandro_rosalez.awsps.myinstance.com"
                    ]
                },
                "cacheBehaviors": {"quantity": 0},
                "httpVersion": "http2and3",
                "originGroups": {"quantity": 0},
                "viewerCertificate": {
                    "minimumProtocolVersion": "TLSv1.2_2021",
                    "cloudFrontDefaultCertificate": false,
                    "aCMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
                    "sSLSupportMethod": "sni-only",
                    "certificateSource": "acm",
                    "certificate": "arn:aws:acm:us-east-1:111122223333:certificate/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
                },
                "webACLId": "arn:aws:wafv2:us-east-1:111122223333:global/webacl/testing-acl/a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
                "customErrorResponses": {"quantity": 0},
                "logging": {
                    "includeCookies": false,
                    "prefix": "",
                    "enabled": false,
                    "bucket": ""
                },
                "priceClass": "PriceClass_All",
                "restrictions": {
                    "geoRestriction": {
                        "restrictionType": "none",
                        "quantity": 0
                    }
                },
                "isIPV6Enabled": true,
                "callerReference": "1578329170895",
                "continuousDeploymentPolicyId": "",
                "enabled": true,
                "defaultCacheBehavior": {
                    "targetOriginId": "d111111abcdef8",
                    "minTTL": 0,
                    "compress": false,
                    "maxTTL": 31536000,
                    "functionAssociations": {"quantity": 0},
                    "trustedKeyGroups": {
                        "quantity": 0,
                        "enabled": false
                    },
                    "smoothStreaming": false,
                    "fieldLevelEncryptionId": "",
                    "defaultTTL": 86400,
                    "lambdaFunctionAssociations": {"quantity": 0},
                    "viewerProtocolPolicy": "redirect-to-https",
                    "forwardedValues": {
                        "cookies": {"forward": "none"},
                        "queryStringCacheKeys": {"quantity": 0},
                        "queryString": false,
                        "headers": {
                            "quantity": 1,
                            "items": ["*"]
                        }
                    },
                    "trustedSigners": {
                        "enabled": false,
                        "quantity": 0
                    },
                    "allowedMethods": {
                        "quantity": 2,
                        "items": [
                            "HEAD",
                            "GET"
                        ],
                        "cachedMethods": {
                            "quantity": 2,
                            "items": [
                                "HEAD",
                                "GET"
                            ]
                        }
                    }
                },
                "staging": false,
                "origins": {
                    "quantity": 1,
                    "items": [
                        {
                            "originPath": "",
                            "connectionTimeout": 10,
                            "customOriginConfig": {
                                "originReadTimeout": 30,
                                "hTTPSPort": 443,
                                "originProtocolPolicy": "https-only",
                                "originKeepaliveTimeout": 5,
                                "hTTPPort": 80,
                                "originSslProtocols": {
                                    "quantity": 3,
                                    "items": [
                                        "TLSv1",
                                        "TLSv1.1",
                                        "TLSv1.2"
                                    ]
                                }
                            },
                            "id": "d111111abcdef8",
                            "domainName": "d111111abcdef8.cloudfront.net",
                            "connectionAttempts": 3,
                            "customHeaders": {"quantity": 0},
                            "originShield": {"enabled": false},
                            "originAccessControlId": ""
                        }
                    ]
                },
                "comment": "HIDDEN_DUE_TO_SECURITY_REASONS"
            },
            "aliasICPRecordals": [
                {
                    "cNAME": "alejandro_rosalez.awsps.myinstance.com",
                    "iCPRecordalStatus": "APPROVED"
                },
                {
                    "cNAME": "cross-testing.alejandro_rosalez.awsps.myinstance.com",
                    "iCPRecordalStatus": "APPROVED"
                },
                {
                    "cNAME": "*.alejandro_rosalez.awsps.myinstance.com",
                    "iCPRecordalStatus": "APPROVED"
                }
            ],
            "aRN": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE",
            "status": "InProgress",
            "lastModifiedTime": "Feb 2, 2024 7:26:01 PM",
            "activeTrustedKeyGroups": {
                "enabled": false,
                "quantity": 0
            },
            "inProgressInvalidationBatches": 0
        },
        "eTag": "E1YHBLAB2BJY1G"
    },
    "requestID": "4e6b66f9-d548-11e3-a8a9-73e33example",
    "eventID": "5ab02562-0fc5-43d0-b7b6-90293example",
    "readOnly": false,
    "eventType": "AwsApiCall",
    "apiVersion": "2020_05_31",
    "managementEvent": true,
    "recipientAccountId": "111122223333",
    "eventCategory": "Management",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "cloudfront.amazonaws.com"
    },
    "sessionCredentialFromConsole": "true"
}
```

### Example: UpdateKeys
<a name="example-cloudfront-kvs-cloudtrail-log"></a>

The following example shows a CloudTrail event that demonstrates the [UpdateKeys](https://docs.amazonaws.cn/cloudfront/latest/APIReference/API_kvs_UpdateKeys.html) operation.

For calls to the CloudFront KeyValueStore API, the `eventSource` is `edgekeyvaluestore.amazonaws.com` instead of `cloudfront.amazonaws.com`.

```
{
    "eventVersion": "1.09",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AIDACKCEVSQ6C2EXAMPLE:role-session-name",
        "arn": "arn:aws:sts::111122223333:assumed-role/Admin/role-session-name",
        "accountId": "111122223333",
        "accessKeyId": "ASIAIOSFODNN7EXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AIDACKCEVSQ6C2EXAMPLE",
                "arn": "arn:aws:iam::111122223333:role/Admin",
                "accountId": "111122223333",
                "userName": "Admin"
            },
            "attributes": {
                "creationDate": "2023-11-01T23:41:14Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2023-11-01T23:41:28Z",
    "eventSource": "edgekeyvaluestore.amazonaws.com",
    "eventName": "UpdateKeys",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "3.235.183.252",
    "userAgent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36,
    "requestParameters": {
        "kvsARN": "arn:aws:cloudfront::111122223333:key-value-store/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
        "ifMatch": "KV3O6B1CX531EBP",
        "deletes": [
            {"key": "key1"}
        ]
    },
    "responseElements": {
        "itemCount": 0,
        "totalSizeInBytes": 0,
        "eTag": "KVDC9VEVZ71ZGO"
    },
    "requestID": "5ccf104c-acce-4ea1-b7fc-73e33example",
    "eventID": "a0b1b5c7-906c-439d-9925-90293example",
    "readOnly": false,
    "resources": [
        {
            "accountId": "111122223333",
            "type": "AWS::CloudFront::KeyValueStore",
            "ARN": "arn:aws:cloudfront::111122223333:key-value-store/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": false,
    "recipientAccountId": "111122223333",
    "eventCategory": "Data",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "111122223333.cloudfront-kvs.global.api.aws"
    }
}
```

For information about CloudTrail record contents, see [CloudTrail record contents](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html) in the *Amazon CloudTrail User Guide*.
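When you process CloudTrail log files delivered to Amazon S3, you can separate CloudFront API events from KeyValueStore data events by their `eventSource` values, as shown in the two examples above. A minimal sketch, assuming the standard CloudTrail log file layout with a top-level `Records` array:

```python
import json

# Sketch: split the records in one CloudTrail log file into CloudFront API
# events and KeyValueStore data events, keyed on eventSource.
def split_cloudfront_events(log_file_json: str):
    records = json.loads(log_file_json).get("Records", [])
    api_events = [r for r in records
                  if r.get("eventSource") == "cloudfront.amazonaws.com"]
    kvs_events = [r for r in records
                  if r.get("eventSource") == "edgekeyvaluestore.amazonaws.com"]
    return api_events, kvs_events
```

From there, you can filter further on fields such as `eventName` or `readOnly` to focus on the operations you care about.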