

# Real-time processing of log data with subscriptions
<a name="Subscriptions"></a>

You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Data Firehose stream, or Amazon Lambda for custom processing, analysis, or loading to other systems. When log events are sent to the receiving service, they are base64 encoded and compressed with the gzip format.
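When the log events arrive at the receiving service, a consumer must reverse that encoding before it can read them. The following is an illustrative Python sketch (not Amazon SDK code; `decode_subscription_payload` is a hypothetical helper) of how a consumer might decode such a payload:

```python
import base64
import gzip
import json

def decode_subscription_payload(data_b64):
    """Decode a base64-encoded, gzip-compressed CloudWatch Logs payload."""
    return json.loads(gzip.decompress(base64.b64decode(data_b64)))

# Simulate the encoding that CloudWatch Logs applies before delivery.
sample = {"messageType": "DATA_MESSAGE", "logEvents": [{"message": "hello"}]}
encoded = base64.b64encode(gzip.compress(json.dumps(sample).encode("utf-8"))).decode("ascii")

print(decode_subscription_payload(encoded)["messageType"])  # DATA_MESSAGE
```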

You can also use CloudWatch Logs centralization to replicate log data from multiple accounts and Regions into a central location. For more information, see [Cross-account cross-Region log centralization](CloudWatchLogs_Centralization.md).

To begin subscribing to log events, create the receiving resource, such as an Amazon Kinesis Data Streams stream, where the events will be delivered. A subscription filter defines the filter pattern used to determine which log events get delivered to your Amazon resource, as well as information about where to send the matching log events. Log events are sent to the receiving resource soon after they are ingested, usually in less than three minutes.

**Note**  
If a log group with a subscription uses log transformation, the filter pattern is compared to the transformed versions of the log events. For more information, see [Transform logs during ingestion](CloudWatch-Logs-Transformation.md).

You can create subscriptions at the account level and at the log group level. Each account can have one account-level subscription filter per Region. Each log group can have up to two subscription filters associated with it. 

**Note**  
If the destination service returns a retryable error such as a throttling exception or a retryable service exception (HTTP 5xx for example), CloudWatch Logs continues to retry delivery for up to 24 hours. CloudWatch Logs doesn't try to re-deliver if the error is a non-retryable error, such as AccessDeniedException or ResourceNotFoundException. In these cases the subscription filter is disabled for up to 10 minutes, and then CloudWatch Logs retries sending logs to the destination. During this disabled period, logs are skipped.

CloudWatch Logs also produces CloudWatch metrics about the forwarding of log events to subscriptions. For more information, see [Monitoring with CloudWatch metrics](CloudWatch-Logs-Monitoring-CloudWatch-Metrics.md).

 You can also use a CloudWatch Logs subscription to stream log data in near real time to an Amazon OpenSearch Service cluster. For more information, see [Streaming CloudWatch Logs data to Amazon OpenSearch Service](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/CWL_OpenSearch_Stream.html). 

Subscriptions are supported only for log groups in the Standard log class. For more information about log classes, see [Log classes](CloudWatch_Logs_Log_Classes.md).

**Note**  
Subscription filters might batch log events to optimize transmission and reduce the amount of calls made to the destination. Batching is not guaranteed but is used when possible.

For batch processing and analysis of log data on a schedule, consider using [Automating log analysis with scheduled queries](ScheduledQueries.md). Scheduled queries run CloudWatch Logs Insights queries automatically and deliver results to destinations such as Amazon S3 buckets or Amazon EventBridge event buses.

**Note**  
Subscription filters provide at-least-once delivery of events, so duplicate events might occasionally occur.
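Because delivery is at-least-once, a consumer can deduplicate on the unique `id` carried by each delivered log event. The following is an illustrative Python sketch; `dedupe_events` and its in-memory `set` state are hypothetical, not part of any Amazon library:

```python
def dedupe_events(events, seen_ids):
    """Drop log events whose unique "id" was already processed.
    seen_ids is caller-managed state (a set); delivery is at-least-once,
    so the same event can arrive more than once."""
    fresh = [e for e in events if e["id"] not in seen_ids]
    seen_ids.update(e["id"] for e in fresh)
    return fresh

seen = set()
batch1 = [{"id": "1", "message": "a"}, {"id": "2", "message": "b"}]
batch2 = [{"id": "2", "message": "b"}, {"id": "3", "message": "c"}]  # "2" is a duplicate
print(len(dedupe_events(batch1, seen)))  # 2
print(len(dedupe_events(batch2, seen)))  # 1
```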

**Topics**
+ [Concepts](subscription-concepts.md)
+ [Log group-level subscription filters](SubscriptionFilters.md)
+ [Account-level subscription filters](SubscriptionFilters-AccountLevel.md)
+ [Cross-account cross-Region subscriptions](CrossAccountSubscriptions.md)
+ [Confused deputy prevention](Subscriptions-confused-deputy.md)
+ [Log recursion prevention](Subscriptions-recursion-prevention.md)

# Concepts
<a name="subscription-concepts"></a>

Each subscription filter is made up of the following key elements:

**filter pattern**  
A symbolic description of how CloudWatch Logs should interpret the data in each log event, along with filtering expressions that restrict what gets delivered to the destination Amazon resource. For more information about the filter pattern syntax, see [Filter pattern syntax for metric filters, subscription filters, filter log events, and Live Tail](FilterAndPatternSyntax.md).
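For example, the JSON filter pattern `{$.userIdentity.type = Root}` used later in this section delivers only log events whose message is a JSON document with that field value. The following Python sketch roughly illustrates that selection logic; `matches_root_filter` is a hypothetical helper, not the actual CloudWatch Logs pattern engine:

```python
import json

def matches_root_filter(log_event_message):
    """Rough stand-in for the JSON filter pattern
    {$.userIdentity.type = Root} -- not the real pattern engine."""
    try:
        event = json.loads(log_event_message)
    except json.JSONDecodeError:
        return False
    return event.get("userIdentity", {}).get("type") == "Root"

print(matches_root_filter('{"userIdentity": {"type": "Root"}}'))     # True
print(matches_root_filter('{"userIdentity": {"type": "IAMUser"}}'))  # False
```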

**destination arn**  
The Amazon Resource Name (ARN) of the Amazon Kinesis Data Streams stream, Firehose stream, or Lambda function you want to use as the destination of the subscription feed.

**role arn**  
An IAM role that grants CloudWatch Logs the necessary permissions to put data into the chosen destination. This role is not needed for Lambda destinations because CloudWatch Logs can get the necessary permissions from access control settings on the Lambda function itself.

**distribution**  
The method used to distribute log data to the destination, when the destination is a stream in Amazon Kinesis Data Streams. By default, log data is grouped by log stream. For a more even distribution, you can group log data randomly.

For log group-level subscriptions, the following key element is also included:

**log group name**  
The log group to associate the subscription filter with. All log events uploaded to this log group are subject to the subscription filter, and those that match the filter are delivered to the destination service.

For account-level subscriptions, the following key element is also included:

**selection criteria**  
The criteria used for selecting which log groups have the account-level subscription filter applied. If you don't specify this, the account-level subscription filter is applied to all log groups in the account. This field is used to prevent infinite log loops. For more information about the infinite log loop issue, see [Log recursion prevention](Subscriptions-recursion-prevention.md).  
Selection criteria has a size limit of 25 KB.

For centralized log groups, the following key elements are also included. These elements can be used as field selection criteria to help in identifying the source of log data, allowing for more granular filtering and analysis of metrics derived from centralized logs.

**@aws.account**  
This field identifies the Amazon account ID from which the log event originated. 

**@aws.region**  
This field identifies the Amazon Region where the log event was generated. 

# Log group-level subscription filters
<a name="SubscriptionFilters"></a>

You can use a subscription filter with Amazon Kinesis Data Streams, Amazon Lambda, Amazon Data Firehose, or Amazon OpenSearch Service. Logs sent to a service through a subscription filter are base64 encoded and compressed with the gzip format. If you are using centralized logs with Amazon Organizations, you can choose to emit the `@aws.account` and `@aws.region` system fields to identify which data comes from which accounts and Regions in your organization. This section provides examples you can follow to create a CloudWatch Logs subscription filter that sends log data to Firehose, Lambda, Amazon Kinesis Data Streams, and OpenSearch Service.

**Note**  
 If you want to search your log data, see [Filter and pattern syntax](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html). 

**Topics**
+ [Example 1: Subscription filters with Amazon Kinesis Data Streams](#DestinationKinesisExample)
+ [Example 2: Subscription filters with Amazon Lambda](#LambdaFunctionExample)
+ [Example 3: Subscription filters with Amazon Data Firehose](#FirehoseExample)
+ [Example 4: Subscription filters with Amazon OpenSearch Service](#OpenSearchExample)

## Example 1: Subscription filters with Amazon Kinesis Data Streams
<a name="DestinationKinesisExample"></a>

The following example associates a subscription filter with a log group containing Amazon CloudTrail events. The subscription filter delivers every logged activity made by "Root" Amazon credentials to a stream in Amazon Kinesis Data Streams called "RootAccess." For more information about how to send Amazon CloudTrail events to CloudWatch Logs, see [Sending CloudTrail Events to CloudWatch Logs](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cw_send_ct_events.html) in the *Amazon CloudTrail User Guide*.

**Note**  
Before you create the stream, calculate the volume of log data that will be generated. Be sure to create a stream with enough shards to handle this volume. If the stream does not have enough shards, the log stream will be throttled. For more information about stream volume limits, see [Quotas and Limits](https://docs.amazonaws.cn/streams/latest/dev/service-sizes-and-limits.html).   
Throttled deliveries are retried for up to 24 hours. After 24 hours, the failed deliveries are dropped.  
To mitigate the risk of throttling, you can take the following steps:  
+ Specify `random` for `distribution` when you create the subscription filter with [PutSubscriptionFilter](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutSubscriptionFilter.html#CWL-PutSubscriptionFilter-request-distribution) or [put-subscription-filter](https://awscli.amazonaws.com/v2/documentation/api/2.4.18/reference/logs/put-subscription-filter.html). By default, the filter distribution is by log stream, and this can cause throttling.
+ Monitor your stream using CloudWatch metrics. This helps you identify throttling and adjust your configuration accordingly. For example, you can use the `DeliveryThrottling` metric to track the number of log events for which CloudWatch Logs was throttled when forwarding data to the subscription destination. For more information about monitoring, see [Monitoring with CloudWatch metrics](CloudWatch-Logs-Monitoring-CloudWatch-Metrics.md).
+ Use the on-demand capacity mode for your stream in Amazon Kinesis Data Streams. On-demand mode instantly accommodates your workloads as they ramp up or down. For more information about on-demand capacity mode, see [On-demand mode](https://docs.amazonaws.cn/streams/latest/dev/how-do-i-size-a-stream.html#ondemandmode).
+ Restrict your CloudWatch subscription filter pattern to match the capacity of your stream in Amazon Kinesis Data Streams. If you are sending too much data to the stream, you might need to reduce the filter size or adjust the filter criteria.

**To create a subscription filter for Amazon Kinesis Data Streams**

1. Create a destination stream using the following command: 

   ```
   aws kinesis create-stream --stream-name "RootAccess" --shard-count 1
   ```

1. Wait until the stream becomes Active (this might take a minute or two). You can use the following Amazon Kinesis Data Streams [describe-stream](https://docs.amazonaws.cn/cli/latest/reference/kinesis/describe-stream.html) command to check the **StreamDescription.StreamStatus** property. In addition, note the **StreamDescription.StreamARN** value, as you will need it in a later step:

   ```
   aws kinesis describe-stream --stream-name "RootAccess"
   ```

   The following is example output:

   ```
   {
       "StreamDescription": {
           "StreamStatus": "ACTIVE",
           "StreamName": "RootAccess",
           "StreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/RootAccess",
           "Shards": [
               {
                   "ShardId": "shardId-000000000000",
                   "HashKeyRange": {
                       "EndingHashKey": "340282366920938463463374607431768211455",
                       "StartingHashKey": "0"
                   },
                   "SequenceNumberRange": {
                       "StartingSequenceNumber":
                       "49551135218688818456679503831981458784591352702181572610"
                   }
               }
           ]
       }
   }
   ```

1. Create the IAM role that will grant CloudWatch Logs permission to put data into your stream. First, you'll need to create a trust policy in a file (for example, `~/TrustPolicyForCWL-Kinesis.json`). Use a text editor to create this policy. Do not use the IAM console to create it.

   This policy includes an `aws:SourceArn` global condition context key to help prevent the confused deputy security problem. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole",
       "Condition": { 
           "StringLike": { "aws:SourceArn": "arn:aws:logs:region:123456789012:*" } 
        }
      }
   }
   ```

1. Use the **create-role** command to create the IAM role, specifying the trust policy file. Note the returned **Role.Arn** value, as you will also need it for a later step:

   ```
   aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://~/TrustPolicyForCWL-Kinesis.json
   ```

   The following is an example of the output.

   ```
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "logs.amazonaws.com"
                   },
                   "Condition": { 
                       "StringLike": { 
                        "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
                       } 
                   }
               }
           },
           "RoleId": "AAOIIAH450GAB4HC5F431",
           "CreateDate": "2015-05-29T13:46:29.431Z",
           "RoleName": "CWLtoKinesisRole",
           "Path": "/",
           "Arn": "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
       }
   }
   ```

1. Create a permissions policy to define what actions CloudWatch Logs can do on your account. First, you'll create a permissions policy in a file (for example, `~/PermissionsForCWL-Kinesis.json`). Use a text editor to create this policy. Do not use the IAM console to create it.

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": "kinesis:PutRecord",
         "Resource": "arn:aws:kinesis:region:123456789012:stream/RootAccess"
       }
     ]
   }
   ```

1. Associate the permissions policy with the role using the following [put-role-policy](https://docs.amazonaws.cn/cli/latest/reference/iam/put-role-policy.html) command:

   ```
   aws iam put-role-policy  --role-name CWLtoKinesisRole  --policy-name Permissions-Policy-For-CWL  --policy-document file://~/PermissionsForCWL-Kinesis.json
   ```

1. After the stream is in **Active** state and you have created the IAM role, you can create the CloudWatch Logs subscription filter. The subscription filter immediately starts the flow of real-time log data from the chosen log group to your stream:

   ```
   aws logs put-subscription-filter \
       --log-group-name "CloudTrail/logs" \
       --filter-name "RootAccess" \
       --filter-pattern "{$.userIdentity.type = Root}" \
       --destination-arn "arn:aws:kinesis:region:123456789012:stream/RootAccess" \
       --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
   ```

1. After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to your stream. You can verify that this is happening by retrieving an Amazon Kinesis Data Streams shard iterator and using the Amazon Kinesis Data Streams **get-records** command to fetch some records:

   ```
   aws kinesis get-shard-iterator --stream-name RootAccess --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
   ```

   ```
   {
       "ShardIterator":
       "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP"
   }
   ```

   ```
   aws kinesis get-records --limit 10 --shard-iterator "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP"
   ```

   Note that you might need to make this call a few times before Amazon Kinesis Data Streams starts to return data.

   You should expect to see a response with an array of records. The **Data** attribute in an Amazon Kinesis Data Streams record is base64 encoded and compressed with the gzip format. You can examine the raw data from the command line using the following Unix commands:

   ```
   echo -n "<Content of Data>" | base64 -d | zcat
   ```

   The base64 decoded and decompressed data is formatted as JSON with the following structure:

   ```
   {
       "owner": "111111111111",
       "logGroup": "CloudTrail/logs",
       "logStream": "111111111111_CloudTrail/logs_us-east-1",
       "subscriptionFilters": [
           "Destination"
       ],
       "messageType": "DATA_MESSAGE",
       "logEvents": [
           {
               "id": "31953106606966983378809025079804211143289615424298221568",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           },
           {
               "id": "31953106606966983378809025079804211143289615424298221569",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           },
           {
               "id": "31953106606966983378809025079804211143289615424298221570",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           }
       ]
   }
   ```

   The key elements in the above data structure are the following:  
**owner**  
The Amazon Account ID of the originating log data.  
**logGroup**  
The log group name of the originating log data.  
**logStream**  
The log stream name of the originating log data.  
**subscriptionFilters**  
The list of subscription filter names that matched with the originating log data.  
**messageType**  
Data messages use the "DATA\_MESSAGE" type. Sometimes CloudWatch Logs may emit Amazon Kinesis Data Streams records with a "CONTROL\_MESSAGE" type, mainly for checking if the destination is reachable.  
**logEvents**  
The actual log data, represented as an array of log event records. The "id" property is a unique identifier for every log event.
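Putting those fields together, a consumer might skip control messages and extract the log messages from data messages. The following is a minimal Python sketch; `extract_messages` is a hypothetical helper, and the payload is assumed to be already base64 decoded and decompressed:

```python
def extract_messages(payload):
    """Return the log messages from a decoded subscription payload,
    skipping CONTROL_MESSAGE records (destination reachability checks)."""
    if payload.get("messageType") != "DATA_MESSAGE":
        return []
    return [event["message"] for event in payload.get("logEvents", [])]

payload = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "CloudTrail/logs",
    "logEvents": [{"id": "1", "timestamp": 1432826855000, "message": "event-1"}],
}
print(extract_messages(payload))                            # ['event-1']
print(extract_messages({"messageType": "CONTROL_MESSAGE"}))  # []
```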

## Example 2: Subscription filters with Amazon Lambda
<a name="LambdaFunctionExample"></a>

In this example, you'll create a CloudWatch Logs subscription filter that sends log data to your Amazon Lambda function.

**Note**  
Before you create the Lambda function, calculate the volume of log data that will be generated. Be sure to create a function that can handle this volume. If the function cannot handle the volume, the log stream will be throttled. For more information about Lambda limits, see [Amazon Lambda Limits](https://docs.amazonaws.cn/lambda/latest/dg/limits.html). 

**To create a subscription filter for Lambda**

1. Create the Amazon Lambda function.

   Ensure that you have set up the Lambda execution role. For more information, see [Step 2.2: Create an IAM Role (execution role)](https://docs.amazonaws.cn/lambda/latest/dg/lambda-intro-execution-role.html) in the *Amazon Lambda Developer Guide*.

1. Open a text editor and create a file named `helloWorld.js` with the following contents:

   ```
   var zlib = require('zlib');
   exports.handler = function(input, context) {
       var payload = Buffer.from(input.awslogs.data, 'base64');
       zlib.gunzip(payload, function(e, result) {
           if (e) { 
               context.fail(e);
           } else {
               result = JSON.parse(result.toString());
               console.log("Event Data:", JSON.stringify(result, null, 2));
               context.succeed();
           }
       });
   };
   ```

1. Zip the file `helloWorld.js` and save it with the name `helloWorld.zip`.

1. Use the following command, where the role is the Lambda execution role you set up in the first step:

   ```
   aws lambda create-function \
       --function-name helloworld \
       --zip-file fileb://file-path/helloWorld.zip \
       --role lambda-execution-role-arn \
       --handler helloWorld.handler \
       --runtime nodejs12.x
   ```

1. Grant CloudWatch Logs the permission to execute your function. Use the following command, replacing the placeholder account with your own account and the placeholder log group with the log group to process:

   ```
   aws lambda add-permission \
       --function-name "helloworld" \
       --statement-id "helloworld" \
       --principal "logs.amazonaws.com" \
       --action "lambda:InvokeFunction" \
        --source-arn "arn:aws:logs:region:123456789012:log-group:TestLambda:*" \
        --source-account "123456789012"
   ```

1. Create a subscription filter using the following command, replacing the placeholder account with your own account and the placeholder log group with the log group to process:

   ```
   aws logs put-subscription-filter \
       --log-group-name myLogGroup \
       --filter-name demo \
       --filter-pattern "" \
        --destination-arn arn:aws:lambda:region:123456789012:function:helloworld
   ```

1. (Optional) Test using a sample log event. At a command prompt, run the following command, which puts a simple log message into the subscribed log group.

   To see the output of your Lambda function, check the function's own log group, `/aws/lambda/helloworld`:

   ```
   aws logs put-log-events --log-group-name myLogGroup --log-stream-name stream1 --log-events "[{\"timestamp\":<CURRENT TIMESTAMP MILLIS> , \"message\": \"Simple Lambda Test\"}]"
   ```

   The **data** attribute in the event that Lambda receives is base64 encoded and compressed with the gzip format. The actual payload that Lambda receives is in the following format: `{ "awslogs": {"data": "BASE64ENCODED_GZIP_COMPRESSED_DATA"} }`. You can examine the raw data from the command line using the following Unix commands:

   ```
   echo -n "<BASE64ENCODED_GZIP_COMPRESSED_DATA>" | base64 -d | zcat
   ```

   The base64 decoded and decompressed data is formatted as JSON with the following structure:

   ```
   {
       "owner": "123456789012",
       "logGroup": "CloudTrail",
       "logStream": "123456789012_CloudTrail_us-east-1",
       "subscriptionFilters": [
           "Destination"
       ],
       "messageType": "DATA_MESSAGE",
       "logEvents": [
           {
               "id": "31953106606966983378809025079804211143289615424298221568",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           },
           {
               "id": "31953106606966983378809025079804211143289615424298221569",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           },
           {
               "id": "31953106606966983378809025079804211143289615424298221570",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           }
       ]
   }
   ```

   The key elements in the above data structure are the following:  
**owner**  
The Amazon Account ID of the originating log data.  
**logGroup**  
The log group name of the originating log data.  
**logStream**  
The log stream name of the originating log data.  
**subscriptionFilters**  
The list of subscription filter names that matched with the originating log data.  
**messageType**  
Data messages use the "DATA\_MESSAGE" type. Sometimes CloudWatch Logs may emit Lambda records with a "CONTROL\_MESSAGE" type, mainly for checking if the destination is reachable.  
**logEvents**  
The actual log data, represented as an array of log event records. The "id" property is a unique identifier for every log event.

## Example 3: Subscription filters with Amazon Data Firehose
<a name="FirehoseExample"></a>

In this example, you'll create a CloudWatch Logs subscription that sends any incoming log events that match your defined filters to your Amazon Data Firehose delivery stream. Data sent from CloudWatch Logs to Amazon Data Firehose is already compressed with gzip level 6 compression, so you do not need to use compression within your Firehose delivery stream. You can then use the decompression feature in Firehose to automatically decompress the logs. For more information, see [Send CloudWatch Logs to Firehose](https://docs.amazonaws.cn/logs/SubscriptionFilters.html#FirehoseExample).

**Note**  
Before you create the Firehose stream, calculate the volume of log data that will be generated. Be sure to create a Firehose stream that can handle this volume. If the stream cannot handle the volume, the log stream will be throttled. For more information about Firehose stream volume limits, see [Amazon Data Firehose Data Limits](https://docs.amazonaws.cn/firehose/latest/dev/limits.html). 

**To create a subscription filter for Firehose**

1. Create an Amazon Simple Storage Service (Amazon S3) bucket. We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, skip to step 2.

   Run the following command, replacing the placeholder Region with the Region you want to use:

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-bucket2 --create-bucket-configuration LocationConstraint=region
   ```

   The following is example output:

   ```
   {
       "Location": "/amzn-s3-demo-bucket2"
   }
   ```

1. Create the IAM role that grants Amazon Data Firehose permission to put data into your Amazon S3 bucket.

   For more information, see [Controlling Access with Amazon Data Firehose](https://docs.amazonaws.cn/firehose/latest/dev/controlling-access.html) in the *Amazon Data Firehose Developer Guide*.

   First, use a text editor to create a trust policy in a file `~/TrustPolicyForFirehose.json` as follows:

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "firehose.amazonaws.com" },
       "Action": "sts:AssumeRole"
       } 
   }
   ```

1. Use the **create-role** command to create the IAM role, specifying the trust policy file. Note the returned **Role.Arn** value, as you will need it in a later step:

   ```
   aws iam create-role \
    --role-name FirehosetoS3Role \
    --assume-role-policy-document file://~/TrustPolicyForFirehose.json
   
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "firehose.amazonaws.com"
                   }
               }
           },
           "RoleId": "AAOIIAH450GAB4HC5F431",
           "CreateDate": "2015-05-29T13:46:29.431Z",
           "RoleName": "FirehosetoS3Role",
           "Path": "/",
           "Arn": "arn:aws:iam::123456789012:role/FirehosetoS3Role"
       }
   }
   ```

1. Create a permissions policy to define what actions Firehose can do on your account. First, use a text editor to create a permissions policy in a file `~/PermissionsForFirehose.json`:

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [ 
             "s3:AbortMultipartUpload", 
             "s3:GetBucketLocation", 
             "s3:GetObject", 
             "s3:ListBucket", 
             "s3:ListBucketMultipartUploads", 
             "s3:PutObject" ],
         "Resource": [ 
             "arn:aws:s3:::amzn-s3-demo-bucket2", 
             "arn:aws:s3:::amzn-s3-demo-bucket2/*" ]
       }
     ]
   }
   ```

1. Associate the permissions policy with the role using the following put-role-policy command:

   ```
   aws iam put-role-policy --role-name FirehosetoS3Role --policy-name Permissions-Policy-For-Firehose --policy-document file://~/PermissionsForFirehose.json
   ```

1. Create a destination Firehose delivery stream as follows, replacing the placeholder values for **RoleARN** and **BucketARN** with the role and bucket ARNs that you created:

   ```
   aws firehose create-delivery-stream \
      --delivery-stream-name 'my-delivery-stream' \
      --s3-destination-configuration \
     '{"RoleARN": "arn:aws:iam::123456789012:role/FirehosetoS3Role", "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket2"}'
   ```

   Note that Firehose automatically uses a prefix in YYYY/MM/DD/HH UTC time format for delivered Amazon S3 objects. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a forward slash (/), it appears as a folder in the Amazon S3 bucket.
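As a sketch of that key layout, the time-format portion of the prefix for a given UTC timestamp can be computed as follows; `s3_object_prefix` is a hypothetical helper and `cwlogs/` is an example extra prefix, not values from this procedure:

```python
from datetime import datetime, timezone

def s3_object_prefix(extra_prefix, ts):
    """Sketch of the YYYY/MM/DD/HH (UTC) key prefix Firehose prepends to
    delivered S3 objects; extra_prefix is an optional custom prefix that
    is added in front of the time-format prefix."""
    return extra_prefix + ts.astimezone(timezone.utc).strftime("%Y/%m/%d/%H/")

print(s3_object_prefix("cwlogs/", datetime(2015, 5, 29, 13, 46, tzinfo=timezone.utc)))
# cwlogs/2015/05/29/13/
```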

1. Wait until the stream becomes active (this might take a few minutes). You can use the Firehose **describe-delivery-stream** command to check the **DeliveryStreamDescription.DeliveryStreamStatus** property. In addition, note the **DeliveryStreamDescription.DeliveryStreamARN** value, as you will need it in a later step:

   ```
   aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream"
   {
       "DeliveryStreamDescription": {
           "HasMoreDestinations": false,
           "VersionId": "1",
           "CreateTimestamp": 1446075815.822,
           "DeliveryStreamARN": "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-delivery-stream",
           "DeliveryStreamStatus": "ACTIVE",
           "DeliveryStreamName": "my-delivery-stream",
           "Destinations": [
               {
                   "DestinationId": "destinationId-000000000001",
                   "S3DestinationDescription": {
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                    "RoleARN": "arn:aws:iam::123456789012:role/FirehosetoS3Role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket2",
                       "BufferingHints": {
                           "IntervalInSeconds": 300,
                           "SizeInMBs": 5
                       }
                   }
               }
           ]
       }
   }
   ```

1. Create the IAM role that grants CloudWatch Logs permission to put data into your Firehose delivery stream. First, use a text editor to create a trust policy in a file `~/TrustPolicyForCWL.json`:

   This policy includes an `aws:SourceArn` global condition context key to help prevent the confused deputy security problem. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole",
       "Condition": { 
            "StringLike": { 
                "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
            } 
        }
     }
   }
   ```

1. Use the **create-role** command to create the IAM role, specifying the trust policy file. Note the returned **Role.Arn** value, as you will need it in a later step:

   ```
   aws iam create-role \
   --role-name CWLtoKinesisFirehoseRole \
   --assume-role-policy-document file://~/TrustPolicyForCWL.json
   
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "logs.amazonaws.com"
                   },
                   "Condition": { 
                        "StringLike": { 
                            "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
                        } 
                    }
               }
           },
           "RoleId": "AAOIIAH450GAB4HC5F431",
           "CreateDate": "2015-05-29T13:46:29.431Z",
           "RoleName": "CWLtoKinesisFirehoseRole",
           "Path": "/",
           "Arn": "arn:aws:iam::123456789012:role/CWLtoKinesisFirehoseRole"
       }
   }
   ```

1. Create a permissions policy to define what actions CloudWatch Logs can do on your account. First, use a text editor to create a permissions policy file (for example, `~/PermissionsForCWL.json`):

   ```
   {
       "Statement":[
         {
           "Effect":"Allow",
           "Action":["firehose:PutRecord"],
           "Resource":[
               "arn:aws:firehose:region:account-id:deliverystream/delivery-stream-name"]
         }
       ]
   }
   ```

1. Associate the permissions policy with the role using the put-role-policy command:

   ```
   aws iam put-role-policy --role-name CWLtoKinesisFirehoseRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json
   ```

1. After the Amazon Data Firehose delivery stream is in the active state and you have created the IAM role, you can create the CloudWatch Logs subscription filter. The subscription filter immediately starts the flow of real-time log data from the chosen log group to your Amazon Data Firehose delivery stream:

   ```
   aws logs put-subscription-filter \
       --log-group-name "CloudTrail" \
       --filter-name "Destination" \
       --filter-pattern "{$.userIdentity.type = Root}" \
       --destination-arn "arn:aws:firehose:region:123456789012:deliverystream/my-delivery-stream" \
       --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisFirehoseRole"
   ```

1. After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to your Amazon Data Firehose delivery stream. Your data starts appearing in your Amazon S3 bucket based on the buffer interval that is set on your Amazon Data Firehose delivery stream. After enough time has passed, you can verify your data by checking your Amazon S3 bucket.

   ```
   aws s3api list-objects --bucket 'amzn-s3-demo-bucket2' --prefix 'firehose/'
   {
       "Contents": [
           {
               "LastModified": "2015-10-29T00:01:25.000Z",
               "ETag": "\"a14589f8897f4089d3264d9e2d1f1610\"",
               "StorageClass": "STANDARD",
               "Key": "firehose/2015/10/29/00/my-delivery-stream-2015-10-29-00-01-21-a188030a-62d2-49e6-b7c2-b11f1a7ba250",
               "Owner": {
                   "DisplayName": "cloudwatch-logs",
                   "ID": "1ec9cf700ef6be062b19584e0b7d84ecc19237f87b5"
               },
               "Size": 593
           },
           {
               "LastModified": "2015-10-29T00:35:41.000Z",
               "ETag": "\"a7035b65872bb2161388ffb63dd1aec5\"",
               "StorageClass": "STANDARD",
               "Key": "firehose/2015/10/29/00/my-delivery-stream-2015-10-29-00-35-40-7cc92023-7e66-49bc-9fd4-fc9819cc8ed3",
               "Owner": {
                   "DisplayName": "cloudwatch-logs",
                   "ID": "1ec9cf700ef6be062b19584e0b7d84ecc19237f87b6"
               },
               "Size": 5752
           }
       ]
   }
   ```

   ```
   aws s3api get-object --bucket 'amzn-s3-demo-bucket2' --key 'firehose/2015/10/29/00/my-delivery-stream-2015-10-29-00-01-21-a188030a-62d2-49e6-b7c2-b11f1a7ba250' testfile.gz
   
   {
       "AcceptRanges": "bytes",
       "ContentType": "application/octet-stream",
       "LastModified": "Thu, 29 Oct 2015 00:07:06 GMT",
       "ContentLength": 593,
       "Metadata": {}
   }
   ```

   The data in the Amazon S3 object is compressed with the gzip format. You can examine the raw data from the command line using the following Unix command:

   ```
   zcat testfile.gz
   ```
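   If you process the delivered objects programmatically rather than from the shell, the same decompression can be done in code. The following Python sketch is illustrative only; the file name and sample content are assumptions, not part of the delivery format:

   ```python
   import gzip
   import json

   def read_firehose_object(path):
       """Decompress a gzip-compressed object delivered by Firehose and return its text."""
       with gzip.open(path, "rt", encoding="utf-8") as f:
           return f.read()

   # For illustration, create a small gzip file locally, then read it back.
   sample = json.dumps({"messageType": "DATA_MESSAGE"})
   with gzip.open("testfile.gz", "wt", encoding="utf-8") as f:
       f.write(sample)

   print(read_firehose_object("testfile.gz"))
   ```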

## Example 4: Subscription filters with Amazon OpenSearch Service
<a name="OpenSearchExample"></a>

In this example, you'll create a CloudWatch Logs subscription that sends incoming log events that match your defined filters to your OpenSearch Service domain.

**To create a subscription filter for OpenSearch Service**

1. Create an OpenSearch Service domain. For more information, see [Creating OpenSearch Service domains](https://docs.amazonaws.cn/opensearch-service/latest/developerguide/createupdatedomains.html#createdomains).

1. Open the CloudWatch console at [https://console.amazonaws.cn/cloudwatch/](https://console.amazonaws.cn/cloudwatch/).

1.  In the navigation pane, choose **Log groups**. 

1. Select the name of the log group.

1. Choose **Actions**, **Subscription filters**, **Create Amazon OpenSearch Service subscription filter**.

1. Choose whether you want to stream to a cluster in this account or another account.
   + If you chose **This account**, select the domain that you created in step 1.
   + If you chose **Another account**, enter the ARN and endpoint of that domain.


1. For **Amazon OpenSearch Service cluster**, choose the name of the cluster where the log group data will be delivered.

1. Choose a log format.

1. For **Subscription filter pattern**, enter the terms or pattern to find in your log events. This ensures that you send only the data that you're interested in to your OpenSearch Service cluster. For more information, see [Filter pattern syntax for metric filters](FilterAndPatternSyntaxForMetricFilters.md).

1. (Optional) For **Select log data to test**, select a log stream and then choose **Test pattern** to verify that your search filter is returning the results you expect.

1. Choose **Start streaming**.

# Account-level subscription filters
<a name="SubscriptionFilters-AccountLevel"></a>

**Important**  
There is a risk of causing an infinite recursive loop with subscription filters that can lead to a large increase in ingestion billing if not addressed. To mitigate this risk, we recommend that you use selection criteria in your account-level subscription filters to exclude log groups that ingest log data from resources that are part of the subscription delivery workflow. For more information on this problem and determining which log groups to exclude, see [Log recursion prevention](Subscriptions-recursion-prevention.md).

 You can set an account-level subscription policy that includes a subset of log groups in the account. The account subscription policy can work with Amazon Kinesis Data Streams, Amazon Lambda, or Amazon Data Firehose. Logs sent to a service through an account-level subscription policy are base64 encoded and compressed with the gzip format. This section provides examples you can follow to create an account-level subscription for Amazon Kinesis Data Streams, Lambda, and Firehose. 

**Note**  
To view a list of all subscription filter policies in your account, use the `describe-account-policies` command with a value of `SUBSCRIPTION_FILTER_POLICY` for the `--policy-type` parameter. For more information, see [describe-account-policies](https://docs.amazonaws.cn/cli/latest/reference/logs/describe-account-policies.html).

**Topics**
+ [Example 1: Subscription filters with Amazon Kinesis Data Streams](#DestinationKinesisExample-AccountLevel)
+ [Example 2: Subscription filters with Amazon Lambda](#LambdaFunctionExample-AccountLevel)
+ [Example 3: Subscription filters with Amazon Data Firehose](#FirehoseExample-AccountLevel)

## Example 1: Subscription filters with Amazon Kinesis Data Streams
<a name="DestinationKinesisExample-AccountLevel"></a>

Before you create an Amazon Kinesis Data Streams data stream to use with an account-level subscription policy, calculate the volume of log data that will be generated. Be sure to create a stream with enough shards to handle this volume. If a stream doesn't have enough shards, it is throttled. For more information about stream volume limits, see [Quotas and Limits](https://docs.amazonaws.cn/streams/latest/dev/service-sizes-and-limits.html) in the Amazon Kinesis Data Streams documentation.

**Warning**  
Because the log events of multiple log groups are forwarded to the destination, there is a risk of throttling. Throttled deliveries are retried for up to 24 hours. After 24 hours, the failed deliveries are dropped.  
To mitigate the risk of throttling, you can take the following steps:  
+ Monitor your Amazon Kinesis Data Streams stream with CloudWatch metrics. This helps you identify throttling and adjust your configuration accordingly. For example, the `DeliveryThrottling` metric tracks the number of log events for which CloudWatch Logs was throttled when forwarding data to the subscription destination. For more information, see [Monitoring with CloudWatch metrics](CloudWatch-Logs-Monitoring-CloudWatch-Metrics.md).
+ Use the on-demand capacity mode for your stream in Amazon Kinesis Data Streams. On-demand mode instantly accommodates your workloads as they ramp up or down. For more information, see [On-demand mode](https://docs.amazonaws.cn/streams/latest/dev/how-do-i-size-a-stream.html#ondemandmode).
+ Restrict your CloudWatch Logs subscription filter pattern to match the capacity of your stream in Amazon Kinesis Data Streams. If you are sending too much data to the stream, you might need to reduce the filter size or adjust the filter criteria.

The following example uses an account-level subscription policy to forward all log events to a stream in Amazon Kinesis Data Streams. The filter pattern matches any log events with the text `Test` and forwards them to the stream in Amazon Kinesis Data Streams.

**To create an account-level subscription policy for Amazon Kinesis Data Streams**

1. Create a destination stream using the following command: 

   ```
   aws kinesis create-stream --stream-name "TestStream" --shard-count 1
   ```

1. Wait a few minutes for the stream to become active. You can verify whether the stream is active by using the [describe-stream](https://docs.amazonaws.cn/cli/latest/reference/kinesis/describe-stream.html) command to check the **StreamDescription.StreamStatus** property. 

   ```
   aws kinesis describe-stream --stream-name "TestStream"
   ```

   The following is example output:

   ```
   {
       "StreamDescription": {
           "StreamStatus": "ACTIVE",
           "StreamName": "TestStream",
           "StreamARN": "arn:aws:kinesis:region:123456789012:stream/TestStream",
           "Shards": [
               {
                   "ShardId": "shardId-000000000000",
                   "HashKeyRange": {
                       "EndingHashKey": "EXAMPLE8463463374607431768211455",
                       "StartingHashKey": "0"
                   },
                   "SequenceNumberRange": {
                       "StartingSequenceNumber":
                       "EXAMPLE688818456679503831981458784591352702181572610"
                   }
               }
           ]
       }
   }
   ```
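   Rather than re-running **describe-stream** by hand, you can poll until the stream becomes active. The following generic Python sketch stands in for the CLI check; the `get_status` callback is a placeholder for whatever call returns the **StreamDescription.StreamStatus** value:

   ```python
   import time

   def wait_for_status(get_status, target="ACTIVE", attempts=30, delay=1.0):
       """Poll get_status() until it returns target; give up after the given attempts."""
       for _ in range(attempts):
           if get_status() == target:
               return True
           time.sleep(delay)
       return False

   # Stand-in for the describe-stream call: CREATING twice, then ACTIVE.
   states = iter(["CREATING", "CREATING", "ACTIVE"])
   print(wait_for_status(lambda: next(states), delay=0))  # True
   ```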

1. Create the IAM role that will grant CloudWatch Logs permission to put data into your stream. First, you'll need to create a trust policy in a file (for example, `~/TrustPolicyForCWL-Kinesis.json`). Use a text editor to create this policy.

   This policy includes an `aws:SourceArn` global condition context key to help prevent the confused deputy security problem. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole",
       "Condition": { 
           "StringLike": { "aws:SourceArn": "arn:aws:logs:region:123456789012:*" } 
        }
      }
   }
   ```

1. Use the **create-role** command to create the IAM role, specifying the trust policy file. Note the returned **Role.Arn** value, as you will need it in a later step:

   ```
   aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://~/TrustPolicyForCWL-Kinesis.json
   ```

   The following is an example of the output.

   ```
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "logs.amazonaws.com"
                   },
                   "Condition": { 
                       "StringLike": { 
                           "aws:SourceArn": { "arn:aws:logs:region:123456789012:*" }
                       } 
                   }
               }
           },
           "RoleId": "EXAMPLE450GAB4HC5F431",
           "CreateDate": "2023-05-29T13:46:29.431Z",
           "RoleName": "CWLtoKinesisRole",
           "Path": "/",
           "Arn": "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
       }
   }
   ```

1. Create a permissions policy to define what actions CloudWatch Logs can do on your account. First, you'll create a permissions policy in a file (for example, `~/PermissionsForCWL-Kinesis.json`). Use a text editor to create this policy. Don't use the IAM console to create it.

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": "kinesis:PutRecord",
         "Resource": "arn:aws:kinesis:region:123456789012:stream/TestStream"
       }
     ]
   }
   ```

1. Associate the permissions policy with the role using the following [put-role-policy](https://docs.amazonaws.cn/cli/latest/reference/iam/put-role-policy.html) command:

   ```
   aws iam put-role-policy  --role-name CWLtoKinesisRole  --policy-name Permissions-Policy-For-CWL  --policy-document file://~/PermissionsForCWL-Kinesis.json
   ```

1. After the stream is in the **Active** state and you have created the IAM role, you can create the CloudWatch Logs subscription filter policy. The policy immediately starts the flow of real-time log data to your stream. In this example, all log events that contain the string `Test` are streamed, except those in the log groups named `LogGroupToExclude1` and `LogGroupToExclude2`.

   ```
   aws logs put-account-policy \
       --policy-name "ExamplePolicy" \
       --policy-type "SUBSCRIPTION_FILTER_POLICY" \
       --policy-document '{"RoleArn":"arn:aws:iam::123456789012:role/CWLtoKinesisRole", "DestinationArn":"arn:aws:kinesis:region:123456789012:stream/TestStream", "FilterPattern": "Test", "Distribution": "Random"}' \
       --selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
       --scope "ALL"
   ```
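   The `--policy-document` argument is a JSON string, which is easy to get wrong when quoting by hand. A small Python sketch that builds the same document with `json.dumps` (the ARNs are the example placeholders from the command above):

   ```python
   import json

   # Example placeholder ARNs, matching the put-account-policy command above.
   policy_document = json.dumps({
       "RoleArn": "arn:aws:iam::123456789012:role/CWLtoKinesisRole",
       "DestinationArn": "arn:aws:kinesis:region:123456789012:stream/TestStream",
       "FilterPattern": "Test",
       "Distribution": "Random",
   })

   # The resulting string can be passed as the --policy-document value.
   print(policy_document)
   ```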

1. After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern and selection criteria to your stream. 

   The `selection-criteria` field is optional, but is important for excluding log groups that can cause an infinite log recursion from a subscription filter. For more information about this issue and determining which log groups to exclude, see [Log recursion prevention](Subscriptions-recursion-prevention.md). Currently, `NOT IN` is the only supported operator for `selection-criteria`.

   You can verify the flow of log events by using an Amazon Kinesis Data Streams shard iterator and the `get-records` command to fetch some records:

   ```
   aws kinesis get-shard-iterator --stream-name TestStream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
   ```

   ```
   {
       "ShardIterator":
       "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP"
   }
   ```

   ```
   aws kinesis get-records --limit 10 --shard-iterator "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiK2OSh0uP"
   ```

   You might need to use this command a few times before Amazon Kinesis Data Streams starts to return data.

   You should expect to see a response with an array of records. The **Data** attribute in an Amazon Kinesis Data Streams record is base64 encoded and compressed with the gzip format. You can examine the raw data from the command line using the following Unix commands:

   ```
   echo -n "<Content of Data>" | base64 -d | zcat
   ```

   The base64 decoded and decompressed data is formatted as JSON with the following structure:

   ```
   { 
       "messageType": "DATA_MESSAGE",
       "owner": "123456789012",
       "logGroup": "Example1",
       "logStream": "logStream1",
       "subscriptionFilters": [ 
           "ExamplePolicy" 
       ],
       "logEvents": [ 
           { 
               "id": "31953106606966983378809025079804211143289615424298221568",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           },
           { 
               "id": "31953106606966983378809025079804211143289615424298221569",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}" 
           }, 
           { 
               "id": "31953106606966983378809025079804211143289615424298221570",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}" 
           } 
       ],
       "policyLevel": "ACCOUNT_LEVEL_POLICY"
   }
   ```

   The key elements in the data structure are the following:  
**messageType**  
Data messages will use the "DATA\_MESSAGE" type. Sometimes CloudWatch Logs might emit Amazon Kinesis Data Streams records with a "CONTROL\_MESSAGE" type, mainly for checking if the destination is reachable.  
**owner**  
The Amazon Account ID of the originating log data.  
**logGroup**  
The log group name of the originating log data.  
**logStream**  
The log stream name of the originating log data.  
**subscriptionFilters**  
The list of subscription filter names that matched with the originating log data.  
**logEvents**  
The actual log data, represented as an array of log event records. The "id" property is a unique identifier for every log event.  
**policyLevel**  
The level at which the policy was enforced. "ACCOUNT\_LEVEL\_POLICY" is the `policyLevel` for an account-level subscription filter policy.
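The `base64 -d | zcat` pipeline shown earlier can also be expressed in code. The following Python sketch round-trips a locally constructed payload shaped like the structure above; the sample values are illustrative:

```python
import base64
import gzip
import json

def decode_record_data(data_b64):
    """Base64 decode, gunzip, and JSON-parse the Data attribute of a record."""
    return json.loads(gzip.decompress(base64.b64decode(data_b64)))

# Build a sample payload locally, shaped like the structure shown above.
payload = {
    "messageType": "DATA_MESSAGE",
    "owner": "123456789012",
    "logEvents": [{"id": "1", "timestamp": 1432826855000, "message": "Test"}],
    "policyLevel": "ACCOUNT_LEVEL_POLICY",
}
encoded = base64.b64encode(gzip.compress(json.dumps(payload).encode())).decode()

print(decode_record_data(encoded)["messageType"])  # DATA_MESSAGE
```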

## Example 2: Subscription filters with Amazon Lambda
<a name="LambdaFunctionExample-AccountLevel"></a>

In this example, you'll create a CloudWatch Logs account-level subscription filter policy that sends log data to your Amazon Lambda function.

**Warning**  
Before you create the Lambda function, calculate the volume of log data that will be generated. Be sure to create a function that can handle this volume. If the function can't handle the volume, the log stream will be throttled. Because the log events of either all log groups or a subset of the account's log groups are forwarded to the destination, there is a risk of throttling. For more information about Lambda limits, see [Amazon Lambda Limits](https://docs.amazonaws.cn/lambda/latest/dg/limits.html). 

**To create an account-level subscription filter policy for Lambda**

1. Create the Amazon Lambda function.

   Ensure that you have set up the Lambda execution role. For more information, see [Step 2.2: Create an IAM Role (execution role)](https://docs.amazonaws.cn/lambda/latest/dg/lambda-intro-execution-role.html) in the *Amazon Lambda Developer Guide*.

1. Open a text editor and create a file named `helloWorld.js` with the following contents:

   ```
   var zlib = require('zlib');
   exports.handler = function(input, context) {
       var payload = Buffer.from(input.awslogs.data, 'base64');
       zlib.gunzip(payload, function(e, result) {
           if (e) { 
               context.fail(e);
           } else {
               result = JSON.parse(result.toString());
               console.log("Event Data:", JSON.stringify(result, null, 2));
               context.succeed();
           }
       });
   };
   ```

1. Zip the file helloWorld.js and save it with the name `helloWorld.zip`.

1. Use the following command, where the role is the Lambda execution role you set up in the first step:

   ```
   aws lambda create-function \
       --function-name helloworld \
       --zip-file fileb://file-path/helloWorld.zip \
       --role lambda-execution-role-arn \
       --handler helloWorld.handler \
       --runtime nodejs18.x
   ```

1. Grant CloudWatch Logs the permission to execute your function. Use the following command, replacing the placeholder account with your own account.

   ```
   aws lambda add-permission \
       --function-name "helloworld" \
       --statement-id "helloworld" \
       --principal "logs.amazonaws.com" \
       --action "lambda:InvokeFunction" \
       --source-arn "arn:aws:logs:region:123456789012:log-group:*" \
       --source-account "123456789012"
   ```

1. Create an account-level subscription filter policy using the following command, replacing the placeholder account with your own account. In this example, all log events that contain the string `Test` are streamed, except those in the log groups named `LogGroupToExclude1` and `LogGroupToExclude2`.

   ```
   aws logs put-account-policy \
       --policy-name "ExamplePolicyLambda" \
       --policy-type "SUBSCRIPTION_FILTER_POLICY" \
       --policy-document '{"DestinationArn":"arn:aws:lambda:region:123456789012:function:helloWorld", "FilterPattern": "Test", "Distribution": "Random"}' \
       --selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
       --scope "ALL"
   ```

   After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern and selection criteria to your function. 

   The `selection-criteria` field is optional, but is important for excluding log groups that can cause an infinite log recursion from a subscription filter. For more information about this issue and determining which log groups to exclude, see [Log recursion prevention](Subscriptions-recursion-prevention.md). Currently, `NOT IN` is the only supported operator for `selection-criteria`.

1. (Optional) Test using a sample log event. At a command prompt, run the following command, which will put a simple log message into the subscribed stream.

   To see the output of your Lambda function, check the function's log group, `/aws/lambda/helloworld`:

   ```
   aws logs put-log-events --log-group-name Example1 --log-stream-name logStream1 --log-events "[{\"timestamp\":CURRENT TIMESTAMP MILLIS, \"message\": \"Simple Lambda Test\"}]"
   ```
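   The `CURRENT TIMESTAMP MILLIS` placeholder expects the current epoch time in milliseconds; one way to compute a valid value (Python, for illustration):

   ```python
   import time

   # Epoch time in milliseconds, as expected by the timestamp field.
   timestamp_millis = int(time.time() * 1000)
   print(timestamp_millis)
   ```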

   Lambda receives the matching log events base64 encoded and compressed with the gzip format. The actual payload that Lambda receives is in the following format: `{ "awslogs": {"data": "BASE64ENCODED_GZIP_COMPRESSED_DATA"} }`. You can examine the raw data from the command line using the following Unix commands:

   ```
   echo -n "<BASE64ENCODED_GZIP_COMPRESSED_DATA>" | base64 -d | zcat
   ```
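   To exercise the handler logic without invoking Lambda, you can construct an event in the same envelope locally. This Python sketch is an illustration; the field values mirror the sample structure shown in this step:

   ```python
   import base64
   import gzip
   import json

   def make_lambda_event(log_events):
       """Wrap sample log events in the { "awslogs": { "data": ... } } envelope."""
       payload = {
           "messageType": "DATA_MESSAGE",
           "owner": "123456789012",
           "logGroup": "Example1",
           "logStream": "logStream1",
           "subscriptionFilters": ["ExamplePolicyLambda"],
           "logEvents": log_events,
           "policyLevel": "ACCOUNT_LEVEL_POLICY",
       }
       data = base64.b64encode(gzip.compress(json.dumps(payload).encode())).decode()
       return {"awslogs": {"data": data}}

   event = make_lambda_event([{"id": "1", "timestamp": 1432826855000,
                               "message": "Simple Lambda Test"}])
   # Decoding reverses the envelope, as the helloWorld handler does.
   decoded = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
   print(decoded["logGroup"])  # Example1
   ```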

   The base64 decoded and decompressed data is formatted as JSON with the following structure:

   ```
   { 
       "messageType": "DATA_MESSAGE",
       "owner": "123456789012",
       "logGroup": "Example1",
       "logStream": "logStream1",
       "subscriptionFilters": [ 
           "ExamplePolicyLambda" 
       ],
       "logEvents": [ 
           { 
               "id": "31953106606966983378809025079804211143289615424298221568",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
           },
           { 
               "id": "31953106606966983378809025079804211143289615424298221569",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}" 
           }, 
           { 
               "id": "31953106606966983378809025079804211143289615424298221570",
               "timestamp": 1432826855000,
               "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}" 
           } 
       ],
       "policyLevel": "ACCOUNT_LEVEL_POLICY"
   }
   ```
**Note**  
The account-level subscription filter will not be applied to the destination Lambda function’s log group. This is to prevent an infinite log recursion that can lead to an increase in ingestion billing. For more information about this problem, see [Log recursion prevention](Subscriptions-recursion-prevention.md).

   The key elements in the data structure are the following:  
**messageType**  
Data messages will use the "DATA\_MESSAGE" type. Sometimes CloudWatch Logs might emit Amazon Kinesis Data Streams records with a "CONTROL\_MESSAGE" type, mainly for checking if the destination is reachable.  
**owner**  
The Amazon Account ID of the originating log data.  
**logGroup**  
The log group name of the originating log data.  
**logStream**  
The log stream name of the originating log data.  
**subscriptionFilters**  
The list of subscription filter names that matched with the originating log data.  
**logEvents**  
The actual log data, represented as an array of log event records. The "id" property is a unique identifier for every log event.  
**policyLevel**  
The level at which the policy was enforced. "ACCOUNT\_LEVEL\_POLICY" is the `policyLevel` for an account-level subscription filter policy.

## Example 3: Subscription filters with Amazon Data Firehose
<a name="FirehoseExample-AccountLevel"></a>

In this example, you'll create a CloudWatch Logs account-level subscription filter policy that sends incoming log events that match your defined filters to your Amazon Data Firehose delivery stream. Data sent from CloudWatch Logs to Amazon Data Firehose is already compressed with gzip level 6 compression, so you do not need to use compression within your Firehose delivery stream. You can then use the decompression feature in Firehose to automatically decompress the logs. For more information, see [ Writing to Kinesis Data Firehose Using CloudWatch Logs](https://docs.amazonaws.cn/firehose/latest/dev/writing-with-cloudwatch-logs.html).

**Warning**  
Before you create the Firehose stream, calculate the volume of log data that will be generated. Be sure to create a Firehose stream that can handle this volume. If the stream cannot handle the volume, the log stream will be throttled. For more information about Firehose stream volume limits, see [Amazon Data Firehose Data Limits](https://docs.amazonaws.cn/firehose/latest/dev/limits.html). 

**To create a subscription filter for Firehose**

1. Create an Amazon Simple Storage Service (Amazon S3) bucket. We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, skip to step 2.

   Run the following command, replacing the placeholder Region with the Region you want to use:

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-bucket2 --create-bucket-configuration LocationConstraint=region
   ```

   The following is example output:

   ```
   {
       "Location": "/amzn-s3-demo-bucket2"
   }
   ```

1. Create the IAM role that grants Amazon Data Firehose permission to put data into your Amazon S3 bucket.

   For more information, see [Controlling Access with Amazon Data Firehose](https://docs.amazonaws.cn/firehose/latest/dev/controlling-access.html) in the *Amazon Data Firehose Developer Guide*.

   First, use a text editor to create a trust policy in a file `~/TrustPolicyForFirehose.json` as follows:

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "firehose.amazonaws.com" },
       "Action": "sts:AssumeRole"
       } 
   }
   ```

1. Use the **create-role** command to create the IAM role, specifying the trust policy file. Keep a note of the returned **Role.Arn** value, as you will need it in a later step:

   ```
   aws iam create-role \
    --role-name FirehosetoS3Role \
    --assume-role-policy-document file://~/TrustPolicyForFirehose.json
   
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "firehose.amazonaws.com"
                   }
               }
           },
           "RoleId": "EXAMPLE50GAB4HC5F431",
           "CreateDate": "2023-05-29T13:46:29.431Z",
           "RoleName": "FirehosetoS3Role",
           "Path": "/",
           "Arn": "arn:aws:iam::123456789012:role/FirehosetoS3Role"
       }
   }
   ```

1. Create a permissions policy to define what actions Firehose can do on your account. First, use a text editor to create a permissions policy in a file `~/PermissionsForFirehose.json`:

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [ 
             "s3:AbortMultipartUpload", 
             "s3:GetBucketLocation", 
             "s3:GetObject", 
             "s3:ListBucket", 
             "s3:ListBucketMultipartUploads", 
             "s3:PutObject" ],
         "Resource": [ 
             "arn:aws:s3:::amzn-s3-demo-bucket2", 
             "arn:aws:s3:::amzn-s3-demo-bucket2/*" ]
       }
     ]
   }
   ```

1. Associate the permissions policy with the role using the following put-role-policy command:

   ```
   aws iam put-role-policy --role-name FirehosetoS3Role --policy-name Permissions-Policy-For-Firehose --policy-document file://~/PermissionsForFirehose.json
   ```

1. Create a destination Firehose delivery stream as follows, replacing the placeholder values for **RoleARN** and **BucketARN** with the role and bucket ARNs that you created:

   ```
   aws firehose create-delivery-stream \
      --delivery-stream-name 'my-delivery-stream' \
      --s3-destination-configuration \
     '{"RoleARN": "arn:aws:iam::123456789012:role/FirehosetoS3Role", "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket2"}'
   ```

   Firehose automatically uses a prefix in YYYY/MM/DD/HH UTC time format for delivered Amazon S3 objects. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a forward slash (/), it appears as a folder in the Amazon S3 bucket.
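   The time-format prefix can be reproduced from a UTC timestamp; a Python sketch (the extra prefix `firehose/` is an assumption for illustration):

   ```python
   from datetime import datetime, timezone

   def firehose_time_prefix(epoch_seconds, extra_prefix=""):
       """Build the YYYY/MM/DD/HH UTC key prefix that precedes delivered object names."""
       dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
       return extra_prefix + dt.strftime("%Y/%m/%d/%H")

   print(firehose_time_prefix(1446076881, "firehose/"))  # firehose/2015/10/29/00
   ```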

1. Wait a few minutes for the stream to become active. You can use the Firehose **describe-delivery-stream** command to check the **DeliveryStreamDescription.DeliveryStreamStatus** property. In addition, note the **DeliveryStreamDescription.DeliveryStreamARN** value, as you will need it in a later step:

   ```
   aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream"
   {
       "DeliveryStreamDescription": {
           "HasMoreDestinations": false,
           "VersionId": "1",
           "CreateTimestamp": 1446075815.822,
           "DeliveryStreamARN": "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-delivery-stream",
           "DeliveryStreamStatus": "ACTIVE",
           "DeliveryStreamName": "my-delivery-stream",
           "Destinations": [
               {
                   "DestinationId": "destinationId-000000000001",
                   "S3DestinationDescription": {
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                       "RoleARN": "delivery-stream-role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket2",
                       "BufferingHints": {
                           "IntervalInSeconds": 300,
                           "SizeInMBs": 5
                       }
                   }
               }
           ]
       }
   }
   ```

1. Create the IAM role that grants CloudWatch Logs permission to put data into your Firehose delivery stream. First, use a text editor to create a trust policy in a file `~/TrustPolicyForCWL.json`:

   This policy includes an `aws:SourceArn` global condition context key to help prevent the confused deputy security problem. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole",
       "Condition": { 
            "StringLike": { 
                "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
            } 
        }
     }
   }
   ```
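   Before passing this file to **create-role**, you can sanity-check its JSON syntax locally. This optional sketch assumes `python3` is available (any JSON validator works) and is shown against an abridged copy of the policy:

   ```
   policy='{"Statement":{"Effect":"Allow","Principal":{"Service":"logs.amazonaws.com"},"Action":"sts:AssumeRole"}}'
   printf '%s' "$policy" | python3 -m json.tool > /dev/null && echo "valid JSON"
   ```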

1. Use the **create-role** command to create the IAM role, specifying the trust policy file. Make a note of the returned **Role.Arn** value, as you will need it in a later step:

   ```
   aws iam create-role \
   --role-name CWLtoKinesisFirehoseRole \
   --assume-role-policy-document file://~/TrustPolicyForCWL.json
   
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "logs.amazonaws.com"
                   },
                   "Condition": { 
                        "StringLike": { 
                            "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
                        } 
                    }
               }
           },
           "RoleId": "AAOIIAH450GAB4HC5F431",
           "CreateDate": "2015-05-29T13:46:29.431Z",
           "RoleName": "CWLtoKinesisFirehoseRole",
           "Path": "/",
           "Arn": "arn:aws:iam::123456789012:role/CWLtoKinesisFirehoseRole"
       }
   }
   ```

1. Create a permissions policy to define which actions CloudWatch Logs can perform on your account. First, use a text editor to create a permissions policy file (for example, `~/PermissionsForCWL.json`):

   ```
   {
       "Statement":[
         {
           "Effect":"Allow",
           "Action":["firehose:PutRecord"],
           "Resource":[
               "arn:aws:firehose:region:account-id:deliverystream/delivery-stream-name"]
         }
       ]
   }
   ```

1. Associate the permissions policy with the role using the **put-role-policy** command:

   ```
   aws iam put-role-policy --role-name CWLtoKinesisFirehoseRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json
   ```

1. After the Amazon Data Firehose delivery stream is in the active state and you have created the IAM role, you can create the CloudWatch Logs account-level subscription filter policy. The policy immediately starts the flow of real-time log data from the chosen log group to your Amazon Data Firehose delivery stream:

   ```
   aws logs put-account-policy \
       --policy-name "ExamplePolicyFirehose" \
       --policy-type "SUBSCRIPTION_FILTER_POLICY" \
       --policy-document '{"RoleArn":"arn:aws:iam::123456789012:role/CWLtoKinesisFirehoseRole", "DestinationArn":"arn:aws:firehose:us-east-1:123456789012:deliverystream/delivery-stream-name", "FilterPattern": "Test", "Distribution": "Random"}' \
       --selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
       --scope "ALL"
   ```

1. After you set up the subscription filter, CloudWatch Logs forwards the incoming log events that match the filter pattern to your Amazon Data Firehose delivery stream.

   The `selection-criteria` field is optional, but is important for excluding log groups that can cause an infinite log recursion from a subscription filter. For more information about this issue and determining which log groups to exclude, see [Log recursion prevention](Subscriptions-recursion-prevention.md). Currently, NOT IN is the only supported operator for `selection-criteria`.

   Your data will start appearing in your Amazon S3 bucket based on the buffer interval set on your Amazon Data Firehose delivery stream. After enough time has passed, you can verify the data by checking your Amazon S3 bucket.

   ```
   aws s3api list-objects --bucket 'amzn-s3-demo-bucket2' --prefix 'firehose/'
   {
       "Contents": [
           {
               "LastModified": "2023-10-29T00:01:25.000Z",
               "ETag": "\"a14589f8897f4089d3264d9e2d1f1610\"",
               "StorageClass": "STANDARD",
               "Key": "firehose/2015/10/29/00/my-delivery-stream-2015-10-29-00-01-21-a188030a-62d2-49e6-b7c2-b11f1a7ba250",
               "Owner": {
                   "DisplayName": "cloudwatch-logs",
                   "ID": "1ec9cf700ef6be062b19584e0b7d84ecc19237f87b5"
               },
               "Size": 593
           },
           {
               "LastModified": "2015-10-29T00:35:41.000Z",
               "ETag": "\"a7035b65872bb2161388ffb63dd1aec5\"",
               "StorageClass": "STANDARD",
               "Key": "firehose/2023/10/29/00/my-delivery-stream-2023-10-29-00-35-40-EXAMPLE-7e66-49bc-9fd4-fc9819cc8ed3",
               "Owner": {
                   "DisplayName": "cloudwatch-logs",
                   "ID": "EXAMPLE6be062b19584e0b7d84ecc19237f87b6"
               },
               "Size": 5752
           }
       ]
   }
   ```

   ```
   aws s3api get-object --bucket 'amzn-s3-demo-bucket2' --key 'firehose/2023/10/29/00/my-delivery-stream-2023-10-29-00-01-21-a188030a-62d2-49e6-b7c2-b11f1a7ba250' testfile.gz
   
   {
       "AcceptRanges": "bytes",
       "ContentType": "application/octet-stream",
       "LastModified": "Thu, 29 Oct 2023 00:07:06 GMT",
       "ContentLength": 593,
       "Metadata": {}
   }
   ```

   The data in the Amazon S3 object is compressed with the gzip format. You can examine the raw data from the command line using the following Unix command:

   ```
   zcat testfile.gz
   ```

# Cross-account cross-Region subscriptions
<a name="CrossAccountSubscriptions"></a>

You can collaborate with an owner of a different Amazon account and receive their log events on your Amazon resources, such as an Amazon Kinesis or Amazon Data Firehose stream (this is known as cross-account data sharing). For example, this log event data can be read from a centralized Amazon Kinesis Data Streams or Firehose stream to perform custom processing and analysis. Custom processing is especially useful when you collaborate and analyze data across many accounts.

For example, a company's information security group might want to analyze data for real-time intrusion detection or anomalous behaviors so it could conduct an audit of accounts in all divisions in the company by collecting their federated production logs for central processing. A real-time stream of event data across those accounts can be assembled and delivered to the information security groups, who can use Amazon Kinesis Data Streams to attach the data to their existing security analytic systems.

**Note**  
The log group and the destination must be in the same Amazon Region. However, the Amazon resource that the destination points to can be located in a different Region. In the examples in the following sections, all Region-specific resources are created in US East (N. Virginia).

If you have configured Amazon Organizations and are working with member accounts, you can use log centralization to collect log data from source accounts into a central monitoring account.

When working with centralized log groups, you can use the following system field dimensions when creating subscription filters:
+ `@aws.account` - The Amazon account ID from which the log event originated.
+ `@aws.region` - The Amazon Region where the log event was generated.

These dimensions help in identifying the source of log data, allowing for more granular filtering and analysis of metrics derived from centralized logs. 

**Topics**
+ [Cross-account cross-Region log data sharing using Amazon Kinesis Data Streams](CrossAccountSubscriptions-Kinesis.md)
+ [Cross-account cross-Region log data sharing using Firehose](CrossAccountSubscriptions-Firehose.md)
+ [Cross-account cross-Region account-level subscriptions using Amazon Kinesis Data Streams](CrossAccountSubscriptions-Kinesis-Account.md)
+ [Cross-account cross-Region account-level subscriptions using Firehose](CrossAccountSubscriptions-Firehose-Account.md)

# Cross-account cross-Region log data sharing using Amazon Kinesis Data Streams
<a name="CrossAccountSubscriptions-Kinesis"></a>

When you create a cross-account subscription, you can specify a single account or an organization to be the sender. If you specify an organization, then this procedure enables all accounts in the organization to send logs to the receiver account.

To share log data across accounts, you need to establish a log data sender and receiver:
+ **Log data sender**—gets the destination information from the recipient and lets CloudWatch Logs know that it's ready to send its log events to the specified destination. In the procedures in the rest of this section, the log data sender is shown with a fictional Amazon account number of 111111111111.

  If you're going to have multiple accounts in one organization send logs to one recipient account, you can create a policy that grants all accounts in the organization the permission to send logs to the recipient account. You still have to set up separate subscription filters for each sender account.
+ **Log data recipient**—sets up a destination that encapsulates an Amazon Kinesis Data Streams stream and lets CloudWatch Logs know that the recipient wants to receive log data. The recipient then shares the information about this destination with the sender. In the procedures in the rest of this section, the log data recipient is shown with a fictional Amazon account number of 999999999999.

To start receiving log events from cross-account users, the log data recipient first creates a CloudWatch Logs destination. Each destination consists of the following key elements:

**Destination name**  
The name of the destination you want to create.

**Target ARN**  
The Amazon Resource Name (ARN) of the Amazon resource that you want to use as the destination of the subscription feed.

**Role ARN**  
An Amazon Identity and Access Management (IAM) role that grants CloudWatch Logs the necessary permissions to put data into the chosen stream.

**Access policy**  
An IAM policy document (in JSON format, written using IAM policy grammar) that governs the set of users that are allowed to write to your destination.

**Note**  
The log group and the destination must be in the same Amazon Region. However, the Amazon resource that the destination points to can be located in a different Region. In the examples in the following sections, all Region-specific resources are created in US East (N. Virginia).

**Topics**
+ [Setting up a new cross-account subscription](Cross-Account-Log_Subscription-New.md)
+ [Updating an existing cross-account subscription](Cross-Account-Log_Subscription-Update.md)

# Setting up a new cross-account subscription
<a name="Cross-Account-Log_Subscription-New"></a>

Follow the steps in these sections to set up a new cross-account log subscription.

**Topics**
+ [Step 1: Create a destination](CreateDestination.md)
+ [Step 2: (Only if using an organization) Create an IAM role](CreateSubscriptionFilter-IAMrole.md)
+ [Step 3: Add/validate IAM permissions for the cross-account destination](Subscription-Filter-CrossAccount-Permissions.md)
+ [Step 4: Create a subscription filter](CreateSubscriptionFilter.md)
+ [Validate the flow of log events](ValidateLogEventFlow.md)
+ [Modify destination membership at runtime](ModifyDestinationMembership.md)

# Step 1: Create a destination
<a name="CreateDestination"></a>

**Important**  
All steps in this procedure are to be done in the log data recipient account.

For this example, the log data recipient account has an Amazon account ID of 999999999999, while the log data sender Amazon account ID is 111111111111.

This example creates a destination using an Amazon Kinesis Data Streams stream called RecipientStream, and a role that enables CloudWatch Logs to write data to it.

When the destination is created, CloudWatch Logs sends a test message to the destination on the recipient account’s behalf. When the subscription filter is active later, CloudWatch Logs sends log events to the destination on the source account’s behalf.

**To create a destination**

1. In the recipient account, create a destination stream in Amazon Kinesis Data Streams. At a command prompt, type:

   ```
   aws kinesis create-stream --stream-name "RecipientStream" --shard-count 1
   ```

1. Wait until the stream becomes active. You can use the **aws kinesis describe-stream** command to check the **StreamDescription.StreamStatus** property. In addition, take note of the **StreamDescription.StreamARN** value because you will pass it to CloudWatch Logs later:

   ```
   aws kinesis describe-stream --stream-name "RecipientStream"
   {
     "StreamDescription": {
       "StreamStatus": "ACTIVE",
       "StreamName": "RecipientStream",
       "StreamARN": "arn:aws:kinesis:us-east-1:999999999999:stream/RecipientStream",
       "Shards": [
         {
           "ShardId": "shardId-000000000000",
           "HashKeyRange": {
             "EndingHashKey": "34028236692093846346337460743176EXAMPLE",
             "StartingHashKey": "0"
           },
           "SequenceNumberRange": {
             "StartingSequenceNumber": "4955113521868881845667950383198145878459135270218EXAMPLE"
           }
         }
       ]
     }
   }
   ```

   It might take a minute or two for your stream to show up in the active state.
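   If you prefer to script the wait, the polling logic looks like the following. Here `describe_status` is a hypothetical stub standing in for the real `aws kinesis describe-stream --stream-name "RecipientStream" --query 'StreamDescription.StreamStatus' --output text` call, so the loop is runnable on its own:

   ```
   describe_status() { echo "ACTIVE"; }   # stub for the real describe-stream call
   until [ "$(describe_status)" = "ACTIVE" ]; do
     sleep 5
   done
   echo "stream is ACTIVE"
   ```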

1. Create the IAM role that grants CloudWatch Logs the permission to put data into your stream. First, you'll need to create a trust policy in a file **~/TrustPolicyForCWL.json**. Use a text editor to create this policy file; do not use the IAM console.

   This policy includes an `aws:SourceArn` global condition context key that specifies the `sourceAccountId` to help prevent the confused deputy security problem. If you don't yet know the source account ID in the first call, we recommend that you put the destination ARN in the source ARN field. In the subsequent calls, set the source ARN to the actual source ARN that you gathered from the first call. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
       "Statement": {
           "Effect": "Allow",
           "Principal": {
               "Service": "logs.amazonaws.com"
           },
           "Condition": {
               "StringLike": {
                   "aws:SourceArn": [
                       "arn:aws:logs:region:sourceAccountId:*",
                       "arn:aws:logs:region:recipientAccountId:*"
                   ]
               }
           },
           "Action": "sts:AssumeRole"
       }
   }
   ```

1. Use the **aws iam create-role** command to create the IAM role, specifying the trust policy file. Take note of the returned Role.Arn value because it will also be passed to CloudWatch Logs later:

   ```
   aws iam create-role \
   --role-name CWLtoKinesisRole \
   --assume-role-policy-document file://~/TrustPolicyForCWL.json
   
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Condition": {
                       "StringLike": {
                           "aws:SourceArn": [
                               "arn:aws:logs:region:sourceAccountId:*",
                               "arn:aws:logs:region:recipientAccountId:*"
                           ]
                       }
                   },
                   "Principal": {
                       "Service": "logs.amazonaws.com"
                   }
               }
           },
           "RoleId": "AAOIIAH450GAB4HC5F431",
           "CreateDate": "2015-05-29T13:46:29.431Z",
           "RoleName": "CWLtoKinesisRole",
           "Path": "/",
           "Arn": "arn:aws:iam::999999999999:role/CWLtoKinesisRole"
       }
   }
   ```

1. Create a permissions policy to define which actions CloudWatch Logs can perform on your account. First, use a text editor to create a permissions policy in a file **~/PermissionsForCWL.json**:

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": "kinesis:PutRecord",
         "Resource": "arn:aws:kinesis:region:999999999999:stream/RecipientStream"
       }
     ]
   }
   ```

1. Associate the permissions policy with the role by using the **aws iam put-role-policy** command:

   ```
   aws iam put-role-policy \
       --role-name CWLtoKinesisRole \
       --policy-name Permissions-Policy-For-CWL \
       --policy-document file://~/PermissionsForCWL.json
   ```

1. After the stream is in the active state and you have created the IAM role, you can create the CloudWatch Logs destination.

   1. This step creates the destination but doesn't yet associate an access policy with it; destination creation is completed in the steps that follow. Make a note of the **DestinationArn** that is returned in the payload:

      ```
      aws logs put-destination \
          --destination-name "testDestination" \
          --target-arn "arn:aws:kinesis:region:999999999999:stream/RecipientStream" \
          --role-arn "arn:aws:iam::999999999999:role/CWLtoKinesisRole"
      
      {
        "DestinationName" : "testDestination",
        "RoleArn" : "arn:aws:iam::999999999999:role/CWLtoKinesisRole",
        "DestinationArn" : "arn:aws:logs:us-east-1:999999999999:destination:testDestination",
        "TargetArn" : "arn:aws:kinesis:us-east-1:999999999999:stream/RecipientStream"
      }
      ```

   1. After step 7a is complete, in the log data recipient account, associate an access policy with the destination. This policy must specify the **logs:PutSubscriptionFilter** action and grant the sender account permission to access the destination.

      The policy grants permission to the Amazon account that sends logs. You can specify just this one account in the policy, or if the sender account is a member of an organization, the policy can specify the organization ID of the organization. This way, you can create just one policy to allow multiple accounts in one organization to send logs to this destination account.

      Use a text editor to create a file named `~/AccessPolicy.json` with one of the following policy statements.

      This first example policy allows all accounts in the organization that have an ID of `o-1234567890` to send logs to the recipient account.

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Sid": "",
                  "Effect": "Allow",
                  "Principal": "*",
                  "Action": "logs:PutSubscriptionFilter",
                  "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination",
                  "Condition": {
                      "StringEquals": {
                          "aws:PrincipalOrgID": [
                              "o-1234567890"
                          ]
                      }
                  }
              }
          ]
      }
      ```


      This next example allows just the log data sender account (111111111111) to send logs to the log data recipient account.

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Sid": "",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "111111111111"
                  },
                  "Action": "logs:PutSubscriptionFilter",
                  "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination"
              }
          ]
      }
      ```


   1. Attach the policy you created in the previous step to the destination.

      ```
      aws logs put-destination-policy \
          --destination-name "testDestination" \
          --access-policy file://~/AccessPolicy.json
      ```

      This access policy enables users in the Amazon Account with ID 111111111111 to call **PutSubscriptionFilter** against the destination with ARN arn:aws:logs:*region*:999999999999:destination:testDestination. Any other user's attempt to call PutSubscriptionFilter against this destination will be rejected.

      To validate a user's privileges against an access policy, see [Using Policy Validator](https://docs.amazonaws.cn/IAM/latest/UserGuide/policies_policy-validator.html) in the *IAM User Guide*.

When you have finished, if you're using Amazon Organizations for your cross-account permissions, follow the steps in [Step 2: (Only if using an organization) Create an IAM role](CreateSubscriptionFilter-IAMrole.md). If you're granting permissions directly to the other account instead of using Organizations, you can skip that step and proceed to [Step 4: Create a subscription filter](CreateSubscriptionFilter.md).

# Step 2: (Only if using an organization) Create an IAM role
<a name="CreateSubscriptionFilter-IAMrole"></a>

In the previous section, if you created the destination by using an access policy that grants permissions to the organization that account `111111111111` is in, instead of granting permissions directly to account `111111111111`, then follow the steps in this section. Otherwise, you can skip to [Step 4: Create a subscription filter](CreateSubscriptionFilter.md).

The steps in this section create an IAM role that CloudWatch Logs can assume to validate whether the sender account has permission to create a subscription filter against the recipient destination.

Perform the steps in this section in the sender account. The role must exist in the sender account, and you specify the ARN of this role in the subscription filter. In this example, the sender account is `111111111111`.

**To create the IAM role necessary for cross-account log subscriptions using Amazon Organizations**

1. Create the following trust policy in a file `~/TrustPolicyForCWLSubscriptionFilter.json`. Use a text editor to create this policy file; do not use the IAM console.

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole"
     }
   }
   ```

1. Create the IAM role that uses this policy. Take note of the `Arn` value that is returned by the command; you will need it later in this procedure. In this example, we use `CWLtoSubscriptionFilterRole` for the name of the role we're creating.

   ```
   aws iam create-role \
        --role-name CWLtoSubscriptionFilterRole \
        --assume-role-policy-document file://~/TrustPolicyForCWLSubscriptionFilter.json
   ```

1. Create a permissions policy to define the actions that CloudWatch Logs can perform on your account.

   1. First, use a text editor to create the following permissions policy in a file named `~/PermissionsForCWLSubscriptionFilter.json`.

      ```
      { 
          "Statement": [ 
              { 
                  "Effect": "Allow", 
                  "Action": "logs:PutLogEvents", 
                  "Resource": "arn:aws:logs:region:111111111111:log-group:LogGroupOnWhichSubscriptionFilterIsCreated:*" 
              } 
          ] 
      }
      ```

   1. Enter the following command to associate the permissions policy you just created with the role that you created in step 2.

      ```
      aws iam put-role-policy \
          --role-name CWLtoSubscriptionFilterRole \
          --policy-name Permissions-Policy-For-CWL-Subscription-filter \
          --policy-document file://~/PermissionsForCWLSubscriptionFilter.json
      ```

When you have finished, you can proceed to [Step 4: Create a subscription filter](CreateSubscriptionFilter.md).

# Step 3: Add/validate IAM permissions for the cross-account destination
<a name="Subscription-Filter-CrossAccount-Permissions"></a>

According to Amazon cross-account policy evaluation logic, to access any cross-account resource (such as a Kinesis or Firehose stream used as a destination for a subscription filter), you must have an identity-based policy in the sending account that provides explicit access to the cross-account destination resource. For more information about policy evaluation logic, see [Cross-account policy evaluation logic](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_evaluation-logic-cross-account.html).

You can attach the identity-based policy to the IAM role or IAM user that you are using to create the subscription filter. This policy must be present in the sending account. If you are using the Administrator role to create the subscription filter, you can skip this step and move on to [Step 4: Create a subscription filter](CreateSubscriptionFilter.md).

**To add or validate the IAM permissions needed for cross-account**

1. Enter the following command to check which IAM role or IAM user is being used to run CloudWatch Logs commands.

   ```
   aws sts get-caller-identity
   ```

   The command returns output similar to the following:

   ```
   {
       "UserId": "User ID",
       "Account": "sending account id",
       "Arn": "arn:aws:iam::sending account id:role/RoleName (or user/UserName)"
   }
   ```

   Make note of the value represented by *RoleName* or *UserName*.

1. Sign in to the Amazon Web Services Management Console in the sending account and find the policies attached to the IAM role or IAM user returned in the output of the command you entered in step 1.

1. Verify that the policies attached to this role or user provide explicit permissions to call `logs:PutSubscriptionFilter` on the cross-account destination resource. 

   The following policy provides permissions to create a subscription filter on any destination resource only in a single Amazon account, account `123456789012`:

   ```
   {
        "Version":"2012-10-17",		 	 	 
        "Statement": [
           {
               "Sid": "AllowSubscriptionFiltersOnAccountResources",
               "Effect": "Allow",
               "Action": "logs:PutSubscriptionFilter",
               "Resource": [
                   "arn:aws-cn:logs:*:*:log-group:*",
                   "arn:aws-cn:logs:*:123456789012:destination:*"
               ]
           }
       ]
   }
   ```


   The following policy provides permissions to create a subscription filter only on a specific destination resource named `sampleDestination` in a single Amazon account, account `123456789012`:


   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "AllowSubscriptionFiltersonAccountResource",
               "Effect": "Allow",
               "Action": "logs:PutSubscriptionFilter",
               "Resource": [
                   "arn:aws-cn:logs:*:*:log-group:*",
                   "arn:aws-cn:logs:*:123456789012:destination:sampleDestination"
               ]
           }
       ]
   }
   ```


# Step 4: Create a subscription filter
<a name="CreateSubscriptionFilter"></a>

After you create a destination, the log data recipient account can share the destination ARN (arn:aws:logs:us-east-1:999999999999:destination:testDestination) with other Amazon accounts so that they can send log events to the same destination. Users in those sending accounts then create a subscription filter on their respective log groups against this destination. The subscription filter immediately starts the flow of real-time log data from the chosen log group to the specified destination.

**Note**  
If you are granting permissions for the subscription filter to an entire organization, you will need to use the ARN of the IAM role that you created in [Step 2: (Only if using an organization) Create an IAM role](CreateSubscriptionFilter-IAMrole.md).

In the following example, a subscription filter is created in a sending account. The filter is associated with a log group containing Amazon CloudTrail events so that every logged activity made by "Root" Amazon credentials is delivered to the destination you previously created. That destination encapsulates a stream called "RecipientStream".

The rest of the steps in the following sections assume that you have followed the directions in [Sending CloudTrail Events to CloudWatch Logs](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html) in the *Amazon CloudTrail User Guide* and created a log group that contains your CloudTrail events. These steps assume that the name of this log group is `CloudTrail/logs`.

When you enter the following command, be sure you are signed in as the IAM user or using the IAM role that you added the policy for, in [Step 3: Add/validate IAM permissions for the cross-account destination](Subscription-Filter-CrossAccount-Permissions.md).

```
aws logs put-subscription-filter \
    --log-group-name "CloudTrail/logs" \
    --filter-name "RecipientStream" \
    --filter-pattern "{$.userIdentity.type = Root}" \
    --destination-arn "arn:aws:logs:region:999999999999:destination:testDestination"
```

The log group and the destination must be in the same Amazon Region. However, the destination can point to an Amazon resource such as a Amazon Kinesis Data Streams stream that is located in a different Region.

# Validate the flow of log events
<a name="ValidateLogEventFlow"></a>

After you create the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to the stream called "**RecipientStream**" that is encapsulated within the destination. The destination owner can verify that this is happening by using the **aws kinesis get-shard-iterator** command to grab an Amazon Kinesis Data Streams shard iterator, and using the **aws kinesis get-records** command to fetch some Amazon Kinesis Data Streams records:

```
aws kinesis get-shard-iterator \
      --stream-name RecipientStream \
      --shard-id shardId-000000000000 \
      --shard-iterator-type TRIM_HORIZON

{
    "ShardIterator":
    "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiKEXAMPLE"
}

aws kinesis get-records \
      --limit 10 \
      --shard-iterator \
      "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiKEXAMPLE"
```

**Note**  
You might need to rerun the get-records command a few times before Amazon Kinesis Data Streams starts to return data.

You should see a response with an array of Amazon Kinesis Data Streams records. The data attribute in the Amazon Kinesis Data Streams record is compressed in gzip format and then base64 encoded. You can examine the raw data from the command line using the following Unix command:

```
echo -n "<Content of Data>" | base64 -d | zcat
```
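To see the whole pipeline end to end, you can round-trip a sample payload locally: compress and encode it the same way CloudWatch Logs does, then decode it with the command above. The payload below is a made-up stand-in for a real record's Data attribute:

```
payload='{"messageType":"DATA_MESSAGE","owner":"111111111111"}'
encoded=$(printf '%s' "$payload" | gzip -c | base64 | tr -d '\n')
echo -n "$encoded" | base64 -d | zcat
```

The final command prints the original JSON payload.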

The base64 decoded and decompressed data is formatted as JSON with the following structure:

```
{
    "owner": "111111111111",
    "logGroup": "CloudTrail/logs",
    "logStream": "111111111111_CloudTrail/logs_us-east-1",
    "subscriptionFilters": [
        "RecipientStream"
    ],
    "messageType": "DATA_MESSAGE",
    "logEvents": [
        {
            "id": "3195310660696698337880902507980421114328961542429EXAMPLE",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        },
        {
            "id": "3195310660696698337880902507980421114328961542429EXAMPLE",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        },
        {
            "id": "3195310660696698337880902507980421114328961542429EXAMPLE",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        }
    ]
}
```

The key elements in this data structure are as follows:

**owner**  
The Amazon Account ID of the originating log data.

**logGroup**  
The log group name of the originating log data.

**logStream**  
The log stream name of the originating log data.

**subscriptionFilters**  
The list of subscription filter names that matched with the originating log data.

**messageType**  
Data messages use the "DATA_MESSAGE" type. Sometimes CloudWatch Logs may emit Amazon Kinesis Data Streams records with a "CONTROL_MESSAGE" type, mainly for checking if the destination is reachable.

**logEvents**  
The actual log data, represented as an array of log event records. The `id` property is a unique identifier for every log event.
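A consumer of these records typically checks `messageType` first and processes only data messages. A minimal dispatch sketch (the function name and sample inputs are illustrative, not part of any AWS API):

```shell
# Skip CONTROL_MESSAGE records; they carry no log data.
handle_record() {
  case "$1" in
    *'"messageType": "CONTROL_MESSAGE"'*) echo "skipped" ;;
    *'"messageType": "DATA_MESSAGE"'*)    echo "processed" ;;
    *)                                    echo "unknown" ;;
  esac
}

handle_record '{"messageType": "CONTROL_MESSAGE", "logEvents": []}'
handle_record '{"messageType": "DATA_MESSAGE", "logEvents": []}'
```

A real consumer would parse the JSON properly rather than substring-match, but the control/data distinction is the part that matters.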

# Modify destination membership at runtime
<a name="ModifyDestinationMembership"></a>

You might encounter situations where you have to add or remove log senders from a destination that you own. You can use the `put-destination-policy` command on your destination with a new access policy. In the following example, a previously added account, **111111111111**, is stopped from sending any more log data, and account **222222222222** is enabled.

1. Fetch the policy that is currently associated with the destination **testDestination** and make a note of the **AccessPolicy**:

   ```
   aws logs describe-destinations \
       --destination-name-prefix "testDestination"
   
   {
    "Destinations": [
      {
        "DestinationName": "testDestination",
        "RoleArn": "arn:aws:iam::999999999999:role/CWLtoKinesisRole",
        "DestinationArn": "arn:aws:logs:region:999999999999:destination:testDestination",
        "TargetArn": "arn:aws:kinesis:region:999999999999:stream/RecipientStream",
        "AccessPolicy": "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": {\"AWS\": \"111111111111\"}, \"Action\": \"logs:PutSubscriptionFilter\", \"Resource\": \"arn:aws:logs:region:999999999999:destination:testDestination\"}] }"
      }
    ]
   }
   ```

1. Update the policy to reflect that account **111111111111** is stopped, and that account **222222222222** is enabled. Put this policy in the **~/NewAccessPolicy.json** file:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "222222222222"
               },
               "Action": "logs:PutSubscriptionFilter",
               "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination"
           }
       ]
   }
   ```

------

1. Call **PutDestinationPolicy** to associate the policy defined in the **NewAccessPolicy.json** file with the destination:

   ```
   aws logs put-destination-policy \
   --destination-name "testDestination" \
   --access-policy file://~/NewAccessPolicy.json
   ```

   This will eventually disable the log events from account ID **111111111111**. Log events from account ID **222222222222** start flowing to the destination as soon as the owner of account **222222222222** creates a subscription filter.

# Updating an existing cross-account subscription
<a name="Cross-Account-Log_Subscription-Update"></a>

If you currently have a cross-account logs subscription where the destination account grants permissions only to specific sender accounts, and you want to update this subscription so that the destination account grants access to all accounts in an organization, follow the steps in this section.

**Topics**
+ [Step 1: Update the subscription filters](Cross-Account-Log_Subscription-Update-filter.md)
+ [Step 2: Update the existing destination access policy](Cross-Account-Log_Subscription-Update-policy.md)

# Step 1: Update the subscription filters
<a name="Cross-Account-Log_Subscription-Update-filter"></a>

**Note**  
This step is needed only for cross-account subscriptions for logs that are created by the services listed in [Enable logging from Amazon services](AWS-logs-and-resource-policy.md). If you are not working with logs created by one of these log groups, you can skip to [Step 2: Update the existing destination access policy](Cross-Account-Log_Subscription-Update-policy.md).

In certain cases, you must update the subscription filters in all the sender accounts that are sending logs to the destination account. The update adds an IAM role, which CloudWatch Logs assumes to validate that the sender account has permission to send logs to the recipient account.

Follow the steps in this section for every sender account that you want to update to use an organization ID for the cross-account subscription permissions.

In the examples in this section, two accounts, `111111111111` and `222222222222`, already have subscription filters created to send logs to account `999999999999`. The existing subscription filter values are as follows:

```
## Existing Subscription Filter parameter values
    \ --log-group-name "my-log-group-name" 
    \ --filter-name "RecipientStream" 
    \ --filter-pattern "{$.userIdentity.type = Root}" 
    \ --destination-arn "arn:aws:logs:region:999999999999:destination:testDestination"
```

If you need to find the current subscription filter parameter values, enter the following command.

```
aws logs describe-subscription-filters \
    --log-group-name "my-log-group-name"
```

**To update a subscription filter to start using organization IDs for cross-account log permissions**

1. Create the following trust policy in a file `~/TrustPolicyForCWL.json`. Use a text editor to create this policy file; do not use the IAM console.

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole"
     }
   }
   ```

1. Create the IAM role that uses this policy. Take note of the `Arn` value that is returned by the command; you will need it later in this procedure. In this example, we use `CWLtoSubscriptionFilterRole` for the name of the role we're creating.

   ```
   aws iam create-role \
       --role-name CWLtoSubscriptionFilterRole \
       --assume-role-policy-document file://~/TrustPolicyForCWL.json
   ```

1. Create a permissions policy to define the actions that CloudWatch Logs can perform on your account.

   1. First, use a text editor to create the following permissions policy in a file named `~/PermissionsForCWLSubscriptionFilter.json`.

      ```
      { 
          "Statement": [ 
              { 
                  "Effect": "Allow", 
                  "Action": "logs:PutLogEvents", 
                  "Resource": "arn:aws:logs:region:111111111111:log-group:LogGroupOnWhichSubscriptionFilterIsCreated:*" 
              } 
          ] 
      }
      ```

   1. Enter the following command to associate the permissions policy you just created with the role that you created in step 2.

      ```
      aws iam put-role-policy \
          --role-name CWLtoSubscriptionFilterRole \
          --policy-name Permissions-Policy-For-CWL-Subscription-filter \
          --policy-document file://~/PermissionsForCWLSubscriptionFilter.json
      ```

1. Enter the following command to update the subscription filter.

   ```
   aws logs put-subscription-filter \
       --log-group-name "my-log-group-name" \
       --filter-name "RecipientStream" \
       --filter-pattern "{$.userIdentity.type = Root}" \
       --destination-arn "arn:aws:logs:region:999999999999:destination:testDestination" \
       --role-arn "arn:aws:iam::111111111111:role/CWLtoSubscriptionFilterRole"
   ```
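The policy files in the steps above can also be written from the command line with a heredoc, and it's worth validating the JSON before passing the file to any `aws iam` call. A sketch, using `/tmp` paths so it doesn't touch your home directory (the path is an assumption for illustration):

```shell
# Write the trust policy without opening a text editor.
cat > /tmp/TrustPolicyForCWL.json <<'EOF'
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}
EOF

# Fail fast on malformed JSON before running aws iam create-role.
python3 -m json.tool /tmp/TrustPolicyForCWL.json > /dev/null && echo "policy file is valid JSON"
```

A syntax error caught here is much easier to diagnose than a `MalformedPolicyDocument` error from the IAM API.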

# Step 2: Update the existing destination access policy
<a name="Cross-Account-Log_Subscription-Update-policy"></a>

After you have updated the subscription filters in all of the sender accounts, you can update the destination access policy in the recipient account.

In the following examples, the recipient account is `999999999999` and the destination is named `testDestination`.

The update enables all accounts that are part of the organization with ID `o-1234567890` to send logs to the recipient account. Only the accounts that have subscription filters created will actually send logs to the recipient account.

**To update the destination access policy in the recipient account to start using an organization ID for permissions**

1. In the recipient account, use a text editor to create a `~/AccessPolicy.json` file with the following contents.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": "*",
               "Action": "logs:PutSubscriptionFilter",
               "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination",
               "Condition": {
                   "StringEquals": {
                       "aws:PrincipalOrgID": [
                           "o-1234567890"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

1. Enter the following command to attach the policy that you just created to the existing destination. To update a destination to use an access policy with an organization ID instead of an access policy that lists specific Amazon account IDs, include the `force` parameter.
**Warning**  
If you are working with logs sent by an Amazon service listed in [Enable logging from Amazon services](AWS-logs-and-resource-policy.md), then before doing this step, you must have first updated the subscription filters in all the sender accounts as explained in [Step 1: Update the subscription filters](Cross-Account-Log_Subscription-Update-filter.md).

   ```
   aws logs put-destination-policy \
       --destination-name "testDestination" \
       --access-policy file://~/AccessPolicy.json \
       --force
   ```
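Because `--force` replaces the existing access policy outright, a quick local check that the new file really carries the organization ID condition can prevent accidentally cutting off all senders. A sketch (the `/tmp` path and policy contents mirror the example above and are illustrative):

```shell
# Write the org-based access policy to a scratch location.
cat > /tmp/AccessPolicy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "logs:PutSubscriptionFilter",
            "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination",
            "Condition": {
                "StringEquals": { "aws:PrincipalOrgID": ["o-1234567890"] }
            }
        }
    ]
}
EOF

# Abort before calling put-destination-policy if the condition is missing.
grep -q '"aws:PrincipalOrgID"' /tmp/AccessPolicy.json && echo "org condition present"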

# Cross-account cross-Region log data sharing using Firehose
<a name="CrossAccountSubscriptions-Firehose"></a>

To share log data across accounts, you need to establish a log data sender and receiver:
+ **Log data sender**—gets the destination information from the recipient and lets CloudWatch Logs know that it is ready to send its log events to the specified destination. In the procedures in the rest of this section, the log data sender is shown with a fictional Amazon account number of 111111111111.
+ **Log data recipient**—sets up a destination that encapsulates a Firehose delivery stream and lets CloudWatch Logs know that the recipient wants to receive log data. The recipient then shares the information about this destination with the sender. In the procedures in the rest of this section, the log data recipient is shown with a fictional Amazon account number of 222222222222.

The example in this section uses a Firehose delivery stream with Amazon S3 storage. You can also set up Firehose delivery streams with different settings. For more information, see [Creating a Firehose Delivery Stream](https://docs.amazonaws.cn/firehose/latest/dev/basic-create.html).

**Note**  
The log group and the destination must be in the same Amazon Region. However, the Amazon resource that the destination points to can be located in a different Region.

**Note**  
A Firehose subscription filter with a ***same-account*** and ***cross-Region*** delivery stream is supported.

**Topics**
+ [Step 1: Create a Firehose delivery stream](CreateFirehoseStream.md)
+ [Step 2: Create a destination](CreateFirehoseStreamDestination.md)
+ [Step 3: Add/validate IAM permissions for the cross-account destination](Subscription-Filter-CrossAccount-Permissions-Firehose.md)
+ [Step 4: Create a subscription filter](CreateSubscriptionFilterFirehose.md)
+ [Validating the flow of log events](ValidateLogEventFlowFirehose.md)
+ [Modifying destination membership at runtime](ModifyDestinationMembershipFirehose.md)

# Step 1: Create a Firehose delivery stream
<a name="CreateFirehoseStream"></a>

**Important**  
Before you complete the following steps, you must create an access policy so that Firehose can access your Amazon S3 bucket. For more information, see [Controlling Access](https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3) in the *Amazon Data Firehose Developer Guide*.  
All of the steps in this section (Step 1) must be done in the log data recipient account.  
US East (N. Virginia) is used in the following sample commands. Replace this Region with the correct Region for your deployment.

**To create a Firehose delivery stream to be used as the destination**

1. Create an Amazon S3 bucket:

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-bucket --create-bucket-configuration LocationConstraint=us-east-1
   ```

1. Create the IAM role that grants Firehose permission to put data into the bucket.

   1. First, use a text editor to create a trust policy in a file `~/TrustPolicyForFirehose.json`.

      ```
      { "Statement": { "Effect": "Allow", "Principal": { "Service": "firehose.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "sts:ExternalId":"222222222222" } } } }
      ```

   1. Create the IAM role, specifying the trust policy file that you just made.

      ```
      aws iam create-role \
          --role-name FirehosetoS3Role \
          --assume-role-policy-document file://~/TrustPolicyForFirehose.json
      ```

   1. The output of this command will look similar to the following. Make a note of the role name and the role ARN.

      ```
      {
          "Role": {
              "Path": "/",
              "RoleName": "FirehosetoS3Role",
              "RoleId": "AROAR3BXASEKW7K635M53",
              "Arn": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
              "CreateDate": "2021-02-02T07:53:10+00:00",
              "AssumeRolePolicyDocument": {
                  "Statement": {
                      "Effect": "Allow",
                      "Principal": {
                          "Service": "firehose.amazonaws.com"
                      },
                      "Action": "sts:AssumeRole",
                      "Condition": {
                          "StringEquals": {
                              "sts:ExternalId": "222222222222"
                          }
                      }
                  }
              }
          }
      }
      ```

1. Create a permissions policy to define the actions that Firehose can perform in your account.

   1. First, use a text editor to create the following permissions policy in a file named `~/PermissionsForFirehose.json`. Depending on your use case, you might need to add more permissions to this file.

      ```
      {
          "Statement": [{
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject",
                  "s3:PutObjectAcl",
                  "s3:ListBucket"
              ],
              "Resource": [
                  "arn:aws:s3:::amzn-s3-demo-bucket",
                  "arn:aws:s3:::amzn-s3-demo-bucket/*"
              ]
          }]
      }
      ```

   1. Enter the following command to associate the permissions policy that you just created with the IAM role.

      ```
      aws iam put-role-policy --role-name FirehosetoS3Role --policy-name Permissions-Policy-For-Firehose-To-S3 --policy-document file://~/PermissionsForFirehose.json
      ```

1. Enter the following command to create the Firehose delivery stream. Replace the role ARN and bucket ARN with the correct values for your deployment.

   ```
   aws firehose create-delivery-stream \
      --delivery-stream-name 'my-delivery-stream' \
      --s3-destination-configuration \
     '{"RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role", "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket"}'
   ```

   The output should look similar to the following:

   ```
   {
       "DeliveryStreamARN": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream"
   }
   ```

# Step 2: Create a destination
<a name="CreateFirehoseStreamDestination"></a>

**Important**  
All steps in this procedure are to be done in the log data recipient account.

When the destination is created, CloudWatch Logs sends a test message to the destination on the recipient account’s behalf. When the subscription filter is active later, CloudWatch Logs sends log events to the destination on the source account’s behalf.

**To create a destination**

1. Wait until the Firehose stream that you created in [Step 1: Create a Firehose delivery stream](CreateFirehoseStream.md) becomes active. You can use the following command to check the **DeliveryStreamDescription.DeliveryStreamStatus** property.

   ```
   aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream"
   ```

   In addition, take note of the **DeliveryStreamDescription.DeliveryStreamARN** value, because you will need to use it in a later step. Sample output of this command:

   ```
   {
       "DeliveryStreamDescription": {
           "DeliveryStreamName": "my-delivery-stream",
           "DeliveryStreamARN": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
           "DeliveryStreamStatus": "ACTIVE",
           "DeliveryStreamEncryptionConfiguration": {
               "Status": "DISABLED"
           },
           "DeliveryStreamType": "DirectPut",
           "VersionId": "1",
           "CreateTimestamp": "2021-02-01T23:59:15.567000-08:00",
           "Destinations": [
               {
                   "DestinationId": "destinationId-000000000001",
                   "S3DestinationDescription": {
                       "RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                       "BufferingHints": {
                           "SizeInMBs": 5,
                           "IntervalInSeconds": 300
                       },
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                       "CloudWatchLoggingOptions": {
                           "Enabled": false
                       }
                   },
                   "ExtendedS3DestinationDescription": {
                       "RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                       "BufferingHints": {
                           "SizeInMBs": 5,
                           "IntervalInSeconds": 300
                       },
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                       "CloudWatchLoggingOptions": {
                           "Enabled": false
                       },
                       "S3BackupMode": "Disabled"
                   }
               }
           ],
           "HasMoreDestinations": false
       }
   }
   ```

   It might take a minute or two for your delivery stream to show up in the active state.

1. When the delivery stream is active, create the IAM role that will grant CloudWatch Logs the permission to put data into your Firehose stream. First, you'll need to create a trust policy in a file **~/TrustPolicyForCWL.json**. Use a text editor to create this policy. For more information about CloudWatch Logs endpoints, see [Amazon CloudWatch Logs endpoints and quotas](https://docs.amazonaws.cn/general/latest/gr/cwl_region.html).

   This policy includes an `aws:SourceArn` global condition context key that specifies the `sourceAccountId` to help prevent the confused deputy security problem. If you don't yet know the source account ID in the first call, we recommend that you put the destination ARN in the source ARN field. In subsequent calls, set the source ARN to the actual source ARN that you gathered from the first call. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
       "Statement": {
           "Effect": "Allow",
           "Principal": {
               "Service": "logs.region.amazonaws.com"
           },
           "Action": "sts:AssumeRole",
           "Condition": {
               "StringLike": {
                   "aws:SourceArn": [
                       "arn:aws:logs:region:sourceAccountId:*",
                       "arn:aws:logs:region:recipientAccountId:*"
                   ]
               }
           }
        }
   }
   ```

1. Use the **aws iam create-role** command to create the IAM role, specifying the trust policy file that you just created. 

   ```
   aws iam create-role \
         --role-name CWLtoKinesisFirehoseRole \
         --assume-role-policy-document file://~/TrustPolicyForCWL.json
   ```

   The following is a sample output. Take note of the returned `Role.Arn` value, because you will need to use it in a later step.

   ```
   {
       "Role": {
           "Path": "/",
           "RoleName": "CWLtoKinesisFirehoseRole",
           "RoleId": "AROAR3BXASEKYJYWF243H",
           "Arn": "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole",
           "CreateDate": "2021-02-02T08:10:43+00:00",
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "logs.region.amazonaws.com"
                   },
                   "Action": "sts:AssumeRole",
                   "Condition": {
                       "StringLike": {
                           "aws:SourceArn": [
                               "arn:aws:logs:region:sourceAccountId:*",
                               "arn:aws:logs:region:recipientAccountId:*"
                           ]
                       }
                   }
               }
           }
       }
   }
   ```

1. Create a permissions policy to define which actions CloudWatch Logs can perform on your account. First, use a text editor to create a permissions policy in a file **~/PermissionsForCWL.json**:

   ```
   {
       "Statement":[
         {
           "Effect":"Allow",
           "Action":["firehose:*"],
           "Resource":["arn:aws:firehose:region:222222222222:*"]
         }
       ]
   }
   ```

1. Associate the permissions policy with the role by entering the following command:

   ```
   aws iam put-role-policy --role-name CWLtoKinesisFirehoseRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json
   ```

1. After the Firehose delivery stream is in the active state and you have created the IAM role, you can create the CloudWatch Logs destination.

   1. This step creates the destination but does not associate an access policy with it; associating the policy is the second of the two steps that complete destination creation. Make a note of the ARN of the new destination that is returned in the payload, because you will use this as the `destination.arn` in a later step.

      ```
      aws logs put-destination \
          --destination-name "testFirehoseDestination" \
          --target-arn "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream" \
          --role-arn "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole"
      
      {
          "destination": {
              "destinationName": "testFirehoseDestination",
              "targetArn": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
              "roleArn": "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole",
              "arn": "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination"}
      }
      ```

   1. After the previous step is complete, in the log data recipient account (222222222222), associate an access policy with the destination.

      This policy enables the log data sender account (111111111111) to access the destination in the log data recipient account (222222222222). You can use a text editor to put this policy in the **~/AccessPolicy.json** file:

------
#### [ JSON ]

****  

      ```
      {
        "Version":"2012-10-17",		 	 	 
        "Statement" : [
          {
            "Sid" : "",
            "Effect" : "Allow",
            "Principal" : {
              "AWS" : "111111111111"
            },
            "Action" : "logs:PutSubscriptionFilter",
            "Resource" : "arn:aws-cn:logs:us-east-1:222222222222:destination:testFirehoseDestination"
          }
        ]
      }
      ```

------

   1. Associate the access policy with the destination. The policy defines who has write access to the destination; it must specify the **logs:PutSubscriptionFilter** action. Cross-account users use the **PutSubscriptionFilter** action to send log events to the destination:

      ```
      aws logs put-destination-policy \
          --destination-name "testFirehoseDestination" \
          --access-policy file://~/AccessPolicy.json
      ```
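Step 1 of the procedure above polls manually for the `ACTIVE` status. A small retry helper makes that wait scriptable; the sketch below demonstrates the pattern with a stub command in place of the real **aws firehose describe-delivery-stream** call, and the commented-out real invocation uses the CLI's `--query`/`--output` options:

```shell
# Retry a command until its stdout matches the expected value, or give up.
wait_for_status() {
  expected="$1"; shift
  attempts=0
  while [ "$attempts" -lt 30 ]; do
    status=$("$@")
    if [ "$status" = "$expected" ]; then
      echo "ready"
      return 0
    fi
    attempts=$((attempts + 1))
    # sleep 10   # uncomment for real polling
  done
  echo "timed out"
  return 1
}

# Real use would look like:
#   wait_for_status "ACTIVE" aws firehose describe-delivery-stream \
#       --delivery-stream-name "my-delivery-stream" \
#       --query 'DeliveryStreamDescription.DeliveryStreamStatus' --output text
# Demo with a stub command standing in for the CLI call:
wait_for_status "ACTIVE" echo "ACTIVE"
```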

# Step 3: Add/validate IAM permissions for the cross-account destination
<a name="Subscription-Filter-CrossAccount-Permissions-Firehose"></a>

According to Amazon cross-account policy evaluation logic, to access any cross-account resource (such as a Kinesis or Firehose stream used as the destination for a subscription filter), you must have an identity-based policy in the sending account that provides explicit access to the cross-account destination resource. For more information about policy evaluation logic, see [Cross-account policy evaluation logic](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_evaluation-logic-cross-account.html).

You can attach the identity-based policy to the IAM role or IAM user that you are using to create the subscription filter. This policy must be present in the sending account. If you are using the Administrator role to create the subscription filter, you can skip this step and move on to [Step 4: Create a subscription filter](CreateSubscriptionFilterFirehose.md).

**To add or validate the IAM permissions needed for cross-account**

1. Enter the following command to check which IAM role or IAM user is being used to run `aws logs` commands.

   ```
   aws sts get-caller-identity
   ```

   The command returns output similar to the following:

   ```
   {
   "UserId": "User ID",
   "Account": "sending account id",
   "Arn": "arn:aws:sending account id:role/user:RoleName/UserName"
   }
   ```

   Make note of the value represented by *RoleName* or *UserName*.

1. Sign in to the Amazon Web Services Management Console in the sending account and search for the policies attached to the IAM role or IAM user returned in the output of the command you entered in step 1.

1. Verify that the policies attached to this role or user provide explicit permissions to call `logs:PutSubscriptionFilter` on the cross-account destination resource.

   The following policy provides permissions to create a subscription filter on any destination resource in a single Amazon account, account `123456789012`:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "AllowSubscriptionFiltersOnAnyResourceInOneSpecificAccount",
               "Effect": "Allow",
               "Action": "logs:PutSubscriptionFilter",
               "Resource": [
                   "arn:aws-cn:logs:*:*:log-group:*",
                   "arn:aws-cn:logs:*:123456789012:destination:*"
               ]
           }
       ]
   }
   ```

------

   The following policy provides permissions to create a subscription filter only on a specific destination resource named `sampleDestination` in a single Amazon account, account `123456789012`:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
           "Sid": "AllowSubscriptionFiltersOnSpecificResource",
               "Effect": "Allow",
               "Action": "logs:PutSubscriptionFilter",
               "Resource": [
                   "arn:aws-cn:logs:*:*:log-group:*",
                   "arn:aws-cn:logs:*:123456789012:destination:amzn-s3-demo-bucket"
               ]
           }
       ]
   }
   ```

------

# Step 4: Create a subscription filter
<a name="CreateSubscriptionFilterFirehose"></a>

Switch to the sending account, which is 111111111111 in this example. You will now create the subscription filter in the sending account. In this example, the filter is associated with a log group containing Amazon CloudTrail events so that every logged activity made with "AssumedRole" Amazon credentials is delivered to the destination you previously created. For more information about how to send Amazon CloudTrail events to CloudWatch Logs, see [Sending CloudTrail Events to CloudWatch Logs](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/send-cloudtrail-events-to-cloudwatch-logs.html) in the *Amazon CloudTrail User Guide*.

When you enter the following command, be sure you are signed in as the IAM user or using the IAM role that you added the policy for, in [Step 3: Add/validate IAM permissions for the cross-account destination](Subscription-Filter-CrossAccount-Permissions-Firehose.md).

```
aws logs put-subscription-filter \
    --log-group-name "aws-cloudtrail-logs-111111111111-300a971e" \                   
    --filter-name "firehose_test" \
    --filter-pattern "{$.userIdentity.type = AssumedRole}" \
    --destination-arn "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination"
```

The log group and the destination must be in the same Amazon Region. However, the destination can point to an Amazon resource such as a Firehose stream that is located in a different Region.
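To get a rough sense of which CloudTrail events the pattern `{$.userIdentity.type = AssumedRole}` selects, you can approximate the match locally. The real evaluation is done by CloudWatch Logs, so this substring check is only an illustration (the function name and sample events are assumptions):

```shell
# Crude stand-in for the subscription filter pattern evaluation.
matches_pattern() {
  case "$1" in
    *'"type":"AssumedRole"'*) echo "match" ;;
    *) echo "no match" ;;
  esac
}

matches_pattern '{"userIdentity":{"type":"AssumedRole"}}'
matches_pattern '{"userIdentity":{"type":"Root"}}'
```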

# Validating the flow of log events
<a name="ValidateLogEventFlowFirehose"></a>

After you create the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to the Firehose delivery stream. The data starts appearing in your Amazon S3 bucket based on the time buffer interval that is set on the Firehose delivery stream. Once enough time has passed, you can verify your data by checking the Amazon S3 bucket. To check the bucket, enter the following command:

```
aws s3api list-objects --bucket 'amzn-s3-demo-bucket' 
```

The output of that command will be similar to the following:

```
{
    "Contents": [
        {
            "Key": "2021/02/02/08/my-delivery-stream-1-2021-02-02-08-55-24-5e6dc317-071b-45ba-a9d3-4805ba39c2ba",
            "LastModified": "2021-02-02T09:00:26+00:00",
            "ETag": "\"EXAMPLEa817fb88fc770b81c8f990d\"",
            "Size": 198,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "firehose+2test",
                "ID": "EXAMPLE27fd05889c665d2636218451970ef79400e3d2aecca3adb1930042e0"
            }
        }
    ]
}
```

You can then retrieve a specific object from the bucket by entering the following command. Replace the value of `key` with the value you found in the previous command.

```
aws s3api get-object --bucket 'amzn-s3-demo-bucket' --key '2021/02/02/08/my-delivery-stream-1-2021-02-02-08-55-24-5e6dc317-071b-45ba-a9d3-4805ba39c2ba' testfile.gz
```

The data in the Amazon S3 object is compressed with the gzip format. You can examine the raw data from the command line using one of the following commands:

Linux:

```
zcat testfile.gz
```

macOS:

```
zcat <testfile.gz
```
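If you prefer to inspect the object programmatically, the same decompression can be done in a few lines of Python. This sketch creates a small stand-in file so it runs standalone; in practice you would open the `testfile.gz` object you downloaded with `get-object`:

```python
import gzip

# Stand-in for the object downloaded from Amazon S3 (sample content only).
with open("testfile.gz", "wb") as f:
    f.write(gzip.compress(b'{"messageType": "DATA_MESSAGE"}'))

# Cross-platform equivalent of zcat: read the gzip-compressed object.
with gzip.open("testfile.gz", "rt") as f:
    print(f.read())  # {"messageType": "DATA_MESSAGE"}
```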

# Modifying destination membership at runtime
<a name="ModifyDestinationMembershipFirehose"></a>

You might encounter situations where you have to add or remove log senders from a destination that you own. You can use the **PutDestinationPolicy** action on your destination with a new access policy. In the following example, a previously added account **111111111111** is stopped from sending any more log data, and account **333333333333** is enabled.

1. Fetch the policy that is currently associated with the destination **testFirehoseDestination** and make a note of the **AccessPolicy**:

   ```
   aws logs describe-destinations \
       --destination-name-prefix "testFirehoseDestination"
   
   {
       "destinations": [
           {
               "destinationName": "testFirehoseDestination",
               "targetArn": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
               "roleArn": "arn:aws:iam:: 222222222222:role/CWLtoKinesisFirehoseRole",
               "accessPolicy": "{\n  \"Version\" : \"2012-10-17\",\n  \"Statement\" : [\n    {\n      \"Sid\" : \"\",\n      \"Effect\" : \"Allow\",\n      \"Principal\" : {\n        \"AWS\" : \"111111111111 \"\n      },\n      \"Action\" : \"logs:PutSubscriptionFilter\",\n      \"Resource\" : \"arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination\"\n    }\n  ]\n}\n\n",
               "arn": "arn:aws:logs:us-east-1: 222222222222:destination:testFirehoseDestination",
               "creationTime": 1612256124430
           }
       ]
   }
   ```

1. Update the policy to reflect that account **111111111111** is stopped, and that account **333333333333** is enabled. Put this policy in the **~/NewAccessPolicy.json** file:

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement" : [
       {
         "Sid" : "",
         "Effect" : "Allow",
         "Principal" : {
           "AWS" : "333333333333 "
         },
         "Action" : "logs:PutSubscriptionFilter",
         "Resource" : "arn:aws-cn:logs:us-east-1:222222222222:destination:testFirehoseDestination"
       }
     ]
   }
   ```

------

1. Use the following command to associate the policy defined in the **NewAccessPolicy.json** file with the destination:

   ```
   aws logs put-destination-policy \
       --destination-name "testFirehoseDestination" \                                                                              
       --access-policy file://~/NewAccessPolicy.json
   ```

   This eventually disables the log events from account ID **111111111111**. Log events from account ID **333333333333** start flowing to the destination as soon as the owner of account **333333333333** creates a subscription filter.

# Cross-account cross-Region account-level subscriptions using Amazon Kinesis Data Streams
<a name="CrossAccountSubscriptions-Kinesis-Account"></a>

When you create a cross-account subscription, you can specify a single account or an organization to be the sender. If you specify an organization, then this procedure enables all accounts in the organization to send logs to the receiver account.

To share log data across accounts, you need to establish a log data sender and receiver:
+ **Log data sender**—gets the destination information from the recipient and lets CloudWatch Logs know that it's ready to send its log events to the specified destination. In the procedures in the rest of this section, the log data sender is shown with a fictional Amazon account number of 111111111111.

  If you're going to have multiple accounts in one organization send logs to one recipient account, you can create a policy that grants all accounts in the organization the permission to send logs to the recipient account. You still have to set up separate subscription filters for each sender account.
+ **Log data recipient**—sets up a destination that encapsulates an Amazon Kinesis Data Streams stream and lets CloudWatch Logs know that the recipient wants to receive log data. The recipient then shares the information about this destination with the sender. In the procedures in the rest of this section, the log data recipient is shown with a fictional Amazon account number of 999999999999.

To start receiving log events from cross-account users, the log data recipient first creates a CloudWatch Logs destination. Each destination consists of the following key elements:

**Destination name**  
The name of the destination you want to create.

**Target ARN**  
The Amazon Resource Name (ARN) of the Amazon resource that you want to use as the destination of the subscription feed.

**Role ARN**  
An Amazon Identity and Access Management (IAM) role that grants CloudWatch Logs the necessary permissions to put data into the chosen stream.

**Access policy**  
An IAM policy document (in JSON format, written using IAM policy grammar) that governs the set of users that are allowed to write to your destination.

**Note**  
The log group and the destination must be in the same Amazon Region. However, the Amazon resource that the destination points to can be located in a different Region. In the examples in the following sections, all Region-specific resources are created in US East (N. Virginia).

**Topics**
+ [Setting up a new cross-account subscription](Cross-Account-Log_Subscription-New-Account.md)
+ [Updating an existing cross-account subscription](Cross-Account-Log_Subscription-Update-Account.md)

# Setting up a new cross-account subscription
<a name="Cross-Account-Log_Subscription-New-Account"></a>

Follow the steps in these sections to set up a new cross-account log subscription.

**Topics**
+ [Step 1: Create a destination](CreateDestination-Account.md)
+ [Step 2: (Only if using an organization) Create an IAM role](CreateSubscriptionFilter-IAMrole-Account.md)
+ [Step 3: Create an account-level subscription filter policy](CreateSubscriptionFilter-Account.md)
+ [Validate the flow of log events](ValidateLogEventFlow-Account.md)
+ [Modify destination membership at runtime](ModifyDestinationMembership-Account.md)

# Step 1: Create a destination
<a name="CreateDestination-Account"></a>

**Important**  
All steps in this procedure are to be done in the log data recipient account.

For this example, the log data recipient account has an Amazon account ID of 999999999999, while the log data sender Amazon account ID is 111111111111.

This example creates a destination using an Amazon Kinesis Data Streams stream called RecipientStream, and a role that enables CloudWatch Logs to write data to it.

When the destination is created, CloudWatch Logs sends a test message to the destination on the recipient account’s behalf. When the subscription filter is active later, CloudWatch Logs sends log events to the destination on the source account’s behalf.

**To create a destination**

1. In the recipient account, create a destination stream in Amazon Kinesis Data Streams. At a command prompt, type:

   ```
   aws kinesis create-stream --stream-name "RecipientStream" --shard-count 1
   ```

1. Wait until the stream becomes active. You can use the **aws kinesis describe-stream** command to check the **StreamDescription.StreamStatus** property. In addition, take note of the **StreamDescription.StreamARN** value because you will pass it to CloudWatch Logs later:

   ```
   aws kinesis describe-stream --stream-name "RecipientStream"
   {
     "StreamDescription": {
       "StreamStatus": "ACTIVE",
       "StreamName": "RecipientStream",
       "StreamARN": "arn:aws:kinesis:us-east-1:999999999999:stream/RecipientStream",
       "Shards": [
         {
           "ShardId": "shardId-000000000000",
           "HashKeyRange": {
             "EndingHashKey": "34028236692093846346337460743176EXAMPLE",
             "StartingHashKey": "0"
           },
           "SequenceNumberRange": {
             "StartingSequenceNumber": "4955113521868881845667950383198145878459135270218EXAMPLE"
           }
         }
       ]
     }
   }
   ```

   It might take a minute or two for your stream to show up in the active state.

1. Create the IAM role that grants CloudWatch Logs the permission to put data into your stream. First, you'll need to create a trust policy in a file **~/TrustPolicyForCWL.json**. Use a text editor to create this policy file; do not use the IAM console.

   This policy includes an `aws:SourceArn` global condition context key that specifies the `sourceAccountId` to help prevent the confused deputy security problem. If you don't yet know the source account ID in the first call, we recommend that you put the destination ARN in the source ARN field. In the subsequent calls, you should set the source ARN to be the actual source ARN that you gathered from the first call. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
       "Statement": {
           "Effect": "Allow",
           "Principal": {
               "Service": "logs.amazonaws.com"
           },
           "Condition": {
               "StringLike": {
                   "aws:SourceArn": [
                       "arn:aws:logs:region:sourceAccountId:*",
                       "arn:aws:logs:region:recipientAccountId:*"
                   ]
               }
           },
           "Action": "sts:AssumeRole"
       }
   }
   ```

1. Use the **aws iam create-role** command to create the IAM role, specifying the trust policy file. Take note of the returned Role.Arn value because it will also be passed to CloudWatch Logs later:

   ```
   aws iam create-role \
   --role-name CWLtoKinesisRole \
   --assume-role-policy-document file://~/TrustPolicyForCWL.json
   
   {
       "Role": {
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Action": "sts:AssumeRole",
                   "Effect": "Allow",
                   "Condition": {
                       "StringLike": {
                           "aws:SourceArn": [
                               "arn:aws:logs:region:sourceAccountId:*",
                               "arn:aws:logs:region:recipientAccountId:*"
                           ]
                       }
                   },
                   "Principal": {
                       "Service": "logs.amazonaws.com"
                   }
               }
           },
           "RoleId": "AAOIIAH450GAB4HC5F431",
           "CreateDate": "2023-05-29T13:46:29.431Z",
           "RoleName": "CWLtoKinesisRole",
           "Path": "/",
           "Arn": "arn:aws:iam::999999999999:role/CWLtoKinesisRole"
       }
   }
   ```

1. Create a permissions policy to define which actions CloudWatch Logs can perform on your account. First, use a text editor to create a permissions policy in a file **~/PermissionsForCWL.json**:

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": "kinesis:PutRecord",
         "Resource": "arn:aws:kinesis:region:999999999999:stream/RecipientStream"
       }
     ]
   }
   ```

1. Associate the permissions policy with the role by using the **aws iam put-role-policy** command:

   ```
   aws iam put-role-policy \
       --role-name CWLtoKinesisRole \
       --policy-name Permissions-Policy-For-CWL \
       --policy-document file://~/PermissionsForCWL.json
   ```

1. After the stream is in the active state and you have created the IAM role, you can create the CloudWatch Logs destination.

   1. This step creates the destination but doesn't yet associate an access policy with it; that happens in the next step. Make a note of the **DestinationArn** that is returned in the payload:

      ```
      aws logs put-destination \
          --destination-name "testDestination" \
          --target-arn "arn:aws:kinesis:region:999999999999:stream/RecipientStream" \
          --role-arn "arn:aws:iam::999999999999:role/CWLtoKinesisRole"
      
      {
        "DestinationName" : "testDestination",
        "RoleArn" : "arn:aws:iam::999999999999:role/CWLtoKinesisRole",
        "DestinationArn" : "arn:aws:logs:us-east-1:999999999999:destination:testDestination",
        "TargetArn" : "arn:aws:kinesis:us-east-1:999999999999:stream/RecipientStream"
      }
      ```

   1. After step 7a is complete, in the log data recipient account, associate an access policy with the destination. This policy must specify the **logs:PutSubscriptionFilter** action and grant permission to the sender account to access the destination.

      The policy grants permission to the Amazon account that sends logs. You can specify just this one account in the policy, or if the sender account is a member of an organization, the policy can specify the organization's ID. This way, you can create just one policy that allows multiple accounts in one organization to send logs to this destination account.

      Use a text editor to create a file named `~/AccessPolicy.json` with one of the following policy statements.

      This first example policy allows all accounts in the organization that have an ID of `o-1234567890` to send logs to the recipient account.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Sid": "",
                  "Effect": "Allow",
                  "Principal": "*",
                  "Action": [
                      "logs:PutSubscriptionFilter",
                      "logs:PutAccountPolicy"
                  ],
                  "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination",
                  "Condition": {
                      "StringEquals": {
                          "aws:PrincipalOrgID": [
                              "o-1234567890"
                          ]
                      }
                  }
              }
          ]
      }
      ```

------

      This next example allows just the log data sender account (111111111111) to send logs to the log data recipient account.

------
#### [ JSON ]

****  

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Sid": "",
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "111111111111"
                  },
                  "Action": [
                      "logs:PutSubscriptionFilter",
                      "logs:PutAccountPolicy"
                  ],
                  "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination"
              }
          ]
      }
      ```

------

   1. Attach the policy you created in the previous step to the destination.

      ```
      aws logs put-destination-policy \
          --destination-name "testDestination" \
          --access-policy file://~/AccessPolicy.json
      ```

      This access policy enables users in the Amazon Account with ID 111111111111 to call **PutSubscriptionFilter** against the destination with ARN arn:aws:logs:*region*:999999999999:destination:testDestination. Any other user's attempt to call PutSubscriptionFilter against this destination will be rejected.

      To validate a user's privileges against an access policy, see [Using Policy Validator](https://docs.amazonaws.cn/IAM/latest/UserGuide/policies_policy-validator.html) in the *IAM User Guide*.
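If you script the setup, the trust policy file from earlier in this procedure can be generated rather than hand-edited. This Python sketch is illustrative only; the Region and account IDs are placeholder values from this example, so substitute your own:

```python
import json

# Illustrative values only; replace with your own Region and account IDs.
region = "us-east-1"
source_account = "111111111111"
recipient_account = "999999999999"

trust_policy = {
    "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "logs.amazonaws.com"},
        # Confused-deputy protection: restrict which source ARNs may
        # cause CloudWatch Logs to assume this role.
        "Condition": {
            "StringLike": {
                "aws:SourceArn": [
                    f"arn:aws:logs:{region}:{source_account}:*",
                    f"arn:aws:logs:{region}:{recipient_account}:*",
                ]
            }
        },
        "Action": "sts:AssumeRole",
    }
}

# Write the file referenced by --assume-role-policy-document.
with open("TrustPolicyForCWL.json", "w") as f:
    json.dump(trust_policy, f, indent=4)
```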

When you have finished, if you're using Amazon Organizations for your cross-account permissions, follow the steps in [Step 2: (Only if using an organization) Create an IAM role](CreateSubscriptionFilter-IAMrole-Account.md). If you're granting permissions directly to the other account instead of using Organizations, you can skip that step and proceed to [Step 3: Create an account-level subscription filter policy](CreateSubscriptionFilter-Account.md).

# Step 2: (Only if using an organization) Create an IAM role
<a name="CreateSubscriptionFilter-IAMrole-Account"></a>

In the previous section, if you created the destination by using an access policy that grants permissions to the organization that account `111111111111` is in, instead of granting permissions directly to account `111111111111`, then follow the steps in this section. Otherwise, you can skip to [Step 3: Create an account-level subscription filter policy](CreateSubscriptionFilter-Account.md).

The steps in this section create an IAM role that CloudWatch Logs can assume to validate whether the sender account has permission to create a subscription filter against the recipient destination.

Perform the steps in this section in the sender account. The role must exist in the sender account, and you specify the ARN of this role in the subscription filter. In this example, the sender account is `111111111111`.

**To create the IAM role necessary for cross-account log subscriptions using Amazon Organizations**

1. Create the following trust policy in a file `~/TrustPolicyForCWLSubscriptionFilter.json`. Use a text editor to create this policy file; do not use the IAM console.

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole"
     }
   }
   ```

1. Create the IAM role that uses this policy. Take note of the `Arn` value that is returned by the command; you will need it later in this procedure. In this example, we use `CWLtoSubscriptionFilterRole` for the name of the role we're creating.

   ```
   aws iam create-role \
        --role-name CWLtoSubscriptionFilterRole \
        --assume-role-policy-document file://~/TrustPolicyForCWLSubscriptionFilter.json
   ```

1. Create a permissions policy to define the actions that CloudWatch Logs can perform on your account.

   1. First, use a text editor to create the following permissions policy in a file named `~/PermissionsForCWLSubscriptionFilter.json`.

      ```
      { 
          "Statement": [ 
              { 
                  "Effect": "Allow", 
                  "Action": "logs:PutLogEvents", 
                  "Resource": "arn:aws:logs:region:111111111111:log-group:LogGroupOnWhichSubscriptionFilterIsCreated:*" 
              } 
          ] 
      }
      ```

   1. Enter the following command to associate the permissions policy you just created with the role that you created in step 2.

      ```
      aws iam put-role-policy \
          --role-name CWLtoSubscriptionFilterRole \
          --policy-name Permissions-Policy-For-CWL-Subscription-filter \
          --policy-document file://~/PermissionsForCWLSubscriptionFilter.json
      ```

When you have finished, you can proceed to [Step 3: Create an account-level subscription filter policy](CreateSubscriptionFilter-Account.md).

# Step 3: Create an account-level subscription filter policy
<a name="CreateSubscriptionFilter-Account"></a>

After you create a destination, the log data recipient account can share the destination ARN (arn:aws:logs:us-east-1:999999999999:destination:testDestination) with other Amazon accounts so that they can send log events to the same destination. Users in those sending accounts then create a subscription filter on their respective log groups against this destination. The subscription filter immediately starts the flow of real-time log data from the chosen log group to the specified destination.

**Note**  
If you are granting permissions for the subscription filter to an entire organization, you will need to use the ARN of the IAM role that you created in [Step 2: (Only if using an organization) Create an IAM role](CreateSubscriptionFilter-IAMrole-Account.md).

In the following example, an account-level subscription filter policy is created in a sending account. The filter is associated with the sender account `111111111111` so that every log event matching the filter and selection criteria is delivered to the destination you previously created. That destination encapsulates a stream called "RecipientStream".

The `selection-criteria` field is optional, but is important for excluding log groups that can cause an infinite log recursion from a subscription filter. For more information about this issue and determining which log groups to exclude, see [Log recursion prevention](Subscriptions-recursion-prevention.md). Currently, NOT IN is the only supported operator for `selection-criteria`.

```
aws logs put-account-policy \
    --policy-name "CrossAccountStreamsExamplePolicy" \
    --policy-type "SUBSCRIPTION_FILTER_POLICY" \
    --policy-document '{"DestinationArn":"arn:aws:logs:region:999999999999:destination:testDestination", "FilterPattern": "", "Distribution": "Random"}' \
    --selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
    --scope "ALL"
```
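As an illustration of how the selection criteria behave: with `--scope ALL`, the policy applies to every log group in the account except the ones named in the `NOT IN` list. This hypothetical Python sketch mirrors that logic (the log group names are the sample values from the command above):

```python
# Log groups named in the NOT IN selection criteria are excluded; with
# scope ALL, the policy applies to every other log group in the account.
excluded = {"LogGroupToExclude1", "LogGroupToExclude2"}

def policy_applies_to(log_group_name):
    return log_group_name not in excluded

print(policy_applies_to("MyApplicationLogs"))   # True
print(policy_applies_to("LogGroupToExclude1"))  # False
```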

The sender account's log groups and the destination must be in the same Amazon Region. However, the destination can point to an Amazon resource such as an Amazon Kinesis Data Streams stream that is located in a different Region.

# Validate the flow of log events
<a name="ValidateLogEventFlow-Account"></a>

After you create the account-level subscription filter policy, CloudWatch Logs forwards all the incoming log events that match the filter pattern and selection criteria to the stream that is encapsulated within the destination called "**RecipientStream**". The destination owner can verify that this is happening by using the **aws kinesis get-shard-iterator** command to obtain a shard iterator for the stream, and using the **aws kinesis get-records** command to fetch some Amazon Kinesis Data Streams records:

```
aws kinesis get-shard-iterator \
      --stream-name RecipientStream \
      --shard-id shardId-000000000000 \
      --shard-iterator-type TRIM_HORIZON

{
    "ShardIterator":
    "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiKEXAMPLE"
}

aws kinesis get-records \
      --limit 10 \
      --shard-iterator \
      "AAAAAAAAAAFGU/kLvNggvndHq2UIFOw5PZc6F01s3e3afsSscRM70JSbjIefg2ub07nk1y6CDxYR1UoGHJNP4m4NFUetzfL+wev+e2P4djJg4L9wmXKvQYoE+rMUiFq+p4Cn3IgvqOb5dRA0yybNdRcdzvnC35KQANoHzzahKdRGb9v4scv+3vaq+f+OIK8zM5My8ID+g6rMo7UKWeI4+IWiKEXAMPLE"
```

**Note**  
You might need to rerun the `get-records` command a few times before Amazon Kinesis Data Streams starts to return data.

You should see a response with an array of Amazon Kinesis Data Streams records. The data attribute in the Amazon Kinesis Data Streams record is compressed in gzip format and then base64 encoded. You can examine the raw data from the command line using the following Unix command:

```
echo -n "<Content of Data>" | base64 -d | zcat
```
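The same pipeline can be expressed in Python. This sketch constructs a sample payload so it runs standalone; `data_attribute` stands in for the `Data` field of a record fetched with `get-records`:

```python
import base64
import gzip
import json

# Build a sample Data attribute the way CloudWatch Logs does:
# JSON payload, gzip-compressed, then base64 encoded.
sample = {"messageType": "DATA_MESSAGE", "owner": "111111111111"}
data_attribute = base64.b64encode(gzip.compress(json.dumps(sample).encode()))

# Decoding mirrors the shell pipeline above: base64 decode, then gunzip.
decoded = json.loads(gzip.decompress(base64.b64decode(data_attribute)))
print(decoded["messageType"])  # DATA_MESSAGE
```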

The base64 decoded and decompressed data is formatted as JSON with the following structure:

```
{
    "owner": "111111111111",
    "logGroup": "CloudTrail/logs",
    "logStream": "111111111111_CloudTrail/logs_us-east-1",
    "subscriptionFilters": [
        "RecipientStream"
    ],
    "messageType": "DATA_MESSAGE",
    "logEvents": [
        {
            "id": "3195310660696698337880902507980421114328961542429EXAMPLE",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        },
        {
            "id": "3195310660696698337880902507980421114328961542429EXAMPLE",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        },
        {
            "id": "3195310660696698337880902507980421114328961542429EXAMPLE",
            "timestamp": 1432826855000,
            "message": "{\"eventVersion\":\"1.03\",\"userIdentity\":{\"type\":\"Root\"}"
        }
    ]
}
```

The key elements in the data structure are the following:

**messageType**  
Data messages will use the "DATA\_MESSAGE" type. Sometimes CloudWatch Logs might emit Amazon Kinesis Data Streams records with a "CONTROL\_MESSAGE" type, mainly for checking if the destination is reachable.

**owner**  
The Amazon Account ID of the originating log data.

**logGroup**  
The log group name of the originating log data.

**logStream**  
The log stream name of the originating log data.

**subscriptionFilters**  
The list of subscription filter names that matched with the originating log data.

**logEvents**  
The actual log data, represented as an array of log event records. The "id" property is a unique identifier for every log event.

**policyLevel**  
The level at which the policy was enforced. "ACCOUNT\_LEVEL\_POLICY" is the `policyLevel` for an account-level subscription filter policy.
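Putting the pieces together, a consumer typically decodes each record's `Data` attribute and skips control messages before processing log events. This is an illustrative sketch, not production consumer code; the record contents are sample values shaped like the structure shown above:

```python
import base64
import gzip
import json

def decode_record(data_attr):
    """Decode one record's Data attribute; return None for control messages."""
    payload = json.loads(gzip.decompress(base64.b64decode(data_attr)))
    if payload["messageType"] != "DATA_MESSAGE":
        return None  # CONTROL_MESSAGE records carry no log data
    return payload

# A sample record shaped like the documented structure.
sample = {
    "messageType": "DATA_MESSAGE",
    "owner": "111111111111",
    "logGroup": "CloudTrail/logs",
    "logStream": "111111111111_CloudTrail/logs_us-east-1",
    "subscriptionFilters": ["RecipientStream"],
    "logEvents": [{"id": "1", "timestamp": 1432826855000, "message": "hello"}],
}
encoded = base64.b64encode(gzip.compress(json.dumps(sample).encode())).decode()

for event in decode_record(encoded)["logEvents"]:
    print(event["message"])  # hello
```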

# Modify destination membership at runtime
<a name="ModifyDestinationMembership-Account"></a>

You might encounter situations where you have to add or remove log senders from a destination that you own. You can use the `put-destination-policy` command on your destination with a new access policy. In the following example, a previously added account **111111111111** is stopped from sending any more log data, and account **222222222222** is enabled.

1. Fetch the policy that is currently associated with the destination **testDestination** and make a note of the **AccessPolicy**:

   ```
   aws logs describe-destinations \
       --destination-name-prefix "testDestination"
   
   {
    "Destinations": [
      {
        "DestinationName": "testDestination",
        "RoleArn": "arn:aws:iam::999999999999:role/CWLtoKinesisRole",
        "DestinationArn": "arn:aws:logs:region:999999999999:destination:testDestination",
        "TargetArn": "arn:aws:kinesis:region:999999999999:stream/RecipientStream",
        "AccessPolicy": "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"\", \"Effect\": \"Allow\", \"Principal\": {\"AWS\": \"111111111111\"}, \"Action\": \"logs:PutSubscriptionFilter\", \"Resource\": \"arn:aws:logs:region:999999999999:destination:testDestination\"}] }"
      }
    ]
   }
   ```

1. Update the policy to reflect that account **111111111111** is stopped, and that account **222222222222** is enabled. Put this policy in the **~/NewAccessPolicy.json** file:

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "222222222222"
               },
               "Action": [
                   "logs:PutSubscriptionFilter",
                   "logs:PutAccountPolicy"
               ],
               "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination"
           }
       ]
   }
   ```

------

1. Call **PutDestinationPolicy** to associate the policy defined in the **NewAccessPolicy.json** file with the destination:

   ```
   aws logs put-destination-policy \
   --destination-name "testDestination" \
   --access-policy file://~/NewAccessPolicy.json
   ```

   This will eventually disable the log events from account ID **111111111111**. Log events from account ID **222222222222** start flowing to the destination as soon as the owner of account **222222222222** creates a subscription filter.

# Updating an existing cross-account subscription
<a name="Cross-Account-Log_Subscription-Update-Account"></a>

If you currently have a cross-account logs subscription where the destination account grants permissions only to specific sender accounts, and you want to update this subscription so that the destination account grants access to all accounts in an organization, follow the steps in this section.

**Topics**
+ [Step 1: Update the subscription filters](Cross-Account-Log_Subscription-Update-filter-Account.md)
+ [Step 2: Update the existing destination access policy](Cross-Account-Log_Subscription-Update-policy-Account.md)

# Step 1: Update the subscription filters
<a name="Cross-Account-Log_Subscription-Update-filter-Account"></a>

**Note**  
This step is needed only for cross-account subscriptions for logs that are created by the services listed in [Enable logging from Amazon services](AWS-logs-and-resource-policy.md). If you are not working with logs created by one of these services, you can skip to [Step 2: Update the existing destination access policy](Cross-Account-Log_Subscription-Update-policy-Account.md).

In certain cases, you must update the subscription filters in all the sender accounts that are sending logs to the destination account. The update adds an IAM role, which CloudWatch Logs can assume to validate that the sender account has permission to send logs to the recipient account.

Follow the steps in this section for every sender account that you want to update to use organization ID for the cross-account subscription permissions.

In the examples in this section, two accounts, `111111111111` and `222222222222` already have subscription filters created to send logs to account `999999999999`. The existing subscription filter values are as follows:

```
## Existing Subscription Filter parameter values
{
    "DestinationArn": "arn:aws:logs:region:999999999999:destination:testDestination",
    "FilterPattern": "{$.userIdentity.type = Root}",
    "Distribution": "Random"
}
```

If you need to find the current subscription filter parameter values, enter the following command.

```
aws logs describe-account-policies \
--policy-type "SUBSCRIPTION_FILTER_POLICY" \
--policy-name "CrossAccountStreamsExamplePolicy"
```

**To update a subscription filter to start using organization IDs for cross-account log permissions**

1. Create the following trust policy in a file `~/TrustPolicyForCWL.json`. Use a text editor to create this policy file; do not use the IAM console.

   ```
   {
     "Statement": {
       "Effect": "Allow",
       "Principal": { "Service": "logs.amazonaws.com" },
       "Action": "sts:AssumeRole"
     }
   }
   ```

1. Create the IAM role that uses this policy. Take note of the `Arn` value that is returned by the command; you will need it later in this procedure. In this example, we use `CWLtoSubscriptionFilterRole` for the name of the role we're creating.

   ```
   aws iam create-role \
       --role-name CWLtoSubscriptionFilterRole \
       --assume-role-policy-document file://~/TrustPolicyForCWL.json
   ```

1. Create a permissions policy to define the actions that CloudWatch Logs can perform on your account.

   1. First, use a text editor to create the following permissions policy in a file named `~/PermissionsForCWLSubscriptionFilter.json`.

      ```
      { 
          "Statement": [ 
              { 
                  "Effect": "Allow", 
                  "Action": "logs:PutLogEvents", 
                  "Resource": "arn:aws:logs:region:111111111111:log-group:LogGroupOnWhichSubscriptionFilterIsCreated:*" 
              } 
          ] 
      }
      ```

   1. Enter the following command to associate the permissions policy you just created with the role that you created in step 2.

      ```
      aws iam put-role-policy \
          --role-name CWLtoSubscriptionFilterRole \
          --policy-name Permissions-Policy-For-CWL-Subscription-filter \
          --policy-document file://~/PermissionsForCWLSubscriptionFilter.json
      ```

1. Enter the following command to update the subscription filter policy.

   ```
   aws logs put-account-policy \
       --policy-name "CrossAccountStreamsExamplePolicy" \
       --policy-type "SUBSCRIPTION_FILTER_POLICY" \
       --policy-document '{"DestinationArn":"arn:aws:logs:region:999999999999:destination:testDestination", "FilterPattern": "{$.userIdentity.type = Root}", "Distribution": "Random"}' \
       --selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
       --scope "ALL"
   ```
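   The `FilterPattern` in this policy document uses the CloudWatch Logs filter syntax for JSON log events: `{$.userIdentity.type = Root}` selects events whose `userIdentity.type` property equals `Root`. As a rough local illustration of that selection logic (a sketch only, not the actual CloudWatch Logs matcher), the check can be approximated in Python:

   ```python
   import json

   def matches_user_identity_type(event_message: str, expected_type: str) -> bool:
       """Approximate the JSON filter pattern {$.userIdentity.type = <expected_type>}.

       Illustrative only; the real CloudWatch Logs matcher implements the
       full filter pattern syntax.
       """
       try:
           event = json.loads(event_message)
       except json.JSONDecodeError:
           return False  # non-JSON events cannot match a JSON property pattern
       identity = event.get("userIdentity")
       return isinstance(identity, dict) and identity.get("type") == expected_type

   print(matches_user_identity_type('{"userIdentity": {"type": "Root"}}', "Root"))         # True
   print(matches_user_identity_type('{"userIdentity": {"type": "AssumedRole"}}', "Root"))  # False
   ```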

# Step 2: Update the existing destination access policy
<a name="Cross-Account-Log_Subscription-Update-policy-Account"></a>

After you have updated the subscription filters in all of the sender accounts, you can update the destination access policy in the recipient account.

In the following examples, the recipient account is `999999999999` and the destination is named `testDestination`.

The update enables all accounts that are part of the organization with ID `o-1234567890` to send logs to the recipient account. Only the accounts that have subscription filters created will actually send logs to the recipient account.

**To update the destination access policy in the recipient account to start using an organization ID for permissions**

1. In the recipient account, use a text editor to create a `~/AccessPolicy.json` file with the following contents.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "",
               "Effect": "Allow",
               "Principal": "*",
               "Action": [
                   "logs:PutSubscriptionFilter",
                   "logs:PutAccountPolicy"
               ],
               "Resource": "arn:aws-cn:logs:us-east-1:999999999999:destination:testDestination",
               "Condition": {
                   "StringEquals": {
                       "aws:PrincipalOrgID": [
                           "o-1234567890"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

1. Enter the following command to attach the policy that you just created to the existing destination. To update a destination to use an access policy with an organization ID instead of an access policy that lists specific Amazon account IDs, include the `--force` parameter.
**Warning**  
If you are working with logs sent by an Amazon service listed in [Enable logging from Amazon services](AWS-logs-and-resource-policy.md), then before doing this step, you must have first updated the subscription filters in all the sender accounts as explained in [Step 1: Update the subscription filters](Cross-Account-Log_Subscription-Update-filter-Account.md).

   ```
   aws logs put-destination-policy \
       --destination-name "testDestination" \
       --access-policy file://~/AccessPolicy.json \
       --force
   ```

# Cross-account cross-Region account-level subscriptions using Firehose
<a name="CrossAccountSubscriptions-Firehose-Account"></a>

To share log data across accounts, you need to establish a log data sender and receiver:
+ **Log data sender**—gets the destination information from the recipient and lets CloudWatch Logs know that it is ready to send its log events to the specified destination. In the procedures in the rest of this section, the log data sender is shown with a fictional Amazon account number of 111111111111.
+ **Log data recipient**—sets up a destination that encapsulates a Firehose delivery stream and lets CloudWatch Logs know that the recipient wants to receive log data. The recipient then shares the information about this destination with the sender. In the procedures in the rest of this section, the log data recipient is shown with a fictional Amazon account number of 222222222222.

The example in this section uses a Firehose delivery stream with Amazon S3 storage. You can also set up Firehose delivery streams with different settings. For more information, see [Creating a Firehose Delivery Stream](https://docs.amazonaws.cn/firehose/latest/dev/basic-create.html).

**Note**  
The log group and the destination must be in the same Amazon Region. However, the Amazon resource that the destination points to can be located in a different Region.

**Note**  
A Firehose subscription filter with a ***same account***, ***cross-Region*** delivery stream is supported.

**Topics**
+ [Step 1: Create a Firehose delivery stream](CreateFirehoseStream-Account.md)
+ [Step 2: Create a destination](CreateFirehoseStreamDestination-Account.md)
+ [Step 3: Create an account-level subscription filter policy](CreateSubscriptionFilterFirehose-Account.md)
+ [Validating the flow of log events](ValidateLogEventFlowFirehose-Account.md)
+ [Modifying destination membership at runtime](ModifyDestinationMembershipFirehose-Account.md)

# Step 1: Create a Firehose delivery stream
<a name="CreateFirehoseStream-Account"></a>

**Important**  
 Before you complete the following steps, you must create an access policy so that Firehose can access your Amazon S3 bucket. For more information, see [Controlling Access](https://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3) in the *Amazon Data Firehose Developer Guide*.   
 All of the steps in this section (Step 1) must be done in the log data recipient account.   
 US East (N. Virginia) is used in the following sample commands. Replace this Region with the correct Region for your deployment. 

**To create a Firehose delivery stream to be used as the destination**

1. Create an Amazon S3 bucket:

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-bucket --create-bucket-configuration LocationConstraint=us-east-1
   ```

1. Create the IAM role that grants Firehose permission to put data into the bucket.

   1. First, use a text editor to create a trust policy in a file `~/TrustPolicyForFirehose.json`.

      ```
      { "Statement": { "Effect": "Allow", "Principal": { "Service": "firehose.amazonaws.com" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "sts:ExternalId":"222222222222" } } } }
      ```

   1. Create the IAM role, specifying the trust policy file that you just made.

      ```
      aws iam create-role \
          --role-name FirehosetoS3Role \
          --assume-role-policy-document file://~/TrustPolicyForFirehose.json
      ```

   1. The output of this command will look similar to the following. Make a note of the role name and the role ARN.

      ```
      {
          "Role": {
              "Path": "/",
              "RoleName": "FirehosetoS3Role",
              "RoleId": "AROAR3BXASEKW7K635M53",
              "Arn": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
              "CreateDate": "2021-02-02T07:53:10+00:00",
              "AssumeRolePolicyDocument": {
                  "Statement": {
                      "Effect": "Allow",
                      "Principal": {
                          "Service": "firehose.amazonaws.com"
                      },
                      "Action": "sts:AssumeRole",
                      "Condition": {
                          "StringEquals": {
                              "sts:ExternalId": "222222222222"
                          }
                      }
                  }
              }
          }
      }
      ```

1. Create a permissions policy to define the actions that Firehose can perform in your account.

   1. First, use a text editor to create the following permissions policy in a file named `~/PermissionsForFirehose.json`. Depending on your use case, you might need to add more permissions to this file.

      ```
      {
          "Statement": [{
              "Effect": "Allow",
              "Action": [
                  "s3:PutObject",
                  "s3:PutObjectAcl",
                  "s3:ListBucket"
              ],
              "Resource": [
                  "arn:aws:s3:::amzn-s3-demo-bucket",
                  "arn:aws:s3:::amzn-s3-demo-bucket/*"
              ]
          }]
      }
      ```

   1. Enter the following command to associate the permissions policy that you just created with the IAM role.

      ```
      aws iam put-role-policy --role-name FirehosetoS3Role --policy-name Permissions-Policy-For-Firehose-To-S3 --policy-document file://~/PermissionsForFirehose.json
      ```

1. Enter the following command to create the Firehose delivery stream. Replace the role ARN and bucket ARN with the correct values for your deployment.

   ```
   aws firehose create-delivery-stream \
      --delivery-stream-name 'my-delivery-stream' \
      --s3-destination-configuration \
     '{"RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role", "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket"}'
   ```

   The output should look similar to the following:

   ```
   {
       "DeliveryStreamARN": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream"
   }
   ```

# Step 2: Create a destination
<a name="CreateFirehoseStreamDestination-Account"></a>

**Important**  
All steps in this procedure are to be done in the log data recipient account.

When the destination is created, CloudWatch Logs sends a test message to the destination on the recipient account’s behalf. When the subscription filter is active later, CloudWatch Logs sends log events to the destination on the source account’s behalf.

**To create a destination**

1. Wait until the Firehose stream that you created in [Step 1: Create a Firehose delivery stream](CreateFirehoseStream-Account.md) becomes active. You can use the following command to check the **DeliveryStreamDescription.DeliveryStreamStatus** property.

   ```
   aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream"
   ```

   In addition, take note of the **DeliveryStreamDescription.DeliveryStreamARN** value, because you will need to use it in a later step. Sample output of this command:

   ```
   {
       "DeliveryStreamDescription": {
           "DeliveryStreamName": "my-delivery-stream",
           "DeliveryStreamARN": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
           "DeliveryStreamStatus": "ACTIVE",
           "DeliveryStreamEncryptionConfiguration": {
               "Status": "DISABLED"
           },
           "DeliveryStreamType": "DirectPut",
           "VersionId": "1",
           "CreateTimestamp": "2021-02-01T23:59:15.567000-08:00",
           "Destinations": [
               {
                   "DestinationId": "destinationId-000000000001",
                   "S3DestinationDescription": {
                       "RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                       "BufferingHints": {
                           "SizeInMBs": 5,
                           "IntervalInSeconds": 300
                       },
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                       "CloudWatchLoggingOptions": {
                           "Enabled": false
                       }
                   },
                   "ExtendedS3DestinationDescription": {
                       "RoleARN": "arn:aws:iam::222222222222:role/FirehosetoS3Role",
                       "BucketARN": "arn:aws:s3:::amzn-s3-demo-bucket",
                       "BufferingHints": {
                           "SizeInMBs": 5,
                           "IntervalInSeconds": 300
                       },
                       "CompressionFormat": "UNCOMPRESSED",
                       "EncryptionConfiguration": {
                           "NoEncryptionConfig": "NoEncryption"
                       },
                       "CloudWatchLoggingOptions": {
                           "Enabled": false
                       },
                       "S3BackupMode": "Disabled"
                   }
               }
           ],
           "HasMoreDestinations": false
       }
   }
   ```

   It might take a minute or two for your delivery stream to show up in the active state.

1. When the delivery stream is active, create the IAM role that will grant CloudWatch Logs the permission to put data into your Firehose stream. First, use a text editor to create a trust policy in a file `~/TrustPolicyForCWL.json`. For more information about CloudWatch Logs endpoints, see [Amazon CloudWatch Logs endpoints and quotas](https://docs.amazonaws.cn/general/latest/gr/cwl_region.html).

   This policy includes an `aws:SourceArn` global condition context key that specifies the `sourceAccountId` to help prevent the confused deputy security problem. If you don't yet know the source account ID in the first call, we recommend that you put the destination ARN in the source ARN field. In subsequent calls, set the source ARN to the actual source ARN that you gathered from the first call. For more information, see [Confused deputy prevention](Subscriptions-confused-deputy.md).

   ```
   {
       "Statement": {
           "Effect": "Allow",
           "Principal": {
               "Service": "logs.amazonaws.com"
           },
           "Action": "sts:AssumeRole",
           "Condition": {
               "StringLike": {
                   "aws:SourceArn": [
                       "arn:aws:logs:region:sourceAccountId:*",
                       "arn:aws:logs:region:recipientAccountId:*"
                   ]
               }
           }
        }
   }
   ```
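   If you script this setup, you can render the trust policy programmatically so that the `aws:SourceArn` entries stay consistent with the Region and accounts you use. A minimal sketch (the function name is illustrative):

   ```python
   import json

   def build_cwl_trust_policy(region: str, account_ids: list[str]) -> str:
       """Render the CloudWatch Logs trust policy shown above, with one
       aws:SourceArn entry per account that CloudWatch Logs may act for."""
       policy = {
           "Statement": {
               "Effect": "Allow",
               "Principal": {"Service": "logs.amazonaws.com"},
               "Action": "sts:AssumeRole",
               "Condition": {
                   "StringLike": {
                       "aws:SourceArn": [
                           f"arn:aws:logs:{region}:{account}:*" for account in account_ids
                       ]
                   }
               },
           }
       }
       return json.dumps(policy, indent=4)

   # Render the policy for the source and recipient accounts from this example.
   print(build_cwl_trust_policy("us-east-1", ["111111111111", "222222222222"]))
   ```

   You can redirect the output to `~/TrustPolicyForCWL.json` instead of editing the file by hand.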

1. Use the **aws iam create-role** command to create the IAM role, specifying the trust policy file that you just created. 

   ```
   aws iam create-role \
         --role-name CWLtoKinesisFirehoseRole \
         --assume-role-policy-document file://~/TrustPolicyForCWL.json
   ```

   The following is a sample output. Take note of the returned `Role.Arn` value, because you will need to use it in a later step.

   ```
   {
       "Role": {
           "Path": "/",
           "RoleName": "CWLtoKinesisFirehoseRole",
           "RoleId": "AROAR3BXASEKYJYWF243H",
           "Arn": "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole",
           "CreateDate": "2023-02-02T08:10:43+00:00",
           "AssumeRolePolicyDocument": {
               "Statement": {
                   "Effect": "Allow",
                   "Principal": {
                       "Service": "logs.amazonaws.com"
                   },
                   "Action": "sts:AssumeRole",
                   "Condition": {
                       "StringLike": {
                           "aws:SourceArn": [
                               "arn:aws:logs:region:sourceAccountId:*",
                               "arn:aws:logs:region:recipientAccountId:*"
                           ]
                       }
                   }
               }
           }
       }
   }
   ```

1. Create a permissions policy to define which actions CloudWatch Logs can perform on your account. First, use a text editor to create a permissions policy in a file `~/PermissionsForCWL.json`:

   ```
   {
       "Statement":[
         {
           "Effect":"Allow",
           "Action":["firehose:*"],
           "Resource":["arn:aws:firehose:region:222222222222:*"]
         }
       ]
   }
   ```

1. Associate the permissions policy with the role by entering the following command:

   ```
   aws iam put-role-policy --role-name CWLtoKinesisFirehoseRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json
   ```

1. After the Firehose delivery stream is in the active state and you have created the IAM role, you can create the CloudWatch Logs destination.

   1. This is the first of two steps that complete destination creation; it creates the destination without associating an access policy. Make a note of the ARN of the new destination that is returned in the payload, because you will use it as the `destination.arn` in a later step.

      ```
      aws logs put-destination \
          --destination-name "testFirehoseDestination" \
          --target-arn "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream" \
          --role-arn "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole"
      
      {
          "destination": {
              "destinationName": "testFirehoseDestination",
              "targetArn": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
              "roleArn": "arn:aws:iam::222222222222:role/CWLtoKinesisFirehoseRole",
              "arn": "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination"}
      }
      ```

   1. After the previous step is complete, in the log data recipient account (222222222222), associate an access policy with the destination. This policy grants the log data sender account (111111111111) access to the destination, which is owned by the log data recipient account (222222222222). You can use a text editor to put this policy in the `~/AccessPolicy.json` file:

------
#### [ JSON ]

****  

      ```
      {
        "Version":"2012-10-17",		 	 	 
        "Statement" : [
          {
            "Sid" : "",
            "Effect" : "Allow",
            "Principal" : {
              "AWS" : "111111111111"
            },
            "Action" : ["logs:PutSubscriptionFilter","logs:PutAccountPolicy"],
            "Resource" : "arn:aws-cn:logs:us-east-1:222222222222:destination:testFirehoseDestination"
          }
        ]
      }
      ```

------

   1. Enter the following command to associate the access policy with the destination. The policy defines who has write access to the destination; it must specify the `logs:PutSubscriptionFilter` and `logs:PutAccountPolicy` actions, which cross-account users call to send log events to the destination.

      ```
      aws logs put-destination-policy \
          --destination-name "testFirehoseDestination" \
          --access-policy file://~/AccessPolicy.json
      ```

# Step 3: Create an account-level subscription filter policy
<a name="CreateSubscriptionFilterFirehose-Account"></a>

Switch to the sending account, which is 111111111111 in this example. You will now create the account-level subscription filter policy in the sending account. In this example, the filter causes log events whose `userIdentity.type` field is `AssumedRole`, in all but two log groups, to be delivered to the destination that you created previously.

```
aws logs put-account-policy \
    --policy-name "CrossAccountFirehoseExamplePolicy" \
    --policy-type "SUBSCRIPTION_FILTER_POLICY" \
    --policy-document '{"DestinationArn":"arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination", "FilterPattern": "{$.userIdentity.type = AssumedRole}", "Distribution": "Random"}' \
    --selection-criteria 'LogGroupName NOT IN ["LogGroupToExclude1", "LogGroupToExclude2"]' \
    --scope "ALL"
```

The sending account's log groups and the destination must be in the same Amazon Region. However, the destination can point to an Amazon resource such as a Firehose stream that is located in a different Region.

# Validating the flow of log events
<a name="ValidateLogEventFlowFirehose-Account"></a>

After you create the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern and selection criteria to the Firehose delivery stream. The data starts appearing in your Amazon S3 bucket based on the time buffer interval that is set on the Firehose delivery stream. Once enough time has passed, you can verify your data by checking the Amazon S3 bucket. To check the bucket, enter the following command:

```
aws s3api list-objects --bucket 'amzn-s3-demo-bucket' 
```

The output of that command will be similar to the following:

```
{
    "Contents": [
        {
            "Key": "2021/02/02/08/my-delivery-stream-1-2021-02-02-08-55-24-5e6dc317-071b-45ba-a9d3-4805ba39c2ba",
            "LastModified": "2023-02-02T09:00:26+00:00",
            "ETag": "\"EXAMPLEa817fb88fc770b81c8f990d\"",
            "Size": 198,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "firehose+2test",
                "ID": "EXAMPLE27fd05889c665d2636218451970ef79400e3d2aecca3adb1930042e0"
            }
        }
    ]
}
```

You can then retrieve a specific object from the bucket by entering the following command. Replace the value of `key` with the value you found in the previous command.

```
aws s3api get-object --bucket 'amzn-s3-demo-bucket' --key '2021/02/02/08/my-delivery-stream-1-2021-02-02-08-55-24-5e6dc317-071b-45ba-a9d3-4805ba39c2ba' testfile.gz
```

The data in the Amazon S3 object is compressed with the gzip format. You can examine the raw data from the command line using one of the following commands:

Linux:

```
zcat testfile.gz
```

macOS:

```
zcat < testfile.gz
```
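If you prefer to inspect the download in a script, the same decompression can be done with Python's standard library (a sketch; the file name matches the `get-object` example above):

```python
import gzip

def read_log_payload(path: str) -> str:
    """Decompress a gzip file such as the testfile.gz downloaded above.

    gzip.open transparently handles concatenated gzip members, which can
    occur when Firehose batches several compressed records into one object.
    """
    with gzip.open(path, mode="rt", encoding="utf-8") as f:
        return f.read()

# Example usage:
# print(read_log_payload("testfile.gz"))
```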

# Modifying destination membership at runtime
<a name="ModifyDestinationMembershipFirehose-Account"></a>

You might encounter situations where you have to add or remove log senders from a destination that you own. You can use the `PutDestinationPolicy` action on your destination with a new access policy. In the following example, a previously added account **111111111111** is stopped from sending any more log data, and account **333333333333** is enabled.

1. Fetch the policy that is currently associated with the destination **testFirehoseDestination** and make a note of the **AccessPolicy**:

   ```
   aws logs describe-destinations \
       --destination-name-prefix "testFirehoseDestination"
   ```

   The returned data might look like this.

   ```
   {
       "destinations": [
           {
               "destinationName": "testFirehoseDestination",
               "targetArn": "arn:aws:firehose:us-east-1:222222222222:deliverystream/my-delivery-stream",
               "roleArn": "arn:aws:iam:: 222222222222:role/CWLtoKinesisFirehoseRole",
               "accessPolicy": "{\n  \"Version\" : \"2012-10-17\",\n  \"Statement\" : [\n    {\n      \"Sid\" : \"\",\n      \"Effect\" : \"Allow\",\n      \"Principal\" : {\n        \"AWS\" : \"111111111111 \"\n      },\n      \"Action\" : \"logs:PutSubscriptionFilter\",\n      \"Resource\" : \"arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination\"\n    }\n  ]\n}\n\n",
               "arn": "arn:aws:logs:us-east-1: 222222222222:destination:testFirehoseDestination",
               "creationTime": 1612256124430
           }
       ]
   }
   ```
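   The `accessPolicy` field is a JSON document serialized into a string. If you want to check programmatically which accounts currently have access, you can decode it; a small sketch (the function name is illustrative):

   ```python
   import json

   def allowed_principals(access_policy: str) -> list[str]:
       """Collect the AWS principals from a destination access policy string,
       as returned in the accessPolicy field of describe-destinations."""
       policy = json.loads(access_policy)
       principals: list[str] = []
       for statement in policy.get("Statement", []):
           aws = statement.get("Principal", {}).get("AWS", [])
           principals.extend([aws] if isinstance(aws, str) else aws)
       return principals

   sample = json.dumps({
       "Version": "2012-10-17",
       "Statement": [{
           "Effect": "Allow",
           "Principal": {"AWS": "111111111111"},
           "Action": "logs:PutSubscriptionFilter",
           "Resource": "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination",
       }],
   })
   print(allowed_principals(sample))  # ['111111111111']
   ```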

1. Update the policy to reflect that account **111111111111** is stopped, and that account **333333333333** is enabled. Put this policy in the `~/NewAccessPolicy.json` file:

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement" : [
       {
         "Sid" : "",
         "Effect" : "Allow",
         "Principal" : {
           "AWS" : "333333333333 "
         },
         "Action" : ["logs:PutSubscriptionFilter","logs:PutAccountPolicy"],
         "Resource" : "arn:aws-cn:logs:us-east-1:222222222222:destination:testFirehoseDestination"
       }
     ]
   }
   ```

------

1. Use the following command to associate the policy defined in the **NewAccessPolicy.json** file with the destination:

   ```
   aws logs put-destination-policy \
       --destination-name "testFirehoseDestination" \                                                                              
       --access-policy file://~/NewAccessPolicy.json
   ```

   This eventually disables the log events from account ID **111111111111**. Log events from account ID **333333333333** start flowing to the destination as soon as the owner of account **333333333333** creates a subscription filter.

# Confused deputy prevention
<a name="Subscriptions-confused-deputy"></a>

The confused deputy problem is a security issue where an entity that doesn't have permission to perform an action can coerce a more-privileged entity to perform the action. In Amazon, cross-service impersonation can result in the confused deputy problem. Cross-service impersonation can occur when one service (the calling service) calls another service (the called service). The calling service can be manipulated to use its permissions to act on another customer's resources in a way it should not otherwise have permission to access. To prevent this, Amazon provides tools that help you protect your data for all services with service principals that have been given access to resources in your account.

We recommend using the [`aws:SourceArn`](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn), [`aws:SourceAccount`](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount), [`aws:SourceOrgID`](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceorgid), and [`aws:SourceOrgPaths`](https://docs.amazonaws.cn/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceorgpaths) global condition context keys in resource policies to limit the permissions that the policy gives another service to the resource. Use `aws:SourceArn` to associate only one resource with cross-service access. Use `aws:SourceAccount` to let any resource in that account be associated with the cross-service use. Use `aws:SourceOrgID` to allow any resource from any account within an organization to be associated with the cross-service use. Use `aws:SourceOrgPaths` to associate any resource from accounts within an Amazon Organizations path with the cross-service use. For more information about using and understanding paths, see [Understand the Amazon Organizations entity path](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_policies_last-accessed-view-data-orgs.html#access_policies_last-accessed-viewing-orgs-entity-path).

The most effective way to protect against the confused deputy problem is to use the `aws:SourceArn` global condition context key with the full ARN of the resource. If you don't know the full ARN of the resource or if you are specifying multiple resources, use the `aws:SourceArn` global context condition key with wildcard characters (`*`) for the unknown portions of the ARN. For example, `arn:aws-cn:servicename:*:123456789012:*`. 

If the `aws:SourceArn` value does not contain the account ID, such as an Amazon S3 bucket ARN, you must use both `aws:SourceAccount` and `aws:SourceArn` to limit permissions.

To protect against the confused deputy problem at scale, use the `aws:SourceOrgID` or `aws:SourceOrgPaths` global condition context key with the organization ID or organization path of the resource in your resource-based policies. Policies that include the `aws:SourceOrgID` or `aws:SourceOrgPaths` key will automatically include the correct accounts and you don't have to manually update the policies when you add, remove, or move accounts in your organization.

The policies documented for granting access to CloudWatch Logs to write data to Amazon Kinesis Data Streams and Firehose in [Step 1: Create a destination](CreateDestination.md) and [Step 2: Create a destination](CreateFirehoseStreamDestination.md) show how you can use the `aws:SourceArn` global condition context key to help prevent the confused deputy problem. 

# Log recursion prevention
<a name="Subscriptions-recursion-prevention"></a>

Subscription filters carry a risk of infinite log recursion, which, if not prevented, can lead to a large increase in ingestion billing in both CloudWatch Logs and your destination. This can occur when a subscription filter is associated with a log group that receives log events as a result of your subscription delivery workflow. The logs ingested into the log group are delivered to the destination, causing the log group to ingest more logs, which are then forwarded again to the destination, creating a recursion loop.

For example, consider a subscription filter with the destination as Firehose, which delivers log events to Amazon S3. Additionally, there is also a Lambda function that processes new events delivered to Amazon S3 and produces some logs itself. If the subscription filter is applied to the Lambda function’s log group, then the log events produced by the function will get forwarded to Firehose and Amazon S3 at the destination, which will then invoke the function again, causing more logs to be produced and forwarded to Firehose and Amazon S3, causing another invocation of the function and so on. This will occur in an infinite loop, leading to an unexpected billing increase on log ingestion, Firehose, and Amazon S3.

If the Lambda function is attached to a VPC with flow logs enabled for CloudWatch Logs, then the VPC’s log group can cause a log recursion as well.

We recommend that you don't apply subscription filters to log groups that are a part of your subscription delivery workflow. For account-level subscription filters, use the `selectionCriteria` parameter in the `PutAccountPolicy` API to exclude these log groups from the policy.
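For example, if a hypothetical Lambda log group such as `/aws/lambda/s3-processor` is part of your delivery workflow, you could assemble the `PutAccountPolicy` parameters so that `selectionCriteria` excludes it. The following sketch only builds the parameters rather than calling the API, and the names are illustrative:

```python
import json

def account_policy_params(policy_name: str, destination_arn: str,
                          filter_pattern: str, excluded_log_groups: list[str]) -> dict:
    """Assemble PutAccountPolicy parameters, excluding the log groups that
    belong to the subscription delivery workflow via selectionCriteria."""
    excluded = ", ".join(f'"{name}"' for name in excluded_log_groups)
    return {
        "policyName": policy_name,
        "policyType": "SUBSCRIPTION_FILTER_POLICY",
        "policyDocument": json.dumps({
            "DestinationArn": destination_arn,
            "FilterPattern": filter_pattern,
            "Distribution": "Random",
        }),
        "selectionCriteria": f"LogGroupName NOT IN [{excluded}]",
        "scope": "ALL",
    }

params = account_policy_params(
    "RecursionSafePolicy",  # hypothetical policy name
    "arn:aws:logs:us-east-1:222222222222:destination:testFirehoseDestination",
    "",  # an empty pattern matches all log events
    ["/aws/lambda/s3-processor"],  # hypothetical workflow log group
)
print(params["selectionCriteria"])  # LogGroupName NOT IN ["/aws/lambda/s3-processor"]
```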

When excluding log groups, consider the following Amazon services that produce logs and may be a part of your subscription delivery workflows: 
+ Amazon ECS with Fargate
+ Lambda
+ Amazon Step Functions
+ Amazon VPC flow logs that are enabled for CloudWatch Logs

**Note**  
Log events produced by a Lambda destination’s log group will not be forwarded back to the Lambda function for an account-level subscription filter policy. In this case, excluding the destination Lambda function’s log group using `selectionCriteria` is not required for account subscription policies.