

# Invoking Lambda with events from other Amazon services

Some Amazon Web Services services can directly invoke Lambda functions using *triggers*. These services push events to Lambda, and the function is invoked immediately when the specified event occurs. Triggers are suitable for discrete events and real-time processing. When you [create a trigger using the Lambda console](#lambda-invocation-trigger), the console interacts with the corresponding Amazon service to configure the event notification on that service. The trigger is actually stored and managed by the service that generates the events, not by Lambda.

Events are structured in JSON format. The structure varies depending on the service that generates the event and the event type, but every event contains the data that the function needs to process it.

A function can have multiple triggers. Each trigger acts as a client invoking your function independently, and each event that Lambda passes to your function has data from only one trigger. Lambda converts the event document into an object and passes it to your function handler.
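
For example, assuming a Python runtime, a minimal handler sketch that receives the deserialized event (the return shape here is illustrative, not prescribed by Lambda):

```python
import json

def lambda_handler(event, context):
    # The runtime has already deserialized the JSON event document into a dict.
    # Logging the full event is a quick way to inspect a trigger's structure.
    print(json.dumps(event))
    return {"status": "processed", "keys": sorted(event)}
```

Printing the event during development is a simple way to see exactly what a given trigger sends.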

Depending on the service, the event-driven invocation can be [synchronous](invocation-sync.md) or [asynchronous](invocation-async.md).
+ For synchronous invocation, the service that generates the event waits for the response from your function. That service defines the data that the function needs to return in the response. The service controls the error strategy, such as whether to retry on errors.
+ For asynchronous invocation, Lambda queues the event before passing it to your function. When Lambda queues the event, it immediately sends a success response to the service that generated the event. After the function processes the event, Lambda doesn’t return a response to the event-generating service.
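
Both modes map to the `InvocationType` parameter of the Lambda `Invoke` API. The following sketch builds the corresponding request parameters (the helper function is ours, not part of any SDK):

```python
import json

def build_invoke_params(function_name, event, asynchronous=False):
    # Sketch of the Lambda Invoke API parameters (not an official helper).
    return {
        "FunctionName": function_name,
        # "Event" tells Lambda to queue the event and return a success
        # response immediately; "RequestResponse" waits for the result.
        "InvocationType": "Event" if asynchronous else "RequestResponse",
        "Payload": json.dumps(event),
    }
```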

## Creating a trigger


The easiest way to create a trigger is to use the Lambda console. When you create a trigger using the console, Lambda automatically adds the required permissions to the function's [resource-based policy](access-control-resource-based.md).

**To create a trigger using the Lambda console**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function you want to create a trigger for.

1. In the **Function overview** pane, choose **Add trigger**.

1. Select the Amazon Web Services service that you want to invoke your function.

1. Fill out the options in the **Trigger configuration** pane, and then choose **Add**. The available configuration options differ depending on the Amazon Web Services service that you choose to invoke your function.
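
Behind the scenes, the console grants the event source permission to invoke your function by adding a statement to the function's resource-based policy through the `AddPermission` API. A rough sketch of the parameters involved (the helper function, statement ID, and example values are illustrative):

```python
def build_trigger_permission(function_name, service_principal, source_arn):
    # Sketch of AddPermission API parameters that a trigger configuration
    # supplies; the statement ID and example values are illustrative.
    return {
        "FunctionName": function_name,
        "StatementId": f"{function_name}-trigger",
        "Action": "lambda:InvokeFunction",
        "Principal": service_principal,  # e.g. "sns.amazonaws.com"
        "SourceArn": source_arn,         # the resource that generates events
    }
```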

## Services that can invoke Lambda functions

The following table lists services that can invoke Lambda functions.



| Service | Method of invocation | 
| --- | --- | 
|  [Amazon Managed Streaming for Apache Kafka](with-msk.md)  |  [Event source mapping](invocation-eventsourcemapping.md)  | 
|  [Self-managed Apache Kafka](with-kafka.md)  |  [Event source mapping](invocation-eventsourcemapping.md)  | 
|  [Amazon API Gateway](services-apigateway.md)  |  Event-driven; synchronous invocation  | 
|  [Amazon CloudFormation](services-cloudformation.md)  |  Event-driven; asynchronous invocation  | 
|  [Amazon CloudWatch Logs](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaFunctionExample)  |  Event-driven; asynchronous invocation  | 
|  [Amazon CodeCommit](https://docs.amazonaws.cn/codecommit/latest/userguide/how-to-notify-lambda-cc.html)  |  Event-driven; asynchronous invocation  | 
|  [Amazon CodePipeline](https://docs.amazonaws.cn/codepipeline/latest/userguide/actions-invoke-lambda-function.html)  |  Event-driven; asynchronous invocation  | 
|  [Amazon Cognito](https://docs.amazonaws.cn/cognito/latest/developerguide/cognito-events.html)  |  Event-driven; synchronous invocation  | 
|  [Amazon Config](governance-config.md)  |  Event-driven; asynchronous invocation  | 
|  [Amazon Connect](https://docs.amazonaws.cn/connect/latest/adminguide/connect-lambda-functions.html)  |  Event-driven; synchronous invocation  | 
|  [Amazon DocumentDB](with-documentdb.md)  |  [Event source mapping](invocation-eventsourcemapping.md)  | 
|  [Amazon DynamoDB](with-ddb.md)  |  [Event source mapping](invocation-eventsourcemapping.md)  | 
|  [Elastic Load Balancing (Application Load Balancer)](services-alb.md)  |  Event-driven; synchronous invocation  | 
|  [Amazon EventBridge (CloudWatch Events)](https://docs.amazonaws.cn/eventbridge/latest/userguide/eb-what-is.html)  |  Event-driven; asynchronous invocation (event buses), synchronous or asynchronous invocation (pipes and schedules)  | 
|  [Amazon IoT](services-iot.md)  |  Event-driven; asynchronous invocation  | 
|  [Amazon Kinesis](with-kinesis.md)  |  [Event source mapping](invocation-eventsourcemapping.md)  | 
|  [Amazon Data Firehose](https://docs.amazonaws.cn/firehose/latest/dev/data-transformation.html)  |  Event-driven; synchronous invocation  | 
|  [Amazon Lex](https://docs.amazonaws.cn/lexv2/latest/dg/lambda.html)  |  Event-driven; synchronous invocation  | 
|  [Amazon MQ](with-mq.md)  |  [Event source mapping](invocation-eventsourcemapping.md)  | 
|  [Amazon Simple Email Service](https://docs.amazonaws.cn/ses/latest/dg/receiving-email-action-lambda.html)  |  Event-driven; asynchronous invocation  | 
|  [Amazon Simple Notification Service](with-sns.md)  |  Event-driven; asynchronous invocation  | 
|  [Amazon Simple Queue Service](with-sqs.md)  |  [Event source mapping](invocation-eventsourcemapping.md)  | 
|  [Amazon Simple Storage Service (Amazon S3)](with-s3.md)  |  Event-driven; asynchronous invocation  | 
|  [Amazon Simple Storage Service Batch](services-s3-batch.md)  |  Event-driven; synchronous invocation  | 
|  [Secrets Manager](https://docs.amazonaws.cn/secretsmanager/latest/userguide/rotate-secrets_lambda.html)  |  Secret rotation  | 
|  [Amazon Step Functions](https://docs.amazonaws.cn/step-functions/latest/dg/connect-lambda.html)  |  Event-driven; synchronous or asynchronous invocation  | 
|  [Amazon VPC Lattice](https://docs.amazonaws.cn/vpc-lattice/latest/ug/lambda-functions.html)  |  Event-driven; synchronous invocation  | 

# Using Lambda with Apache Kafka

Lambda supports [Apache Kafka](https://kafka.apache.org/) as an [event source](invocation-eventsourcemapping.md). Apache Kafka is an open-source event streaming platform designed to handle high-throughput, real-time data pipelines and streaming applications. There are two main ways to use Lambda with Apache Kafka:
+ [Using Lambda with Amazon MSK](with-msk.md) – Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully-managed service by Amazon. Amazon MSK helps automate management of your Kafka infrastructure, including provisioning, patching, and scaling.
+ [Using Lambda with self-managed Apache Kafka](with-kafka.md) – In Amazon terminology, a self-managed cluster includes non-Amazon hosted Kafka clusters. For example, you can use Lambda with a Kafka cluster hosted by a non-Amazon cloud provider such as [Confluent Cloud](https://www.confluent.io/confluent-cloud/) or [Redpanda](https://www.redpanda.com/).

When deciding between Amazon MSK and self-managed Apache Kafka, consider your operational needs and control requirements. Amazon MSK is a better choice if you want Amazon to quickly help you manage a scalable, production-ready Kafka setup with minimal operational overhead. It simplifies security, monitoring, and high availability, helping you focus on application development rather than infrastructure management. On the other hand, self-managed Apache Kafka is better suited for use cases running on non-Amazon hosted environments, including on-premises clusters.

**Topics**
+ [Using Lambda with Amazon MSK](with-msk.md)
+ [Using Lambda with self-managed Apache Kafka](with-kafka.md)
+ [Apache Kafka event poller scaling modes in Lambda](kafka-scaling-modes.md)
+ [Apache Kafka polling and stream starting positions in Lambda](kafka-starting-positions.md)
+ [Customizable consumer group ID in Lambda](kafka-consumer-group-id.md)
+ [Filtering events from Amazon MSK and self-managed Apache Kafka event sources](kafka-filtering.md)
+ [Using schema registries with Kafka event sources in Lambda](services-consume-kafka-events.md)
+ [Low latency processing for Kafka event sources](with-kafka-low-latency.md)
+ [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)
+ [Capturing discarded batches for Amazon MSK and self-managed Apache Kafka event sources](kafka-on-failure.md)
+ [Using a Kafka topic as an on-failure destination](kafka-on-failure-destination.md)
+ [Kafka event source mapping logging](esm-logging.md)
+ [Troubleshooting Kafka event source mapping errors](with-kafka-troubleshoot.md)

# Using Lambda with Amazon MSK

[Amazon Managed Streaming for Apache Kafka (Amazon MSK)](https://docs.amazonaws.cn/msk/latest/developerguide/what-is-msk.html) is a fully-managed service that you can use to build and run applications that use Apache Kafka to process streaming data. Amazon MSK simplifies the setup, scaling, and management of Kafka clusters. Amazon MSK also makes it easier to configure your application for multiple Availability Zones and for security with Amazon Identity and Access Management (IAM).

This chapter explains how to use an Amazon MSK cluster as an event source for your Lambda function. The general process for integrating Amazon MSK with Lambda involves the following steps:

1. **[Cluster and network setup](with-msk-cluster-network.md)** – First, set up your [Amazon MSK cluster](https://docs.amazonaws.cn/msk/latest/developerguide/what-is-msk.html). This includes the correct networking configuration to allow Lambda to access your cluster.

1. **[Event source mapping setup](with-msk-configure.md)** – Then, create the [event source mapping](invocation-eventsourcemapping.md) resource that Lambda needs to securely connect your Amazon MSK cluster to your function.

1. **[Function and permissions setup](with-msk-permissions.md)** – Finally, ensure that your function is correctly set up, and has the necessary permissions in its [execution role](lambda-intro-execution-role.md).

**Note**  
You can now create and manage your Amazon MSK event source mappings directly from either the Lambda or the Amazon MSK console. Both consoles offer the option to automatically handle the setup of the necessary Lambda execution role permissions for a more streamlined configuration process.

For examples on how to set up a Lambda integration with an Amazon MSK cluster, see [Tutorial: Using an Amazon MSK event source mapping to invoke a Lambda function](services-msk-tutorial.md), [Using Amazon MSK as an event source for Amazon Lambda](https://amazonaws-china.com/blogs/compute/using-amazon-msk-as-an-event-source-for-aws-lambda/) on the Amazon Compute Blog, and [Amazon MSK Lambda Integration](https://amazonmsk-labs.workshop.aws/en/msklambda.html) in the Amazon MSK Labs.

**Topics**
+ [Example event](#msk-sample-event)
+ [Configuring your Amazon MSK cluster and Amazon VPC network for Lambda](with-msk-cluster-network.md)
+ [Configuring Lambda permissions for Amazon MSK event source mappings](with-msk-permissions.md)
+ [Configuring Amazon MSK event sources for Lambda](with-msk-configure.md)
+ [Tutorial: Using an Amazon MSK event source mapping to invoke a Lambda function](services-msk-tutorial.md)

## Example event


Lambda sends the batch of messages in the event parameter when it invokes your function. The event payload contains an array of messages. Each array item contains details of the Amazon MSK topic and partition identifier, together with a timestamp and a base64-encoded message.

```
{
   "eventSource":"aws:kafka",
   "eventSourceArn":"arn:aws-cn:kafka:cn-north-1:123456789012:cluster/vpc-2priv-2pub/751d2973-a626-431c-9d4e-d7975eb44dd7-2",
   "bootstrapServers":"b-2.demo-cluster-1.a1bcde.c1.kafka.cn-north-1.amazonaws.com.cn:9092,b-1.demo-cluster-1.a1bcde.c1.kafka.cn-north-1.amazonaws.com.cn:9092",
   "records":{
      "mytopic-0":[
         {
            "topic":"mytopic",
            "partition":0,
            "offset":15,
            "timestamp":1545084650987,
            "timestampType":"CREATE_TIME",
            "key":"abcDEFghiJKLmnoPQRstuVWXyz1234==",
            "value":"SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
            "headers":[
               {
                  "headerKey":[
                     104,
                     101,
                     97,
                     100,
                     101,
                     114,
                     86,
                     97,
                     108,
                     117,
                     101
                  ]
               }
            ]
         }
      ]
   }
}
```
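
Assuming a Python runtime, a minimal handler sketch that walks this structure and base64-decodes each record value (the handler name and return shape are illustrative):

```python
import base64

def lambda_handler(event, context):
    # Walk each topic-partition in the MSK payload and base64-decode the
    # message values.
    decoded = []
    for records in event["records"].values():
        for record in records:
            value = base64.b64decode(record["value"]).decode("utf-8")
            decoded.append((record["topic"], record["partition"],
                            record["offset"], value))
    return decoded
```

Note that keys and header values are also base64- or byte-encoded and would need similar decoding if your function uses them.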

# Configuring your Amazon MSK cluster and Amazon VPC network for Lambda

To connect your Amazon Lambda function to your Amazon MSK cluster, you need to correctly configure your cluster and the [Amazon Virtual Private Cloud (VPC)](https://docs.amazonaws.cn/vpc/latest/userguide/what-is-amazon-vpc.html) it resides in. This page describes how to configure your cluster and VPC. If your cluster and VPC are already configured properly, see [Configuring Amazon MSK event sources for Lambda](with-msk-configure.md) to configure the event source mapping.

**Topics**
+ [Overview of network configuration requirements for Lambda and MSK integrations](#msk-network-requirements)
+ [Configuring a NAT gateway for an MSK event source](#msk-nat-gateway)
+ [Configuring Amazon PrivateLink endpoints for an MSK event source](#msk-vpc-privatelink)

## Overview of network configuration requirements for Lambda and MSK integrations


The networking configuration required for a Lambda and MSK integration depends on the network architecture of your application. There are three main resources involved in this integration: the Amazon MSK cluster, the Lambda function, and the Lambda event source mapping. Each of these resources resides in a different VPC:
+ Your Amazon MSK cluster typically resides in a private subnet of a VPC that you manage.
+ Your Lambda function resides in an Amazon-managed VPC owned by Lambda.
+ Your Lambda event source mapping resides in another Amazon-managed VPC owned by Lambda, separate from the VPC that contains your function.

The [event source mapping](invocation-eventsourcemapping.md) is the intermediary resource between the MSK cluster and the Lambda function. The event source mapping has two primary jobs. First, it polls your MSK cluster for new messages. Then, it invokes your Lambda function with those messages. Since these three resources are in different VPCs, both the poll and invoke operations require cross-VPC network calls.

The network configuration requirements for your event source mapping depend on whether it uses [provisioned mode](invocation-eventsourcemapping.md#invocation-eventsourcemapping-provisioned-mode) or on-demand mode, as shown in the following diagram:

![](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/MSK-esm-network-overview.png)


The way that the Lambda event source mapping polls your MSK cluster for new messages is the same in both modes. To establish a connection between your event source mapping and your MSK cluster, Lambda creates a [hyperplane ENI](configuration-vpc.md#configuration-vpc-enis) (or reuses an existing one, if available) in your private subnet to establish a secure connection. As illustrated in the diagram, this hyperplane ENI uses the subnet and security group configuration of your MSK cluster, not your Lambda function.

After polling the message from the cluster, the way Lambda invokes your function is different in each mode:
+ In provisioned mode, Lambda automatically handles the connection between the event source mapping VPC and the function VPC. So, you don’t need any additional networking components to successfully invoke your function.
+ In on-demand mode, your Lambda event source mapping invokes your function via a path through your customer-managed VPC. Because of this, you need to configure either a [NAT gateway](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-nat-gateway.html) in the public subnet of your VPC, or [Amazon PrivateLink](https://docs.amazonaws.cn/vpc/latest/privatelink/what-is-privatelink.html) endpoints in the private subnet of the VPC that provide access to Lambda, [Amazon Security Token Service (STS)](https://docs.amazonaws.cn/STS/latest/APIReference/welcome.html), and optionally, [Amazon Secrets Manager](https://docs.amazonaws.cn/secretsmanager/latest/userguide/intro.html). Correctly configuring either one of these options allows a connection between your VPC and the Lambda-managed runtime VPC, which is necessary to invoke your function.

A NAT gateway allows resources in your private subnet to access the public internet. Using this configuration means your traffic traverses the internet before invoking the Lambda function. Amazon PrivateLink endpoints allow private subnets to securely connect to Amazon services or other private VPC resources without traversing the public internet. See [Configuring a NAT gateway for an MSK event source](#msk-nat-gateway) or [Configuring Amazon PrivateLink endpoints for an MSK event source](#msk-vpc-privatelink) for details on how to configure these resources.

So far, we’ve assumed that your MSK cluster resides in a private subnet within your VPC, which is the more common case. However, even if your MSK cluster is in a public subnet within your VPC, you must configure Amazon PrivateLink endpoints to enable a secure connection. The following table summarizes the networking configuration requirements based on how you configure your MSK cluster and Lambda event source mapping:


| MSK cluster location (in customer-managed VPC) | Lambda event source mapping scaling mode | Required networking configuration | 
| --- | --- | --- | 
|  Private subnet  |  On-demand mode  |  NAT gateway (in your VPC's public subnet), or Amazon PrivateLink endpoints (in your VPC's private subnet) to enable access to Lambda, Amazon STS, and optionally, Secrets Manager.  | 
|  Public subnet  |  On-demand mode  |  Amazon PrivateLink endpoints (in your VPC's public subnet) to enable access to Lambda, Amazon STS, and optionally, Secrets Manager.  | 
|  Private subnet  |  Provisioned mode  |  None  | 
|  Public subnet  |  Provisioned mode  |  None  | 

In addition, the security groups associated with your MSK cluster must allow traffic over the correct ports. Ensure that you have the following security group rules configured:
+ **Inbound rules** – Allow all traffic on the default broker port. The port that MSK uses depends on the type of authentication on the cluster: `9098` for IAM authentication, `9096` for SASL/SCRAM, and `9094` for TLS. Alternatively, you can use a self-referencing security group rule to allow access from instances within the same security group.
+ **Outbound rules** – Allow all traffic on port `443` for external destinations if your function needs to communicate with other Amazon services. Alternatively, you can use a self-referencing security group rule to limit access to the broker if you don’t need to communicate with other Amazon services.
+ **Amazon VPC endpoint inbound rules** – If you’re using an Amazon VPC endpoint, the security group associated with the endpoint must allow inbound traffic on port `443` from the cluster’s security group.
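
As a quick reference, the broker ports above can be captured in a small lookup (a sketch; the names are ours, the port numbers are from the inbound rules listed in this section):

```python
# Default Amazon MSK broker ports by cluster authentication type, as
# listed in the inbound rules above.
MSK_BROKER_PORTS = {
    "iam": 9098,         # IAM authentication
    "sasl_scram": 9096,  # SASL/SCRAM
    "tls": 9094,         # TLS
}

def broker_port(auth_type):
    # Look up the default broker port for an authentication type.
    return MSK_BROKER_PORTS[auth_type.lower()]
```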

## Configuring a NAT gateway for an MSK event source


You can configure a NAT gateway to allow your event source mapping to poll messages from your cluster, and invoke the function via a path through your VPC. This is required only if your event source mapping uses on-demand mode, and your cluster resides within a private subnet of your VPC. If your cluster resides in a public subnet of your VPC, or your event source mapping uses provisioned mode, you don’t need to configure a NAT gateway.

A [NAT gateway](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-nat-gateway.html) allows resources in a private subnet to access the public internet. If you need private connectivity to Lambda, see [Configuring Amazon PrivateLink endpoints for an MSK event source](#msk-vpc-privatelink) instead.

After you configure your NAT gateway, you must configure the appropriate route tables. This allows traffic from your private subnet to route to the public internet via the NAT gateway.

![](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/MSK-NAT-Gateway.png)


The following steps guide you through configuring a NAT gateway using the console. Repeat these steps as necessary for each Availability Zone (AZ).

**To configure a NAT gateway and proper routing (console)**

1. Follow the steps in [Create a NAT gateway](https://docs.amazonaws.cn/vpc/latest/userguide/nat-gateway-working-with.html), noting the following:
   + NAT gateways should always reside in a public subnet. Create NAT gateways with [public connectivity](https://docs.amazonaws.cn/vpc/latest/userguide/vpc-nat-gateway.html).
   + If your MSK cluster is replicated across multiple AZs, create one NAT gateway per AZ. For example, in each AZ, your VPC should have one private subnet containing your cluster, and one public subnet containing your NAT gateway. For a setup with three AZs, you’ll have three private subnets, three public subnets, and three NAT gateways.

1. After you create your NAT gateway, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and choose **Route tables** in the left menu.

1. Choose **Create route table**.

1. Associate this route table with the VPC that contains your MSK cluster. Optionally, enter a name for your route table.

1. Choose **Create route table**.

1. Choose the route table you just created.

1. Under the **Subnet associations** tab, choose **Edit subnet associations**.
   + Associate this route table with the private subnet that contains your MSK cluster.

1. Choose **Edit routes**.

1. Choose **Add route**:

   1. For **Destination**, choose `0.0.0.0/0`.

   1. For **Target**, choose **NAT gateway**.

   1. In the search box, choose the NAT gateway that you created in step 1. It should be the NAT gateway in the same AZ as the private subnet that contains your MSK cluster (the subnet that you associated with this route table earlier).

1. Choose **Save changes**.

## Configuring Amazon PrivateLink endpoints for an MSK event source


You can configure Amazon PrivateLink endpoints to allow your event source mapping to poll messages from your cluster and invoke the function via a path through your VPC. These endpoints should allow your MSK cluster to access the following:
+ The Lambda service
+ The [Amazon Security Token Service (STS)](https://docs.amazonaws.cn/STS/latest/APIReference/welcome.html)
+ Optionally, the [Amazon Secrets Manager](https://docs.amazonaws.cn/secretsmanager/latest/userguide/intro.html) service. This is required if the secret required for cluster authentication is stored in Secrets Manager.

Configuring PrivateLink endpoints is required only if your event source mapping uses on-demand mode. If your event source mapping uses provisioned mode, Lambda establishes the required connections for you.

PrivateLink endpoints allow secure, private access to Amazon services over Amazon PrivateLink. Alternatively, to configure a NAT gateway to give your MSK cluster access to the public internet, see [Configuring a NAT gateway for an MSK event source](#msk-nat-gateway).

After you configure your VPC endpoints, your MSK cluster should have direct and private access to Lambda, STS, and optionally, Secrets Manager.

![](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/MSK-PrivateLink-Endpoints.png)


The following steps guide you through configuring a PrivateLink endpoint using the console. Repeat these steps as necessary for each endpoint (Lambda, STS, Secrets Manager).

**To configure VPC PrivateLink endpoints (console)**

1. Open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and choose **Endpoints** in the left menu.

1. Choose **Create endpoint**.

1. Optionally, enter a name for your endpoint.

1. For **Type**, choose **Amazon services**.

1. Under **Services**, start typing the name of the service. For example, to create an endpoint to connect to Lambda, type `lambda` in the search box.

1. In the results, you should see the service endpoint in the current Region. For example, in the US East (N. Virginia) Region, you should see `com.amazonaws.us-east-1.lambda`. Select this service.

1. Under **Network settings**, select the VPC that contains your MSK cluster.

1. Under **Subnets**, select the AZs that your MSK cluster is in.
   + For each AZ, under **Subnet ID**, choose the private subnet that contains your MSK cluster.

1. Under **Security groups**, select the security groups associated with your MSK cluster.

1. Choose **Create endpoint**.

By default, Amazon VPC endpoints have open IAM policies that allow broad access to resources. As a best practice, restrict these policies to allow only the actions that you need to perform using that endpoint. For example, for your Secrets Manager endpoint, you can modify its policy so that it allows only your function's execution role to access the secret.

**Example VPC endpoint policy – Secrets Manager endpoint**  

```
{
    "Statement": [
        {
            "Action": "secretsmanager:GetSecretValue",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws::iam::123456789012:role/my-role"
                ]
            },
            "Resource": "arn:aws::secretsmanager:us-west-2:123456789012:secret:my-secret"
        }
    ]
}
```

For the Amazon STS and Lambda endpoints, you can restrict the calling principal to the Lambda service principal. However, ensure that you use `"Resource": "*"` in these policies.

**Example VPC endpoint policy – Amazon STS endpoint**  

```
{
    "Statement": [
        {
            "Action": "sts:AssumeRole",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "lambda.amazonaws.com"
                ]
            },
            "Resource": "*"
        }
    ]
}
```

**Example VPC endpoint policy – Lambda endpoint**  

```
{
    "Statement": [
        {
            "Action": "lambda:InvokeFunction",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "lambda.amazonaws.com"
                ]
            },
            "Resource": "*"
        }
    ]
}
```

# Configuring Lambda permissions for Amazon MSK event source mappings

To access the Amazon MSK cluster, your function and event source mapping need permissions to perform various Amazon MSK API actions. Add these permissions to the function's [execution role](lambda-intro-execution-role.md). If your users need access, add the required permissions to the identity policy for the user or role.

The [AWSLambdaMSKExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaMSKExecutionRole.html) managed policy contains the minimum required permissions for Amazon MSK Lambda event source mappings. To simplify the permissions process, you can:
+ Attach the [AWSLambdaMSKExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaMSKExecutionRole.html) managed policy to your execution role.
+ Let the Lambda console generate the permissions for you. When you [create an Amazon MSK event source mapping in the console](msk-esm-create.md#msk-console), Lambda evaluates your execution role and alerts you if any permissions are missing. Choose **Generate permissions** to automatically update your execution role. This doesn't work if you manually created or modified your execution role policies, or if the policies are attached to multiple roles. Note that additional permissions may still be required in your execution role when using advanced features such as [On-Failure Destination](kafka-on-failure.md) or [Amazon Glue Schema Registry](services-consume-kafka-events.md).

**Topics**
+ [Required permissions](#msk-required-permissions)
+ [Optional permissions](#msk-optional-permissions)

## Required permissions


Your Lambda function execution role must have the following required permissions for Amazon MSK event source mappings. These permissions are included in the [AWSLambdaMSKExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaMSKExecutionRole.html) managed policy.

### CloudWatch Logs permissions


The following permissions allow Lambda to create and store logs in Amazon CloudWatch Logs.
+ [logs:CreateLogGroup](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html)
+ [logs:CreateLogStream](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogStream.html)
+ [logs:PutLogEvents](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html)

### MSK cluster permissions


The following permissions allow Lambda to access your Amazon MSK cluster on your behalf:
+ [kafka:DescribeCluster](https://docs.amazonaws.cn/msk/1.0/apireference/clusters-clusterarn.html)
+ [kafka:DescribeClusterV2](https://docs.amazonaws.cn/MSK/2.0/APIReference/v2-clusters-clusterarn.html)
+ [kafka:GetBootstrapBrokers](https://docs.amazonaws.cn/msk/1.0/apireference/clusters-clusterarn-bootstrap-brokers.html)

We recommend using [kafka:DescribeClusterV2](https://docs.amazonaws.cn/MSK/2.0/APIReference/v2-clusters-clusterarn.html) instead of [kafka:DescribeCluster](https://docs.amazonaws.cn/msk/1.0/apireference/clusters-clusterarn.html). The v2 permission works with both provisioned and serverless Amazon MSK clusters. You only need one of these permissions in your policy.

### VPC permissions


The following permissions allow Lambda to create and manage network interfaces when connecting to your Amazon MSK cluster:
+ [ec2:CreateNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_CreateNetworkInterface.html)
+ [ec2:DescribeNetworkInterfaces](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeNetworkInterfaces.html)
+ [ec2:DescribeVpcs](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
+ [ec2:DeleteNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DeleteNetworkInterface.html)
+ [ec2:DescribeSubnets](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
+ [ec2:DescribeSecurityGroups](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)
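
To sanity-check a custom policy against this minimum set, a sketch (the action list mirrors this section, using the recommended `kafka:DescribeClusterV2` variant; the helper function is ours):

```python
# Minimum actions for an MSK event source mapping, per this section
# (using the recommended kafka:DescribeClusterV2 variant).
REQUIRED_ACTIONS = {
    "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents",
    "kafka:DescribeClusterV2", "kafka:GetBootstrapBrokers",
    "ec2:CreateNetworkInterface", "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeVpcs", "ec2:DeleteNetworkInterface",
    "ec2:DescribeSubnets", "ec2:DescribeSecurityGroups",
}

def missing_actions(policy):
    # Return required actions that no Allow statement in the policy grants.
    granted = set()
    for statement in policy.get("Statement", []):
        if statement.get("Effect") == "Allow":
            actions = statement.get("Action", [])
            granted.update([actions] if isinstance(actions, str) else actions)
    return REQUIRED_ACTIONS - granted
```

Note that this check is deliberately simple: it ignores `Resource` and `Condition` elements, which also affect whether a permission is effective.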

## Optional permissions


 Your Lambda function might also need permissions to: 
+ Access cross-account Amazon MSK clusters. For cross-account event source mappings, you need [kafka:DescribeVpcConnection](https://docs.amazonaws.cn/msk/1.0/apireference/vpc-connection-arn.html) in the execution role. An IAM principal creating a cross-account event source mapping needs [kafka:ListVpcConnections](https://docs.amazonaws.cn/msk/1.0/apireference/vpc-connections.html).
+ Access your SCRAM secret, if you're using [SASL/SCRAM authentication](msk-cluster-auth.md#msk-sasl-scram). This lets your function use a username and password to connect to Kafka.
+ Describe your Secrets Manager secret, if you're using SASL/SCRAM or [mTLS authentication](msk-cluster-auth.md#msk-mtls). This allows your function to retrieve the credentials or certificates needed for secure connections.
+ Access your Amazon KMS customer managed key, if your Amazon Secrets Manager secret is encrypted with an Amazon KMS customer managed key.
+ Access your schema registry secrets, if you're using a schema registry with authentication:
  + For Amazon Glue Schema Registry: Your function needs `glue:GetRegistry` and `glue:GetSchemaVersion` permissions. These allow your function to look up and use the message format rules stored in Amazon Glue.
  + For [Confluent Schema Registry](https://docs.confluent.io/platform/current/schema-registry/security/index.html) with `BASIC_AUTH` or `CLIENT_CERTIFICATE_TLS_AUTH`: Your function needs `secretsmanager:GetSecretValue` permission for the secret containing the authentication credentials. This lets your function retrieve the username/password or certificates needed to access the Confluent Schema Registry.
  + For private CA certificates: Your function needs `secretsmanager:GetSecretValue` permission for the secret containing the certificate. This allows your function to verify the identity of schema registries that use custom certificates.
+ Access Kafka cluster consumer groups and poll messages from the topic, if you're using IAM authentication for the event source mapping.

These correspond to the following permissions:
+ [kafka:ListScramSecrets](https://docs.amazonaws.cn/msk/1.0/apireference/clusters-clusterarn-scram-secrets.html) - Allows listing of SCRAM secrets for Kafka authentication
+ [secretsmanager:GetSecretValue](https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_GetSecretValue.html) - Enables retrieval of secrets from Secrets Manager
+ [kms:Decrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_Decrypt.html) - Permits decryption of encrypted data using Amazon KMS
+ [glue:GetRegistry](https://docs.amazonaws.cn/glue/latest/webapi/API_GetRegistry.html) - Allows access to Amazon Glue Schema Registry
+ [glue:GetSchemaVersion](https://docs.amazonaws.cn/glue/latest/webapi/API_GetSchemaVersion.html) - Enables retrieval of specific schema versions from Amazon Glue Schema Registry
+ [kafka-cluster:Connect](https://docs.amazonaws.cn/service-authorization/latest/reference/list_apachekafkaapisforamazonmskclusters.html) - Grants permission to connect and authenticate to the cluster
+ [kafka-cluster:AlterGroup](https://docs.amazonaws.cn/service-authorization/latest/reference/list_apachekafkaapisforamazonmskclusters.html) - Grants permission to join groups on a cluster, equivalent to Apache Kafka's READ GROUP ACL
+ [kafka-cluster:DescribeGroup](https://docs.amazonaws.cn/service-authorization/latest/reference/list_apachekafkaapisforamazonmskclusters.html) - Grants permission to describe groups on a cluster, equivalent to Apache Kafka's DESCRIBE GROUP ACL
+ [kafka-cluster:DescribeTopic](https://docs.amazonaws.cn/service-authorization/latest/reference/list_apachekafkaapisforamazonmskclusters.html) - Grants permission to describe topics on a cluster, equivalent to Apache Kafka's DESCRIBE TOPIC ACL
+ [kafka-cluster:ReadData](https://docs.amazonaws.cn/service-authorization/latest/reference/list_apachekafkaapisforamazonmskclusters.html) - Grants permission to read data from topics on a cluster, equivalent to Apache Kafka's READ TOPIC ACL
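
As a sketch, the secret-related permissions above might appear in an execution role policy like the following. The ARNs are placeholders for your own cluster, secret, and KMS key:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kafka:ListScramSecrets",
      "Resource": "arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/*"
    },
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-secret-*"
    },
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}
```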

 Additionally, if you want to send records of failed invocations to an on-failure destination, you'll need the following permissions depending on the destination type: 
+ For Amazon SQS destinations: [sqs:SendMessage](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) - Allows sending messages to an Amazon SQS queue
+ For Amazon SNS destinations: [sns:Publish](https://docs.amazonaws.cn/sns/latest/api/API_Publish.html) - Permits publishing messages to an Amazon SNS topic
+ For Amazon S3 bucket destinations: [s3:PutObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html) and [s3:ListBucket](https://docs.amazonaws.cn/AmazonS3/latest/API/API_ListBucket.html) - Enables writing and listing objects in an Amazon S3 bucket

For troubleshooting authentication and authorization errors, see [Troubleshooting Kafka event source mapping errors](with-kafka-troubleshoot.md).

# Configuring Amazon MSK event sources for Lambda
Configure event source

To use an Amazon MSK cluster as an event source for your Lambda function, you create an [event source mapping](invocation-eventsourcemapping.md) that connects the two resources. This page describes how to create an event source mapping for Amazon MSK.

This page assumes that you've already properly configured your MSK cluster and the [Amazon Virtual Private Cloud (VPC)](https://docs.amazonaws.cn/vpc/latest/userguide/what-is-amazon-vpc.html) it resides in. If you need to set up your cluster or VPC, see [Configuring your Amazon MSK cluster and Amazon VPC network for Lambda](with-msk-cluster-network.md). To configure retry behavior for error handling, see [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md).

**Topics**
+ [Using an Amazon MSK cluster as an event source](#msk-esm-overview)
+ [Configuring Amazon MSK cluster authentication methods in Lambda](msk-cluster-auth.md)
+ [Creating a Lambda event source mapping for an Amazon MSK event source](msk-esm-create.md)
+ [Creating cross-account event source mappings in Lambda](msk-cross-account.md)
+ [All Amazon MSK event source configuration parameters in Lambda](msk-esm-parameters.md)

## Using an Amazon MSK cluster as an event source
Amazon MSK cluster as an event source

When you add your Apache Kafka or Amazon MSK cluster as a trigger for your Lambda function, the cluster is used as an [event source](invocation-eventsourcemapping.md).

Lambda reads event data from the Kafka topics that you specify as `Topics` in a [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) request, based on the [starting position](kafka-starting-positions.md) that you specify. After successful processing, Lambda commits the offsets of the processed records to your Kafka cluster.

Lambda reads messages sequentially for each Kafka topic partition. A single Lambda payload can contain messages from multiple partitions. When more records are available, Lambda continues processing records in batches, based on the `BatchSize` value that you specify in a [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) request, until your function catches up with the topic.

After Lambda processes each batch, it commits the offsets of the messages in that batch. If your function returns an error for any of the messages in a batch, Lambda retries the whole batch of messages until processing succeeds or the messages expire. You can send records that fail all retry attempts to an on-failure destination for later processing.
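
The batching and retry behavior described above shapes how handlers are typically written. The following is a minimal Python sketch of a handler for the standard `aws:kafka` event format, which groups records under `topic-partition` keys and base64-encodes each message value; any unhandled exception causes Lambda to retry the whole batch:

```python
import base64

def lambda_handler(event, context):
    """Process a batch of Kafka records from an Amazon MSK event source."""
    processed = 0
    # Records are grouped by "topic-partition" keys, e.g. "AWSKafkaTopic-0".
    for topic_partition, records in event["records"].items():
        for record in records:
            # Kafka message values (and keys) arrive base64-encoded.
            payload = base64.b64decode(record["value"]).decode("utf-8")
            print(f"{record['topic']}[{record['partition']}] "
                  f"offset={record['offset']}: {payload}")
            processed += 1
    # An unhandled exception above would make Lambda retry the entire batch.
    return {"batchItemCount": processed}
```

Because a single invocation can carry records from multiple partitions, the handler iterates per partition key to preserve per-partition ordering.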

**Note**  
While Lambda functions typically have a maximum timeout limit of 15 minutes, event source mappings for Amazon MSK, self-managed Apache Kafka, Amazon DocumentDB, and Amazon MQ for ActiveMQ and RabbitMQ only support functions with maximum timeout limits of 14 minutes.

# Configuring Amazon MSK cluster authentication methods in Lambda
Cluster authentication

Lambda needs permission to access your Amazon MSK cluster, retrieve records, and perform other tasks. Amazon MSK supports several ways to authenticate with your MSK cluster.

**Topics**
+ [Unauthenticated access](#msk-unauthenticated)
+ [SASL/SCRAM authentication](#msk-sasl-scram)
+ [Mutual TLS authentication](#msk-mtls)
+ [IAM authentication](#msk-iam-auth)
+ [How Lambda chooses a bootstrap broker](#msk-bootstrap-brokers)

## Unauthenticated access


If no clients access the cluster over the internet, you can use unauthenticated access.

## SASL/SCRAM authentication


Lambda supports [Simple Authentication and Security Layer/Salted Challenge Response Authentication Mechanism (SASL/SCRAM)](https://docs.amazonaws.cn/msk/latest/developerguide/msk-password-tutorial.html) authentication, with the SHA-512 hash function and Transport Layer Security (TLS) encryption. For Lambda to connect to the cluster, store the authentication credentials (username and password) in a Secrets Manager secret, and reference this secret when configuring your event source mapping.

For more information about using Secrets Manager, see [Sign-in credentials authentication with Secrets Manager](https://docs.amazonaws.cn/msk/latest/developerguide/msk-password.html) in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.

**Note**  
Amazon MSK doesn't support SASL/PLAIN authentication.

## Mutual TLS authentication


Mutual TLS (mTLS) provides two-way authentication between the client and the server. The client sends a certificate to the server for the server to verify the client. The server also sends a certificate to the client for the client to verify the server.

For Amazon MSK integrations with Lambda, your MSK cluster acts as the server, and Lambda acts as the client.
+ For Lambda to verify your MSK cluster, you configure a client certificate as a secret in Secrets Manager, and reference this certificate in your event source mapping configuration. The client certificate must be signed by a certificate authority (CA) in the server's trust store.
+ The MSK cluster also sends a server certificate to Lambda. The server certificate must be signed by a certificate authority (CA) in the Amazon trust store.

Amazon MSK doesn't support self-signed server certificates. All brokers in Amazon MSK use [public certificates](https://docs.amazonaws.cn/msk/latest/developerguide/msk-encryption.html) signed by [Amazon Trust Services CAs](https://www.amazontrust.com/repository/), which Lambda trusts by default.

### Configuring the mTLS secret


The `CLIENT_CERTIFICATE_TLS_AUTH` secret requires a certificate field and a private key field. For an encrypted private key, the secret requires a private key password. Both the certificate and private key must be in PEM format.

**Note**  
Lambda supports the [PBES1](https://datatracker.ietf.org/doc/html/rfc2898/#section-6.1) (but not PBES2) private key encryption algorithms.

The certificate field must contain a list of certificates, beginning with the client certificate, followed by any intermediate certificates, and ending with the root certificate. Each certificate must start on a new line with the following structure:

```
-----BEGIN CERTIFICATE-----
<certificate contents>
-----END CERTIFICATE-----
```

Secrets Manager supports secrets up to 65,536 bytes, which is enough space for long certificate chains.

The private key must be in [PKCS #8](https://datatracker.ietf.org/doc/html/rfc5208) format, with the following structure:

```
-----BEGIN PRIVATE KEY-----
<private key contents>
-----END PRIVATE KEY-----
```

For an encrypted private key, use the following structure:

```
-----BEGIN ENCRYPTED PRIVATE KEY-----
<private key contents>
-----END ENCRYPTED PRIVATE KEY-----
```

The following example shows the contents of a secret for mTLS authentication using an encrypted private key. For an encrypted private key, you include the private key password in the secret.

```
{
 "privateKeyPassword": "testpassword",
 "certificate": "-----BEGIN CERTIFICATE-----
MIIE5DCCAsygAwIBAgIRAPJdwaFaNRrytHBto0j5BA0wDQYJKoZIhvcNAQELBQAw
...
j0Lh4/+1HfgyE2KlmII36dg4IMzNjAFEBZiCRoPimO40s1cRqtFHXoal0QQbIlxk
cmUuiAii9R0=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFgjCCA2qgAwIBAgIQdjNZd6uFf9hbNC5RdfmHrzANBgkqhkiG9w0BAQsFADBb
...
rQoiowbbk5wXCheYSANQIfTZ6weQTgiCHCCbuuMKNVS95FkXm0vqVD/YpXKwA/no
c8PH3PSoAaRwMMgOSA2ALJvbRz8mpg==
-----END CERTIFICATE-----",
 "privateKey": "-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFKzBVBgkqhkiG9w0BBQ0wSDAnBgkqhkiG9w0BBQwwGgQUiAFcK5hT/X7Kjmgp
...
QrSekqF+kWzmB6nAfSzgO9IaoAaytLvNgGTckWeUkWn/V0Ck+LdGUXzAC4RxZnoQ
zp2mwJn2NYB7AZ7+imp0azDZb+8YG2aUCiyqb6PnnA==
-----END ENCRYPTED PRIVATE KEY-----"
}
```
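
Before storing the secret, a quick local sanity check of its shape can save a failed connection later. This sketch is an illustration, not an official validator; it checks the field names and PEM markers shown above, plus the 65,536-byte Secrets Manager size limit:

```python
import json

CERT_HEADER = "-----BEGIN CERTIFICATE-----"
KEY_HEADERS = ("-----BEGIN PRIVATE KEY-----",
               "-----BEGIN ENCRYPTED PRIVATE KEY-----")

def check_mtls_secret(secret: dict) -> list:
    """Return a list of problems found in an mTLS secret; empty means OK."""
    problems = []
    cert = secret.get("certificate", "")
    key = secret.get("privateKey", "")
    if CERT_HEADER not in cert:
        problems.append("certificate field has no PEM certificate block")
    if not key.startswith(KEY_HEADERS):
        problems.append("privateKey is not in a recognized PEM format")
    if key.startswith("-----BEGIN ENCRYPTED PRIVATE KEY-----") \
            and "privateKeyPassword" not in secret:
        problems.append("encrypted private key requires privateKeyPassword")
    # Secrets Manager limits a secret to 65,536 bytes.
    if len(json.dumps(secret).encode("utf-8")) > 65536:
        problems.append("secret exceeds the 65,536-byte limit")
    return problems
```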

For more information about mTLS for Amazon MSK, and instructions on how to generate a client certificate, see [Mutual TLS client authentication for Amazon MSK](https://docs.amazonaws.cn/msk/latest/developerguide/msk-authentication.html) in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.

## IAM authentication


You can use Amazon Identity and Access Management (IAM) to authenticate the identity of clients that connect to the MSK cluster. With IAM auth, Lambda relies on the permissions in your function's [execution role](lambda-intro-execution-role.md) to connect to the cluster, retrieve records, and perform other required actions. For a sample policy that contains the necessary permissions, see [ Create authorization policies for the IAM role](https://docs.amazonaws.cn/msk/latest/developerguide/create-iam-access-control-policies.html) in the *Amazon Managed Streaming for Apache Kafka Developer Guide*.

If IAM auth is active on your MSK cluster, and you don't provide a secret, Lambda automatically defaults to using IAM auth.

For more information about IAM authentication in Amazon MSK, see [IAM access control](https://docs.amazonaws.cn/msk/latest/developerguide/iam-access-control.html).

## How Lambda chooses a bootstrap broker


Lambda chooses a [bootstrap broker](https://docs.amazonaws.cn/msk/latest/developerguide/msk-get-bootstrap-brokers.html) based on the authentication methods available on your cluster, and whether you provide a secret for authentication. If you provide a secret for mTLS or SASL/SCRAM, Lambda automatically chooses that auth method. If you don't provide a secret, Lambda selects the strongest auth method that's active on your cluster. The following is the order of priority in which Lambda selects a broker, from strongest to weakest auth:
+ mTLS (secret provided for mTLS)
+ SASL/SCRAM (secret provided for SASL/SCRAM)
+ SASL IAM (no secret provided, and IAM auth active)
+ Unauthenticated TLS (no secret provided, and IAM auth not active)
+ Plaintext (no secret provided, and both IAM auth and unauthenticated TLS are not active)
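
The priority list above can be modeled as a small function. This is an illustrative sketch of the documented selection order (using the `Type` values from [SourceAccessConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_SourceAccessConfiguration.html)), not Lambda's actual implementation:

```python
def choose_auth(secret_type=None, iam_auth_active=False,
                unauthenticated_tls_active=False):
    """Pick the broker auth method, from strongest to weakest."""
    if secret_type == "CLIENT_CERTIFICATE_TLS_AUTH":
        return "mTLS"
    if secret_type == "SASL_SCRAM_512_AUTH":
        return "SASL/SCRAM"
    if iam_auth_active:
        return "SASL IAM"
    if unauthenticated_tls_active:
        return "Unauthenticated TLS"
    return "Plaintext"
```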

**Note**  
If Lambda can't connect to the most secure broker type, Lambda doesn't attempt to connect to a different (weaker) broker type. If you want Lambda to choose a weaker broker type, deactivate all stronger auth methods on your cluster.

# Creating a Lambda event source mapping for an Amazon MSK event source
Event source mapping

To create an event source mapping, you can use the Lambda console, the [Amazon Command Line Interface (CLI)](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html), or an [Amazon SDK](https://aws.amazon.com/getting-started/tools-sdks/).

**Note**  
When you create the event source mapping, Lambda creates a [hyperplane ENI](configuration-vpc.md#configuration-vpc-enis) in the private subnet that contains your MSK cluster, allowing Lambda to establish a secure connection. This hyperplane ENI uses the subnet and security group configuration of your MSK cluster, not that of your Lambda function.

The following console steps add an Amazon MSK cluster as a trigger for your Lambda function. Under the hood, this creates an event source mapping resource.

**To add an Amazon MSK trigger to your Lambda function (console)**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) of the Lambda console.

1. Choose the name of the Lambda function you want to add an Amazon MSK trigger to.

1. Under **Function overview**, choose **Add trigger**.

1. Under **Trigger configuration**, choose **MSK**.

1. To specify your Kafka cluster details, do the following:

   1. For **MSK cluster**, select your cluster.

   1. For **Topic name**, enter the name of the Kafka topic to consume messages from.

   1. For **Consumer group ID**, enter the ID of a Kafka consumer group to join, if applicable. For more information, see [Customizable consumer group ID in Lambda](kafka-consumer-group-id.md).

1. For **Cluster authentication**, make the necessary configurations. For more information about cluster authentication, see [Configuring Amazon MSK cluster authentication methods in Lambda](msk-cluster-auth.md).
   + Toggle on **Use authentication** if you want Lambda to perform authentication with your MSK cluster when establishing a connection. Authentication is recommended.
   + If you use authentication, for **Authentication method**, choose the authentication method to use.
   + If you use authentication, for **Secrets Manager key**, choose the Secrets Manager key that contains the authentication credentials needed to access your cluster.

1. Under **Event poller configuration**, make the necessary configurations.
   + Choose **Activate trigger** to enable the trigger immediately after creation.
   + Choose whether you want to **Configure provisioned mode** for your event source mapping. For more information, see [Apache Kafka event poller scaling modes in Lambda](kafka-scaling-modes.md).
     + If you configure provisioned mode, enter a value for **Minimum event pollers**, a value for **Maximum event pollers**, and an optional value for `PollerGroupName` to group multiple ESMs within the same event source VPC.
   + For **Starting position**, choose how you want Lambda to start reading from your stream. For more information, see [Apache Kafka polling and stream starting positions in Lambda](kafka-starting-positions.md).

1. Under **Batching**, make the necessary configurations. For more information about batching, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

   1. For **Batch size**, enter the maximum number of messages to receive in a single batch.

   1. For **Batch window**, enter the maximum number of seconds that Lambda spends gathering records before invoking the function.

1. Under **Filtering**, make the necessary configurations. For more information about filtering, see [Filtering events from Amazon MSK and self-managed Apache Kafka event sources](kafka-filtering.md).
   + For **Filter criteria**, add filter criteria definitions to determine whether or not to process an event.

1. Under **Failure handling**, make the necessary configurations. For more information about failure handling, see [Capturing discarded batches for Amazon MSK and self-managed Apache Kafka event sources](kafka-on-failure.md).
   + For **On-failure destination**, specify the ARN of your on-failure destination.

1. For **Tags**, enter the tags to associate with this event source mapping.

1. To create the trigger, choose **Add**.

You can also create the event source mapping using the Amazon CLI with the [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) command. The following example creates an event source mapping to map the Lambda function `my-kafka-function` to the `AWSKafkaTopic` topic, starting from the `LATEST` message. This command also uses the [SourceAccessConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_SourceAccessConfiguration.html) object to instruct Lambda to use [SASL/SCRAM](msk-cluster-auth.md#msk-sasl-scram) authentication when connecting to the cluster.

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "SASL_SCRAM_512_AUTH","URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-secret"}]'
```

If the cluster uses [mTLS authentication](msk-cluster-auth.md#msk-mtls), include a [SourceAccessConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_SourceAccessConfiguration.html) object that specifies `CLIENT_CERTIFICATE_TLS_AUTH` and a Secrets Manager key ARN. This is shown in the following command:

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "CLIENT_CERTIFICATE_TLS_AUTH","URI": "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-secret"}]'
```

When the cluster uses [IAM authentication](msk-cluster-auth.md#msk-iam-auth), you don't need a [SourceAccessConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_SourceAccessConfiguration.html) object. This is shown in the following command:

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function
```

# Creating cross-account event source mappings in Lambda
Cross-account event source mappings

You can use [multi-VPC private connectivity](https://docs.amazonaws.cn/msk/latest/developerguide/aws-access-mult-vpc.html) to connect a Lambda function to a provisioned MSK cluster in a different Amazon Web Services account. Multi-VPC connectivity uses Amazon PrivateLink, which keeps all traffic within the Amazon network.

**Note**  
You can't create cross-account event source mappings for serverless MSK clusters.

To create a cross-account event source mapping, you must first [configure multi-VPC connectivity for the MSK cluster](https://docs.amazonaws.cn/msk/latest/developerguide/aws-access-mult-vpc.html#mvpc-cluster-owner-action-turn-on). When you create the event source mapping, use the managed VPC connection ARN instead of the cluster ARN, as shown in the following examples. The [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) operation also differs depending on which authentication type the MSK cluster uses.

**Example — Create cross-account event source mapping for cluster that uses IAM authentication**  
When the cluster uses [IAM role-based authentication](msk-cluster-auth.md#msk-iam-auth), you don't need a [SourceAccessConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_SourceAccessConfiguration.html) object. Example:  

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:vpc-connection/444455556666/my-cluster-name/51jn98b4-0a61-46cc-b0a6-61g9a3d797d5-7 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function
```

**Example — Create cross-account event source mapping for cluster that uses SASL/SCRAM authentication**  
If the cluster uses [SASL/SCRAM authentication](msk-cluster-auth.md#msk-sasl-scram), you must include a [SourceAccessConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_SourceAccessConfiguration.html) object that specifies `SASL_SCRAM_512_AUTH` and a Secrets Manager secret ARN.  
There are two ways to use secrets for cross-account Amazon MSK event source mappings with SASL/SCRAM authentication:  
+ Create a secret in the Lambda function account and sync it with the cluster secret. [Create a rotation](https://docs.amazonaws.cn/secretsmanager/latest/userguide/rotating-secrets.html) to keep the two secrets in sync. This option allows you to control the secret from the function account.
+ Use the secret that's associated with the MSK cluster. This secret must allow cross-account access to the Lambda function account. For more information, see [Permissions to Amazon Secrets Manager secrets for users in a different account](https://docs.amazonaws.cn/secretsmanager/latest/userguide/auth-and-access_examples_cross.html).

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:vpc-connection/444455556666/my-cluster-name/51jn98b4-0a61-46cc-b0a6-61g9a3d797d5-7 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "SASL_SCRAM_512_AUTH","URI": "arn:aws:secretsmanager:us-east-1:444455556666:secret:my-secret"}]'
```

**Example — Create cross-account event source mapping for cluster that uses mTLS authentication**  
If the cluster uses [mTLS authentication](msk-cluster-auth.md#msk-mtls), you must include a [SourceAccessConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_SourceAccessConfiguration.html) object that specifies `CLIENT_CERTIFICATE_TLS_AUTH` and a Secrets Manager secret ARN. The secret can be stored in the cluster account or the Lambda function account.  

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws:kafka:us-east-1:111122223333:vpc-connection/444455556666/my-cluster-name/51jn98b4-0a61-46cc-b0a6-61g9a3d797d5-7 \
  --topics AWSKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "CLIENT_CERTIFICATE_TLS_AUTH","URI": "arn:aws:secretsmanager:us-east-1:444455556666:secret:my-secret"}]'
```

# All Amazon MSK event source configuration parameters in Lambda
Configuration parameters

All Lambda event source types share the same [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) and [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) API operations. However, only some of the parameters apply to Amazon MSK, as shown in the following table.


| Parameter | Required | Default | Notes | 
| --- | --- | --- | --- | 
|  AmazonManagedKafkaEventSourceConfig  |  N  |  Contains the ConsumerGroupId field, which defaults to a unique value.  |  Can set only on Create  | 
|  BatchSize  |  N  |  100  |  Maximum: 10,000  | 
|  DestinationConfig  |  N  |  N/A  |  [Capturing discarded batches for Amazon MSK and self-managed Apache Kafka event sources](kafka-on-failure.md)  | 
|  Enabled  |  N  |  True  |    | 
|  BisectBatchOnFunctionError  |  N  |  False  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  FunctionResponseTypes  |  N  |  N/A  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  MaximumRecordAgeInSeconds  |  N  |  -1 (infinite)  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  MaximumRetryAttempts  |  N  |  -1 (infinite)  |  [Configuring error handling controls for Kafka event sources](kafka-retry-configurations.md)  | 
|  EventSourceArn  |  Y  | N/A |  Can set only on Create  | 
|  FilterCriteria  |  N  |  N/A  |  [Control which events Lambda sends to your function](invocation-eventfiltering.md)  | 
|  FunctionName  |  Y  |  N/A  |    | 
|  KMSKeyArn  |  N  |  N/A  |  [Encryption of filter criteria](invocation-eventfiltering.md#filter-criteria-encryption)  | 
|  MaximumBatchingWindowInSeconds  |  N  |  500 ms  |  [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching)  | 
|  ProvisionedPollersConfig  |  N  |  `MinimumPollers`: default value of 1 if not specified `MaximumPollers`: default value of 200 if not specified `PollerGroupName`: N/A  |  [Provisioned mode](kafka-scaling-modes.md#kafka-provisioned-mode)  | 
|  SourceAccessConfigurations  |  N  |  No credentials  |  SASL/SCRAM or CLIENT\_CERTIFICATE\_TLS\_AUTH (mutual TLS) authentication credentials for your event source  | 
|  StartingPosition  |  Y  | N/A |  AT\_TIMESTAMP, TRIM\_HORIZON, or LATEST Can set only on Create  | 
|  StartingPositionTimestamp  |  N  |  N/A  |  Required if StartingPosition is set to AT\_TIMESTAMP  | 
|  Tags  |  N  |  N/A  |  [Using tags on event source mappings](tags-esm.md)  | 
|  Topics  |  Y  | N/A |  Kafka topic name Can set only on Create  | 

**Note**  
When you specify a `PollerGroupName`, multiple ESMs within the same Amazon VPC can share Event Poller Unit (EPU) capacity. You can use this option to optimize Provisioned mode costs for your ESMs. Requirements for ESM grouping:  
+ ESMs must be within the same Amazon VPC
+ Maximum of 100 ESMs per poller group
+ Aggregate maximum pollers across all ESMs in a group cannot exceed 2,000
You can update the `PollerGroupName` to move an ESM to a different group, or remove an ESM from a group by setting `PollerGroupName` to an empty string ("").
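
The grouping requirements above can be expressed as a pre-flight check. In this sketch, each ESM is represented by an illustrative (VPC ID, maximum pollers) pair; the function and representation are hypothetical, not part of any Lambda API:

```python
def validate_poller_group(esms):
    """Check documented constraints for ESMs sharing a PollerGroupName.

    `esms` is a list of (vpc_id, maximum_pollers) tuples, one per ESM.
    """
    if len(esms) > 100:
        raise ValueError("a poller group supports at most 100 ESMs")
    if len({vpc for vpc, _ in esms}) > 1:
        raise ValueError("all ESMs in a group must share one Amazon VPC")
    if sum(max_pollers for _, max_pollers in esms) > 2000:
        raise ValueError("aggregate maximum pollers cannot exceed 2,000")
```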

# Tutorial: Using an Amazon MSK event source mapping to invoke a Lambda function
Tutorial

In this tutorial, you will perform the following:
+ Create a Lambda function in the same Amazon account as an existing Amazon MSK cluster.
+ Configure networking and authentication for Lambda to communicate with Amazon MSK.
+ Set up a Lambda Amazon MSK event source mapping, which runs your Lambda function when events show up in the topic.

After you finish these steps, whenever events are sent to Amazon MSK, your Lambda function processes them automatically with your own custom code.

 **What can you do with this feature?** 

**Example solution: Use an MSK event source mapping to deliver live scores to your customers.**

Consider the following scenario: Your company hosts a web application where your customers can view information about live events, such as sports games. Information updates from the game are provided to your team through a Kafka topic on Amazon MSK. You want to design a solution that consumes updates from the MSK topic to provide an updated view of the live event to customers inside an application you develop. You have decided on the following design approach: Your client applications will communicate with a serverless backend hosted in Amazon. Clients will connect over WebSocket sessions using the Amazon API Gateway WebSocket API.

In this solution, you need a component that reads MSK events, performs some custom logic to prepare those events for the application layer, and then forwards that information to the API Gateway API. You can implement this component with Amazon Lambda by providing your custom logic in a Lambda function, then invoking it with an Amazon MSK event source mapping.

For more information about implementing solutions using the Amazon API Gateway WebSocket API, see [WebSocket API tutorials](https://docs.amazonaws.cn/apigateway/latest/developerguide/websocket-api-chat-app.html) in the API Gateway documentation.

## Prerequisites


An Amazon account with the following preconfigured resources:

**To fulfill these prerequisites, we recommend following [Getting started using Amazon MSK](https://docs.amazonaws.cn/msk/latest/developerguide/getting-started.html) in the Amazon MSK documentation.**
+ An Amazon MSK cluster. See [Create an Amazon MSK cluster](https://docs.amazonaws.cn/msk/latest/developerguide/create-cluster.html) in *Getting started using Amazon MSK*.
+ The following configuration:
  + Ensure **IAM role-based authentication** is **Enabled** in your cluster security settings. This improves your security by limiting your Lambda function to access only the Amazon MSK resources it needs. This is enabled by default on new Amazon MSK clusters.
  + Ensure **Public access** is off in your cluster networking settings. Restricting your Amazon MSK cluster's access to the internet improves your security by limiting how many intermediaries handle your data. Public access is off by default on new Amazon MSK clusters.
+ A Kafka topic in your Amazon MSK cluster to use for this solution. See [Create a topic](https://docs.amazonaws.cn/msk/latest/developerguide/create-topic.html) in *Getting started using Amazon MSK*.
+ A Kafka admin host set up to retrieve information from your Kafka cluster and send Kafka events to your topic for testing, such as an Amazon EC2 instance with the Kafka admin CLI and Amazon MSK IAM library installed. See [Create a client machine](https://docs.amazonaws.cn/msk/latest/developerguide/create-client-machine.html) in *Getting started using Amazon MSK*.

Once you have set up these resources, gather the following information from your Amazon account to confirm that you are ready to continue.
+ The name of your Amazon MSK cluster. You can find this information in the Amazon MSK console.
+ The cluster UUID, part of the ARN for your Amazon MSK cluster, which you can find in the Amazon MSK console. Follow the procedures in [Listing clusters](https://docs.amazonaws.cn/msk/latest/developerguide/msk-list-clusters.html) in the Amazon MSK documentation to find this information.
+ The security groups associated with your Amazon MSK cluster. You can find this information in the Amazon MSK console. In the following steps, refer to these as your *clusterSecurityGroups*.
+ The ID of the Amazon VPC containing your Amazon MSK cluster. You can find this information by identifying the subnets associated with your Amazon MSK cluster in the Amazon MSK console, then identifying the Amazon VPC associated with those subnets in the Amazon VPC console.
+ The name of the Kafka topic used in your solution. You can find this information by calling your Amazon MSK cluster with the Kafka `topics` CLI from your Kafka admin host. For more information about the topics CLI, see [Adding and removing topics](https://kafka.apache.org/documentation/#basic_ops_add_topic) in the Kafka documentation.
+ The name of a consumer group for your Kafka topic, suitable for use by your Lambda function. Lambda can create this group automatically, so you don't need to create it with the Kafka CLI. If you do need to manage your consumer groups, see [Managing Consumer Groups](https://kafka.apache.org/documentation/#basic_ops_consumer_group) in the Kafka documentation to learn more about the `consumer-groups` CLI.
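The cluster name and UUID you gather here are both embedded in the cluster ARN. As an illustrative sketch (not a tutorial step, and using a made-up ARN), you can split them apart like this:

```python
def parse_msk_cluster_arn(arn: str) -> tuple[str, str]:
    """Extract (clusterName, clusterUuid) from an Amazon MSK cluster ARN.

    MSK cluster ARNs have the form:
    arn:aws-cn:kafka:region:account-id:cluster/clusterName/cluster-uuid
    """
    resource = arn.split(":", 5)[5]  # "cluster/clusterName/cluster-uuid"
    _, cluster_name, cluster_uuid = resource.split("/")
    return cluster_name, cluster_uuid

# Example with a made-up ARN:
name, uuid = parse_msk_cluster_arn(
    "arn:aws-cn:kafka:cn-north-1:111122223333:cluster/demo-cluster-1/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-1"
)
```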

The following permissions in your Amazon account:
+ Permission to create and manage a Lambda function.
+ Permission to create IAM policies and associate them with your Lambda function.
+ Permission to create Amazon VPC endpoints and alter networking configuration in the Amazon VPC hosting your Amazon MSK cluster.

### Install the Amazon Command Line Interface


If you have not yet installed the Amazon Command Line Interface, follow the steps at [Installing or updating the latest version of the Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. On Linux and macOS, use your preferred shell and package manager.

**Note**  
On Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10).

## Configure network connectivity for Lambda to communicate with Amazon MSK


 Use Amazon PrivateLink to connect Lambda and Amazon MSK. You can do so by creating interface Amazon VPC endpoints in the Amazon VPC console. For more information about networking configuration, see [Configuring your Amazon MSK cluster and Amazon VPC network for Lambda](with-msk-cluster-network.md). 

When an Amazon MSK event source mapping runs on behalf of a Lambda function, it assumes the Lambda function's execution role. This IAM role authorizes the mapping to access resources secured by IAM, such as your Amazon MSK cluster. Although the components share an execution role, the Amazon MSK mapping and your Lambda function have separate connectivity requirements for their respective tasks, as shown in the following diagram.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/msk_tut_network.png)


Your event source mapping uses your Amazon MSK cluster security group. In this networking step, create Amazon VPC endpoints from your Amazon MSK cluster VPC to connect the event source mapping to the Lambda and STS services. Secure these endpoints to accept traffic from your Amazon MSK cluster security group. Then, adjust the Amazon MSK cluster security groups to allow the event source mapping to communicate with the Amazon MSK cluster.

 You can configure the following steps using the Amazon Web Services Management Console.

**To configure interface Amazon VPC endpoints to connect Lambda and Amazon MSK**

1. Create a security group for your interface Amazon VPC endpoints, *endpointSecurityGroup*, that allows inbound TCP traffic on port 443 from *clusterSecurityGroups*. Follow the procedure in [Create a security group](https://docs.amazonaws.cn//AWSEC2/latest/UserGuide/working-with-security-groups.html#creating-security-group) in the Amazon EC2 documentation to create a security group. Then, follow the procedure in [Add rules to a security group](https://docs.amazonaws.cn//AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule) in the Amazon EC2 documentation to add the appropriate rules.

   **Create a security group with the following information:**

   When adding your inbound rules, create a rule for each security group in *clusterSecurityGroups*. For each rule:
   + For **Type**, select **HTTPS**.
   + For **Source**, select one of *clusterSecurityGroups*.

1.  Create an endpoint connecting the Lambda service to the Amazon VPC containing your Amazon MSK cluster. Follow the procedure in [Create an interface endpoint](https://docs.amazonaws.cn//vpc/latest/privatelink/create-interface-endpoint.html).

   **Create an interface endpoint with the following information:**
   + For **Service name**, select `com.amazonaws.regionName.lambda`, where *regionName* is the Region that hosts your Lambda function.
   + For **VPC**, select the Amazon VPC containing your Amazon MSK cluster.
   + For **Security groups**, select *endpointSecurityGroup*, which you created earlier.
   + For **Subnets**, select the subnets that host your Amazon MSK cluster.
   + For **Policy**, provide the following policy document, which secures the endpoint for use by the Lambda service principal for the `lambda:InvokeFunction` action.

     ```
     {
         "Statement": [
             {
                 "Action": "lambda:InvokeFunction",
                 "Effect": "Allow",
                 "Principal": {
                     "Service": [
                         "lambda.amazonaws.com"
                     ]
                 },
                 "Resource": "*"
             }
         ]
     }
     ```
   + Ensure **Enable DNS name** remains set.

1.  Create an endpoint connecting the Amazon STS service to the Amazon VPC containing your Amazon MSK cluster. Follow the procedure in [Create an interface endpoint](https://docs.amazonaws.cn//vpc/latest/privatelink/create-interface-endpoint.html).

   **Create an interface endpoint with the following information:**
   + For **Service name**, select Amazon STS.
   + For **VPC**, select the Amazon VPC containing your Amazon MSK cluster.
   + For **Security groups**, select *endpointSecurityGroup*.
   + For **Subnets**, select the subnets that host your Amazon MSK cluster.
   + For **Policy**, provide the following policy document, which secures the endpoint for use by the Lambda service principal for the `sts:AssumeRole` action.

     ```
     {
         "Statement": [
             {
                 "Action": "sts:AssumeRole",
                 "Effect": "Allow",
                 "Principal": {
                     "Service": [
                         "lambda.amazonaws.com"
                     ]
                 },
                 "Resource": "*"
             }
         ]
     }
     ```
   + Ensure **Enable DNS name** remains set.

1. For each security group associated with your Amazon MSK cluster (that is, each group in *clusterSecurityGroups*), allow the following:
   + All inbound and outbound TCP traffic on port 9098 to and from every group in *clusterSecurityGroups*, including the group itself.
   + All outbound TCP traffic on port 443.

   Some of this traffic is allowed by default security group rules, so if your cluster is attached to a single security group, and that group has default rules, additional rules are not necessary. To adjust security group rules, follow the procedures in [Add rules to a security group](https://docs.amazonaws.cn//AWSEC2/latest/UserGuide/working-with-security-groups.html#adding-security-group-rule) in the Amazon EC2 documentation.

   **Add rules to your security groups with the following information:**
   + For each inbound or outbound rule for port 9098, provide the following:
     + For **Type**, select **Custom TCP**.
     + For **Port range**, provide 9098.
     + For **Source** (inbound) or **Destination** (outbound), provide one of *clusterSecurityGroups*.
   + For each outbound rule for port 443, for **Type**, select **HTTPS**.

## Create an IAM role for Lambda to read from your Amazon MSK topic


Identify the permissions Lambda needs to read from your Amazon MSK topic, then define them in a policy. Create a role, *lambdaAuthRole*, that grants Lambda those permissions. Authorize actions on your Amazon MSK cluster using `kafka-cluster` IAM actions. Then, authorize Lambda to perform the Amazon MSK `kafka` and Amazon EC2 actions needed to discover and connect to your Amazon MSK cluster, as well as the CloudWatch actions Lambda needs to log its activity.

**To define the permissions Lambda needs to read from Amazon MSK**

1. Write an IAM policy document (a JSON document), *clusterAuthPolicy*, that allows Lambda to read from your Kafka topic in your Amazon MSK cluster using your Kafka consumer group. Lambda requires a Kafka consumer group to be set when reading.

   Alter the following template to align with your prerequisites:

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "kafka-cluster:Connect",
                   "kafka-cluster:DescribeGroup",
                   "kafka-cluster:AlterGroup",
                   "kafka-cluster:DescribeTopic",
                   "kafka-cluster:ReadData",
                   "kafka-cluster:DescribeClusterDynamicConfiguration"
               ],
               "Resource": [
                   "arn:aws-cn:kafka:us-east-1:111122223333:cluster/mskClusterName/cluster-uuid",
                   "arn:aws-cn:kafka:us-east-1:111122223333:topic/mskClusterName/cluster-uuid/mskTopicName",
                   "arn:aws-cn:kafka:us-east-1:111122223333:group/mskClusterName/cluster-uuid/mskGroupName"
               ]
           }
       ]
   }
   ```

------

   For more information, consult [Configuring Lambda permissions for Amazon MSK event source mappings](with-msk-permissions.md). When writing your policy:
   + Replace *us-east-1* and *111122223333* with the Amazon Web Services Region and Amazon Web Services account of your Amazon MSK cluster.
   + For *mskClusterName*, provide the name of your Amazon MSK cluster.
   + For *cluster-uuid*, provide the UUID in the ARN for your Amazon MSK cluster.
   + For *mskTopicName*, provide the name of your Kafka topic.
   + For *mskGroupName*, provide the name of your Kafka consumer group.

1. Identify the Amazon MSK, Amazon EC2, and CloudWatch permissions required for Lambda to discover and connect to your Amazon MSK cluster, and to log those events.

   The `AWSLambdaMSKExecutionRole` managed policy permissively defines the required permissions. Use it in the following steps.

   In a production environment, assess `AWSLambdaMSKExecutionRole` to restrict your execution role policy based on the principle of least privilege, then write a policy for your role that replaces this managed policy.

For details about the IAM policy language, see the [IAM documentation](https://docs.amazonaws.cn//iam/).
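The placeholder substitution described above follows a fixed ARN pattern. As a sketch with hypothetical values (the Region, account, cluster name, UUID, topic, and consumer group below are all examples), the three `Resource` ARNs can be generated like this:

```python
import json

def cluster_auth_resources(region, account, cluster, uuid, topic, group):
    """Build the Resource ARNs for the kafka-cluster policy statement."""
    base = f"arn:aws-cn:kafka:{region}:{account}"
    return [
        f"{base}:cluster/{cluster}/{uuid}",          # the cluster itself
        f"{base}:topic/{cluster}/{uuid}/{topic}",    # the Kafka topic
        f"{base}:group/{cluster}/{uuid}/{group}",    # the consumer group
    ]

resources = cluster_auth_resources(
    "cn-north-1", "111122223333", "demo-cluster-1",
    "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-1", "myTopic", "myConsumerGroup",
)
print(json.dumps(resources, indent=4))
```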

Now that you have written your policy document, create an IAM policy so you can attach it to your role. You can do this using the console with the following procedure.

**To create an IAM policy from your policy document**

1. Sign in to the Amazon Web Services Management Console and open the IAM console at [https://console.amazonaws.cn/iam/](https://console.amazonaws.cn/iam/).

1. In the navigation pane on the left, choose **Policies**. 

1. Choose **Create policy**.

1. In the **Policy editor** section, choose the **JSON** option.

1. Paste *clusterAuthPolicy*.

1. When you are finished adding permissions to the policy, choose **Next**.

1. On the **Review and create** page, enter a **Policy name** and, optionally, a **Description** for the policy that you are creating. Review **Permissions defined in this policy** to see the permissions that are granted by your policy.

1. Choose **Create policy** to save your new policy.

For more information, see [Creating IAM policies](https://docs.amazonaws.cn//IAM/latest/UserGuide/access_policies_create.html) in the IAM documentation.

Now that you have appropriate IAM policies, create a role and attach them to it. You can do this using the console with the following procedure.

**To create an execution role in the IAM console**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) in the IAM console.

1. Choose **Create role**.

1. Under **Trusted entity type**, choose **Amazon service**.

1. Under **Use case**, choose **Lambda**.

1. Choose **Next**.

1. Select the following policies:
   + *clusterAuthPolicy*
   + `AWSLambdaMSKExecutionRole`

1. Choose **Next**.

1. For **Role name**, enter *lambdaAuthRole* and then choose **Create role**.

For more information, see [Defining Lambda function permissions with an execution role](lambda-intro-execution-role.md).

## Create a Lambda function to read from your Amazon MSK topic


Create a Lambda function configured to use your IAM role. You can create your Lambda function using the console.

**To create a Lambda function using your auth configuration**

1.  Open the Lambda console and select **Create function** from the header. 

1. Select **Author from scratch**.

1. For **Function name**, provide an appropriate name of your choice.

1. For **Runtime**, choose the **Latest supported** version of `Node.js` to use the code provided in this tutorial.

1. Choose **Change default execution role**.

1. Select **Use an existing role**.

1. For **Existing role**, select *lambdaAuthRole*.

In a production environment, you usually need to add further policies to the execution role for your Lambda function to meaningfully process your Amazon MSK events. For more information on adding policies to your role, see [Add or remove identity permissions](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console) in the IAM documentation.

## Create an event source mapping to your Lambda function


Your Amazon MSK event source mapping provides the Lambda service the information it needs to invoke your function when matching Amazon MSK events occur. You can create an Amazon MSK mapping using the console: when you create a Lambda trigger, the event source mapping is set up automatically.

**To create a Lambda trigger (and event source mapping)**

1. Navigate to your Lambda function's overview page.

1. In the function overview section, choose **Add trigger** on the bottom left.

1. In the **Select a source** dropdown, select **Amazon MSK**.

1. Leave the **Authentication** setting unset.

1. For **MSK cluster**, select your cluster's name.

1. For **Batch size**, enter 1. A batch size of 1 makes the trigger easier to test, but is not an ideal value for production.

1. For **Topic name**, provide the name of your Kafka topic.

1. For **Consumer group ID**, provide the ID of your Kafka consumer group.

## Update your Lambda function to read your streaming data


Lambda provides information about Kafka events through the event method parameter. For an example structure of an Amazon MSK event, see [Example event](with-msk.md#msk-sample-event). After you understand how to interpret the Amazon MSK events that Lambda forwards, you can alter your Lambda function code to use the information they provide.

Provide the following code to your Lambda function to log the contents of an Amazon MSK event for testing purposes:

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using .NET.  

```
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.KafkaEvents;


// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace MSKLambda;

public class Function
{
    
    
    /// <param name="evnt">The Kafka event for the Lambda function handler to process.</param>
    /// <param name="context">The ILambdaContext that provides methods for logging and describing the Lambda environment.</param>
    /// <returns></returns>
    public void FunctionHandler(KafkaEvent evnt, ILambdaContext context)
    {

        foreach (var record in evnt.Records)
        {
            Console.WriteLine("Key:" + record.Key); 
            foreach (var eventRecord in record.Value)
            {
                var valueBytes = eventRecord.Value.ToArray();    
                var valueText = Encoding.UTF8.GetString(valueBytes);
                
                Console.WriteLine("Message:" + valueText);
            }
        }
    }
    

}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using Go.  

```
package main

import (
	"encoding/base64"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(event events.KafkaEvent) {
	for key, records := range event.Records {
		fmt.Println("Key:", key)

		for _, record := range records {
			fmt.Println("Record:", record)

			decodedValue, err := base64.StdEncoding.DecodeString(record.Value)
			if err != nil {
				fmt.Println("Error decoding value:", err)
				continue
			}
			message := string(decodedValue)
			fmt.Println("Message:", message)
		}
	}
}

func main() {
	lambda.Start(handler)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using Java.  

```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KafkaEvent;
import com.amazonaws.services.lambda.runtime.events.KafkaEvent.KafkaEventRecord;

import java.util.Base64;
import java.util.Map;

public class Example implements RequestHandler<KafkaEvent, Void> {

    @Override
    public Void handleRequest(KafkaEvent event, Context context) {
        for (Map.Entry<String, java.util.List<KafkaEventRecord>> entry : event.getRecords().entrySet()) {
            String key = entry.getKey();
            System.out.println("Key: " + key);

            for (KafkaEventRecord record : entry.getValue()) {
                System.out.println("Record: " + record);

                byte[] value = Base64.getDecoder().decode(record.getValue());
                String message = new String(value);
                System.out.println("Message: " + message);
            }
        }

        return null;
    }
}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using JavaScript.  

```
exports.handler = async (event) => {
    // Iterate through keys
    for (let key in event.records) {
      console.log('Key: ', key)
      // Iterate through records
      event.records[key].map((record) => {
        console.log('Record: ', record)
        // Decode base64
        const msg = Buffer.from(record.value, 'base64').toString()
        console.log('Message:', msg)
      }) 
    }
}
```
Consuming an Amazon MSK event with Lambda using TypeScript.  

```
import { MSKEvent, Context } from "aws-lambda";
import { Buffer } from "buffer";
import { Logger } from "@aws-lambda-powertools/logger";

const logger = new Logger({
  logLevel: "INFO",
  serviceName: "msk-handler-sample",
});

export const handler = async (
  event: MSKEvent,
  context: Context
): Promise<void> => {
  for (const [topic, topicRecords] of Object.entries(event.records)) {
    logger.info(`Processing key: ${topic}`);

    // Process each record in the partition
    for (const record of topicRecords) {
      try {
        // Decode the message value from base64
        const decodedMessage = Buffer.from(record.value, 'base64').toString();

        logger.info({
          message: decodedMessage
        });
      }
      catch (error) {
        logger.error('Error processing event', { error });
        throw error;
      }
    };
  }
}
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using PHP.  

```
<?php
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

// using bref/bref and bref/logger for simplicity

use Bref\Context\Context;
use Bref\Event\Kafka\KafkaEvent;
use Bref\Event\Handler as StdHandler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler implements StdHandler
{
    private StderrLogger $logger;
    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    /**
     * @throws JsonException
     * @throws \Bref\Event\InvalidLambdaEvent
     */
    public function handle(mixed $event, Context $context): void
    {
        $kafkaEvent = new KafkaEvent($event);
        $this->logger->info("Processing records");
        $records = $kafkaEvent->getRecords();

        foreach ($records as $record) {
            try {
                $key = $record->getKey();
                $this->logger->info("Key: $key");

                $values = $record->getValue();
                $this->logger->info(json_encode($values));

                foreach ($values as $value) {
                    $this->logger->info("Value: $value");
                }
                
            } catch (Exception $e) {
                $this->logger->error($e->getMessage());
            }
        }
        $totalRecords = count($records);
        $this->logger->info("Successfully processed $totalRecords records");
    }
}

$logger = new StderrLogger();
return new Handler($logger);
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using Python.  

```
import base64

def lambda_handler(event, context):
    # Iterate through keys
    for key in event['records']:
        print('Key:', key)
        # Iterate through records
        for record in event['records'][key]:
            print('Record:', record)
            # Decode base64
            msg = base64.b64decode(record['value']).decode('utf-8')
            print('Message:', msg)
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using Ruby.  

```
require 'base64'

def lambda_handler(event:, context:)
  # Iterate through keys
  event['records'].each do |key, records|
    puts "Key: #{key}"

    # Iterate through records
    records.each do |record|
      puts "Record: #{record}"

      # Decode base64
      msg = Base64.decode64(record['value'])
      puts "Message: #{msg}"
    end
  end
end
```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-msk-to-lambda) repository. 
Consuming an Amazon MSK event with Lambda using Rust.  

```
use aws_lambda_events::event::kafka::KafkaEvent;
use lambda_runtime::{run, service_fn, tracing, Error, LambdaEvent};
use base64::prelude::*;
use serde_json::{Value};
use tracing::{info};

/// Pre-Requisites:
/// 1. Install Cargo Lambda - see https://www.cargo-lambda.info/guide/getting-started.html
/// 2. Add packages tracing, tracing-subscriber, serde_json, base64
///
/// This is the main body for the function.
/// Write your code inside it.
/// There are some code example in the following URLs:
/// - https://github.com/awslabs/aws-lambda-rust-runtime/tree/main/examples
/// - https://github.com/aws-samples/serverless-rust-demo/

async fn function_handler(event: LambdaEvent<KafkaEvent>) -> Result<Value, Error> {

    let payload = event.payload.records;

    for (_name, records) in payload.iter() {

        for record in records {

         let record_text = record.value.as_ref().ok_or("Value is None")?;
         info!("Record: {}", &record_text);

         // perform Base64 decoding
         let record_bytes = BASE64_STANDARD.decode(record_text)?;
         let message = std::str::from_utf8(&record_bytes)?;
         
         info!("Message: {}", message);
        }

    }

    Ok(().into())
}

#[tokio::main]
async fn main() -> Result<(), Error> {

    // required to enable CloudWatch error logging by the runtime
    tracing::init_default_subscriber();
    info!("Setup CW subscriber!");

    run(service_fn(function_handler)).await
}
```

------

You can provide function code to your Lambda function using the console.

**To update function code using the console code editor**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console and select your function.

1. Select the **Code** tab.

1. In the **Code source** pane, select your source code file and edit it in the integrated code editor.

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

## Test your Lambda function to verify it is connected to your Amazon MSK topic


You can now verify whether your Lambda function is being invoked by the event source by inspecting CloudWatch logs.

**To verify whether your Lambda function is being invoked**

1. Use your Kafka admin host to generate Kafka events using the `kafka-console-producer` CLI. For more information, see [Write some events into the topic](https://kafka.apache.org/documentation/#quickstart_send) in the Kafka documentation. Send enough events to fill the batch size that you configured for your event source mapping in the previous step; otherwise, Lambda waits for more records before invoking your function.

1. If your function runs, Lambda writes log output to CloudWatch. In the console, navigate to your Lambda function's detail page.

1. Select the **Configuration** tab.

1. From the sidebar, select **Monitoring and operations tools**.

1. Identify the **CloudWatch log group** under **Logging configuration**. The log group should start with `/aws/lambda`. Choose the link to the log group.

1. In the CloudWatch console, inspect the **Log events** that Lambda has sent to the log stream. Check whether any log events contain the message from your Kafka event, as in the following image. If they do, you have successfully connected a Lambda function to Amazon MSK with a Lambda event source mapping.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/msk_tut_log.png)

# Using Lambda with self-managed Apache Kafka
Self-managed Apache Kafka

This topic describes how to use Lambda with a self-managed Kafka cluster. In Amazon terminology, a self-managed cluster is any Kafka cluster that is not hosted by Amazon. For example, you can host your Kafka cluster with a cloud provider such as [Confluent Cloud](https://www.confluent.io/confluent-cloud/) or [Redpanda](https://www.redpanda.com/).

This chapter explains how to use a self-managed Apache Kafka cluster as an event source for your Lambda function. The general process for integrating self-managed Apache Kafka with Lambda involves the following steps:

1. **[Cluster and network setup](with-kafka-cluster-network.md)** – First, set up your self-managed Apache Kafka cluster with the correct networking configuration to allow Lambda to access your cluster.

1. **[Event source mapping setup](with-kafka-configure.md)** – Then, create the [event source mapping](invocation-eventsourcemapping.md) resource that Lambda needs to securely connect your Apache Kafka cluster to your function.

1. **[Function and permissions setup](with-kafka-permissions.md)** – Finally, ensure that your function is correctly set up, and has the necessary permissions in its [execution role](lambda-intro-execution-role.md).

Apache Kafka as an event source operates similarly to using Amazon Simple Queue Service (Amazon SQS) or Amazon Kinesis. Lambda internally polls for new messages from the event source and then synchronously invokes the target Lambda function. Lambda reads the messages in batches and provides these to your function as an event payload. The maximum batch size is configurable (the default is 100 messages). For more information, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

To optimize the throughput of your self-managed Apache Kafka event source mapping, configure provisioned mode. In provisioned mode, you can define the minimum and maximum number of event pollers allocated to your event source mapping. This can improve the ability of your event source mapping to handle unexpected message spikes. For more information, see [provisioned mode](kafka-scaling-modes.md#kafka-provisioned-mode).

**Warning**  
Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see [How do I make my Lambda function idempotent](https://repost.aws/knowledge-center/lambda-function-idempotent) in the Amazon Knowledge Center.

For Kafka-based event sources, Lambda supports processing control parameters, such as batching windows and batch size. For more information, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

For an example of how to use self-managed Kafka as an event source, see [Using self-hosted Apache Kafka as an event source for Amazon Lambda](https://amazonaws-china.com/blogs/compute/using-self-hosted-apache-kafka-as-an-event-source-for-aws-lambda/) on the Amazon Compute Blog.

**Topics**
+ [Example event](#smaa-sample-event)
+ [Configuring your self-managed Apache Kafka cluster and network for Lambda](with-kafka-cluster-network.md)
+ [Configuring Lambda execution role permissions](with-kafka-permissions.md)
+ [Configuring self-managed Apache Kafka event sources for Lambda](with-kafka-configure.md)

## Example event


Lambda sends the batch of messages in the event parameter when it invokes your Lambda function. The event payload contains an array of messages. Each array item contains details of the Kafka topic and Kafka partition identifier, together with a timestamp and a base64-encoded message.

```
{
   "eventSource": "SelfManagedKafka",
   "bootstrapServers":"b-2.demo-cluster-1.a1bcde.c1.kafka.cn-north-1.amazonaws.com.cn:9092,b-1.demo-cluster-1.a1bcde.c1.kafka.cn-north-1.amazonaws.com.cn:9092",
   "records":{
      "mytopic-0":[
         {
            "topic":"mytopic",
            "partition":0,
            "offset":15,
            "timestamp":1545084650987,
            "timestampType":"CREATE_TIME",
            "key":"abcDEFghiJKLmnoPQRstuVWXyz1234==",
            "value":"SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
            "headers":[
               {
                  "headerKey":[
                     104,
                     101,
                     97,
                     100,
                     101,
                     114,
                     86,
                     97,
                     108,
                     117,
                     101
                  ]
               }
            ]
         }
      ]
   }
}
```
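Inside a function handler, these fields need decoding: the key and value are base64 strings, and header values are arrays of byte integers. A minimal Python sketch (the function name is illustrative) that flattens the payload above into decoded messages:

```python
import base64

def decode_kafka_event(event):
    """Flatten a self-managed Kafka event into decoded messages."""
    messages = []
    for topic_partition, records in event["records"].items():
        for record in records:
            messages.append({
                "topic": record["topic"],
                "partition": record["partition"],
                "offset": record["offset"],
                # key and value are base64-encoded; the key may be absent
                # and isn't necessarily valid UTF-8, so keep it as bytes
                "key": base64.b64decode(record["key"]) if record.get("key") else None,
                "value": base64.b64decode(record["value"]).decode("utf-8"),
                # header values arrive as lists of byte integers
                "headers": {
                    name: bytes(value_bytes).decode("utf-8")
                    for header in record.get("headers", [])
                    for name, value_bytes in header.items()
                },
            })
    return messages
```

Applied to the example event above, this yields one message with the value `Hello, this is a test.` and the header `headerKey` decoded to `headerValue`.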

# Configuring your self-managed Apache Kafka cluster and network for Lambda
Cluster and network setup

To connect your Lambda function to your self-managed Apache Kafka cluster, you need to correctly configure your cluster and the network it resides in. This page describes how to configure your cluster and network. If your cluster and network are already configured properly, see [Configuring self-managed Apache Kafka event sources for Lambda](with-kafka-configure.md) to configure the event source mapping.

**Topics**
+ [Self-managed Apache Kafka cluster setup](#kafka-cluster-setup)
+ [Configure network security](#services-kafka-vpc-config)

## Self-managed Apache Kafka cluster setup


You can host your self-managed Apache Kafka cluster with cloud providers such as [Confluent Cloud](https://www.confluent.io/confluent-cloud/) or [Redpanda](https://www.redpanda.com/), or run it on your own infrastructure. Ensure that your cluster is properly configured and accessible from the network where your Lambda event source mapping will connect.

## Configure network security


To give Lambda full access to self-managed Apache Kafka through your event source mapping, either your cluster must use a public endpoint (public IP address), or you must provide access to the Amazon VPC you created the cluster in.

When you use self-managed Apache Kafka with Lambda, create [Amazon PrivateLink VPC endpoints](https://docs.amazonaws.cn/vpc/latest/privatelink/create-interface-endpoint.html) that provide your function access to the resources in your Amazon VPC.

**Note**  
Amazon PrivateLink VPC endpoints are required for functions with event source mappings that use the default (on-demand) mode for event pollers. If your event source mapping uses [provisioned mode](invocation-eventsourcemapping.md#invocation-eventsourcemapping-provisioned-mode), you don't need to configure Amazon PrivateLink VPC endpoints.

Create an endpoint to provide access to the following resources:
+ Lambda — Create an endpoint for the Lambda service principal.
+ Amazon STS — Create an endpoint for Amazon STS so that a service principal can assume a role on your behalf.
+ Secrets Manager — If your cluster uses Secrets Manager to store credentials, create an endpoint for Secrets Manager.

Alternatively, configure a NAT gateway on each public subnet in the Amazon VPC. For more information, see [Enable internet access for VPC-connected Lambda functions](configuration-vpc-internet.md).

When you create an event source mapping for self-managed Apache Kafka, Lambda checks whether Elastic Network Interfaces (ENIs) are already present for the subnets and security groups configured for your Amazon VPC. If Lambda finds existing ENIs, it attempts to re-use them. Otherwise, Lambda creates new ENIs to connect to the event source and invoke your function.

**Note**  
Lambda functions always run inside VPCs owned by the Lambda service. Your function's VPC configuration does not affect the event source mapping. Only the networking configuration of the event source determines how Lambda connects to it.

Configure the security groups for the Amazon VPC containing your cluster. By default, self-managed Apache Kafka uses port `9092`.
+ Inbound rules – Allow all traffic on the default broker port for the security group associated with your event source. Alternatively, you can use a self-referencing security group rule to allow access from instances within the same security group.
+ Outbound rules – Allow all traffic on port `443` for external destinations if your function needs to communicate with Amazon services. Alternatively, you can use a self-referencing security group rule to limit access to the broker if you don't need to communicate with other Amazon services.
+ Amazon VPC endpoint inbound rules – If you are using an Amazon VPC endpoint, the security group associated with the endpoint must allow inbound traffic on port `443` from the cluster security group.

If your cluster uses authentication, you can also restrict the endpoint policy for the Secrets Manager endpoint. To call the Secrets Manager API, Lambda uses your function role, not the Lambda service principal.

**Example VPC endpoint policy — Secrets Manager endpoint**  

```
{
      "Statement": [
          {
              "Action": "secretsmanager:GetSecretValue",
              "Effect": "Allow",
              "Principal": {
                  "AWS": [
                      "arn:aws-cn:iam::123456789012:role/my-role"
                  ]
              },
              "Resource": "arn:aws-cn:secretsmanager:us-west-2:123456789012:secret:my-secret"
          }
      ]
  }
```

When you use Amazon VPC endpoints, Amazon routes your API calls to invoke your function using the endpoint's Elastic Network Interface (ENI). The Lambda service principal needs to call `lambda:InvokeFunction` on any roles and functions that use those ENIs.

By default, Amazon VPC endpoints have open IAM policies that allow broad access to resources. Best practice is to restrict these policies to only the actions needed through that endpoint. To ensure that your event source mapping can invoke your Lambda function, the VPC endpoint policy must allow the Lambda service principal to call `sts:AssumeRole` and `lambda:InvokeFunction`. Restricting your VPC endpoint policies to allow only API calls originating within your organization would prevent the event source mapping from functioning, so `"Resource": "*"` is required in these policies.

The following example VPC endpoint policies show how to grant the required access to the Lambda service principal for the Amazon STS and Lambda endpoints.

**Example VPC Endpoint policy — Amazon STS endpoint**  

```
{
      "Statement": [
          {
              "Action": "sts:AssumeRole",
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "lambda.amazonaws.com"
                  ]
              },
              "Resource": "*"
          }
      ]
    }
```

**Example VPC Endpoint policy — Lambda endpoint**  

```
{
      "Statement": [
          {
              "Action": "lambda:InvokeFunction",
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "lambda.amazonaws.com"
                  ]
              },
              "Resource": "*"
          }
      ]
  }
```

# Configuring Lambda execution role permissions
Configure permissions

In addition to [accessing your self-managed Kafka cluster](kafka-cluster-auth.md), your Lambda function needs permissions to perform various API actions. You add these permissions to the function's [execution role](lambda-intro-execution-role.md). If your users need access to any API actions, add the required permissions to the identity policy for the Amazon Identity and Access Management (IAM) user or role.

**Topics**
+ [Required Lambda function permissions](#smaa-api-actions-required)
+ [Optional Lambda function permissions](#smaa-api-actions-optional)
+ [Adding permissions to your execution role](#smaa-permissions-add-policy)
+ [Granting users access with an IAM policy](#smaa-permissions-add-users)

## Required Lambda function permissions


To create and store logs in a log group in Amazon CloudWatch Logs, your Lambda function must have the following permissions in its execution role:
+ [logs:CreateLogGroup](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html)
+ [logs:CreateLogStream](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogStream.html)
+ [logs:PutLogEvents](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html)

## Optional Lambda function permissions


Your Lambda function might also need permissions to:
+ Describe your Secrets Manager secret.
+ Access your Amazon Key Management Service (Amazon KMS) customer managed key.
+ Access your Amazon VPC.
+ Send records of failed invocations to a destination.

### Secrets Manager and Amazon KMS permissions


Depending on the type of access control that you're configuring for your Kafka brokers, your Lambda function might need permission to access your Secrets Manager secret or to decrypt your Amazon KMS customer managed key. To access these resources, your function's execution role must have the following permissions:
+ [secretsmanager:GetSecretValue](https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_GetSecretValue.html)
+ [kms:Decrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_Decrypt.html)

### VPC permissions


If only users within a VPC can access your self-managed Apache Kafka cluster, your Lambda function must have permission to access your Amazon VPC resources. These resources include your VPC, subnets, security groups, and network interfaces. To access these resources, your function's execution role must have the following permissions:
+ [ec2:CreateNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_CreateNetworkInterface.html)
+ [ec2:DescribeNetworkInterfaces](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeNetworkInterfaces.html)
+ [ec2:DescribeVpcs](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
+ [ec2:DeleteNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DeleteNetworkInterface.html)
+ [ec2:DescribeSubnets](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
+ [ec2:DescribeSecurityGroups](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)

## Adding permissions to your execution role
Adding permissions

To access other Amazon services that your self-managed Apache Kafka cluster uses, Lambda uses the permissions policies that you define in your Lambda function's [execution role](lambda-intro-execution-role.md).

By default, Lambda is not permitted to perform the required or optional actions for a self-managed Apache Kafka cluster. You must grant these actions in an IAM permissions policy attached to your execution role. This example shows how you might create a policy that allows Lambda to access your Amazon VPC resources.

------
#### [ JSON ]


```
{
        "Version":"2012-10-17",
        "Statement":[
           {
              "Effect":"Allow",
              "Action":[
                 "ec2:CreateNetworkInterface",
                 "ec2:DescribeNetworkInterfaces",
                 "ec2:DescribeVpcs",
                 "ec2:DeleteNetworkInterface",
                 "ec2:DescribeSubnets",
                 "ec2:DescribeSecurityGroups"
              ],
              "Resource":"*"
           }
        ]
     }
```

------

## Granting users access with an IAM policy
Adding users to an IAM policy

By default, users and roles don't have permission to perform [event source API operations](invocation-eventsourcemapping.md#event-source-mapping-api). To grant access to users in your organization or account, you create or update an identity-based policy. For more information, see [Controlling access to Amazon resources using policies](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_controlling.html) in the *IAM User Guide*.

For troubleshooting authentication and authorization errors, see [Troubleshooting Kafka event source mapping errors](with-kafka-troubleshoot.md).

# Configuring self-managed Apache Kafka event sources for Lambda
Configure event source

To use a self-managed Apache Kafka cluster as an event source for your Lambda function, you create an [event source mapping](invocation-eventsourcemapping.md) that connects the two resources. This page describes how to create an event source mapping for self-managed Apache Kafka.

This page assumes that you've already properly configured your Kafka cluster and the network it resides in. If you need to set up your cluster or network, see [Configuring your self-managed Apache Kafka cluster and network for Lambda](with-kafka-cluster-network.md).

**Topics**
+ [Using a self-managed Apache Kafka cluster as an event source](#kafka-esm-overview)
+ [Configuring cluster authentication methods in Lambda](kafka-cluster-auth.md)
+ [Creating a Lambda event source mapping for a self-managed Apache Kafka event source](kafka-esm-create.md)
+ [All self-managed Apache Kafka event source configuration parameters in Lambda](kafka-esm-parameters.md)

## Using a self-managed Apache Kafka cluster as an event source
Self-managed Apache Kafka cluster as an event source

When you add your Apache Kafka or Amazon MSK cluster as a trigger for your Lambda function, the cluster is used as an [event source](invocation-eventsourcemapping.md).

Lambda reads event data from the Kafka topics that you specify as `Topics` in a [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) request, based on the [starting position](kafka-starting-positions.md) that you specify. After successful processing, Lambda commits the offsets to your Kafka cluster.

Lambda reads messages sequentially for each Kafka topic partition. A single Lambda payload can contain messages from multiple partitions. When more records are available, Lambda continues processing records in batches, based on the `BatchSize` value that you specify in a [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) request, until your function catches up with the topic.

After Lambda processes each batch, it commits the offsets of the messages in that batch. If your function returns an error for any of the messages in a batch, Lambda retries the whole batch of messages until processing succeeds or the messages expire. You can send records that fail all retry attempts to an on-failure destination for later processing.

**Note**  
While Lambda functions typically have a maximum timeout limit of 15 minutes, event source mappings for Amazon MSK, self-managed Apache Kafka, Amazon DocumentDB, and Amazon MQ for ActiveMQ and RabbitMQ only support functions with maximum timeout limits of 14 minutes.

# Configuring cluster authentication methods in Lambda
Cluster authentication

Lambda supports several methods to authenticate with your self-managed Apache Kafka cluster. Make sure that you configure the Kafka cluster to use one of these supported authentication methods. For more information about Kafka security, see the [Security](http://kafka.apache.org/documentation.html#security) section of the Kafka documentation.

## SASL/SCRAM authentication


Lambda supports Simple Authentication and Security Layer/Salted Challenge Response Authentication Mechanism (SASL/SCRAM) authentication with Transport Layer Security (TLS) encryption (`SASL_SSL`). Lambda sends the encrypted credentials to authenticate with the cluster. Lambda doesn't support SASL/SCRAM with plaintext (`SASL_PLAINTEXT`). For more information about SASL/SCRAM authentication, see [RFC 5802](https://tools.ietf.org/html/rfc5802).

Lambda also supports SASL/PLAIN authentication. Because this mechanism uses clear text credentials, the connection to the server must use TLS encryption to ensure that the credentials are protected.

For SASL authentication, you store the sign-in credentials as a secret in Amazon Secrets Manager. For more information about using Secrets Manager, see [Create an Amazon Secrets Manager secret](https://docs.amazonaws.cn/secretsmanager/latest/userguide/create_secret.html) in the *Amazon Secrets Manager User Guide*.

**Important**  
To use Secrets Manager for authentication, secrets must be stored in the same Amazon region as your Lambda function.

## Mutual TLS authentication


Mutual TLS (mTLS) provides two-way authentication between the client and server. The client sends a certificate to the server for the server to verify the client, and the server sends a certificate to the client for the client to verify the server. 

In self-managed Apache Kafka, Lambda acts as the client. You configure a client certificate (as a secret in Secrets Manager) to authenticate Lambda with your Kafka brokers. The client certificate must be signed by a CA in the server's trust store.

The Kafka cluster sends a server certificate to Lambda to authenticate the Kafka brokers with Lambda. The server certificate can be a public CA certificate or a private CA/self-signed certificate. The public CA certificate must be signed by a certificate authority (CA) that's in the Lambda trust store. For a private CA/self-signed certificate, you configure the server root CA certificate (as a secret in Secrets Manager). Lambda uses the root certificate to verify the Kafka brokers.

For more information about mTLS, see [ Introducing mutual TLS authentication for Amazon MSK as an event source](https://amazonaws-china.com/blogs/compute/introducing-mutual-tls-authentication-for-amazon-msk-as-an-event-source).

## Configuring the client certificate secret


The `CLIENT_CERTIFICATE_TLS_AUTH` secret requires a certificate field and a private key field. For an encrypted private key, the secret requires a private key password. Both the certificate and private key must be in PEM format.

**Note**  
Lambda supports the [PBES1](https://datatracker.ietf.org/doc/html/rfc2898/#section-6.1) (but not PBES2) private key encryption algorithms.

The certificate field must contain a list of certificates, beginning with the client certificate, followed by any intermediate certificates, and ending with the root certificate. Each certificate must start on a new line with the following structure:

```
-----BEGIN CERTIFICATE-----  
            <certificate contents>
-----END CERTIFICATE-----
```

Secrets Manager supports secrets up to 65,536 bytes, which is enough space for long certificate chains.

The private key must be in [PKCS #8](https://datatracker.ietf.org/doc/html/rfc5208) format, with the following structure:

```
-----BEGIN PRIVATE KEY-----  
             <private key contents>
-----END PRIVATE KEY-----
```

For an encrypted private key, use the following structure:

```
-----BEGIN ENCRYPTED PRIVATE KEY-----  
              <private key contents>
-----END ENCRYPTED PRIVATE KEY-----
```

The following example shows the contents of a secret for mTLS authentication using an encrypted private key. For an encrypted private key, include the private key password in the secret.

```
{"privateKeyPassword":"testpassword",
"certificate":"-----BEGIN CERTIFICATE-----
MIIE5DCCAsygAwIBAgIRAPJdwaFaNRrytHBto0j5BA0wDQYJKoZIhvcNAQELBQAw
...
j0Lh4/+1HfgyE2KlmII36dg4IMzNjAFEBZiCRoPimO40s1cRqtFHXoal0QQbIlxk
cmUuiAii9R0=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFgjCCA2qgAwIBAgIQdjNZd6uFf9hbNC5RdfmHrzANBgkqhkiG9w0BAQsFADBb
...
rQoiowbbk5wXCheYSANQIfTZ6weQTgiCHCCbuuMKNVS95FkXm0vqVD/YpXKwA/no
c8PH3PSoAaRwMMgOSA2ALJvbRz8mpg==
-----END CERTIFICATE-----",
"privateKey":"-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFKzBVBgkqhkiG9w0BBQ0wSDAnBgkqhkiG9w0BBQwwGgQUiAFcK5hT/X7Kjmgp
...
QrSekqF+kWzmB6nAfSzgO9IaoAaytLvNgGTckWeUkWn/V0Ck+LdGUXzAC4RxZnoQ
zp2mwJn2NYB7AZ7+imp0azDZb+8YG2aUCiyqb6PnnA==
-----END ENCRYPTED PRIVATE KEY-----"
}
```
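Because the secret body is plain JSON, you can assemble and sanity-check it programmatically before storing it. A hedged sketch (`build_mtls_secret` is an illustrative helper, and the PEM markers are the only structure it checks):

```python
import json

def build_mtls_secret(certificate_chain, private_key, private_key_password=None):
    """Assemble the JSON body for a CLIENT_CERTIFICATE_TLS_AUTH secret."""
    if "-----BEGIN CERTIFICATE-----" not in certificate_chain:
        raise ValueError("certificate must be PEM-encoded")
    encrypted = "-----BEGIN ENCRYPTED PRIVATE KEY-----" in private_key
    if encrypted and private_key_password is None:
        raise ValueError("encrypted private keys require a password")
    secret = {"certificate": certificate_chain, "privateKey": private_key}
    if private_key_password is not None:
        secret["privateKeyPassword"] = private_key_password
    return json.dumps(secret)
```

The returned string could then be stored with `aws secretsmanager create-secret --secret-string file://secret.json` or the equivalent SDK call.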

## Configuring the server root CA certificate secret


You create this secret if your Kafka brokers use TLS encryption with certificates signed by a private CA. You can use TLS encryption for VPC, SASL/SCRAM, SASL/PLAIN, or mTLS authentication.

The server root CA certificate secret requires a field that contains the Kafka broker's root CA certificate in PEM format. The following example shows the structure of the secret.

```
{"certificate":"-----BEGIN CERTIFICATE-----
MIID7zCCAtegAwIBAgIBADANBgkqhkiG9w0BAQsFADCBmDELMAkGA1UEBhMCVVMx
EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxJTAjBgNVBAoT
HFN0YXJmaWVsZCBUZWNobm9sb2dpZXMsIEluYy4xOzA5BgNVBAMTMlN0YXJmaWVs
ZCBTZXJ2aWNlcyBSb290IENlcnRpZmljYXRlIEF1dG...
-----END CERTIFICATE-----"
}
```

# Creating a Lambda event source mapping for a self-managed Apache Kafka event source
Event source mapping

To create an event source mapping, you can use the Lambda console, the [Amazon Command Line Interface (CLI)](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html), or an [Amazon SDK](https://aws.amazon.com/getting-started/tools-sdks/).

The following console steps add a self-managed Apache Kafka cluster as a trigger for your Lambda function. Under the hood, this creates an event source mapping resource.

## Prerequisites

+ A self-managed Apache Kafka cluster. Lambda supports Apache Kafka version 0.10.1.0 and later.
+ An [execution role](lambda-intro-execution-role.md) with permission to access the Amazon resources that your self-managed Kafka cluster uses.

## Adding a self-managed Kafka cluster (console)
Using the Lambda console

Follow these steps to add your self-managed Apache Kafka cluster and a Kafka topic as a trigger for your Lambda function.

**To add an Apache Kafka trigger to your Lambda function (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of your Lambda function.

1. Under **Function overview**, choose **Add trigger**.

1. Under **Trigger configuration**, do the following:

   1. Choose the **Apache Kafka** trigger type.

   1. For **Bootstrap servers**, enter the host and port pair address of a Kafka broker in your cluster, and then choose **Add**. Repeat for each Kafka broker in the cluster.

   1. For **Topic name**, enter the name of the Kafka topic used to store records in the cluster.

   1. If you're configuring provisioned mode, enter a value for **Minimum event pollers**, a value for **Maximum event pollers**, and optionally a value for `PollerGroupName` to group multiple event source mappings (ESMs) within the same event source VPC.

   1. (Optional) For **Batch size**, enter the maximum number of records to receive in a single batch.

   1. For **Batch window**, enter the maximum number of seconds that Lambda spends gathering records before invoking the function.

   1. (Optional) For **Consumer group ID**, enter the ID of a Kafka consumer group to join.

   1. (Optional) For **Starting position**, choose **Latest** to start reading the stream from the latest record, **Trim horizon** to start at the earliest available record, or **At timestamp** to specify a timestamp to start reading from.

   1. (Optional) For **VPC**, choose the Amazon VPC for your Kafka cluster. Then, choose the **VPC subnets** and **VPC security groups**.

      This setting is required if only users within your VPC access your brokers.

      

   1. (Optional) For **Authentication**, choose **Add**, and then do the following:

      1. Choose the access or authentication protocol of the Kafka brokers in your cluster.
         + If your Kafka broker uses SASL/PLAIN authentication, choose **BASIC_AUTH**.
         + If your broker uses SASL/SCRAM authentication, choose one of the **SASL_SCRAM** protocols.
         + If you're configuring mTLS authentication, choose the **CLIENT_CERTIFICATE_TLS_AUTH** protocol.

      1. For SASL/SCRAM or mTLS authentication, choose the Secrets Manager secret key that contains the credentials for your Kafka cluster.

   1. (Optional) For **Encryption**, choose the Secrets Manager secret containing the root CA certificate that your Kafka brokers use for TLS encryption, if your Kafka brokers use certificates signed by a private CA.

      This setting applies to TLS encryption for SASL/SCRAM or SASL/PLAIN, and to mTLS authentication.

   1. To create the trigger in a disabled state for testing (recommended), clear **Enable trigger**. Or, to enable the trigger immediately, select **Enable trigger**.

1. To create the trigger, choose **Add**.

## Adding a self-managed Kafka cluster (Amazon CLI)
Using the Amazon CLI

Use the following example Amazon CLI commands to create and view a self-managed Apache Kafka trigger for your Lambda function.

### Using SASL/SCRAM


If Kafka users access your Kafka brokers over the internet, specify the Secrets Manager secret that you created for SASL/SCRAM authentication. The following example uses the [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) Amazon CLI command to map a Lambda function named `my-kafka-function` to a Kafka topic named `AWSKafkaTopic`.

```
aws lambda create-event-source-mapping \ 
  --topics AWSKafkaTopic \
  --source-access-configuration Type=SASL_SCRAM_512_AUTH,URI=arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:MyBrokerSecretName \
  --function-name arn:aws-cn:lambda:us-east-1:111122223333:function:my-kafka-function \
  --self-managed-event-source '{"Endpoints":{"KAFKA_BOOTSTRAP_SERVERS":["abc3.xyz.com:9092", "abc2.xyz.com:9092"]}}'
```
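The same mapping can also be created with an AWS SDK. A hedged boto3 sketch that builds the equivalent request (the ARNs, broker hosts, and secret name are the placeholders from the CLI example; `StartingPosition` is included because the parameter table marks it required, and the API call itself is left commented out so the request shape is visible):

```python
# Request parameters for lambda.create_event_source_mapping (boto3),
# mirroring the CLI example above. All ARNs, hosts, and the secret name
# are placeholders.
params = {
    "FunctionName": "arn:aws-cn:lambda:us-east-1:111122223333:function:my-kafka-function",
    "Topics": ["AWSKafkaTopic"],
    "StartingPosition": "LATEST",
    "SourceAccessConfigurations": [{
        "Type": "SASL_SCRAM_512_AUTH",
        "URI": "arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:MyBrokerSecretName",
    }],
    "SelfManagedEventSource": {
        "Endpoints": {
            "KAFKA_BOOTSTRAP_SERVERS": ["abc3.xyz.com:9092", "abc2.xyz.com:9092"],
        },
    },
}

# With credentials configured, the mapping would be created with:
# import boto3
# response = boto3.client("lambda").create_event_source_mapping(**params)
```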

### Using a VPC


If only Kafka users within your VPC access your Kafka brokers, you must specify your VPC, subnets, and VPC security group. The following example uses the [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) Amazon CLI command to map a Lambda function named `my-kafka-function` to a Kafka topic named `AWSKafkaTopic`.

```
aws lambda create-event-source-mapping \ 
  --topics AWSKafkaTopic \
  --source-access-configuration '[{"Type": "VPC_SUBNET", "URI": "subnet:subnet-0011001100"}, {"Type": "VPC_SUBNET", "URI": "subnet:subnet-0022002200"}, {"Type": "VPC_SECURITY_GROUP", "URI": "security_group:sg-0123456789"}]' \
  --function-name arn:aws-cn:lambda:us-east-1:111122223333:function:my-kafka-function \
  --self-managed-event-source '{"Endpoints":{"KAFKA_BOOTSTRAP_SERVERS":["abc3.xyz.com:9092", "abc2.xyz.com:9092"]}}'
```

### Viewing the status using the Amazon CLI


The following example uses the [get-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/get-event-source-mapping.html) Amazon CLI command to describe the status of the event source mapping that you created.

```
aws lambda get-event-source-mapping \
  --uuid dh38738e-992b-343a-1077-3478934hjkfd7
```

# All self-managed Apache Kafka event source configuration parameters in Lambda
Configuration parameters

All Lambda event source types share the same [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) and [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) API operations. However, only some of the parameters apply to self-managed Apache Kafka, as shown in the following table.


| Parameter | Required | Default | Notes | 
| --- | --- | --- | --- | 
|  BatchSize  |  N  |  100  |  Maximum: 10,000  | 
|  DestinationConfig  |  N  |  N/A  |  [Capturing discarded batches for Amazon MSK and self-managed Apache Kafka event sources](kafka-on-failure.md)  | 
|  Enabled  |  N  |  True  |  | 
|  FilterCriteria  |  N  |  N/A  |  [Control which events Lambda sends to your function](invocation-eventfiltering.md)  | 
|  FunctionName  |  Y  |  N/A  |    | 
|  KMSKeyArn  |  N  |  N/A  |  [Encryption of filter criteria](invocation-eventfiltering.md#filter-criteria-encryption)  | 
|  MaximumBatchingWindowInSeconds  |  N  |  500 ms  |  [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching)  | 
|  ProvisionedPollersConfig  |  N  |  `MinimumPollers`: default value of 1 if not specified `MaximumPollers`: default value of 200 if not specified `PollerGroupName`: N/A  |  [Provisioned mode](kafka-scaling-modes.md#kafka-provisioned-mode)  | 
|  SelfManagedEventSource  |  Y  | N/A |  List of Kafka Brokers. Can set only on Create  | 
|  SelfManagedKafkaEventSourceConfig  |  N  |  Contains the ConsumerGroupId field which defaults to a unique value.  |  Can set only on Create  | 
|  SourceAccessConfigurations  |  N  |  No credentials  |  VPC information or authentication credentials for the cluster. For SASL/PLAIN, set to BASIC_AUTH  | 
|  StartingPosition  |  Y  |  N/A  |  AT_TIMESTAMP, TRIM_HORIZON, or LATEST. Can set only on Create  | 
|  StartingPositionTimestamp  |  N  |  N/A  |  Required if StartingPosition is set to AT_TIMESTAMP  | 
|  Tags  |  N  |  N/A  |  [Using tags on event source mappings](tags-esm.md)  | 
|  Topics  |  Y  |  N/A  |  Topic name Can set only on Create  | 

**Note**  
When you specify a `PollerGroupName`, multiple ESMs within the same Amazon VPC can share Event Poller Unit (EPU) capacity. You can use this option to optimize provisioned mode costs for your ESMs. Requirements for ESM grouping:
+ ESMs must be within the same Amazon VPC
+ Maximum of 100 ESMs per poller group
+ Aggregate maximum pollers across all ESMs in a group cannot exceed 2,000

You can update the `PollerGroupName` to move an ESM to a different group, or remove an ESM from a group by setting `PollerGroupName` to an empty string ("").

# Apache Kafka event poller scaling modes in Lambda
Event poller scaling

You can choose between two modes of event poller scaling for Amazon MSK and self-managed Apache Kafka event source mappings:
+ [On-demand mode (default)](#kafka-default-mode)
+ [Provisioned mode](#kafka-provisioned-mode)

## On-demand mode (default)


When you initially create the Kafka event source, Lambda allocates a default number of event pollers to process all partitions in the Kafka topic. Lambda automatically scales up or down the number of [event pollers](invocation-eventsourcemapping.md#invocation-eventsourcemapping-provisioned-mode) based on message load.

In one-minute intervals, Lambda evaluates the offset lag of all the partitions in the topic. If the offset lag is too high, the partition is receiving messages faster than Lambda can process them. If necessary, Lambda adds or removes event pollers from the topic. This autoscaling process of adding or removing event pollers occurs within three minutes of evaluation.

If your target Lambda function is throttled, Lambda reduces the number of event pollers. This action reduces the workload on the function by reducing the number of messages that event pollers can retrieve and send to the function.

## Provisioned mode


For workloads where you need to fine-tune the throughput of your event source mapping, you can use provisioned mode. In provisioned mode, you define minimum and maximum limits for the amount of provisioned event pollers. These provisioned event pollers are dedicated to your event source mapping, and can handle unexpected message spikes through responsive autoscaling. We recommend that you use provisioned mode for Kafka workloads that have strict performance requirements.

In Lambda, an event poller is a compute unit with throughput capabilities that vary by event source type. For Amazon MSK and self-managed Apache Kafka, each event poller can handle up to 5 MB/sec of throughput or up to 5 concurrent invocations. For example, if your event source produces an average payload of 1 MB and the average duration of your function is 1 second, a single Kafka event poller can support 5 MB/sec throughput and 5 concurrent Lambda invocations (assuming no payload transformation). For Amazon SQS, each event poller can handle up to 1 MB/sec of throughput or up to 10 concurrent invocations. Using provisioned mode incurs additional costs based on your event poller usage. For pricing details, see [Amazon Lambda pricing](https://aws.amazon.com/lambda/pricing/).
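The arithmetic in this example can be generalized into a rough sizing sketch. The per-poller limits below are the ones stated above for Kafka event sources; the helper is illustrative, not an official sizing formula:

```python
import math

# Per-poller limits for Amazon MSK and self-managed Apache Kafka, as
# stated above: up to 5 MB/sec of throughput or 5 concurrent invocations.
POLLER_MBPS = 5.0
POLLER_CONCURRENCY = 5

def estimate_min_pollers(throughput_mbps, avg_payload_mb, avg_duration_s):
    """Estimate the pollers needed to sustain a given ingest rate."""
    # Concurrency needed: invocations/sec (throughput / payload) x duration.
    invocations_per_s = throughput_mbps / avg_payload_mb
    concurrency = invocations_per_s * avg_duration_s
    # The binding constraint is whichever limit is hit first.
    return max(
        math.ceil(throughput_mbps / POLLER_MBPS),
        math.ceil(concurrency / POLLER_CONCURRENCY),
    )
```

For the example in the text (1 MB payloads, 1-second duration, 5 MB/sec), the estimate is a single poller; doubling the throughput to 10 MB/sec would call for two.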

**Note**  
When using provisioned mode, you don't need to create Amazon PrivateLink VPC endpoints or grant the associated permissions as part of your network configuration.

In provisioned mode, the range of accepted values for the minimum number of event pollers (`MinimumPollers`) is between 1 and 200, inclusive. The range of accepted values for the maximum number of event pollers (`MaximumPollers`) is between 1 and 2,000, inclusive. `MaximumPollers` must be greater than or equal to `MinimumPollers`. In addition, to maintain ordered processing within partitions, Lambda caps the `MaximumPollers` to the number of partitions in the topic.
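
These constraints can be summarized in a small validation sketch. The function is illustrative; Lambda enforces these checks server-side when you set `ProvisionedPollerConfig`.

```python
def validate_poller_config(minimum, maximum, partition_count):
    """Check the documented ProvisionedPollerConfig bounds, then apply the
    partition-count cap that preserves ordered processing."""
    if not 1 <= minimum <= 200:
        raise ValueError("MinimumPollers must be between 1 and 200")
    if not 1 <= maximum <= 2000:
        raise ValueError("MaximumPollers must be between 1 and 2,000")
    if maximum < minimum:
        raise ValueError("MaximumPollers must be >= MinimumPollers")
    return min(maximum, partition_count)  # effective maximum
```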

For more details about choosing appropriate values for minimum and maximum event pollers, see [Best practices](#kafka-provisioned-mode-bp).

You can configure provisioned mode for your Kafka event source mapping using the console or the Lambda API.

**To configure provisioned mode for an existing event source mapping (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the function with the event source mapping you want to configure provisioned mode for.

1. Choose **Configuration**, then choose **Triggers**.

1. Choose the event source mapping that you want to configure provisioned mode for, then choose **Edit**.

1. Under **Provisioned mode**, select **Configure**.
   + For **Minimum event pollers**, enter a value between 1 and 200. If you don't specify a value, Lambda chooses a default value of 1.
   + For **Maximum event pollers**, enter a value between 1 and 2,000. This value must be greater than or equal to your value for **Minimum event pollers**. If you don't specify a value, Lambda chooses a default value of 200.

1. Choose **Save**.

You can configure provisioned mode programmatically using the [ProvisionedPollerConfig](https://docs.amazonaws.cn/lambda/latest/api/API_ProvisionedPollerConfig.html) object in your [EventSourceMappingConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_EventSourceMappingConfiguration.html). For example, the following [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) CLI command configures a `MinimumPollers` value of 5 and a `MaximumPollers` value of 100.

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --provisioned-poller-config '{"MinimumPollers": 5, "MaximumPollers": 100}'
```

After configuring provisioned mode, you can observe the usage of event pollers for your workload by monitoring the `ProvisionedPollers` metric. For more information, see [Event source mapping metrics](monitoring-metrics-types.md#event-source-mapping-metrics).

To disable provisioned mode and return to default (on-demand) mode, you can use the following [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) CLI command:

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --provisioned-poller-config '{}'
```

## Advanced error handling and performance features


For Kafka event source mappings with provisioned mode enabled, you can configure additional features to improve error handling and performance:
+ [Retry configurations](kafka-retry-configurations.md) – Control how Lambda handles failed records with maximum retry attempts, record age limits, batch splitting, and partial batch responses.
+ [Kafka on-failure destinations](kafka-on-failure-destination.md) – Send failed records to a Kafka topic for later processing or analysis.

## Best practices and considerations when using provisioned mode
Best practices

The optimal configuration of minimum and maximum event pollers for your event source mapping depends on your application's performance requirements. We recommend that you start with the default minimum event pollers to baseline the performance profile. Adjust your configuration based on observed message processing patterns and your desired performance profile.

For workloads with spiky traffic and strict performance needs, increase the minimum event pollers to handle sudden surges in messages. To determine the minimum event pollers required, consider your workload's messages per second and average payload size, and use the throughput capacity of a single event poller (up to 5 MB/s) as a reference.

To maintain ordered processing within a partition, Lambda limits the maximum event pollers to the number of partitions in the topic. Additionally, the maximum event pollers your event source mapping can scale to depends on the function's concurrency settings.

When activating provisioned mode, you can remove Amazon PrivateLink VPC endpoints and the associated permissions from your network settings, because they're no longer required.

## Cost optimization for Provisioned mode


### Provisioned mode pricing


Provisioned mode is charged based on the provisioned minimum event pollers and the additional event pollers consumed during autoscaling. Charges are calculated using a billing unit called the Event Poller Unit (EPU). You pay for the number and duration of EPUs used, measured in Event-Poller-Unit-hours. You can use Provisioned mode with a single ESM for performance-sensitive applications, or group multiple ESMs within the same VPC to share EPU capacity and costs. The following sections describe two capabilities that help you optimize your Provisioned mode costs. For pricing details, see [Amazon Lambda pricing](https://aws.amazon.com/lambda/pricing/).

### Enhanced EPU Utilization


Each EPU supports up to 20 MB/s of event-polling throughput and hosts a default of 10 event pollers. When you enable Provisioned mode for a Kafka ESM by setting minimum and maximum pollers, Lambda uses the minimum poller value to provision EPUs, based on the default of 10 event pollers per EPU. However, each event poller can independently scale to support up to 5 MB/s of throughput, which may lower the density of event pollers that fit on a specific EPU and can trigger scaling of additional EPUs. The number of event pollers allocated on an EPU depends on the compute capacity that each event poller consumes. This enhanced EPU utilization allows event pollers with varying throughput requirements to use EPU capacity effectively, reducing costs for all ESMs.
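
As a rough sketch, assuming the figures above (20 MB/s and a default of 10 pollers per EPU), you can estimate how many EPUs a set of pollers might occupy. Actual placement depends on the compute each poller consumes, so treat this as an approximation.

```python
import math

EPU_MBPS = 20                 # throughput capacity per EPU (MB/s)
DEFAULT_POLLERS_PER_EPU = 10  # default event-poller density per EPU

def epus_for(pollers, avg_poller_mbps):
    """Rough estimate: EPUs needed by poller density and by total throughput."""
    by_density = math.ceil(pollers / DEFAULT_POLLERS_PER_EPU)
    by_throughput = math.ceil(pollers * avg_poller_mbps / EPU_MBPS)
    return max(by_density, by_throughput)

# Ten light pollers (0.5 MB/s each) pack onto one EPU; ten pollers running
# at the full 5 MB/s spread across more EPUs.
print(epus_for(10, 0.5), epus_for(10, 5))  # 1 3
```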

### ESM grouping


To optimize your Provisioned mode costs further, you can group multiple Kafka ESMs to share EPU capacity. With ESM grouping and enhanced EPU utilization, you can reduce your Provisioned mode costs by up to 90% for low-throughput workloads compared to running each ESM with dedicated EPU capacity. Any ESM that requires less than 1 EPU of capacity benefits from grouping; such ESMs typically need only a few minimum event pollers to support their throughput. This capability lets you adopt Provisioned mode for all your Kafka workloads and benefit from features like schema validation, filtering of Avro/Protobuf events, low-latency invocations, and enhanced error handling that are only available in Provisioned mode.

When you configure the `PollerGroupName` parameter with the same value for multiple ESMs within the same Amazon VPC, those ESMs share EPU resources instead of each requiring dedicated EPU capacity. You can group up to 100 ESMs per poller group, and the aggregate maximum pollers across all ESMs in a group cannot exceed 2,000.
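
A quick sketch of these group limits (illustrative; Lambda enforces them when you create or update the ESMs):

```python
def validate_poller_group(esm_max_pollers):
    """esm_max_pollers: MaximumPollers values of the ESMs sharing one
    PollerGroupName. Checks the documented group limits."""
    if len(esm_max_pollers) > 100:
        raise ValueError("A poller group supports at most 100 ESMs")
    if sum(esm_max_pollers) > 2000:
        raise ValueError("Aggregate MaximumPollers cannot exceed 2,000")
    return True
```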

#### To configure ESM grouping (console)


1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose your function.

1. Choose **Configuration**, and then choose **Triggers**.

1. When creating a new Kafka event source mapping, or editing an existing one, select **Configure** under **Provisioned mode**.

1. For **Minimum event pollers**, enter a value between 1 and 200.

1. For **Maximum event pollers**, enter a value between 1 and 2,000.

1. For **Poller group name**, enter an identifier for the group. Use the same name for other ESMs you want to group together.

1. Choose **Save**.

#### To configure ESM grouping (Amazon CLI)


The following example creates an ESM with a poller group named `production-app-group`:

```
aws lambda create-event-source-mapping \
  --function-name myFunction1 \
  --event-source-arn arn:aws-cn:kafka:us-east-1:123456789012:cluster/MyCluster/abcd1234 \
  --topics topic1 \
  --starting-position LATEST \
  --provisioned-poller-config '{
    "MinimumPollers": 1, 
    "MaximumPollers": 10, 
    "PollerGroupName": "production-app-group"
  }'
```

To add another ESM to the same group (sharing EPU capacity), use the same `PollerGroupName`:

```
aws lambda create-event-source-mapping \
  --function-name myFunction2 \
  --event-source-arn arn:aws-cn:kafka:us-east-1:123456789012:cluster/MyCluster/abcd1234 \
  --topics topic2 \
  --starting-position LATEST \
  --provisioned-poller-config '{
    "MinimumPollers": 1, 
    "MaximumPollers": 10, 
    "PollerGroupName": "production-app-group"
  }'
```

**Note**  
You can update the `PollerGroupName` to move an ESM to a different group, or remove an ESM from a group by passing an empty string ("") for `PollerGroupName`:

```
# Move ESM to a different group
aws lambda update-event-source-mapping \
  --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
  --provisioned-poller-config '{
    "MinimumPollers": 1, 
    "MaximumPollers": 10, 
    "PollerGroupName": "new-group-name"
  }'

# Remove ESM from group (use dedicated resources)
aws lambda update-event-source-mapping \
  --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
  --provisioned-poller-config '{
    "MinimumPollers": 1, 
    "MaximumPollers": 10, 
    "PollerGroupName": ""
  }'
```

#### Grouping strategy considerations

+ **Application boundary** – Group ESMs that belong to the same application or service for better cost allocation and management. Consider using naming conventions like `app-name-environment` (for example, `order-processor-prod`).
+ **Traffic pattern** – Avoid grouping ESMs with high throughput and spiky traffic patterns, as this may lead to resource contention.
+ **Blast radius** – Consider the impact if the shared infrastructure experiences issues. All ESMs in the same group are affected by shared resource limitations. For mission-critical workloads, you may want to use separate groups or dedicated ESMs.

#### Cost optimization example


Consider a scenario where you have 10 ESMs, each configured with 1 event poller and throughput under 2 MB/s:

**Without grouping:**
+ Each ESM requires its own EPU
+ Total EPUs needed: 10
+ Cost per EPU: \$10.185/hour in US East (N. Virginia)
+ Monthly EPU cost (720 hours): 10 × 720 × \$10.185 = \$73,332

**With grouping:**
+ All 10 ESMs share EPU capacity
+ 10 event pollers fit in 1 EPU (with new 10 poller per EPU support)
+ Total EPUs needed: 1
+ Monthly EPU cost (720 hours): 1 × 720 × \$10.185 = \$7,333.20
+ **Cost savings: 90%** (\$65,998.80 savings per month)
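
The savings ratio depends only on the EPU counts, not on the hourly rate, as a quick sketch shows (Python, using the example rate above):

```python
HOURS_PER_MONTH = 720  # hours in the example month

def monthly_cost(epus, rate_per_epu_hour):
    return epus * HOURS_PER_MONTH * rate_per_epu_hour

def savings_pct(epus_without, epus_with, rate):
    """Percentage saved by collapsing dedicated EPUs into shared ones."""
    before = monthly_cost(epus_without, rate)
    after = monthly_cost(epus_with, rate)
    return 100 * (before - after) / before

# Collapsing 10 dedicated EPUs into 1 shared EPU saves 90%, whatever the rate.
print(round(savings_pct(10, 1, 10.185)))  # 90
```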

# Apache Kafka polling and stream starting positions in Lambda
Polling and stream positions

The [StartingPosition parameter](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-StartingPosition) tells Lambda when to start reading messages from your Amazon MSK or self-managed Apache Kafka stream. There are three options to choose from:
+ **Latest** – Lambda starts reading just after the most recent record in the Kafka topic.
+ **Trim horizon** – Lambda starts reading from the last untrimmed record in the Kafka topic. This is also the oldest record in the topic.
+ **At timestamp** – Lambda starts reading from a position defined by a timestamp, in Unix time seconds. Use the [StartingPositionTimestamp parameter](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-StartingPositionTimestamp) to specify the timestamp.

Stream polling during an event source mapping create or update is eventually consistent:
+ During event source mapping creation, it may take several minutes to start polling events from the stream.
+ During event source mapping updates, it may take up to 90 seconds to stop and restart polling events from the stream.

This behavior means that if you specify `LATEST` as the starting position for the stream, the event source mapping could miss events during a create or update. To ensure that no events are missed, specify either `TRIM_HORIZON` or `AT_TIMESTAMP`.

# Customizable consumer group ID in Lambda
Consumer group ID

When setting up Amazon MSK or self-managed Apache Kafka as an event source, you can specify a [consumer group](https://developer.confluent.io/learn-more/kafka-on-the-go/consumer-groups/) ID. This consumer group ID is an existing identifier for the Kafka consumer group that you want your Lambda function to join. You can use this feature to seamlessly migrate any ongoing Kafka record processing setups from other consumers to Lambda.

Kafka distributes messages across all consumers in a consumer group. If you specify a consumer group ID that has other active consumers, Lambda receives only a portion of the messages from the Kafka topic. If you want Lambda to handle all messages in the topic, turn off any other consumers in that consumer group.

Additionally, if you specify a consumer group ID, and Kafka finds a valid existing consumer group with the same ID, Lambda ignores the [StartingPosition](kafka-starting-positions.md) for your event source mapping. Instead, Lambda begins processing records according to the committed offset of the consumer group. If you specify a consumer group ID, and Kafka cannot find an existing consumer group, then Lambda configures your event source with the specified `StartingPosition`.

The consumer group ID that you specify must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value.

# Filtering events from Amazon MSK and self-managed Apache Kafka event sources
Event filtering

You can use event filtering to control which records from a stream or queue Lambda sends to your function. For general information about how event filtering works, see [Control which events Lambda sends to your function](invocation-eventfiltering.md).

**Note**  
Amazon MSK and self-managed Apache Kafka event source mappings only support filtering on the `value` key.

**Topics**
+ [Kafka event filtering basics](#filtering-kafka)

## Kafka event filtering basics


Suppose a producer is writing messages to a topic in your Kafka cluster, either in valid JSON format or as plain strings. An example record would look like the following, with the message converted to a Base64 encoded string in the `value` field.

```
{
    "mytopic-0":[
        {
            "topic":"mytopic",
            "partition":0,
            "offset":15,
            "timestamp":1545084650987,
            "timestampType":"CREATE_TIME",
            "value":"SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
            "headers":[]
        }
    ]
}
```
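
The `value` field holds the producer's message as a Base64-encoded string. Decoding it, for example in Python, recovers the original text:

```python
import base64

record = {
    "topic": "mytopic",
    "partition": 0,
    "offset": 15,
    "value": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
}

# Base64-decode the message body, then interpret the bytes as UTF-8.
message = base64.b64decode(record["value"]).decode("utf-8")
print(message)  # Hello, this is a test.
```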

Suppose your Apache Kafka producer is writing messages to your topic in the following JSON format.

```
{
    "device_ID": "AB1234",
    "session":{
        "start_time": "yyyy-mm-ddThh:mm:ss",
        "duration": 162
    }
}
```

You can use the `value` key to filter records. Suppose you wanted to filter only those records where `device_ID` begins with the letters AB. The `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"value\" : { \"device_ID\" : [ { \"prefix\": \"AB\" } ] } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON.

```
{
    "value": {
        "device_ID": [ { "prefix": "AB" } ]
      }
}
```
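
To see how this pattern behaves, here is a minimal, illustrative matcher for the `prefix` operator. It mimics (but is not) Lambda's filtering engine: decode the Base64 `value`, parse it as JSON, and test the named field's prefix.

```python
import base64
import json

def matches_prefix_filter(value_b64, field, prefix):
    """Illustrative `prefix` check on the decoded JSON message body.
    Non-JSON payloads never match a JSON value pattern."""
    try:
        body = json.loads(base64.b64decode(value_b64))
    except (ValueError, UnicodeDecodeError):
        return False
    return str(body.get(field, "")).startswith(prefix)

payload = base64.b64encode(json.dumps({"device_ID": "AB1234"}).encode())
print(matches_prefix_filter(payload, "device_ID", "AB"))  # True
```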

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "value" : { "device_ID" : [ { "prefix":  "AB" } ] } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:kafka:us-east-2:123456789012:cluster/my-cluster/b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : { \"device_ID\" : [ { \"prefix\":  \"AB\" } ] } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : { \"device_ID\" : [ { \"prefix\":  \"AB\" } ] } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "value" : { "device_ID" : [ { "prefix":  "AB" } ] } }'
```

------

With Kafka, you can also filter records where the message is a plain string. Suppose you want to ignore those messages where the string is "error". The `FilterCriteria` object would look as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"value\" : [ { \"anything-but\": [ \"error\" ] } ] }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON.

```
{
    "value": [
        {
        "anything-but": [ "error" ]
        }
    ]
}
```

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "value" : [ { "anything-but": [ "error" ] } ] }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:kafka:us-east-2:123456789012:cluster/my-cluster/b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : [ { \"anything-but\": [ \"error\" ] } ] }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"value\" : [ { \"anything-but\": [ \"error\" ] } ] }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "value" : [ { "anything-but": [ "error" ] } ] }'
```

------

Kafka messages must be UTF-8 encoded strings, either plain strings or in JSON format. That's because Lambda decodes Kafka byte arrays into UTF-8 before applying filter criteria. If your messages use another encoding, such as UTF-16 or ASCII, or if the message format doesn't match the `FilterCriteria` format, Lambda processes metadata filters only. The following table summarizes the specific behavior:


| Incoming message format | Filter pattern format for message properties | Resulting action | 
| --- | --- | --- | 
|  Plain string  |  Plain string  |  Lambda filters based on your filter criteria.  | 
|  Plain string  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Plain string  |  Valid JSON  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  Plain string  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  Valid JSON  |  Lambda filters based on your filter criteria.  | 
|  Non-UTF-8 encoded string  |  JSON, plain string, or no pattern  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 

# Using schema registries with Kafka event sources in Lambda
Schema registries with event sources

 Schema registries help you define and manage data stream schemas. A schema defines the structure and format of a data record. In the context of Kafka event source mappings, you can configure a schema registry to validate the structure and format of Kafka messages against predefined schemas before they reach your Lambda function. This adds a layer of data governance to your application and allows you to efficiently manage data formats, ensure schema compliance, and optimize costs through event filtering. 

 This feature works with all programming languages, but consider these important points: 
+ Powertools for Lambda provides specific support for Java, Python, and TypeScript, maintaining consistency with existing Kafka development patterns and allowing direct access to business objects without custom deserialization code
+ This feature is only available for event source mappings using provisioned mode. Schema registry doesn't support event source mappings in on-demand mode. If you're using provisioned mode and you have a schema registry configured, you can't change to on-demand mode unless you remove your schema registry configuration first. For more information, see [Provisioned mode](invocation-eventsourcemapping.md#invocation-eventsourcemapping-provisioned-mode)
+ You can configure only one schema registry per event source mapping (ESM). Using a schema registry with your Kafka event source may increase your Lambda Event Poller Unit (EPU) usage, which is a pricing dimension for Provisioned mode. 

**Topics**
+ [Schema registry options](#services-consume-kafka-events-options)
+ [How Lambda performs schema validation for Kafka messages](#services-consume-kafka-events-how)
+ [Configuring a Kafka schema registry](#services-consume-kafka-events-config)
+ [Filtering for Avro and Protobuf](#services-consume-kafka-events-filtering)
+ [Payload formats and deserialization behavior](#services-consume-kafka-events-payload)
+ [Working with deserialized data in Lambda functions](#services-consume-kafka-events-payload-examples)
+ [Authentication methods for your schema registry](#services-consume-kafka-events-auth)
+ [Error handling and troubleshooting for schema registry issues](#services-consume-kafka-events-troubleshooting)

## Schema registry options


 Lambda supports the following schema registry options: 
+ [Amazon Glue Schema Registry](https://docs.aws.amazon.com/glue/latest/dg/schema-registry.html)
+ [Confluent Cloud Schema Registry](https://docs.confluent.io/platform/current/schema-registry/index.html)
+ [Self-managed Confluent Schema Registry](https://docs.confluent.io/platform/current/schema-registry/index.html)

 Your schema registry supports validating messages in the following data formats: 
+ Apache Avro
+ Protocol Buffers (Protobuf)
+ JSON Schema (JSON-SE)

 To use a schema registry, first ensure that your event source mapping is in provisioned mode. When you use a schema registry, Lambda adds metadata about the schema to the payload. For more information, see [Payload formats and deserialization behavior](#services-consume-kafka-events-payload). 

## How Lambda performs schema validation for Kafka messages


 When you configure a schema registry, Lambda performs the following steps for each Kafka message: 

1. Lambda polls the Kafka record from your cluster.

1. Lambda validates selected message attributes in the record against a specific schema in your schema registry.
   + If the schema associated with the message is not found in the registry, Lambda sends the message to a dead-letter queue (DLQ) with reason code `SCHEMA_NOT_FOUND`.

1. Lambda deserializes the message according to the schema registry configuration to validate the message. If event filtering is configured, Lambda then performs filtering based on the configured filter criteria.
   + If deserialization fails, Lambda sends the message to a DLQ with reason code `DESERIALIZATION_ERROR`. If no DLQ is configured, Lambda drops the message.

1. If the message is validated by the schema registry, and isn't filtered out by your filter criteria, Lambda invokes your function with the message.
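
The steps above amount to a short routing decision per record. In this sketch, the helper callbacks (`schema_exists`, `deserialize`, and so on) are hypothetical stand-ins for the registry client and Lambda internals:

```python
def route_record(record, schema_exists, deserialize, passes_filter,
                 invoke, send_to_dlq):
    """Route one Kafka record through the documented steps: schema lookup,
    deserialization, filtering, then invocation."""
    if not schema_exists(record):
        send_to_dlq(record, "SCHEMA_NOT_FOUND")
        return
    try:
        message = deserialize(record)
    except Exception:
        send_to_dlq(record, "DESERIALIZATION_ERROR")
        return
    if passes_filter(message):
        invoke(message)
```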

 This feature is intended to validate messages that are already produced using Kafka clients integrated with a schema registry. We recommend configuring your Kafka producers to work with your schema registry to create properly formatted messages. 

## Configuring a Kafka schema registry


 The following console steps add a Kafka schema registry configuration to your event source mapping. 

**To add a Kafka schema registry configuration to your event source mapping (console)**

1. Open the [Function page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose **Configuration**.

1. Choose **Triggers**.

1. Select the Kafka event source mapping that you want to configure a schema registry for, and choose **Edit**.

1. Under **Event poller configuration**, choose **Configure schema registry**. Your event source mapping must be in provisioned mode to see this option.

1. For **Schema registry URI**, enter the ARN of your Amazon Glue schema registry, or the HTTPS URL of your Confluent Cloud schema registry or Self-Managed Confluent Schema Registry.

1. The following configuration steps tell Lambda how to access your schema registry. For more information, see [Authentication methods for your schema registry](#services-consume-kafka-events-auth).
   + For **Access configuration type**, choose the type of authentication Lambda uses to access your schema registry.
   + For **Access configuration URI**, enter the ARN of the Secrets Manager secret to authenticate with your schema registry, if applicable. Ensure that your function's [execution role](with-msk-permissions.md) contains the correct permissions.

1. The **Encryption** field applies only if your schema registry uses a certificate signed by a private certificate authority (CA), or by a CA that's not in the Lambda trust store. If applicable, provide the secret containing the private CA certificate that your schema registry uses for TLS encryption.

1. For **Event record format**, choose how you want Lambda to deliver the records to your function after schema validation. For more information, see [Payload format examples](#services-consume-kafka-events-payload).
   + If you choose **JSON**, Lambda delivers the attributes that you select in the **Schema validation attribute** field below in standard JSON format. Lambda delivers the attributes that you don't select as-is.
   + If you choose **SOURCE**, Lambda delivers the attributes that you select in their original source format.

1. For **Schema validation attribute**, select the message attributes that you want Lambda to validate and deserialize using your schema registry. You must select at least one of either **KEY** or **VALUE**. If you chose JSON for event record format, Lambda also deserializes the selected message attributes before sending them to your function. For more information, see [Payload formats and deserialization behavior](#services-consume-kafka-events-payload).

1. Choose **Save**.

 You can also use the Lambda API to create or update your event source mapping with a schema registry configuration. The following examples demonstrate how to configure an Amazon Glue or Confluent schema registry using the Amazon CLI, which corresponds to the [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) and [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) API operations in the *Amazon Lambda API Reference*: 

**Important**  
If you are updating any schema registry configuration field using the Amazon CLI or the `update-event-source-mapping` API, you must update all the fields of schema registry configuration.

------
#### [ Create Event Source Mapping ]

```
aws lambda create-event-source-mapping \
  --function-name my-schema-validator-function \
  --event-source-arn arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE \
  --topics my-kafka-topic \
  --provisioned-poller-config MinimumPollers=1,MaximumPollers=1 \
  --amazon-managed-kafka-event-source-mapping '{
      "SchemaRegistryConfig" : {
          "SchemaRegistryURI": "https://abcd-ef123.us-west-2.aws.confluent.cloud",
          "AccessConfigs": [{
              "Type": "BASIC_AUTH", 
              "URI": "arn:aws:secretsmanager:us-east-1:123456789012:secret:secretName"
          }],
          "EventRecordFormat": "JSON",
          "SchemaValidationConfigs": [
          { 
              "Attribute": "KEY" 
          },
          { 
              "Attribute": "VALUE" 
          }]
      }
  }'
```

------
#### [ Update Amazon Glue Schema Registry ]

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --amazon-managed-kafka-event-source-mapping '{
        "SchemaRegistryConfig" : {
            "SchemaRegistryURI": "arn:aws:glue:us-east-1:123456789012:registry/registryName",
            "EventRecordFormat": "JSON",
            "SchemaValidationConfigs": [
            { 
                "Attribute": "KEY" 
            },
            { 
                "Attribute": "VALUE" 
            }]
        }
    }'
```

------
#### [ Update Confluent Schema Registry with Authentication ]

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --amazon-managed-kafka-event-source-mapping '{
        "SchemaRegistryConfig" : {
            "SchemaRegistryURI": "https://abcd-ef123.us-west-2.aws.confluent.cloud",
            "AccessConfigs": [{
                "Type": "BASIC_AUTH", 
                "URI": "arn:aws:secretsmanager:us-east-1:123456789012:secret:secretName"
            }],
            "EventRecordFormat": "JSON",
            "SchemaValidationConfigs": [
            { 
                "Attribute": "KEY" 
            },
            { 
                "Attribute": "VALUE" 
            }]
        }
    }'
```

------
#### [ Update Confluent Schema Registry without Authentication ]

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --amazon-managed-kafka-event-source-mapping '{
        "SchemaRegistryConfig" : {
            "SchemaRegistryURI": "https://abcd-ef123.us-west-2.aws.confluent.cloud",
            "EventRecordFormat": "JSON",
            "SchemaValidationConfigs": [
            { 
                "Attribute": "KEY" 
            },
            { 
                "Attribute": "VALUE" 
            }]
        }
    }'
```

------
#### [ Remove Schema Registry Configuration ]

To remove a schema registry configuration from your event source mapping, you can use the CLI command [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) in the *Amazon Lambda API Reference*.

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --amazon-managed-kafka-event-source-mapping '{
        "SchemaRegistryConfig" : {}
    }'
```

------

## Filtering for Avro and Protobuf


When using Avro or Protobuf formats with a schema registry, you can apply event filtering to your Lambda function. Lambda applies your filter patterns to the deserialized standard JSON representation of your data, after schema validation. For example, with an Avro schema that defines product details including a price field, you can filter messages based on the price value:

**Note**  
 When being deserialized, Avro is converted to standard JSON, which means it cannot be directly converted back to an Avro object. If you need to convert to an Avro object, use the SOURCE format instead.   
 For Protobuf deserialization, field names in the resulting JSON match those defined in your schema, rather than being converted to camel case as Protobuf typically does. Keep this in mind when creating filtering patterns. 

```
aws lambda create-event-source-mapping \
    --function-name myAvroFunction \
    --topics myAvroTopic \
    --starting-position TRIM_HORIZON \
    --kafka-bootstrap-servers '["broker1:9092", "broker2:9092"]' \
    --schema-registry-config '{
        "SchemaRegistryURI": "arn:aws:glue:us-east-1:123456789012:registry/myAvroRegistry",
        "EventRecordFormat": "JSON",
        "SchemaValidationConfigs": [
            { 
                "Attribute": "VALUE" 
            }
        ]
    }' \
    --filter-criteria '{
        "Filters": [
            {
                "Pattern": "{ \"value\" : { \"field_1\" : [\"value1\"], \"field_2\" : [\"value2\"] } }"
            }
        ]
    }'
```

 In this example, the filter pattern analyzes the `value` object, matching messages in `field_1` with `"value1"` and `field_2` with `"value2"`. The filter criteria are evaluated against the deserialized data, after Lambda converts the message from Avro format to JSON. 
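For intuition, the following Python sketch illustrates the basic equality-matching semantics of the filter pattern above. This is a simplified model, not Lambda's actual implementation; real filter patterns also support operators such as `exists`, `prefix`, and numeric ranges.

```python
def matches(pattern, payload):
    """Simplified filter check: each pattern leaf is a list of allowed values,
    and nested dicts descend into the corresponding payload fields."""
    for field, allowed in pattern.items():
        if field not in payload:
            return False
        if isinstance(allowed, dict):
            if not matches(allowed, payload[field]):
                return False
        elif payload[field] not in allowed:
            return False
    return True

# The filter criteria from the example command above
pattern = {"value": {"field_1": ["value1"], "field_2": ["value2"]}}
```

A record whose deserialized `value` contains `"field_1": "value1"` and `"field_2": "value2"` matches this pattern; any other combination is filtered out before your function is invoked.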

 For more detailed information on event filtering, see [Lambda event filtering](invocation-eventfiltering.md). 

## Payload formats and deserialization behavior


 When using a schema registry, Lambda delivers the final payload to your function in a format similar to the [regular event payload](with-msk.md#msk-sample-event), with some additional fields. The additional fields depend on the `SchemaValidationConfigs` parameter. For each attribute that you select for validation (key or value), Lambda adds corresponding schema metadata to the payload. 

**Note**  
You must update your [aws-lambda-java-events](https://github.com/aws/aws-lambda-java-libs/tree/main/aws-lambda-java-events) to version 3.16.0 or above to use schema metadata fields.

 For example, if you validate the `value` field, Lambda adds a field called `valueSchemaMetadata` to your payload. Similarly, for the `key` field, Lambda adds a field called `keySchemaMetadata`. This metadata contains information about the data format and the schema ID used for validation: 

```
"valueSchemaMetadata": {
    "dataFormat": "AVRO",
    "schemaId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
}
```

 The `EventRecordFormat` parameter can be set to either `JSON` or `SOURCE`, which determines how Lambda handles schema-validated data before delivering it to your function. Each option provides different processing capabilities: 
+ `JSON` - Lambda deserializes the validated attributes into standard JSON format, making the data ready for direct use in languages with native JSON support. This format is ideal when you don't need to preserve the original binary format or work with generated classes.
+ `SOURCE` - Lambda preserves the original binary format of the data as a Base64-encoded string, allowing direct conversion to Avro or Protobuf objects. This format is essential when working with strongly-typed languages or when you need to maintain the full capabilities of Avro or Protobuf schemas.

Based on these format characteristics and language-specific considerations, we recommend the following formats:


**Recommended formats based on programming language**  

| Language | Avro | Protobuf | JSON | 
| --- | --- | --- | --- | 
| Java | SOURCE | SOURCE | SOURCE | 
| Python | JSON | JSON | JSON | 
| NodeJS | JSON | JSON | JSON | 
| .NET | SOURCE | SOURCE | SOURCE | 
| Others | JSON | JSON | JSON | 

The following sections describe these formats in detail and provide example payloads for each format.

### JSON format


If you choose `JSON` as the `EventRecordFormat`, Lambda validates and deserializes the message attributes that you've selected in the `SchemaValidationConfigs` field (the `key` and/or `value` attributes). Lambda delivers these selected attributes to your function as base64-encoded strings of their standard JSON representation.

**Note**  
 When being deserialized, Avro is converted to standard JSON, which means it cannot be directly converted back to an Avro object. If you need to convert to an Avro object, use the SOURCE format instead.   
 For Protobuf deserialization, field names in the resulting JSON match those defined in your schema, rather than being converted to camel case as Protobuf typically does. Keep this in mind when creating filtering patterns. 

 The following shows an example payload, assuming you choose `JSON` as the `EventRecordFormat`, and both the `key` and `value` attributes as `SchemaValidationConfigs`: 

```
{
   "eventSource":"aws:kafka",
   "eventSourceArn":"arn:aws:kafka:us-east-1:123456789012:cluster/vpc-2priv-2pub/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111-1",
   "bootstrapServers":"b-2.demo-cluster-1.a1bcde.c1.kafka.us-east-1.amazonaws.com:9092,b-1.demo-cluster-1.a1bcde.c1.kafka.us-east-1.amazonaws.com:9092",
   "records":{
      "mytopic-0":[
         {
            "topic":"mytopic",
            "partition":0,
            "offset":15,
            "timestamp":1545084650987,
            "timestampType":"CREATE_TIME",
            "key":"abcDEFghiJKLmnoPQRstuVWXyz1234==", //Base64 encoded string of JSON
            "keySchemaMetadata": {
                "dataFormat": "AVRO",
                "schemaId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
            },
            "value":"abcDEFghiJKLmnoPQRstuVWXyz1234", //Base64 encoded string of JSON
            "valueSchemaMetadata": {
                "dataFormat": "AVRO",
                "schemaId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
            },
            "headers":[
               {
                  "headerKey":[
                     104,
                     101,
                     97,
                     100,
                     101,
                     114,
                     86,
                     97,
                     108,
                     117,
                     101
                  ]
               }
            ]
         }
      ]
   }
}
```

 In this example: 
+ Both `key` and `value` are base64-encoded strings of their JSON representation after deserialization.
+ Lambda includes schema metadata for both attributes in `keySchemaMetadata` and `valueSchemaMetadata`.
+ Your function can decode the `key` and `value` strings to access the deserialized JSON data.

 The JSON format is recommended for languages that aren't strongly typed, such as Python or Node.js. These languages have native support for converting JSON into objects. 
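Decoding these fields is straightforward in such languages. The following is a minimal Python handler sketch for the JSON-format payload shown above; the field names follow the sample event, and error handling is omitted:

```python
import base64
import json

def lambda_handler(event, context):
    """Decode base64-encoded JSON keys and values from a Kafka event payload."""
    decoded = []
    for partition_records in event["records"].values():
        for record in partition_records:
            # key and value are base64-encoded strings of their JSON representation
            key = json.loads(base64.b64decode(record["key"]))
            value = json.loads(base64.b64decode(record["value"]))
            decoded.append((key, value))
    return decoded
```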

### Source format


 If you choose `SOURCE` as the `EventRecordFormat`, Lambda still validates the record against the schema registry, but delivers the original binary data to your function without deserialization. This binary data is delivered as a Base64 encoded string of the original byte data, with producer-appended metadata removed. As a result, you can directly convert the raw binary data into Avro and Protobuf objects within your function code. We recommend using Powertools for Amazon Lambda, which will deserialize the raw binary data and give you Avro and Protobuf objects directly. 

 For example, if you configure Lambda to validate both the `key` and `value` attributes but use the `SOURCE` format, your function receives a payload like this: 

```
{
    "eventSource": "aws:kafka",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/vpc-2priv-2pub/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111-1",
    "bootstrapServers": "b-2.demo-cluster-1.a1bcde.c1.kafka.us-east-1.amazonaws.com:9092,b-1.demo-cluster-1.a1bcde.c1.kafka.us-east-1.amazonaws.com:9092",
    "records": {
        "mytopic-0": [
            {
                "topic": "mytopic",
                "partition": 0,
                "offset": 15,
                "timestamp": 1545084650987,
                "timestampType": "CREATE_TIME",
                "key": "abcDEFghiJKLmnoPQRstuVWXyz1234==", // Base64 encoded string of Original byte data, producer-appended metadata removed
                "keySchemaMetadata": {
                    "dataFormat": "AVRO",
                    "schemaId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
                },
                "value": "abcDEFghiJKLmnoPQRstuVWXyz1234==", // Base64 encoded string of Original byte data, producer-appended metadata removed
                "valueSchemaMetadata": {
                    "dataFormat": "AVRO",
                    "schemaId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
                },
                "headers": [
                    {
                        "headerKey": [
                            104,
                            101,
                            97,
                            100,
                            101,
                            114,
                            86,
                            97,
                            108,
                            117,
                            101
                        ]
                    }
                ]
            }
        ]
    }
}
```

 In this example: 
+ Both `key` and `value` contain the original binary data as Base64 encoded strings.
+ Your function needs to handle deserialization using the appropriate libraries.

 Choosing `SOURCE` for `EventRecordFormat` is recommended if you're using Avro-generated or Protobuf-generated objects, especially with Java functions. This is because Java is strongly typed, and requires specific deserializers for Avro and Protobuf formats. In your function code, you can use your preferred Avro or Protobuf library to deserialize the data. 
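If you handle deserialization yourself rather than using Powertools, the first step is always the same: Base64-decode the fields back to raw bytes before handing them to an Avro or Protobuf deserializer. A minimal Python sketch (the helper name is illustrative):

```python
import base64

def extract_raw_bytes(record):
    """Return (key_bytes, value_bytes) from a SOURCE-format record.

    The returned bytes can then be passed to an Avro reader or a
    Protobuf-generated class's parser in your function code.
    """
    key_b64 = record.get("key")
    value_b64 = record.get("value")
    key_bytes = base64.b64decode(key_b64) if key_b64 else None
    value_bytes = base64.b64decode(value_b64) if value_b64 else None
    return key_bytes, value_bytes
```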

## Working with deserialized data in Lambda functions


Powertools for Amazon Lambda helps you deserialize Kafka records in your function code based on the format you use. This utility simplifies working with Kafka records by handling data conversion and providing ready-to-use objects.

 To use Powertools for Amazon Lambda in your function, you need to add Powertools for Amazon Lambda either as a layer or include it as a dependency when building your Lambda function. For setup instructions and more information, see the Powertools for Amazon Lambda documentation for your preferred language: 
+ [Powertools for Amazon Lambda (Java)](https://docs.powertools.aws.dev/lambda/java/latest/utilities/kafka/)
+ [Powertools for Amazon Lambda (Python)](https://docs.powertools.aws.dev/lambda/python/latest/utilities/kafka/)
+ [Powertools for Amazon Lambda (TypeScript)](https://docs.powertools.aws.dev/lambda/typescript/latest/features/kafka/)
+ [Powertools for Amazon Lambda (.NET)](https://docs.powertools.aws.dev/lambda/dotnet/utilities/kafka/)

**Note**  
When working with schema registry integration, you can choose `SOURCE` or `JSON` format. Each option supports different serialization formats as shown below:  


| Format | Supports | 
| --- | --- | 
|  SOURCE  |  Avro and Protobuf (using Lambda Schema Registry integration)  | 
|  JSON  |  JSON data  | 

When using the `SOURCE` or `JSON` format, you can use Powertools for Amazon Lambda to help deserialize the data in your function code. Here are examples of how to handle different data formats:

------
#### [ AVRO ]

Java example:

```
package org.demo.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.demo.kafka.avro.AvroProduct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import software.amazon.lambda.powertools.kafka.Deserialization;
import software.amazon.lambda.powertools.kafka.DeserializationType;
import software.amazon.lambda.powertools.logging.Logging;

public class AvroDeserializationFunction implements RequestHandler<ConsumerRecords<String, AvroProduct>, String> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AvroDeserializationFunction.class);

    @Override
    @Logging
    @Deserialization(type = DeserializationType.KAFKA_AVRO)
    public String handleRequest(ConsumerRecords<String, AvroProduct> records, Context context) {
        for (ConsumerRecord<String, AvroProduct> consumerRecord : records) {
            LOGGER.info("ConsumerRecord: {}", consumerRecord);

            AvroProduct product = consumerRecord.value();
            LOGGER.info("AvroProduct: {}", product);

            String key = consumerRecord.key();
            LOGGER.info("Key: {}", key);
        }

        return "OK";
    }

}
```

Python example:

```
from aws_lambda_powertools.utilities.kafka_consumer.kafka_consumer import kafka_consumer
from aws_lambda_powertools.utilities.kafka_consumer.schema_config import SchemaConfig
from aws_lambda_powertools.utilities.kafka_consumer.consumer_records import ConsumerRecords

from aws_lambda_powertools.utilities.typing import LambdaContext
from aws_lambda_powertools import Logger

logger = Logger(service="kafkaConsumerPowertools")

value_schema_str = open("customer_profile.avsc", "r").read()

schema_config = SchemaConfig(
    value_schema_type="AVRO",
    value_schema=value_schema_str)

@kafka_consumer(schema_config=schema_config)
def lambda_handler(event: ConsumerRecords, context: LambdaContext):
    for record in event.records:
        value = record.value
        logger.info(f"Received value: {value}")
```

TypeScript example:

```
import { kafkaConsumer } from '@aws-lambda-powertools/kafka';

import type { ConsumerRecords } from '@aws-lambda-powertools/kafka/types';
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger();

type Value = {
    id: number;
    name: string;
    price: number;
};

const schema = `{
    "type": "record",
    "name": "Product",
    "fields": [
        { "name": "id", "type": "int" },
        { "name": "name", "type": "string" },
        { "name": "price", "type": "double" }
    ]
}`;

export const handler = kafkaConsumer<string, Value>(
    (event: ConsumerRecords<string, Value>, _context: Context) => {
        for (const record of event.records) {
            logger.info(`Processing record with key: ${record.key}`);
            logger.info(`Record value: ${JSON.stringify(record.value)}`);
            // You can add more processing logic here
        }
    },
    {
        value: {
            type: 'avro',
            schema: schema,
        },
    }
);
```

.NET example:

```
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Kafka;
using AWS.Lambda.Powertools.Kafka.Avro;
using AWS.Lambda.Powertools.Logging;
using Com.Example;

// Assembly attribute to enable the Lambda function's Kafka event to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(PowertoolsKafkaAvroSerializer))]

namespace AvroClassLibrary;

public class Function
{
    public string FunctionHandler(ConsumerRecords<string, CustomerProfile> records, ILambdaContext context)
    {
        foreach (var record in records)
        {
            Logger.LogInformation("Processing message from topic: {topic}", record.Topic);
            Logger.LogInformation("Partition: {partition}, Offset: {offset}", record.Partition, record.Offset);
            Logger.LogInformation("Produced at: {timestamp}", record.Timestamp);
            
            foreach (var header in record.Headers.DecodedValues())
            {
                Logger.LogInformation($"{header.Key}: {header.Value}");
            }
            
            Logger.LogInformation("Processing order for: {fullName}", record.Value.FullName);
        }
    
        return "Processed " + records.Count() + " records";
    }
}
```

------
#### [ PROTOBUF ]

Java example:

```
package org.demo.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.demo.kafka.protobuf.ProtobufProduct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import software.amazon.lambda.powertools.kafka.Deserialization;
import software.amazon.lambda.powertools.kafka.DeserializationType;
import software.amazon.lambda.powertools.logging.Logging;

public class ProtobufDeserializationFunction
        implements RequestHandler<ConsumerRecords<String, ProtobufProduct>, String> {

    private static final Logger LOGGER = LoggerFactory.getLogger(ProtobufDeserializationFunction.class);

    @Override
    @Logging
    @Deserialization(type = DeserializationType.KAFKA_PROTOBUF)
    public String handleRequest(ConsumerRecords<String, ProtobufProduct> records, Context context) {
        for (ConsumerRecord<String, ProtobufProduct> consumerRecord : records) {
            LOGGER.info("ConsumerRecord: {}", consumerRecord);

            ProtobufProduct product = consumerRecord.value();
            LOGGER.info("ProtobufProduct: {}", product);

            String key = consumerRecord.key();
            LOGGER.info("Key: {}", key);
        }

        return "OK";
    }

}
```

Python example:

```
from aws_lambda_powertools.utilities.kafka_consumer.kafka_consumer import kafka_consumer

from aws_lambda_powertools.utilities.kafka_consumer.schema_config import SchemaConfig
from aws_lambda_powertools.utilities.kafka_consumer.consumer_records import ConsumerRecords

from aws_lambda_powertools.utilities.typing import LambdaContext
from aws_lambda_powertools import Logger

from user_pb2 import User # protobuf generated class

logger = Logger(service="kafkaConsumerPowertools")

schema_config = SchemaConfig(
    value_schema_type="PROTOBUF",
    value_schema=User)

@kafka_consumer(schema_config=schema_config)
def lambda_handler(event: ConsumerRecords, context: LambdaContext):
    for record in event.records:
        value = record.value
        logger.info(f"Received value: {value}")
```

TypeScript example:

```
import { kafkaConsumer } from '@aws-lambda-powertools/kafka';
import type { ConsumerRecords } from '@aws-lambda-powertools/kafka/types';
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';
import { Product } from './product.generated.js';

const logger = new Logger();

type Value = {
    id: number;
    name: string;
    price: number;
};

export const handler = kafkaConsumer<string, Value>(
    (event: ConsumerRecords<string, Value>, _context: Context) => {
        for (const record of event.records) {
            logger.info(`Processing record with key: ${record.key}`);
            logger.info(`Record value: ${JSON.stringify(record.value)}`);
        }
    },
    {
        value: {
            type: 'protobuf',
            schema: Product,
        },
    }
);
```

.NET example:

```
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Kafka;
using AWS.Lambda.Powertools.Kafka.Protobuf;
using AWS.Lambda.Powertools.Logging;
using Com.Example;

// Assembly attribute to enable the Lambda function's Kafka event to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(PowertoolsKafkaProtobufSerializer))]

namespace ProtoBufClassLibrary;

public class Function
{
    public string FunctionHandler(ConsumerRecords<string, CustomerProfile> records, ILambdaContext context)
    {
        foreach (var record in records)
        {
            Logger.LogInformation("Processing message from topic: {topic}", record.Topic);
            Logger.LogInformation("Partition: {partition}, Offset: {offset}", record.Partition, record.Offset);
            Logger.LogInformation("Produced at: {timestamp}", record.Timestamp);
            
            foreach (var header in record.Headers.DecodedValues())
            {
                Logger.LogInformation($"{header.Key}: {header.Value}");
            }
            
            Logger.LogInformation("Processing order for: {fullName}", record.Value.FullName);
        }
    
        return "Processed " + records.Count() + " records";
    }
}
```

------
#### [ JSON ]

Java example:

```
package org.demo.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import software.amazon.lambda.powertools.kafka.Deserialization;
import software.amazon.lambda.powertools.kafka.DeserializationType;
import software.amazon.lambda.powertools.logging.Logging;

public class JsonDeserializationFunction implements RequestHandler<ConsumerRecords<String, Product>, String> {

    private static final Logger LOGGER = LoggerFactory.getLogger(JsonDeserializationFunction.class);

    @Override
    @Logging
    @Deserialization(type = DeserializationType.KAFKA_JSON)
    public String handleRequest(ConsumerRecords<String, Product> consumerRecords, Context context) {
        for (ConsumerRecord<String, Product> consumerRecord : consumerRecords) {
            LOGGER.info("ConsumerRecord: {}", consumerRecord);

            Product product = consumerRecord.value();
            LOGGER.info("Product: {}", product);

            String key = consumerRecord.key();
            LOGGER.info("Key: {}", key);
        }

        return "OK";
    }
}
```

Python example:

```
from aws_lambda_powertools.utilities.kafka_consumer.kafka_consumer import kafka_consumer

from aws_lambda_powertools.utilities.kafka_consumer.schema_config import SchemaConfig
from aws_lambda_powertools.utilities.kafka_consumer.consumer_records import ConsumerRecords

from aws_lambda_powertools.utilities.typing import LambdaContext
from aws_lambda_powertools import Logger

logger = Logger(service="kafkaConsumerPowertools")

schema_config = SchemaConfig(value_schema_type="JSON")

@kafka_consumer(schema_config=schema_config)
def lambda_handler(event: ConsumerRecords, context: LambdaContext):
    for record in event.records:
        value = record.value
        logger.info(f"Received value: {value}")
```

TypeScript example:

```
import { kafkaConsumer } from '@aws-lambda-powertools/kafka';
import type { ConsumerRecords } from '@aws-lambda-powertools/kafka/types';
import { Logger } from '@aws-lambda-powertools/logger';
import type { Context } from 'aws-lambda';

const logger = new Logger();

type Value = {
    id: number;
    name: string;
    price: number;
};

export const handler = kafkaConsumer<string, Value>(
    (event: ConsumerRecords<string, Value>, _context: Context) => {
        for (const record of event.records) {
            logger.info(`Processing record with key: ${record.key}`);
            logger.info(`Record value: ${JSON.stringify(record.value)}`);
            // You can add more processing logic here
        }
    },
    {
        value: {
            type: 'json',
        },
    }
);
```

.NET example:

```
using Amazon.Lambda.Core;
using AWS.Lambda.Powertools.Kafka;
using AWS.Lambda.Powertools.Kafka.Json;
using AWS.Lambda.Powertools.Logging;
using Com.Example;

// Assembly attribute to enable the Lambda function's Kafka event to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(PowertoolsKafkaJsonSerializer))]

namespace JsonClassLibrary;

public class Function
{
    public string FunctionHandler(ConsumerRecords<string, CustomerProfile> records, ILambdaContext context)
    {
        foreach (var record in records)
        {
            Logger.LogInformation("Processing message from topic: {topic}", record.Topic);
            Logger.LogInformation("Partition: {partition}, Offset: {offset}", record.Partition, record.Offset);
            Logger.LogInformation("Produced at: {timestamp}", record.Timestamp);
            
            foreach (var header in record.Headers.DecodedValues())
            {
                Logger.LogInformation($"{header.Key}: {header.Value}");
            }
            
            Logger.LogInformation("Processing order for: {fullName}", record.Value.FullName);
        }
    
        return "Processed " + records.Count() + " records";
    }
}
```

------

## Authentication methods for your schema registry


 To use a schema registry, Lambda needs to be able to securely access it. If you're working with an Amazon Glue schema registry, Lambda relies on IAM authentication. This means that your function's [execution role](lambda-intro-execution-role.md) must have the following permissions to access the Amazon Glue registry: 
+ [GetRegistry](https://docs.amazonaws.cn/glue/latest/webapi/API_GetRegistry.html) in the *Amazon Glue Web API Reference*
+ [GetSchemaVersion](https://docs.amazonaws.cn/glue/latest/webapi/API_GetSchemaVersion.html) in the *Amazon Glue Web API Reference*

Example of the required IAM policy: 

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "glue:GetRegistry",
                "glue:GetSchemaVersion"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
```

------

**Note**  
For Amazon Glue schema registries, if you provide `AccessConfigs` for an Amazon Glue registry, Lambda returns a validation exception.

If you're working with a Confluent schema registry, you can choose one of three supported authentication methods for the `Type` parameter of your [KafkaSchemaRegistryAccessConfig](https://docs.amazonaws.cn/lambda/latest/api/API_KafkaSchemaRegistryAccessConfig) object:
+ **BASIC\_AUTH** — Lambda uses username and password or API key and API secret authentication to access your registry. If you choose this option, provide the Secrets Manager ARN containing your credentials in the URI field.
+ **CLIENT\_CERTIFICATE\_TLS\_AUTH** — Lambda uses mutual TLS authentication with client certificates. To use this option, Lambda needs access to both the certificate and the private key. Provide the Secrets Manager ARN containing these credentials in the URI field.
+ **NO\_AUTH** — The registry's public certificate must be signed by a certificate authority (CA) that's in the Lambda trust store. For a private CA or self-signed certificate, configure the server root CA certificate. To use this option, omit the `AccessConfigs` parameter.

 Additionally, if Lambda needs access to a private CA certificate to verify your schema registry's TLS certificate, choose `SERVER_ROOT_CA_CERT` as the `Type` and provide the Secrets Manager ARN to the certificate in the URI field. 

**Note**  
 To configure the `SERVER_ROOT_CA_CERT` option in the console, provide the secret ARN containing the certificate in the **Encryption** field. 

 The authentication configuration for your schema registry is separate from any authentication you've configured for your Kafka cluster. You must configure both separately, even if they use similar authentication methods. 

## Error handling and troubleshooting for schema registry issues
Error handling and troubleshooting

When using a schema registry with your Amazon MSK event source, you may encounter various errors. This section provides guidance on common issues and how to resolve them.

### Configuration errors


These errors occur when setting up your schema registry configuration.

Provisioned mode required  
**Error message:** `SchemaRegistryConfig is only available for Provisioned Mode. To configure Schema Registry, please enable Provisioned Mode by specifying MinimumPollers in ProvisionedPollerConfig.`  
**Resolution:** Enable provisioned mode for your event source mapping by configuring the `MinimumPollers` parameter in `ProvisionedPollerConfig`.

Invalid schema registry URL  
**Error message:** `Malformed SchemaRegistryURI provided. Please provide a valid URI or ARN. For example, https://schema-registry.example.com:8081 or arn:aws:glue:us-east-1:123456789012:registry/ExampleRegistry.`  
**Resolution:** Provide a valid HTTPS URL for Confluent Schema Registry or a valid ARN for Amazon Glue Schema Registry.

Invalid or missing event record format  
**Error message:** `EventRecordFormat is a required field for SchemaRegistryConfig. Please provide one of supported format types: SOURCE, JSON.`  
**Resolution:** Specify either SOURCE or JSON as the EventRecordFormat in your schema registry configuration.

Duplicate validation attributes  
**Error message:** `Duplicate KEY/VALUE Attribute in SchemaValidationConfigs. SchemaValidationConfigs must contain at most one KEY/VALUE Attribute.`  
**Resolution:** Remove duplicate KEY or VALUE attributes from your SchemaValidationConfigs. Each attribute type can only appear once.

Missing validation configuration  
**Error message:** `SchemaValidationConfigs is a required field for SchemaRegistryConfig.`  
**Resolution:** Add SchemaValidationConfigs to your configuration, specifying at least one validation attribute (KEY or VALUE).

### Access and permission errors


These errors occur when Lambda cannot access the schema registry due to permission or authentication issues.

Amazon Glue Schema Registry access denied  
**Error message:** `Cannot access Glue Schema with provided role. Please ensure the provided role can perform the GetRegistry and GetSchemaVersion Actions on your schema.`  
**Resolution:** Add the required permissions (`glue:GetRegistry` and `glue:GetSchemaVersion`) to your function's execution role.

Confluent Schema Registry access denied  
**Error message:** `Cannot access Confluent Schema with the provided access configuration.`  
**Resolution:** Verify that your authentication credentials (stored in Secrets Manager) are correct and have the necessary permissions to access the schema registry.

Cross-account Amazon Glue Schema Registry  
**Error message:** `Cross-account Glue Schema Registry ARN not supported.`  
**Resolution:** Use an Amazon Glue Schema Registry that's in the same Amazon Web Services account as your Lambda function.

Cross-region Amazon Glue Schema Registry  
**Error message:** `Cross-region Glue Schema Registry ARN not supported.`  
**Resolution:** Use an Amazon Glue Schema Registry that's in the same Amazon Web Services Region as your Lambda function.

Secret access issues  
**Error message:** `Lambda received InvalidRequestException from Secrets Manager.`  
**Resolution:** Verify that your function's execution role has permission to access the secret and that the secret is not encrypted with a default Amazon KMS key if accessing from a different account.

### Connection errors


These errors occur when Lambda cannot establish a connection to the schema registry.

VPC connectivity issues  
**Error message:** `Cannot connect to your Schema Registry. Your Kafka cluster's VPC must be able to connect to the schema registry. You can provide access by configuring Amazon PrivateLink or a NAT Gateway or VPC Peering between Kafka Cluster VPC and the schema registry VPC.`  
**Resolution:** Configure your VPC networking to allow connections to the schema registry using Amazon PrivateLink, a NAT Gateway, or VPC peering.

TLS handshake failure  
**Error message:** `Unable to establish TLS handshake with the schema registry. Please provide correct CA-certificate or client certificate using Secrets Manager to access your schema registry.`  
**Resolution:** Verify that your CA certificates and client certificates (for mTLS) are correct and properly configured in Secrets Manager.

Throttling  
**Error message:** `Receiving throttling errors when accessing the schema registry. Please increase API TPS limits for your schema registry.`  
**Resolution:** Increase the API rate limits for your schema registry or reduce the rate of requests from your application.

Self-managed schema registry errors  
**Error message:** `Lambda received an internal server error or an unexpected error from the provided self-managed schema registry.`  
**Resolution:** Check the health and configuration of your self-managed schema registry server.

# Low latency processing for Kafka event sources
Low latency Apache Kafka

Amazon Lambda natively supports low latency event processing for applications that require consistent end-to-end latencies of less than 100 milliseconds. This page provides configuration details and recommendations to enable low latency workflows.

## Enable low latency processing


To enable low latency processing on a Kafka event source mapping, the following basic configuration is required: 
+ Enable provisioned mode. For more information, see [Provisioned mode](kafka-scaling-modes.md#kafka-provisioned-mode).
+ Set the event source mapping's `MaximumBatchingWindowInSeconds` parameter to 0. For more information, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

## Fine-tuning your low latency Kafka ESM


Consider the following recommendations to optimize your Kafka event source mapping for low latency:

### Provisioned mode configuration


In provisioned mode for Kafka event source mappings, Lambda lets you fine-tune the throughput of your event source mapping by configuring a minimum and maximum number of resources called **event pollers**. An event poller (or **poller**) is a compute resource that underpins an event source mapping in provisioned mode and provides up to 5 MB/s of throughput. Each event poller supports up to 5 concurrent Lambda invocations.

To determine the optimal poller configuration for your application, consider your peak ingestion rate and processing requirements. Let's look at a simplified example:

With a batch size of 20 records and an average target function duration of 50 ms, each poller can handle 2,000 records per second, subject to the 5 MB/s limit. This is calculated as: (20 records × 1000 ms/50 ms) × 5 concurrent Lambda invocations. Therefore, if your desired peak ingestion rate is 20,000 records per second, you need at least 10 event pollers.
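The sizing arithmetic above can be sketched as a small helper. The function names and structure here are illustrative only and not part of any Lambda API; the 5 MB/s per-poller throughput limit would need a separate byte-rate check that this sketch omits.

```python
import math

# Each poller runs up to 5 concurrent invocations (per the text above).
CONCURRENT_INVOKES_PER_POLLER = 5

def records_per_second_per_poller(batch_size: int, avg_duration_ms: float) -> float:
    # One invocation processes batch_size records in avg_duration_ms.
    invokes_per_second = 1000.0 / avg_duration_ms
    return batch_size * invokes_per_second * CONCURRENT_INVOKES_PER_POLLER

def min_event_pollers(peak_records_per_second: float, batch_size: int, avg_duration_ms: float) -> int:
    # Round up: a fractional poller isn't a thing.
    per_poller = records_per_second_per_poller(batch_size, avg_duration_ms)
    return math.ceil(peak_records_per_second / per_poller)
```

For the worked example: 20 × (1000/50) × 5 = 2,000 records per second per poller, so a 20,000 records-per-second peak needs at least 10 pollers, before adding any buffer capacity.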

**Note**  
We recommend provisioning additional event pollers as a buffer to avoid consistently operating at maximum capacity.

Provisioned mode automatically scales your event pollers based on traffic patterns, within the configured minimum and maximum number of **event pollers**. Scaling can trigger a rebalance and therefore introduce additional latency. To disable auto-scaling, set the minimum and maximum number of event pollers to the same value.

### Additional considerations


Additional considerations include:
+ Cold starts from the invocation of your Lambda target function can potentially increase end-to-end latency. To reduce this risk, consider enabling [provisioned concurrency](provisioned-concurrency.md) or [SnapStart](snapstart.md) on your event source mapping's target function. Additionally, optimize your function's memory allocation to ensure consistent and optimal executions.
+ When `MaximumBatchingWindowInSeconds` is set to 0, Lambda will immediately process any available records without waiting to fill the complete batch size. For example, if your batch size is set to 1,000 records but only 100 records are available, Lambda will process those 100 records immediately rather than waiting for the full 1,000 records to accumulate.
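The dispatch rule in the second bullet can be sketched as a toy model — with a zero batching window, a poller invokes as soon as any records are available, capped at the batch size. This is an illustration of the documented behavior, not Lambda's actual implementation.

```python
def records_to_dispatch(available: int, batch_size: int, batching_window_s: int) -> int:
    # Toy model of the dispatch rule; not Lambda's actual implementation.
    if batching_window_s == 0:
        # Process whatever is available right away, up to the batch size.
        return min(available, batch_size)
    # With a nonzero window, a poller may hold records until the batch
    # fills or the window elapses; modeled here as "keep waiting"
    # (dispatch nothing) unless a full batch is ready.
    return batch_size if available >= batch_size else 0
```

Using the example from the bullet: with a batch size of 1,000 and only 100 records available, a zero window dispatches 100 immediately, while a nonzero window would keep waiting.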

**Important**  
The optimal configuration for low latency processing varies significantly based on your specific workload. We strongly recommend testing different configurations with your actual workload to determine the best settings for your use case. 

# Configuring error handling controls for Kafka event sources
Retry configurations

You can configure how Lambda handles errors and retries for your Kafka event source mappings. These configurations help you control how Lambda processes failed records and manages retry behavior.

## Available retry configurations


The following retry configurations are available for both Amazon MSK and self-managed Kafka event sources:
+ **Maximum retry attempts** – The maximum number of times Lambda retries when your function returns an error. This doesn't count the initial invocation attempt. The default is -1 (infinite). When you configure both infinite retries and an [on-failure destination](kafka-on-failure-destination.md), Lambda automatically applies a maximum of 10 retry attempts.
+ **Maximum record age** – The maximum age of a record that Lambda sends to your function. The default is -1 (infinite).
+ **Split batch on error** – When your function returns an error, split the batch into two smaller batches and retry each separately. This helps isolate problematic records.
+ **Partial batch response** – Allow your function to return information about which records in a batch failed processing, so Lambda can retry only the failed records.
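The interaction between infinite retries and an on-failure destination (first bullet) can be made explicit with a small helper. This mirrors the documented cap of 10; the function is illustrative, not part of any Lambda API.

```python
def effective_max_retry_attempts(configured: int, has_on_failure_destination: bool) -> int:
    # -1 means infinite retries. When an on-failure destination is also
    # configured, Lambda caps infinite retries at 10 attempts.
    if configured == -1:
        return 10 if has_on_failure_destination else -1
    return configured
```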

## Configuring error handling controls (console)


You can configure retry behavior when creating or updating a Kafka event source mapping in the Lambda console.

**To configure retry behavior for a Kafka event source (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose your function name.

1. Do one of the following:
   + To add a new Kafka trigger, under **Function overview**, choose **Add trigger**.
   + To modify an existing Kafka trigger, choose the trigger and then choose **Edit**.

1. Under **Event poller configuration**, select **Provisioned mode** to configure error handling controls:

   1. For **Retry attempts**, enter the maximum number of retry attempts (0-10000, or -1 for infinite).

   1. For **Maximum record age**, enter the maximum age in seconds (60-604800, or -1 for infinite).

   1. To enable batch splitting when errors occur, select **Split batch on error**.

   1. To enable partial batch response, select **ReportBatchItemFailures**.

1. Choose **Add** or **Save**.

## Configuring retry behavior (Amazon CLI)


Use the following Amazon CLI commands to configure retry behavior for your Kafka event source mappings.

### Creating an event source mapping with retry configurations


The following example creates a self-managed Kafka event source mapping with error handling controls:

```
aws lambda create-event-source-mapping \
  --function-name my-kafka-function \
  --topics my-kafka-topic \
  --source-access-configurations Type=SASL_SCRAM_512_AUTH,URI=arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:MyBrokerSecretName \
  --self-managed-event-source '{"Endpoints":{"KAFKA_BOOTSTRAP_SERVERS":["abc.xyz.com:9092"]}}' \
  --starting-position LATEST \
  --provisioned-poller-config MinimumPollers=1,MaximumPollers=1 \
  --maximum-retry-attempts 3 \
  --maximum-record-age-in-seconds 3600 \
  --bisect-batch-on-function-error \
  --function-response-types "ReportBatchItemFailures"
```

For Amazon MSK event sources:

```
aws lambda create-event-source-mapping \
  --event-source-arn arn:aws-cn:kafka:us-east-1:111122223333:cluster/my-cluster/fc2f5bdf-fd1b-45ad-85dd-15b4a5a6247e-2 \
  --topics AWSMSKKafkaTopic \
  --starting-position LATEST \
  --function-name my-kafka-function \
  --source-access-configurations '[{"Type": "SASL_SCRAM_512_AUTH","URI": "arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:my-secret"}]' \
  --provisioned-poller-config MinimumPollers=1,MaximumPollers=1 \
  --maximum-retry-attempts 3 \
  --maximum-record-age-in-seconds 3600 \
  --bisect-batch-on-function-error \
  --function-response-types "ReportBatchItemFailures"
```

### Updating retry configurations


Use the `update-event-source-mapping` command to modify retry configurations for an existing event source mapping:

```
aws lambda update-event-source-mapping \
  --uuid 12345678-1234-1234-1234-123456789012 \
  --maximum-retry-attempts 5 \
  --maximum-record-age-in-seconds 7200 \
  --bisect-batch-on-function-error \
  --function-response-types "ReportBatchItemFailures"
```

## PartialBatchResponse


Partial batch response, also known as ReportBatchItemFailures, is a key feature for error handling in Lambda's integration with Kafka sources. Without this feature, when an error occurs in one of the items in a batch, it results in reprocessing all messages in that batch. With partial batch response enabled and implemented, the handler returns identifiers only for the failed messages, allowing Lambda to retry just those specific items. This provides greater control over how batches containing failed messages are processed.

To report batch item failures, your function returns a response with the following JSON structure:

```
{
  "batchItemFailures": [
    {
      "itemIdentifier": {
        "partition": "topic-partition_number",
        "offset": 100
      }
    },
    ...
  ]
}
```

**Important**  
If you return an empty valid JSON or null, the event source mapping considers the batch successfully processed. If you return a topic-partition_number or offset that was not present in the invoked event, Lambda treats it as a failure and retries the entire batch.

The following code examples show how to implement partial batch response for Lambda functions that receive events from Kafka sources. The function reports the batch item failures in the response, signaling to Lambda to retry those messages later.

Here is a Python Lambda handler implementation that shows this approach:

```
import base64
from typing import Any, Dict, List

def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, List[Dict[str, Dict[str, Any]]]]:
    failures: List[Dict[str, Dict[str, Any]]] = []
    records_dict = event.get("records", {})
    
    for topic_partition, records_list in records_dict.items():
        for record in records_list:
            topic = record.get("topic")
            partition = record.get("partition")
            offset = record.get("offset")
            value_b64 = record.get("value")
            
            try:
                data = base64.b64decode(value_b64).decode("utf-8")
                process_message(data)
            except Exception as exc:
                print(f"Failed to process record topic={topic} partition={partition} offset={offset}: {exc}")
                item_identifier: Dict[str, Any] = {
                    "partition": f"{topic}-{partition}",
                    "offset": int(offset) if offset is not None else None,
                }
                failures.append({"itemIdentifier": item_identifier})
    
    return {"batchItemFailures": failures}

def process_message(data: str) -> None:
    # Your business logic for a single message
    pass
```

Here is a Node.js version:

```
const { Buffer } = require("buffer");

const handler = async (event) => {
  const failures = [];
  
  for (const topicPartition in event.records) {
    const records = event.records[topicPartition];
    
    for (const record of records) {
      const topic = record.topic;
      const partition = record.partition;
      const offset = record.offset;
      const valueBase64 = record.value;
      const data = Buffer.from(valueBase64, "base64").toString("utf8");
      
      try {
        await processMessage(data);
      } catch (error) {
        console.error("Failed to process record", { topic, partition, offset, error });
        const itemIdentifier = {
          "partition": `${topic}-${partition}`,
          "offset": Number(offset),
        };
        failures.push({ itemIdentifier });
      }
    }
  }
  
  return { batchItemFailures: failures };
};

async function processMessage(payload) {
  // Your business logic for a single message
}

module.exports = { handler };
```

# Capturing discarded batches for Amazon MSK and self-managed Apache Kafka event sources
Retain failed invocations

To retain records of failed event source mapping invocations, add a destination to your function's event source mapping. Each record sent to the destination is a JSON document containing metadata about the failed invocation. For Amazon S3 destinations, Lambda also sends the entire invocation record along with the metadata. You can configure any Amazon SNS topic, Amazon SQS queue, Amazon S3 bucket, or Kafka as a destination.

With Amazon S3 destinations, you can use the [Amazon S3 Event Notifications](https://docs.amazonaws.cn/) feature to receive notifications when objects are uploaded to your destination S3 bucket. You can also configure S3 Event Notifications to invoke another Lambda function to perform automated processing on failed batches.
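As a sketch of that automated-processing pattern, a downstream function can pull the bucket name and object key out of the S3 Event Notification before fetching the failed batch (the fetch itself, via `boto3`, is omitted here). The event shape is the standard S3 notification format; the function name is illustrative.

```python
import urllib.parse

def extract_failed_batch_locations(s3_event: dict) -> list:
    # Pull (bucket, key) pairs from a standard S3 Event Notification.
    # Object keys arrive URL-encoded in the notification payload.
    locations = []
    for record in s3_event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        locations.append((bucket, key))
    return locations
```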

Your execution role must have permissions for the destination:
+ **For an SQS destination:** [sqs:SendMessage](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html)
+ **For an SNS destination:** [sns:Publish](https://docs.amazonaws.cn/sns/latest/api/API_Publish.html)
+ **For an S3 destination:** [s3:PutObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html) and [s3:ListBucket](https://docs.amazonaws.cn/AmazonS3/latest/API/ListObjectsV2.html)
+ **For a Kafka destination:** [kafka-cluster:WriteData](https://docs.aws.amazon.com/msk/latest/developerguide/kafka-actions.html)

You can configure a Kafka topic as an on-failure destination for your Kafka event source mappings. When Lambda can't process records after exhausting retry attempts or when records exceed the maximum age, Lambda sends the failed records to the specified Kafka topic for later processing. Refer to [Using a Kafka topic as an on-failure destination](kafka-on-failure-destination.md).

You must deploy a VPC endpoint for your on-failure destination service inside your Kafka cluster VPC.

Additionally, if you configured a KMS key on your destination, Lambda needs the following permissions depending on the destination type:
+ If you've enabled encryption with your own KMS key for an S3 destination, [kms:GenerateDataKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_GenerateDataKey.html) is required. If the KMS key and S3 bucket destination are in a different account from your Lambda function and execution role, configure the KMS key to trust the execution role to allow kms:GenerateDataKey.
+ If you've enabled encryption with your own KMS key for SQS destination, [kms:Decrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_Decrypt.html) and [kms:GenerateDataKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_GenerateDataKey.html) are required. If the KMS key and SQS queue destination are in a different account from your Lambda function and execution role, configure the KMS key to trust the execution role to allow kms:Decrypt, kms:GenerateDataKey, [kms:DescribeKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_DescribeKey.html), and [kms:ReEncrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_ReEncrypt.html).
+ If you've enabled encryption with your own KMS key for SNS destination, [kms:Decrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_Decrypt.html) and [kms:GenerateDataKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_GenerateDataKey.html) are required. If the KMS key and SNS topic destination are in a different account from your Lambda function and execution role, configure the KMS key to trust the execution role to allow kms:Decrypt, kms:GenerateDataKey, [kms:DescribeKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_DescribeKey.html), and [kms:ReEncrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_ReEncrypt.html).

## Configuring on-failure destinations for a Kafka event source mapping


To configure an on-failure destination using the console, follow these steps:

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Under **Function overview**, choose **Add destination**.

1. For **Source**, choose **Event source mapping invocation**.

1. For **Event source mapping**, choose an event source that's configured for this function.

1. For **Condition**, select **On failure**. For event source mapping invocations, this is the only accepted condition.

1. For **Destination type**, choose the destination type that Lambda sends invocation records to.

1. For **Destination**, choose a resource.

1. Choose **Save**.

You can also configure an on-failure destination using the Amazon CLI. For example, the following [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) command adds an event source mapping with an SQS on-failure destination to `MyFunction`:

```
aws lambda create-event-source-mapping \
--function-name "MyFunction" \
--event-source-arn arn:aws-cn:kafka:us-east-1:123456789012:cluster/vpc-2priv-2pub/751d2973-a626-431c-9d4e-d7975eb44dd7-2 \
--destination-config '{"OnFailure": {"Destination": "arn:aws-cn:sqs:us-east-1:123456789012:dest-queue"}}'
```

The following [update-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-event-source-mapping.html) command adds an S3 on-failure destination to the event source associated with the input `uuid`:

```
aws lambda update-event-source-mapping \
--uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b \
--destination-config '{"OnFailure": {"Destination": "arn:aws-cn:s3:::dest-bucket"}}'
```

To remove a destination, supply an empty string as the argument to the `destination-config` parameter:

```
aws lambda update-event-source-mapping \
--uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b \
--destination-config '{"OnFailure": {"Destination": ""}}'
```

### Security best practices for Amazon S3 destinations


Deleting an S3 bucket that's configured as a destination without removing the destination from your function's configuration can create a security risk. If another user knows your destination bucket's name, they can recreate the bucket in their Amazon Web Services account. Records of failed invocations will be sent to their bucket, potentially exposing data from your function.

**Warning**  
To ensure that invocation records from your function can't be sent to an S3 bucket in another Amazon Web Services account, add a condition to your function's execution role that limits `s3:PutObject` permissions to buckets in your account. 

The following example shows an IAM policy that limits your function's `s3:PutObject` permissions to buckets in your account. This policy also gives Lambda the `s3:ListBucket` permission it needs to use an S3 bucket as a destination.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3BucketResourceAccountWrite",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*/*",
                "arn:aws:s3:::*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:ResourceAccount": "111122223333"
                }
            }
        }
    ]
}
```

To add a permissions policy to your function's execution role using the Amazon Web Services Management Console or Amazon CLI, refer to the instructions in the following procedures:

------
#### [ Console ]

**To add a permissions policy to a function's execution role (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the Lambda function whose execution role you want to modify.

1. In the **Configuration** tab, select **Permissions**.

1. In the **Execution role** tab, select your function's **Role name** to open the role's IAM console page.

1. Add a permissions policy to the role by doing the following:

   1. In the **Permissions policies** pane, choose **Add permissions** and select **Create inline policy**.

   1. In **Policy editor**, select **JSON**.

   1. Paste the policy you want to add into the editor (replacing the existing JSON), and then choose **Next**.

   1. Under **Policy details**, enter a **Policy name**.

   1. Choose **Create policy**.

------
#### [ Amazon CLI ]

**To add a permissions policy to a function's execution role (CLI)**

1. Create a JSON policy document with the required permissions and save it in a local directory.

1. Use the IAM `put-role-policy` CLI command to add the permissions to your function's execution role. Run the following command from the directory you saved your JSON policy document in and replace the role name, policy name, and policy document with your own values.

   ```
   aws iam put-role-policy \
   --role-name my_lambda_role \
   --policy-name LambdaS3DestinationPolicy \
   --policy-document file://my_policy.json
   ```

------

### SNS and SQS example invocation record


The following example shows what Lambda sends to an SNS topic or SQS queue destination for a failed Kafka event source invocation. Each of the keys under `recordsInfo` contains both the Kafka topic and partition, separated by a hyphen. For example, for the key `"Topic-0"`, `Topic` is the Kafka topic, and `0` is the partition. For each topic and partition, you can use the offsets and timestamp data to find the original invocation records.

```
{
    "requestContext": {
        "requestId": "316aa6d0-8154-xmpl-9af7-85d5f4a6bc81",
        "functionArn": "arn:aws-cn:lambda:us-east-1:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted" | "MaximumPayloadSizeExceeded",
        "approximateInvokeCount": 1
    },
    "responseContext": { // null if record is MaximumPayloadSizeExceeded
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:38:06.021Z",
    "KafkaBatchInfo": {
        "batchSize": 500,
        "eventSourceArn": "arn:aws-cn:kafka:us-east-1:123456789012:cluster/vpc-2priv-2pub/751d2973-a626-431c-9d4e-d7975eb44dd7-2",
        "bootstrapServers": "...",
        "payloadSize": 2039086, // In bytes
        "recordsInfo": {
            "Topic-0": {
                "firstRecordOffset": "49601189658422359378836298521827638475320189012309704722",
                "lastRecordOffset": "49601189658422359378836298522902373528957594348623495186",
                "firstRecordTimestamp": "2019-11-14T00:38:04.835Z",
                "lastRecordTimestamp": "2019-11-14T00:38:05.580Z",
            },
            "Topic-1": {
                "firstRecordOffset": "49601189658422359378836298521827638475320189012309704722",
                "lastRecordOffset": "49601189658422359378836298522902373528957594348623495186",
                "firstRecordTimestamp": "2019-11-14T00:38:04.835Z",
                "lastRecordTimestamp": "2019-11-14T00:38:05.580Z",
            }
        }
    }
}
```
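Because Kafka topic names can themselves contain hyphens, splitting a `recordsInfo` key such as `"Topic-0"` back into topic and partition should split on the rightmost hyphen. A minimal helper (illustrative, not part of any Lambda SDK):

```python
def split_records_info_key(key: str) -> tuple:
    # Keys look like "<topic>-<partition>"; topic names may contain
    # hyphens, so split on the rightmost one.
    topic, _, partition = key.rpartition("-")
    return topic, int(partition)
```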

### S3 destination example invocation record


For S3 destinations, Lambda sends the entire invocation record along with the metadata to the destination. The following example shows what Lambda sends to an S3 bucket destination for a failed Kafka event source invocation. In addition to all of the fields from the previous example for SQS and SNS destinations, the `payload` field contains the original invocation record as an escaped JSON string.

```
{
    "requestContext": {
        "requestId": "316aa6d0-8154-xmpl-9af7-85d5f4a6bc81",
        "functionArn": "arn:aws-cn:lambda:us-east-1:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted" | "MaximumPayloadSizeExceeded",
        "approximateInvokeCount": 1
    },
    "responseContext": { // null if record is MaximumPayloadSizeExceeded
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:38:06.021Z",
    "KafkaBatchInfo": {
        "batchSize": 500,
        "eventSourceArn": "arn:aws-cn:kafka:us-east-1:123456789012:cluster/vpc-2priv-2pub/751d2973-a626-431c-9d4e-d7975eb44dd7-2",
        "bootstrapServers": "...",
        "payloadSize": 2039086, // In bytes
        "recordsInfo": {
            "Topic-0": {
                "firstRecordOffset": "49601189658422359378836298521827638475320189012309704722",
                "lastRecordOffset": "49601189658422359378836298522902373528957594348623495186",
                "firstRecordTimestamp": "2019-11-14T00:38:04.835Z",
                "lastRecordTimestamp": "2019-11-14T00:38:05.580Z",
            },
            "Topic-1": {
                "firstRecordOffset": "49601189658422359378836298521827638475320189012309704722",
                "lastRecordOffset": "49601189658422359378836298522902373528957594348623495186",
                "firstRecordTimestamp": "2019-11-14T00:38:04.835Z",
                "lastRecordTimestamp": "2019-11-14T00:38:05.580Z",
            }
        }
    },
    "payload": "<Whole Event>" // Only available in S3
}
```
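Since the `payload` field arrives as an escaped JSON string, a consumer of the S3 object decodes twice: once for the invocation record and once for the embedded event. A minimal sketch, assuming the object body has already been read into a string:

```python
import json

def decode_s3_invocation_record(body: str) -> tuple:
    # First parse: the invocation record written to S3.
    record = json.loads(body)
    # Second parse: the original event, stored as an escaped JSON string
    # in the "payload" field.
    payload = json.loads(record["payload"])
    return record, payload
```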

**Tip**  
We recommend enabling S3 versioning on your destination bucket.

# Using a Kafka topic as an on-failure destination
Kafka on-failure destination

You can configure a Kafka topic as an on-failure destination for your Kafka event source mappings. When Lambda can't process records after exhausting retry attempts or when records exceed the maximum age, Lambda sends the failed records to the specified Kafka topic for later processing. When you configure both [infinite retries](kafka-retry-configurations.md) and an on-failure destination, Lambda automatically applies a maximum of 10 retry attempts.

## How a Kafka on-failure destination works


When you configure a Kafka topic as an on-failure destination, Lambda acts as a Kafka producer and writes failed records to the destination topic. This creates a dead letter topic (DLT) pattern within your Kafka infrastructure.
+ **Same cluster requirement** – The destination topic must exist in the same Kafka cluster as your source topics.
+ **Actual record content** – Kafka destinations receive the actual failed records along with failure metadata.
+ **Recursion prevention** – Lambda prevents infinite loops by blocking configurations where the source and destination topics are the same.
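The `kafka://` destination format (used in the CLI examples later in this section) and the recursion check can be illustrated with a small validator. This reproduces the documented rules; it is not Lambda's actual validation code.

```python
def validate_kafka_destination(destination: str, source_topics: list) -> str:
    # Destinations use the kafka:// prefix, and the destination topic
    # must not match any source topic (Lambda blocks that configuration
    # to prevent infinite loops).
    prefix = "kafka://"
    if not destination.startswith(prefix):
        raise ValueError("Kafka destinations must use the kafka:// prefix")
    topic = destination[len(prefix):]
    if topic in source_topics:
        raise ValueError("Destination topic cannot match a source topic")
    return topic
```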

## Configuring a Kafka on-failure destination


You can configure a Kafka topic as an on-failure destination when creating or updating a Kafka event source mapping.

### Configuring a Kafka destination (console)


**To configure a Kafka topic as an on-failure destination (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose your function name.

1. Do one of the following:
   + To add a new Kafka trigger, under **Function overview**, choose **Add trigger**.
   + To modify an existing Kafka trigger, choose the trigger and then choose **Edit**.

1. Under **Additional settings**, for **On-failure destination**, choose **Kafka topic**.

1. For **Topic name**, enter the name of the Kafka topic where you want to send failed records.

1. Choose **Add** or **Save**.

### Configuring a Kafka destination (Amazon CLI)


Use the `kafka://` prefix to specify a Kafka topic as an on-failure destination.

#### Creating an event source mapping with Kafka destination


The following example creates an Amazon MSK event source mapping with a Kafka topic as the on-failure destination:

```
aws lambda create-event-source-mapping \
  --function-name my-kafka-function \
  --topics AWSKafkaTopic \
  --event-source-arn arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abc123 \
  --starting-position LATEST \
  --provisioned-poller-config MinimumPollers=1,MaximumPollers=3 \
  --destination-config '{"OnFailure":{"Destination":"kafka://failed-records-topic"}}'
```

For self-managed Kafka, use the same syntax:

```
aws lambda create-event-source-mapping \
  --function-name my-kafka-function \
  --topics AWSKafkaTopic \
  --self-managed-event-source '{"Endpoints":{"KAFKA_BOOTSTRAP_SERVERS":["abc.xyz.com:9092"]}}' \
  --starting-position LATEST \
  --provisioned-poller-config MinimumPollers=1,MaximumPollers=3 \
  --destination-config '{"OnFailure":{"Destination":"kafka://failed-records-topic"}}'
```

#### Updating a Kafka destination


Use the `update-event-source-mapping` command to add or modify a Kafka destination:

```
aws lambda update-event-source-mapping \
  --uuid 12345678-1234-1234-1234-123456789012 \
  --destination-config '{"OnFailure":{"Destination":"kafka://failed-records-topic"}}'
```

## Record format for a Kafka destination


When Lambda sends failed records to a Kafka topic, each message contains both metadata about the failure and the actual record content.

### Failure metadata


The metadata includes information about why the record failed and details about the original batch:

```
{
  "requestContext": {
    "requestId": "e4b46cbf-b738-xmpl-8880-a18cdf61200e",
    "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function:$LATEST",
    "condition": "RetriesExhausted",
    "approximateInvokeCount": 3
  },
  "responseContext": {
    "statusCode": 200,
    "executedVersion": "$LATEST",
    "functionError": "Unhandled"
  },
  "version": "1.0",
  "timestamp": "2019-11-14T18:16:05.568Z",
  "KafkaBatchInfo": {
    "batchSize": 1,
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abc123",
    "bootstrapServers": "b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9098",
    "payloadSize": 1162,
    "recordInfo": {
      "offset": "49601189658422359378836298521827638475320189012309704722",
      "timestamp": "2019-11-14T18:16:04.835Z"
    }
  },
  "payload": {
    "bootstrapServers": "b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9098",
    "eventSource": "aws:kafka",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abc123",
    "records": {
      "my-topic-0": [
        {
          "headers": [],
          "key": "dGVzdC1rZXk=",
          "offset": 100,
          "partition": 0,
          "timestamp": 1749116692330,
          "timestampType": "CREATE_TIME",
          "topic": "my-topic",
          "value": "dGVzdC12YWx1ZQ=="
        }
      ]
    }
  }
}
```

### Partition key behavior


Lambda uses the same partition key from the original record when producing to the destination topic. If the original record had no key, Lambda uses Kafka's default round-robin partitioning across all available partitions in the destination topic.
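A toy model of that partitioning rule, where CRC32 stands in for Kafka's actual key hash and the names are illustrative only:

```python
import itertools
import zlib
from typing import Optional

def make_partitioner(num_partitions: int):
    # Keyed records always hash to the same partition; unkeyed records
    # cycle round-robin across all available partitions.
    round_robin = itertools.cycle(range(num_partitions))

    def choose_partition(key: Optional[bytes]) -> int:
        if key is None:
            return next(round_robin)
        return zlib.crc32(key) % num_partitions

    return choose_partition
```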

## Requirements and limitations

+ **Provisioned mode required** – A Kafka on-failure destination is only available for event source mappings with provisioned mode enabled.
+ **Same cluster only** – The destination topic must exist in the same Kafka cluster as your source topics.
+ **Topic permissions** – Your event source mapping must have write permissions to the destination topic. Example:

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "ClusterPermissions",
              "Effect": "Allow",
              "Action": [
                  "kafka-cluster:Connect",
                  "kafka-cluster:DescribeCluster",
                  "kafka-cluster:DescribeTopic",
                  "kafka-cluster:WriteData",
                  "kafka-cluster:ReadData"
              ],
              "Resource": [
                  "arn:aws:kafka:*:*:cluster/*"
              ]
          },
          {
              "Sid": "TopicPermissions",
              "Effect": "Allow",
              "Action": [
                  "kafka-cluster:DescribeTopic",
                  "kafka-cluster:WriteData",
                  "kafka-cluster:ReadData"
              ],
              "Resource": [
                  "arn:aws:kafka:*:*:topic/*/*"
              ]
          },
          {
              "Effect": "Allow",
              "Action": [
                  "kafka:DescribeCluster",
                  "kafka:GetBootstrapBrokers",
                  "kafka:Produce"
              ],
              "Resource": "arn:aws:kafka:*:*:cluster/*"
          },
          {
              "Effect": "Allow",
              "Action": [
                  "ec2:CreateNetworkInterface",
                  "ec2:DescribeNetworkInterfaces",
                  "ec2:DeleteNetworkInterface",
                  "ec2:DescribeSubnets",
                  "ec2:DescribeSecurityGroups"
              ],
              "Resource": "*"
          }
      ]
  }
  ```
+ **No recursion** – The destination topic name cannot be the same as any of your source topic names.

# Kafka event source mapping logging
Kafka ESM logging

You can configure the system-level logging for your Kafka event source mappings to enable and filter the system logs that Lambda event pollers send to CloudWatch. 

This feature is available only for Kafka event source mappings, and requires [Provisioned mode](https://docs.amazonaws.cn/lambda/latest/dg/kafka-scaling-modes.html#kafka-provisioned-mode).

For event source mappings with a logging configuration, you can also view the system logs through pre-built log queries on the **Monitor** tab of the console page **Lambda** > **Additional resources** > **Event source mappings**.

## How the logging works


When you set a log level in your event source mapping's logging configuration, the Lambda event poller emits the corresponding logs (event source mapping system logs).

The event source mapping reuses the same [log destination](https://docs.amazonaws.cn/lambda/latest/dg/monitoring-logs.html#configuring-log-destinations) as your Lambda function. Make sure that the execution role of your Lambda function has the necessary logging permissions.

The event source mapping has its own log stream, named with the date and the event source mapping UUID, such as `2020/01/01/12345678-1234-1234-1234-123456789012`.
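
For example, if you locate these streams programmatically, you can construct the expected name from the naming pattern above. An illustrative Python sketch (the exact stream-name format is taken from the example in this section):

```python
from datetime import datetime, timezone

def esm_log_stream_name(esm_uuid, when):
    # The stream name combines the date and the event source mapping UUID,
    # for example 2020/01/01/<uuid>.
    return f"{when:%Y/%m/%d}/{esm_uuid}"

name = esm_log_stream_name("12345678-1234-1234-1234-123456789012",
                           datetime(2020, 1, 1, tzinfo=timezone.utc))
print(name)  # 2020/01/01/12345678-1234-1234-1234-123456789012
```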

For event source mapping system logs, you can choose between the following log levels.


| Log level | Usage | 
| --- | --- | 
| DEBUG (most detail) | Detailed information for event source processing progress | 
| INFO | Messages about the normal operation of your event source mapping | 
| WARN (least detail) | Messages about warnings and errors that may lead to unexpected behavior | 

When you select a log level, the Lambda event poller sends logs at that level and lower levels of detail. For example, if you set the event source mapping system log level to INFO, the event poller doesn't send log outputs at the DEBUG level.
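
This filtering behavior can be expressed as a small sketch (illustrative only; the level ordering comes from the table above):

```python
# Ordering from most to least detailed, matching the table above.
LEVELS = ["DEBUG", "INFO", "WARN"]

def is_emitted(record_level, configured_level):
    """A record is emitted when it is at the configured level or a less
    detailed (more severe) one."""
    return LEVELS.index(record_level) >= LEVELS.index(configured_level)

assert is_emitted("WARN", "INFO")       # WARN passes an INFO threshold
assert not is_emitted("DEBUG", "INFO")  # DEBUG is suppressed at INFO
```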

## Configuring logging


You can set the logging configuration when creating or updating a Kafka event source mapping.

### Configuring logging (console)


**To configure the logging (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose your function name.

1. Do one of the following:
   + To add a new Kafka trigger, under **Function overview**, choose **Add trigger**.
   + To modify an existing Kafka trigger, choose the trigger and then choose **Edit**.

1. Under **Event poller configuration**, for **Provisioned mode**, select the **Configure** checkbox. The **Log level** setting appears.

1. From the **Log level** dropdown list, choose a level for the event source mapping.

1. Choose **Add** or **Save** at the bottom to create or update the event source mapping.

### Configuring logging (Amazon CLI)


#### Creating an event source mapping with logging


The following example creates an Amazon MSK event source mapping with a logging configuration:

```
aws lambda create-event-source-mapping \
  --function-name my-kafka-function \
  --topics AWSKafkaTopic \
  --event-source-arn arn:aws:kafka:us-east-1:123456789012:cluster/my-cluster/abc123 \
  --starting-position LATEST \
  --provisioned-poller-config MinimumPollers=1,MaximumPollers=3 \
  --logging-config '{"SystemLogLevel":"DEBUG"}'
```

For self-managed Kafka, use the same syntax:

```
aws lambda create-event-source-mapping \
  --function-name my-kafka-function \
  --topics AWSKafkaTopic \
  --self-managed-event-source '{"Endpoints":{"KAFKA_BOOTSTRAP_SERVERS":["abc.xyz.com:9092"]}}' \
  --starting-position LATEST \
  --provisioned-poller-config MinimumPollers=1,MaximumPollers=3 \
  --logging-config '{"SystemLogLevel":"DEBUG"}'
```

#### Updating logging config


Use the `update-event-source-mapping` command to add or modify logging config:

```
aws lambda update-event-source-mapping \
  --uuid 12345678-1234-1234-1234-123456789012 \
  --logging-config '{"SystemLogLevel":"WARN"}'
```

## Record format for a Kafka event source mapping system log


Each log entry that the Lambda event poller sends contains general event source mapping metadata as well as event-specific content.

### WARN log record


A WARN record contains errors or warnings from the event poller, and is emitted when the event occurs. For example:

```
{
    "eventType": "ESM_PROCESSING_EVENT",
    "timestamp": 1546347650000,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:12345678-1234-1234-1234-123456789012",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/tests-cluster/87654321-4321-4321-4321-876543221-s1",
    "eventProcessorId": "12345678-1234-1234-1234-123456789012/0",
    "logLevel": "WARN",
    "error": {
        "errorMessage": "Timeout expired while fetching topic metadata",
        "errorCode": "org.apache.kafka.common.errors.TimeoutException"
    }
}
```
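
Because every WARN record shares this envelope, you can scan exported log lines for error codes when triaging an issue. An illustrative Python sketch (the helper name is hypothetical):

```python
import json

def warn_error_codes(log_lines):
    """Collect the Kafka error codes from ESM system-log WARN records."""
    codes = []
    for line in log_lines:
        record = json.loads(line)
        if record.get("logLevel") == "WARN" and "error" in record:
            codes.append(record["error"]["errorCode"])
    return codes

sample = ['{"logLevel": "WARN", "error": {"errorMessage": "Timeout expired while fetching topic metadata", "errorCode": "org.apache.kafka.common.errors.TimeoutException"}}']
print(warn_error_codes(sample))  # ['org.apache.kafka.common.errors.TimeoutException']
```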

### INFO log record


An INFO record contains the Kafka consumer client configuration of each event poller, and is emitted when a consumer is built or changed. For example:

```
{
    "eventType": "POLLER_STATUS_EVENT",
    "timestamp": 1546347660000,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:12345678-1234-1234-1234-123456789012",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/tests-cluster/87654321-4321-4321-4321-876543221-s1",
    "eventProcessorId": "12345678-1234-1234-1234-123456789012/0",
    "logLevel": "INFO",
    "kafkaEventSourceConnection": {
        "brokerEndpoints": "boot-abcd1234.c2.kafka-serverless.us-east-1.amazonaws.com:9098",
        "consumerId": "12345678-1234-1234-1234-123456789012-0",
        "topics": [
            "test"
        ],
        "consumerGroupId": "12345678-1234-1234-1234-123456789012",
        "securityProtocol": "SASL_SSL",
        "saslMechanism": "AWS_MSK_IAM",
        "totalPartitionCount": 2,
        "assignedPartitionCount": 2,
        "partitionsAssignmentGeneration": 5,
        "assignedPartitions": [
            "test-0",
            "test-1"
        ],
        "networkConfig": {
            "ipAddresses": [
                "10.100.141.1"
            ],
            "subnetCidrBlock": "10.100.128.0/20",
            "securityGroups": [
                "sg-abcdefabcdefabcdef"
            ]
        }
    }
}
```

### DEBUG log record


A DEBUG record contains Kafka offset information for event source mapping processing, and the offset information is emitted once per minute. For example:

```
{
    "eventType": "KAFKA_STATUS_EVENT",
    "timestamp": 1546347670000,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:12345678-1234-1234-1234-123456789012",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/tests-cluster/87654321-4321-4321-4321-876543221-s1",
    "eventProcessorId": "12345678-1234-1234-1234-123456789012/0",
    "logLevel": "DEBUG",
    "kafkaPartitionOffsets": {
        "partition": "test-1",
        "endOffset": 5004,
        "consumedOffset": 5003,
        "processedOffset": 5003,
        "committedOffset": 5004
    }
}
```
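
One way to read these offset fields — assuming `endOffset` is the broker's latest offset and `processedOffset` is the last record the poller has processed — is to derive an approximate per-partition consumer lag:

```python
def consumer_lag(offsets):
    """Approximate per-partition lag from a DEBUG record's
    kafkaPartitionOffsets: records on the broker minus records processed."""
    return offsets["endOffset"] - offsets["processedOffset"]

record = {
    "partition": "test-1",
    "endOffset": 5004,
    "consumedOffset": 5003,
    "processedOffset": 5003,
    "committedOffset": 5004,
}
print(consumer_lag(record))  # 1
```

A lag that grows over successive DEBUG records suggests that the function isn't keeping up with the partition's throughput.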

# Troubleshooting Kafka event source mapping errors
Troubleshooting

The following topics provide troubleshooting advice for errors and issues that you might encounter when using Amazon MSK or self-managed Apache Kafka with Lambda.

For more help with troubleshooting, visit the [Amazon Knowledge Center](https://repost.aws/knowledge-center#AWS_Lambda).

## Authentication and authorization errors


If any of the permissions required to consume data from the Kafka cluster are missing, Lambda displays one of the following error messages in the event source mapping under **LastProcessingResult**.

**Topics**
+ [

### Cluster failed to authorize Lambda
](#kafka-authorize-errors)
+ [

### SASL authentication failed
](#kafka-sasl-errors)
+ [

### Server failed to authenticate Lambda
](#kafka-mtls-errors-server)
+ [

### Lambda failed to authenticate server
](#kafka-mtls-errors-lambda)
+ [

### Provided certificate or private key is invalid
](#kafka-key-errors)

### Cluster failed to authorize Lambda


For SASL/SCRAM or mTLS, this error indicates that the provided user doesn't have all of the following required Kafka access control list (ACL) permissions:
+ DescribeConfigs Cluster
+ Describe Group
+ Read Group
+ Describe Topic
+ Read Topic

When you create Kafka ACLs with the required `kafka-cluster` permissions, specify the topic and group as resources. The topic name must match the topic in the event source mapping. The group name must match the event source mapping's UUID.

After you add the required permissions to the execution role, it might take several minutes for the changes to take effect.
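
Checking coverage against the required list can be done mechanically. A minimal Python sketch (illustrative only; the operation/resource-type pairs come from the list above):

```python
# The (operation, resource-type) pairs required by the event source mapping.
REQUIRED_ACLS = {
    ("DescribeConfigs", "Cluster"),
    ("Describe", "Group"),
    ("Read", "Group"),
    ("Describe", "Topic"),
    ("Read", "Topic"),
}

def missing_acls(granted):
    """Return the required ACLs that the cluster hasn't granted."""
    return REQUIRED_ACLS - granted

granted = {("Describe", "Topic"), ("Read", "Topic"), ("DescribeConfigs", "Cluster")}
print(sorted(missing_acls(granted)))
# [('Describe', 'Group'), ('Read', 'Group')]
```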

The following is an example ESM system-level log after enabling [Logging Config](esm-logging.md) for this issue:

```
{
    "eventType": "ESM_PROCESSING_EVENT",
    "timestamp": 1734567890123,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-kafka-cluster/12345678-abcd-1234-efgh-EXAMPLE11111-1",
    "eventProcessorId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111/0",
    "logLevel": "WARN",
    "error": {
        "errorMessage": "Not authorized to access topics: [my-topic]",
        "errorCode": "org.apache.kafka.common.errors.TopicAuthorizationException"
    }
}
```

### SASL authentication failed


For SASL/SCRAM or SASL/PLAIN, this error indicates that the provided sign-in credentials aren't valid.

For IAM access control, the execution role is missing the `kafka-cluster:Connect` permission for the cluster. Add this permission to the role and specify the cluster's Amazon Resource Name (ARN) as a resource.

You might see this error occurring intermittently. The cluster rejects connections after the number of TCP connections exceeds the service quota. Lambda backs off and retries until a connection is successful. After Lambda connects to the cluster and polls for records, the last processing result changes to `OK`.

The following is an example ESM system-level log after enabling [Logging Config](esm-logging.md) for this issue when using IAM authentication:

```
{
    "eventType": "ESM_PROCESSING_EVENT",
    "timestamp": 1734567890456,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-kafka-cluster/12345678-abcd-1234-efgh-EXAMPLE22222-1",
    "eventProcessorId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE22222/0",
    "logLevel": "WARN",
    "error": {
        "errorMessage": "[a1b2c3d4-5678-90ab-cdef-EXAMPLE22222]: Access denied",
        "errorCode": "org.apache.kafka.common.errors.SaslAuthenticationException"
    }
}
```

### Server failed to authenticate Lambda


This error indicates that the Kafka broker failed to authenticate Lambda. This can occur for any of the following reasons:
+ You didn't provide a client certificate for mTLS authentication.
+ You provided a client certificate, but the Kafka brokers aren't configured to use mTLS authentication.
+ A client certificate isn't trusted by the Kafka brokers.

### Lambda failed to authenticate server


This error indicates that Lambda failed to authenticate the Kafka broker. This can occur for any of the following reasons:
+ For self-managed Apache Kafka: The Kafka brokers use self-signed certificates or a private CA, but didn't provide the server root CA certificate.
+ For self-managed Apache Kafka: The server root CA certificate doesn't match the root CA that signed the broker's certificate.
+ Hostname validation failed because the broker's certificate doesn't contain the broker's DNS name or IP address as a subject alternative name.

### Provided certificate or private key is invalid


This error indicates that the Kafka consumer couldn't use the provided certificate or private key. Make sure that the certificate and key use PEM format, and that the private key encryption uses a PBES1 algorithm.

The following is an example ESM system-level log after enabling [Logging Config](esm-logging.md) for this issue:

```
{
    "eventType": "ESM_PROCESSING_EVENT",
    "timestamp": 1734567891234,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:a1b2c3d4-5678-90ab-cdef-EXAMPLE44444",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-kafka-cluster/12345678-abcd-1234-efgh-EXAMPLE44444-1",
    "eventProcessorId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE44444/0",
    "logLevel": "WARN",
    "error": {
        "errorMessage": "Invalid PEM keystore configs",
        "errorCode": "org.apache.kafka.common.errors.InvalidConfigurationException"
    }
}
```

## Network and connectivity errors


Network configuration issues can prevent Lambda from connecting to your Kafka cluster. The following topics describe common network-related errors.

**Topics**
+ [

### Connection timeout due to security group configuration
](#kafka-security-group-errors)
+ [

### Kafka broker endpoints cannot be resolved
](#kafka-cluster-deleted-errors)

### Connection timeout due to security group configuration


If the security group associated with your Kafka cluster doesn't allow inbound traffic from itself, Lambda can't connect to the cluster. Make sure that the security group's inbound rules allow traffic from the security group itself on the Kafka broker ports.

The following is an example ESM system-level log after enabling [Logging Config](esm-logging.md) for this issue:

```
{
    "eventType": "ESM_PROCESSING_EVENT",
    "timestamp": 1734567892345,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:a1b2c3d4-5678-90ab-cdef-EXAMPLE55555",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-kafka-cluster/12345678-abcd-1234-efgh-EXAMPLE55555-1",
    "eventProcessorId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE55555/0",
    "logLevel": "WARN",
    "error": {
        "errorMessage": "Timeout expired while fetching topic metadata",
        "errorCode": "org.apache.kafka.common.errors.TimeoutException"
    }
}
```

You can also check the Kafka consumer INFO log to verify the connection and network configuration. The `brokerEndpoints` field shows the Kafka broker addresses, `securityProtocol` and `saslMechanism` (if applicable) show the authentication method, and the `networkConfig` field shows the IP addresses, subnet CIDR block, and security groups used by the event source mapping. Verify that the security groups listed allow the required inbound traffic:

```
{
    "eventType": "POLLER_STATUS_EVENT",
    "timestamp": 1734567892456,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:a1b2c3d4-5678-90ab-cdef-11111EXAMPLE",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-kafka-cluster/a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-1",
    "eventProcessorId": "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE/0",
    "logLevel": "INFO",
    "kafkaEventSourceConnection": {
        "brokerEndpoints": "boot-abcd1234.c2.kafka-serverless.us-east-1.amazonaws.com:9098",
        "consumerId": "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE-0",
        "topics": [
            "my-topic"
        ],
        "consumerGroupId": "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE",
        "securityProtocol": "SASL_SSL",
        "saslMechanism": "AWS_MSK_IAM",
        "totalPartitionCount": 2,
        "assignedPartitionCount": 2,
        "partitionsAssignmentGeneration": 1,
        "assignedPartitions": [
            "my-topic-0",
            "my-topic-1"
        ],
        "networkConfig": {
            "ipAddresses": [
                "10.0.0.37"
            ],
            "subnetCidrBlock": "10.0.0.32/28",
            "securityGroups": [
                "sg-0123456789abcdef0"
            ]
        }
    }
}
```
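
When reading the `networkConfig` block, you can sanity-check it mechanically. An illustrative Python sketch using the standard `ipaddress` module (the record shape follows the example above):

```python
import ipaddress

def check_network(record):
    """Sanity-check an INFO record's networkConfig: every poller IP
    should fall inside the reported subnet CIDR block."""
    net = record["kafkaEventSourceConnection"]["networkConfig"]
    subnet = ipaddress.ip_network(net["subnetCidrBlock"])
    return all(ipaddress.ip_address(ip) in subnet for ip in net["ipAddresses"])

record = {"kafkaEventSourceConnection": {"networkConfig": {
    "ipAddresses": ["10.0.0.37"],
    "subnetCidrBlock": "10.0.0.32/28",
    "securityGroups": ["sg-0123456789abcdef0"],
}}}
print(check_network(record))  # True
```

A mismatch here points to a stale or misconfigured VPC configuration rather than a security group rule.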

### Kafka broker endpoints cannot be resolved


This error indicates that the Kafka cluster doesn't exist or has been deleted. Verify that the cluster specified in the event source mapping exists and is in an active state.

The following is an example ESM system-level log after enabling [Logging Config](esm-logging.md) for this issue:

```
{
    "eventType": "ESM_PROCESSING_EVENT",
    "timestamp": 1734567893456,
    "resourceArn": "arn:aws:lambda:us-east-1:123456789012:event-source-mapping:a1b2c3d4-5678-90ab-cdef-EXAMPLE66666",
    "eventSourceArn": "arn:aws:kafka:us-east-1:123456789012:cluster/my-kafka-cluster/12345678-abcd-1234-efgh-EXAMPLE66666-1",
    "eventProcessorId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE66666/0",
    "logLevel": "WARN",
    "error": {
        "errorMessage": "No resolvable bootstrap urls given in bootstrap.servers",
        "errorCode": "org.apache.kafka.common.config.ConfigException"
    }
}
```

## Event source mapping errors
Event source errors

When you add your Apache Kafka cluster as an [event source](invocation-eventsourcemapping.md) for your Lambda function and your function encounters an error, your Kafka consumer stops processing records. Consumers of a topic partition are those that subscribe to, read, and process your records. Your other Kafka consumers can continue processing records, provided that they don't encounter the same error.

To determine the cause of a stopped consumer, check the `StateTransitionReason` field in the response of the [GetEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_GetEventSourceMapping.html) API operation. The following list describes the event source errors that you can receive:

**`ESM_CONFIG_NOT_VALID`**  
The event source mapping configuration isn't valid.

**`EVENT_SOURCE_AUTHN_ERROR`**  
Lambda couldn't authenticate the event source.

**`EVENT_SOURCE_AUTHZ_ERROR`**  
Lambda doesn't have the required permissions to access the event source.

**`FUNCTION_CONFIG_NOT_VALID`**  
The function configuration isn't valid.

**Note**  
If your Lambda event records exceed the allowed size limit of 6 MB, they can go unprocessed.

# Invoking a Lambda function using an Amazon API Gateway endpoint
API Gateway

You can create a web API with an HTTP endpoint for your Lambda function by using Amazon API Gateway. API Gateway provides tools for creating and documenting web APIs that route HTTP requests to Lambda functions. You can secure access to your API with authentication and authorization controls. Your APIs can serve traffic over the internet or can be accessible only within your VPC.

**Tip**  
Lambda offers two ways to invoke your function through an HTTP endpoint: API Gateway and Lambda function URLs. If you're not sure which is the best method for your use case, see [Select a method to invoke your Lambda function using an HTTP request](apig-http-invoke-decision.md).

Resources in your API define one or more methods, such as GET or POST. Methods have an integration that routes requests to a Lambda function or another integration type. You can define each resource and method individually, or use special resource and method types to match all requests that fit a pattern. A [proxy resource](https://docs.amazonaws.cn/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html) catches all paths beneath a resource. The `ANY` method catches all HTTP methods.

**Topics**
+ [

## Choosing an API type
](#services-apigateway-apitypes)
+ [

## Adding an endpoint to your Lambda function
](#apigateway-add)
+ [

## Proxy integration
](#apigateway-proxy)
+ [

## Event format
](#apigateway-example-event)
+ [

## Response format
](#apigateway-types-transforms)
+ [

## Permissions
](#apigateway-permissions)
+ [

## Sample application
](#services-apigateway-samples)
+ [

## The event handler from Powertools for Amazon Lambda
](#services-apigateway-powertools)
+ [

# Tutorial: Using Lambda with API Gateway
](services-apigateway-tutorial.md)
+ [

# Handling Lambda errors with an API Gateway API
](services-apigateway-errors.md)
+ [

# Select a method to invoke your Lambda function using an HTTP request
](apig-http-invoke-decision.md)

## Choosing an API type


API Gateway supports three types of APIs that invoke Lambda functions:
+ [HTTP API](https://docs.amazonaws.cn/apigateway/latest/developerguide/http-api.html): A lightweight, low-latency RESTful API.
+ [REST API](https://docs.amazonaws.cn/apigateway/latest/developerguide/apigateway-rest-api.html): A customizable, feature-rich RESTful API.
+ [WebSocket API](https://docs.amazonaws.cn/apigateway/latest/developerguide/apigateway-websocket-api.html): A web API that maintains persistent connections with clients for full-duplex communication.

HTTP APIs and REST APIs are both RESTful APIs that process HTTP requests and return responses. HTTP APIs are newer and are built with the API Gateway version 2 API. The following features are new for HTTP APIs:

**HTTP API features**
+ **Automatic deployments** – When you modify routes or integrations, changes deploy automatically to stages that have automatic deployment enabled.
+ **Default stage** – You can create a default stage (`$default`) to serve requests at the root path of your API's URL. For named stages, you must include the stage name at the beginning of the path.
+ **CORS configuration** – You can configure your API to add CORS headers to outgoing responses, instead of adding them manually in your function code.

REST APIs are the classic RESTful APIs that API Gateway has supported since launch. REST APIs currently have more customization, integration, and management features.

**REST API features**
+ **Integration types** – REST APIs support custom Lambda integrations. With a custom integration, you can send just the body of the request to the function, or apply a transform template to the request body before sending it to the function.
+ **Access control** – REST APIs support more options for authentication and authorization.
+ **Monitoring and tracing** – REST APIs support Amazon X-Ray tracing and additional logging options.

For a detailed comparison, see [Choose between HTTP APIs and REST APIs](https://docs.amazonaws.cn/apigateway/latest/developerguide/http-api-vs-rest.html) in the *API Gateway Developer Guide*.

WebSocket APIs also use the API Gateway version 2 API and support a similar feature set. Use a WebSocket API for applications that benefit from a persistent connection between the client and API. WebSocket APIs provide full-duplex communication, which means that both the client and the API can send messages continuously without waiting for a response.

HTTP APIs support a simplified event format (version 2.0). For an example of an event from an HTTP API, see [Create Amazon Lambda proxy integrations for HTTP APIs in API Gateway](https://docs.amazonaws.cn//apigateway/latest/developerguide/http-api-develop-integrations-lambda.html).

For more information, see [Create Amazon Lambda proxy integrations for HTTP APIs in API Gateway](https://docs.amazonaws.cn/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html).

## Adding an endpoint to your Lambda function


**To add a public endpoint to your Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Under **Function overview**, choose **Add trigger**.

1. Select **API Gateway**.

1. Choose **Create an API** or **Use an existing API**.

   1. **New API:** For **API type**, choose **HTTP API**. For more information, see [Choosing an API type](#services-apigateway-apitypes).

   1. **Existing API:** Select the API from the dropdown list or enter the API ID (for example, r3pmxmplak).

1. For **Security**, choose **Open**.

1. Choose **Add**.

## Proxy integration


API Gateway APIs consist of stages, resources, methods, and integrations. The stage and resource determine the path of the endpoint:

**API path format**
+ `/prod/` – The `prod` stage and root resource.
+ `/prod/user` – The `prod` stage and `user` resource.
+ `/dev/{proxy+}` – Any route in the `dev` stage.
+ `/` – (HTTP APIs) The default stage and root resource.

A Lambda integration maps a path and HTTP method combination to a Lambda function. You can configure API Gateway to pass the body of the HTTP request as-is (custom integration), or to encapsulate the request body in a document that includes all of the request information including headers, resource, path, and method.

For more information, see [Lambda proxy integrations in API Gateway](https://docs.amazonaws.cn/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html).

## Event format


Amazon API Gateway invokes your function [synchronously](invocation-sync.md) with an event that contains a JSON representation of the HTTP request. For a custom integration, the event is the body of the request. For a proxy integration, the event has a defined structure. For an example of a proxy event from an API Gateway REST API, see [Input format of a Lambda function for proxy integration](https://docs.amazonaws.cn/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format) in the *API Gateway Developer Guide*.
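
A minimal handler for such a proxy event can be sketched as follows. This Python sketch is illustrative only; `httpMethod`, `path`, and `queryStringParameters` are fields of the REST API proxy event format:

```python
import json

def lambda_handler(event, context):
    """Minimal sketch of handling an API Gateway REST proxy event."""
    method = event["httpMethod"]   # e.g. "GET"
    path = event["path"]           # e.g. "/user"
    # queryStringParameters is null when the request has no query string.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"{method} {path}: hello {name}"}),
    }

event = {"httpMethod": "GET", "path": "/user", "queryStringParameters": {"name": "Ana"}}
print(lambda_handler(event, None)["body"])  # {"message": "GET /user: hello Ana"}
```

Note that the return value already uses the proxy integration response shape described in the next section.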

## Response format


API Gateway waits for a response from your function and relays the result to the caller. For a custom integration, you define an integration response and a method response to convert the output from the function to an HTTP response. For a proxy integration, the function must respond with a representation of the response in a specific format.

The following example shows a response object from a Node.js function. The response object represents a successful HTTP response that contains a JSON document.

**Example index.mjs – Proxy integration response object (Node.js)**  

```
const response = {
  "statusCode": 200,
  "headers": {
    "Content-Type": "application/json"
  },
  "isBase64Encoded": false,
  "multiValueHeaders": {
    "X-Custom-Header": ["My value", "My other value"]
  },
  "body": "{\n  \"TotalCodeSize\": 104330022,\n  \"FunctionCount\": 26\n}"
};
```

The Lambda runtime serializes the response object into JSON and sends it to the API. The API parses the response and uses it to create an HTTP response, which it then sends to the client that made the original request.

**Example HTTP response**  

```
< HTTP/1.1 200 OK
< Content-Type: application/json
< Content-Length: 55
< Connection: keep-alive
< x-amzn-RequestId: 32998fea-xmpl-4268-8c72-16138d629356
< X-Custom-Header: My value
< X-Custom-Header: My other value
< X-Amzn-Trace-Id: Root=1-5e6aa925-ccecxmplbae116148e52f036
<
{
  "TotalCodeSize": 104330022,
  "FunctionCount": 26
}

## Permissions


Amazon API Gateway gets permission to invoke your function from the function's [resource-based policy](access-control-resource-based.md). You can grant invoke permission to an entire API, or grant limited access to a stage, resource, or method.

When you add an API to your function by using the Lambda console, using the API Gateway console, or in an Amazon SAM template, the function's resource-based policy is updated automatically. The following is an example function policy.

**Example function policy**    
****  

```
{
  "Version":"2012-10-17",		 	 	 
  "Id": "default",
  "Statement": [
    {
      "Sid": "nodejs-apig-functiongetEndpointPermissionProd-BWDBXMPLXE2F",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws-cn:lambda:cn-north-1:111122223333:function:nodejs-apig-function-1G3MXMPLXVXYI",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "111122223333"
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws-cn:execute-api:cn-north-1:111122223333:ktyvxmpls1/*/GET/"
        }
      }
    }
  ]
}
```

You can manage function policy permissions manually with the following API operations:
+ [AddPermission](https://docs.amazonaws.cn/lambda/latest/api/API_AddPermission.html)
+ [RemovePermission](https://docs.amazonaws.cn/lambda/latest/api/API_RemovePermission.html)
+ [GetPolicy](https://docs.amazonaws.cn/lambda/latest/api/API_GetPolicy.html)

To grant invocation permission to an existing API, use the `add-permission` command. Example:

```
aws lambda add-permission \
  --function-name my-function \
  --statement-id apigateway-get --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com.cn \
  --source-arn "arn:aws-cn:execute-api:cn-north-1:123456789012:mnh1xmpli7/default/GET/"
```

You should see the following output:

```
{
    "Statement": "{\"Sid\":\"apigateway-test-2\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com.cn\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws-cn:lambda:cn-north-1:123456789012:function:my-function\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws-cn:execute-api:cn-north-1:123456789012:mnh1xmpli7/default/GET\"}}}"
}
```

**Note**  
If your function and API are in different Amazon Web Services Regions, the Region identifier in the source ARN must match the Region of the function, not the Region of the API. When API Gateway invokes a function, it uses a resource ARN that is based on the ARN of the API, but modified to match the function's Region.

The source ARN in this example grants permission to an integration on the GET method of the root resource in the default stage of an API, with ID `mnh1xmpli7`. You can use an asterisk in the source ARN to grant permissions to multiple stages, methods, or resources.

**Resource patterns**
+ `mnh1xmpli7/*/GET/*` – GET method on all resources in all stages.
+ `mnh1xmpli7/prod/ANY/user` – ANY method on the `user` resource in the `prod` stage.
+ `mnh1xmpli7/*/*/*` – Any method on all resources in all stages.
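
These patterns can be approximated with ordinary glob matching, since the `ArnLike` wildcard `*` matches any sequence of characters, including `/`. An illustrative Python sketch using `fnmatchcase` (an approximation of IAM's evaluation, not its implementation):

```python
from fnmatch import fnmatchcase

def matches(request, pattern):
    # fnmatchcase's "*" matches across "/" just as the ArnLike
    # wildcard does, so it approximates these source ARN patterns.
    return fnmatchcase(request, pattern)

assert matches("mnh1xmpli7/prod/GET/user", "mnh1xmpli7/*/GET/*")
assert matches("mnh1xmpli7/prod/ANY/user", "mnh1xmpli7/prod/ANY/user")
assert not matches("mnh1xmpli7/prod/POST/user", "mnh1xmpli7/*/GET/*")
```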

For details on viewing the policy and removing statements, see [Viewing resource-based IAM policies in Lambda](access-control-resource-based.md).

## Sample application


The [API Gateway with Node.js](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/nodejs-apig) sample app includes a function with an Amazon SAM template that creates a REST API that has Amazon X-Ray tracing enabled. It also includes scripts for deploying, invoking the function, testing the API, and cleanup.

## The event handler from Powertools for Amazon Lambda


The event handler from the Powertools for Amazon Lambda toolkit provides routing, middleware, CORS configuration, OpenAPI spec generation, request validation, error handling, and other useful features when writing Lambda functions invoked by an API Gateway endpoint (HTTP or REST). The event handler utility is available for Python and TypeScript/JavaScript. For more information, see [Event Handler REST API](https://docs.powertools.aws.dev/lambda/python/latest/core/event_handler/api_gateway/) in the *Powertools for Amazon Lambda (Python) documentation* and [Event Handler HTTP API](https://docs.aws.amazon.com/powertools/typescript/latest/features/event-handler/http/) in the *Powertools for Amazon Lambda (TypeScript) documentation*.

### Python


```
from aws_lambda_powertools import Logger
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.utilities.typing.lambda_context import LambdaContext

app = APIGatewayRestResolver()
logger = Logger()

@app.get("/healthz")
def ping():
    return {"message": "health status ok"}

@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)  
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)
```

### TypeScript


```
import { Router } from '@aws-lambda-powertools/event-handler/experimental-rest';
import { Logger } from '@aws-lambda-powertools/logger';
import {
  correlationPaths,
  search,
} from '@aws-lambda-powertools/logger/correlationId';
import type { Context } from 'aws-lambda/handler';

const logger = new Logger({
  correlationIdSearchFn: search,
});

const app = new Router({ logger });

app.get("/healthz", async () => {
  return { message: "health status ok" };
});

export const handler = async (event: unknown, context: Context) => {
  // You can continue using other utilities just as before
  logger.addContext(context);
  logger.setCorrelationId(event, correlationPaths.API_GATEWAY_REST);
  return app.resolve(event, context);
};
```

# Tutorial: Using Lambda with API Gateway
Tutorial

In this tutorial, you create a REST API through which you invoke a Lambda function using an HTTP request. Your Lambda function will perform create, read, update, and delete (CRUD) operations on a DynamoDB table. This function is provided here for demonstration, but you will learn to configure an API Gateway REST API that can invoke any Lambda function.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/APIG_tut_resources.png)


Using API Gateway provides users with a secure HTTP endpoint to invoke your Lambda function and can help manage large volumes of calls to your function by throttling traffic and automatically validating and authorizing API calls. API Gateway also provides flexible security controls using Amazon Identity and Access Management (IAM) and Amazon Cognito. This is useful for use cases where advance authorization is required for calls to your application.

**Tip**  
Lambda offers two ways to invoke your function through an HTTP endpoint: API Gateway and Lambda function URLs. If you're not sure which is the best method for your use case, see [Select a method to invoke your Lambda function using an HTTP request](apig-http-invoke-decision.md).

To complete this tutorial, you will go through the following stages:

1. Create and configure a Lambda function in Python or Node.js to perform operations on a DynamoDB table.

1. Create a REST API in API Gateway to connect to your Lambda function.

1. Create a DynamoDB table and test it with your Lambda function in the console.

1. Deploy your API and test the full setup using curl in a terminal.

By completing these stages, you will learn how to use API Gateway to create an HTTP endpoint that can securely invoke a Lambda function at any scale. You will also learn how to deploy your API, and how to test it in the console and by sending an HTTP request using a terminal.

## Create a permissions policy


Before you can create an [execution role](lambda-intro-execution-role.md) for your Lambda function, you first need to create a permissions policy to give your function permission to access the required Amazon resources. For this tutorial, the policy allows Lambda to perform CRUD operations on a DynamoDB table and write to Amazon CloudWatch Logs.

**To create the policy**

1. Open the [Policies page](https://console.amazonaws.cn/iam/home#/policies) of the IAM console.

1. Choose **Create policy**.

1. Choose the **JSON** tab, and then paste the following custom policy into the JSON editor.

------
#### [ JSON ]

****  

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "Stmt1428341300017",
         "Action": [
           "dynamodb:DeleteItem",
           "dynamodb:GetItem",
           "dynamodb:PutItem",
           "dynamodb:Query",
           "dynamodb:Scan",
           "dynamodb:UpdateItem"
         ],
         "Effect": "Allow",
         "Resource": "*"
       },
       {
         "Sid": "",
         "Resource": "*",
         "Action": [
           "logs:CreateLogGroup",
           "logs:CreateLogStream",
           "logs:PutLogEvents"
         ],
         "Effect": "Allow"
       }
     ]
   }
   ```

------

1. Choose **Next: Tags**.

1. Choose **Next: Review**.

1. Under **Review policy**, for the policy **Name**, enter **lambda-apigateway-policy**.

1. Choose **Create policy**.
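If you prefer to script this step, the same policy document can be built in code and passed to IAM with boto3. The following is a sketch; the `create_policy` call requires Amazon credentials, so it is shown commented out:

```python
import json

# The same permissions policy as above, as a Python dict.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1428341300017",
            "Action": [
                "dynamodb:DeleteItem",
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:UpdateItem",
            ],
            "Effect": "Allow",
            "Resource": "*",
        },
        {
            "Sid": "",
            "Resource": "*",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Effect": "Allow",
        },
    ],
}

# With credentials configured, you could create the policy like this:
# import boto3
# iam = boto3.client("iam")
# iam.create_policy(
#     PolicyName="lambda-apigateway-policy",
#     PolicyDocument=json.dumps(policy_document),
# )

print(json.dumps(policy_document, indent=2))
```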

## Create an execution role


An [execution role](lambda-intro-execution-role.md) is an Amazon Identity and Access Management (IAM) role that grants a Lambda function permission to access Amazon Web Services services and resources. To enable your function to perform operations on a DynamoDB table, you attach the permissions policy you created in the previous step.

**To create an execution role and attach your custom permissions policy**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Choose **Create role**.

1. For the type of trusted entity, choose **Amazon service**, then for the use case, choose **Lambda**.

1. Choose **Next**.

1. In the policy search box, enter **lambda-apigateway-policy**.

1. In the search results, select the policy that you created (`lambda-apigateway-policy`), and then choose **Next**.

1. Under **Role details**, for the **Role name**, enter **lambda-apigateway-role**, then choose **Create role**.

## Create the Lambda function


1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console and choose **Create function**.

1. Choose **Author from scratch**.

1. For **Function name**, enter `LambdaFunctionOverHttps`.

1. For **Runtime**, choose the latest Node.js or Python runtime.

1. Under **Permissions**, expand **Change default execution role**.

1. Choose **Use an existing role**, and then select the **lambda-apigateway-role** role that you created earlier.

1. Choose **Create function**.

1. In the **Code source** pane, replace the default code with the following Node.js or Python code.

------
#### [ Node.js ]

   The `region` setting must match the Amazon Web Services Region where you deploy the function and [create the DynamoDB table](#services-apigateway-tutorial-table).

**Example index.mjs**  

   ```
   import { DynamoDBDocumentClient, PutCommand, GetCommand, 
            UpdateCommand, DeleteCommand} from "@aws-sdk/lib-dynamodb";
   import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
   
   const ddbClient = new DynamoDBClient({ region: "cn-north-1" });
   const ddbDocClient = DynamoDBDocumentClient.from(ddbClient);
   
   // Define the name of the DDB table to perform the CRUD operations on
   const tablename = "lambda-apigateway";
   
   /**
    * Provide an event that contains the following keys:
    *
    *   - operation: one of 'create', 'read', 'update', 'delete', or 'echo'
    *   - payload: a JSON object containing the parameters for the table item
    *     to perform the operation on
    */
   export const handler = async (event, context) => {
   
     const operation = event.operation;
   
     if (operation === 'echo') {
       return event.payload;
     }
     else {
       event.payload.TableName = tablename;
       let response;
   
       switch (operation) {
         case 'create':
           response = await ddbDocClient.send(new PutCommand(event.payload));
           break;
         case 'read':
           response = await ddbDocClient.send(new GetCommand(event.payload));
           break;
         case 'update':
           response = await ddbDocClient.send(new UpdateCommand(event.payload));
           break;
         case 'delete':
           response = await ddbDocClient.send(new DeleteCommand(event.payload));
           break;
         default:
           response = `Unknown operation: ${operation}`;
       }
       console.log(response);
       return response;
     }
   };
   ```

------
#### [ Python ]

**Example lambda\_function.py**  

   ```
   import boto3
   
   # Define the DynamoDB table that Lambda will connect to
   table_name = "lambda-apigateway"
   
   # Create the DynamoDB resource
   dynamo = boto3.resource('dynamodb').Table(table_name)
   
   # Define some functions to perform the CRUD operations
   def create(payload):
       return dynamo.put_item(Item=payload['Item'])
   
   def read(payload):
       return dynamo.get_item(Key=payload['Key'])
   
   def update(payload):
       return dynamo.update_item(**{k: payload[k] for k in ['Key', 'UpdateExpression', 
       'ExpressionAttributeNames', 'ExpressionAttributeValues'] if k in payload})
   
   def delete(payload):
       return dynamo.delete_item(Key=payload['Key'])
   
   def echo(payload):
       return payload
   
   operations = {
       'create': create,
       'read': read,
       'update': update,
       'delete': delete,
       'echo': echo,
   }
   
   def lambda_handler(event, context):
       '''Provide an event that contains the following keys:
         - operation: one of the operations in the operations dict below
         - payload: a JSON object containing parameters to pass to the 
           operation being performed
       '''
       
       operation = event['operation']
       payload = event['payload']
       
       if operation in operations:
           return operations[operation](payload)
           
       else:
           raise ValueError(f'Unrecognized operation "{operation}"')
   ```

------
**Note**  
In this example, the name of the DynamoDB table is defined as a variable in your function code. In a real application, best practice is to pass this parameter as an environment variable rather than hardcoding the table name. For more information, see [Using Amazon Lambda environment variables](https://docs.amazonaws.cn/lambda/latest/dg/configuration-envvars.html).
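For example, the Python version of the function could read the table name as follows. This is a sketch; the `TABLE_NAME` environment variable is an assumption, not something this tutorial configures:

```python
import os

# Fall back to the tutorial's hardcoded name when the variable isn't set.
table_name = os.environ.get("TABLE_NAME", "lambda-apigateway")
print(table_name)
```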

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

## Test the function


Before integrating your function with API Gateway, confirm that you have deployed the function successfully. Use the Lambda console to send a test event to your function.

1. On the Lambda console page for your function, choose the **Test** tab.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/test-tab.png)

1. Scroll down to the **Event JSON** section and replace the default event with the following. This event matches the structure expected by the Lambda function.

   ```
   {
       "operation": "echo",
       "payload": {
           "somekey1": "somevalue1",
           "somekey2": "somevalue2"
       }
   }
   ```

1. Choose **Test**.

1. Under **Executing function: succeeded**, expand **Details**. You should see the following response:

   ```
   {
     "somekey1": "somevalue1",
     "somekey2": "somevalue2"
   }
   ```

## Create a REST API using API Gateway


In this step, you create the API Gateway REST API you will use to invoke your Lambda function.

**To create the API**

1. Open the [API Gateway console](https://console.amazonaws.cn/apigateway).

1. Choose **Create API**.

1. In the **REST API** box, choose **Build**.

1. Under **API details**, leave **New API** selected, and for **API name**, enter **DynamoDBOperations**.

1. Choose **Create API**.

## Create a resource on your REST API


To add an HTTP method to your API, you first need to create a resource for that method to operate on. Here you create the resource to manage your DynamoDB table.

**To create the resource**

1. In the [API Gateway console](https://console.amazonaws.cn/apigateway), on the **Resources** page for your API, choose **Create resource**.

1. In **Resource details**, for **Resource name** enter **DynamoDBManager**.

1. Choose **Create resource**.

## Create an HTTP POST method


In this step, you create a method (`POST`) for your `DynamoDBManager` resource. You link this `POST` method to your Lambda function so that when the method receives an HTTP request, API Gateway invokes your Lambda function.

**Note**  
 For the purpose of this tutorial, one HTTP method (`POST`) is used to invoke a single Lambda function which carries out all of the operations on your DynamoDB table. In a real application, best practice is to use a different Lambda function and HTTP method for each operation. For more information, see [The Lambda monolith](https://serverlessland.com/content/service/lambda/guides/aws-lambda-operator-guide/monolith) in Serverless Land. 

**To create the POST method**

1. On the **Resources** page for your API, ensure that the `/DynamoDBManager` resource is highlighted. Then, in the **Methods** pane, choose **Create method**.

1. For **Method type**, choose **POST**.

1. For **Integration type**, leave **Lambda function** selected.

1. For **Lambda function**, choose the Amazon Resource Name (ARN) for your function (`LambdaFunctionOverHttps`).

1. Choose **Create method**.

## Create a DynamoDB table


Create an empty DynamoDB table that your Lambda function will perform CRUD operations on.

**To create the DynamoDB table**

1. Open the [Tables page](https://console.amazonaws.cn/dynamodbv2#tables) of the DynamoDB console.

1. Choose **Create table**.

1. Under **Table details**, do the following:

   1. For **Table name**, enter **lambda-apigateway**.

   1. For **Partition key**, enter **id**, and keep the data type set as **String**.

1. Under **Table settings**, keep the **Default settings**.

1. Choose **Create table**.

## Test the integration of API Gateway, Lambda, and DynamoDB


You're now ready to test the integration of your API Gateway API method with your Lambda function and your DynamoDB table. Using the API Gateway console, you send requests directly to your `POST` method using the console's test function. In this step, you first use a `create` operation to add a new item to your DynamoDB table, then you use an `update` operation to modify the item.

**Test 1: To create a new item in your DynamoDB table**

1. In the [API Gateway console](https://console.amazonaws.cn/apigateway), choose your API (`DynamoDBOperations`).

1. Choose the **POST** method under the `DynamoDBManager` resource.

1. Choose the **Test** tab. You might need to choose the right arrow button to show the tab.

1. Under **Test method**, leave **Query strings** and **Headers** empty. For **Request body**, paste the following JSON:

   ```
   {
     "operation": "create",
     "payload": {
       "Item": {
         "id": "1234ABCD",
         "number": 5
       }
     }
   }
   ```

1. Choose **Test**.

   The results that are displayed when the test completes should show status `200`. This status code indicates that the `create` operation was successful.

    To confirm, check that your DynamoDB table now contains the new item.

1. Open the [Tables page](https://console.amazonaws.cn/dynamodbv2#tables) of the DynamoDB console and choose the `lambda-apigateway` table.

1. Choose **Explore table items**. In the **Items returned** pane, you should see one item with the **id** `1234ABCD` and the **number** `5`. Example:  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/items-returned.png)

**Test 2: To update the item in your DynamoDB table**

1. In the [API Gateway console](https://console.amazonaws.cn/apigateway), return to your POST method's **Test** tab.

1. Under **Test method**, leave **Query strings** and **Headers** empty. For **Request body**, paste the following JSON:

   ```
   {
       "operation": "update",
       "payload": {
           "Key": {
               "id": "1234ABCD"
           },
           "UpdateExpression": "SET #num = :newNum",
           "ExpressionAttributeNames": {
               "#num": "number"
           },
           "ExpressionAttributeValues": {
               ":newNum": 10
           }
       }
   }
   ```

1. Choose **Test**.

   The results that are displayed when the test completes should show status `200`. This status code indicates that the `update` operation was successful.

    To confirm, check that the item in your DynamoDB table has been modified.

1. Open the [Tables page](https://console.amazonaws.cn/dynamodbv2#tables) of the DynamoDB console and choose the `lambda-apigateway` table.

1. Choose **Explore table items**. In the **Items returned** pane, you should see one item with the **id** `1234ABCD` and the **number** `10`.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/items-returned-2.png)

## Deploy the API


For a client to call the API, you must create a deployment and an associated stage. A stage represents a snapshot of your API including its methods and integrations.

**To deploy the API**

1. Open the **APIs** page of the [API Gateway console](https://console.amazonaws.cn/apigateway) and choose the `DynamoDBOperations` API.

1. On the **Resources** page for your API choose **Deploy API**.

1. For **Stage**, choose ***New stage***, then for **Stage name**, enter **test**.

1. Choose **Deploy**.

1. In the **Stage details** pane, copy the **Invoke URL**. You will use this in the next step to invoke your function using an HTTP request.

## Use curl to invoke your function using HTTP requests


You can now invoke your Lambda function by issuing an HTTP request to your API. In this step, you will create a new item in your DynamoDB table and then perform read, update, and delete operations on that item.

**To create an item in your DynamoDB table using curl**

1. Open a terminal or command prompt on your local machine and run the following `curl` command using the invoke URL you copied in the previous step. This command uses the following options:
   + `-H`: Adds a custom header to the request. Here, it specifies the content type as JSON.
   + `-d`: Sends data in the request body. This option uses an HTTP POST method by default.

------
#### [ Linux/macOS ]

   ```
   curl https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager \
   -H "Content-Type: application/json" \
   -d '{"operation": "create", "payload": {"Item": {"id": "5678EFGH", "number": 15}}}'
   ```

------
#### [ PowerShell ]

   ```
   curl.exe 'https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager' -H 'Content-Type: application/json' -d '{\"operation\": \"create\", \"payload\": {\"Item\": {\"id\": \"5678EFGH\", \"number\": 15}}}'
   ```

------

   If the operation was successful, you should see a response returned with an HTTP status code of 200.

1. You can also use the DynamoDB console to verify that the new item is in your table by doing the following:

   1. Open the [Tables page](https://console.amazonaws.cn/dynamodbv2#tables) of the DynamoDB console and choose the `lambda-apigateway` table.

   1. Choose **Explore table items**. In the **Items returned** pane, you should see an item with the **id** `5678EFGH` and the **number** `15`.
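If you prefer Python to curl, the same request can be constructed with the standard library. This sketch builds the request without sending it; uncomment the `urlopen` call and substitute your own invoke URL to send it:

```python
import json
from urllib.request import Request

# Replace with your own invoke URL from the API Gateway console.
invoke_url = "https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager"

payload = {"operation": "create", "payload": {"Item": {"id": "5678EFGH", "number": 15}}}
request = Request(
    invoke_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(request.get_method(), request.full_url)

# Uncomment to send the request once your API is deployed:
# from urllib.request import urlopen
# with urlopen(request) as response:
#     print(response.status, response.read().decode())
```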

**To read the item in your DynamoDB table using curl**
+ In your terminal or command prompt, run the following `curl` command to read the value of the item you just created. Use your own invoke URL.

------
#### [ Linux/macOS ]

  ```
  curl https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager \
  -H "Content-Type: application/json" \
  -d '{"operation": "read", "payload": {"Key": {"id": "5678EFGH"}}}'
  ```

------
#### [ PowerShell ]

  ```
  curl.exe 'https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager' -H 'Content-Type: application/json' -d '{\"operation\": \"read\", \"payload\": {\"Key\": {\"id\": \"5678EFGH\"}}}'
  ```

------

  You should see output like one of the following depending on whether you chose the Node.js or Python function code:

------
#### [ Node.js ]

  ```
  {"$metadata":{"httpStatusCode":200,"requestId":"7BP3G5Q0C0O1E50FBQI9NS099JVV4KQNSO5AEMVJF66Q9ASUAAJG",
  "attempts":1,"totalRetryDelay":0},"Item":{"id":"5678EFGH","number":15}}
  ```

------
#### [ Python ]

  ```
  {"Item":{"id":"5678EFGH","number":15},"ResponseMetadata":{"RequestId":"QNDJICE52E86B82VETR6RKBE5BVV4KQNSO5AEMVJF66Q9ASUAAJG",
  "HTTPStatusCode":200,"HTTPHeaders":{"server":"Server","date":"Wed, 31 Jul 2024 00:37:01 GMT","content-type":"application/x-amz-json-1.0",
  "content-length":"52","connection":"keep-alive","x-amzn-requestid":"QNDJICE52E86B82VETR6RKBE5BVV4KQNSO5AEMVJF66Q9ASUAAJG","x-amz-crc32":"2589610852"},
  "RetryAttempts":0}}
  ```

------

**To update the item in your DynamoDB table using curl**

1. In your terminal or command prompt, run the following `curl` command to update the item you just created by changing the `number` value. Use your own invoke URL.

------
#### [ Linux/macOS ]

   ```
   curl https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager \
   -H "Content-Type: application/json" \
   -d '{"operation": "update", "payload": {"Key": {"id": "5678EFGH"}, "UpdateExpression": "SET #num = :new_value", "ExpressionAttributeNames": {"#num": "number"}, "ExpressionAttributeValues": {":new_value": 42}}}'
   ```

------
#### [ PowerShell ]

   ```
   curl.exe 'https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager' -H 'Content-Type: application/json' -d '{\"operation\": \"update\", \"payload\": {\"Key\": {\"id\": \"5678EFGH\"}, \"UpdateExpression\": \"SET #num = :new_value\", \"ExpressionAttributeNames\": {\"#num\": \"number\"}, \"ExpressionAttributeValues\": {\":new_value\": 42}}}'
   ```

------

1. To confirm that the value of `number` for the item has been updated, run another read command:

------
#### [ Linux/macOS ]

   ```
   curl https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager \
   -H "Content-Type: application/json" \
   -d '{"operation": "read", "payload": {"Key": {"id": "5678EFGH"}}}'
   ```

------
#### [ PowerShell ]

   ```
   curl.exe 'https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager' -H 'Content-Type: application/json' -d '{\"operation\": \"read\", \"payload\": {\"Key\": {\"id\": \"5678EFGH\"}}}'
   ```

------

**To delete the item in your DynamoDB table using curl**

1. In your terminal or command prompt, run the following `curl` command to delete the item you just created. Use your own invoke URL.

------
#### [ Linux/macOS ]

   ```
   curl https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager \
   -H "Content-Type: application/json" \
   -d '{"operation": "delete", "payload": {"Key": {"id": "5678EFGH"}}}'
   ```

------
#### [ PowerShell ]

   ```
   curl.exe 'https://l8togsqxd8.execute-api.cn-north-1.amazonaws.com.cn/test/DynamoDBManager' -H 'Content-Type: application/json' -d '{\"operation\": \"delete\", \"payload\": {\"Key\": {\"id\": \"5678EFGH\"}}}'
   ```

------

1. Confirm that the delete operation was successful. In the **Items returned** pane of the DynamoDB console **Explore items** page, verify that the item with **id** `5678EFGH` is no longer in the table.

## Clean up your resources (optional)


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the API**

1. Open the [APIs page](https://console.amazonaws.cn/apigateway/main/apis) of the API Gateway console.

1. Select the API you created.

1. Choose **Actions**, **Delete**.

1. Choose **Delete**.

**To delete the DynamoDB table**

1. Open the [Tables page](https://console.amazonaws.cn/dynamodb/home#tables:) of the DynamoDB console.

1. Select the table you created.

1. Choose **Delete**.

1. Enter **delete** in the text box.

1. Choose **Delete table**.

# Handling Lambda errors with an API Gateway API
Errors

API Gateway treats all invocation and function errors as internal errors. If the Lambda API rejects the invocation request, API Gateway returns a 500 error code. If the function runs but returns an error, or returns a response in the wrong format, API Gateway returns a 502. In both cases, the body of the response from API Gateway is `{"message": "Internal server error"}`.

**Note**  
API Gateway does not retry any Lambda invocations. If Lambda returns an error, API Gateway returns an error response to the client.

The following example shows an X-Ray trace map for a request that resulted in a function error and a 502 from API Gateway. The client receives the generic error message.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/tracemap-apig-502.png)


To customize the error response, you must catch errors in your code and format a response in the required format.

**Example [index.mjs](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/nodejs-apig/function/index.mjs) – Error formatting**  

```
var formatError = function(error){
  var response = {
    "statusCode": error.statusCode,
    "headers": {
      "Content-Type": "text/plain",
      "x-amzn-ErrorType": error.code
    },
    "isBase64Encoded": false,
    "body": error.code + ": " + error.message
  }
  return response
}
```
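A comparable helper in Python might look like the following sketch; the dictionary follows the same Lambda proxy integration response shape:

```python
def format_error(error_code, status_code, message):
    """Build an error response in the Lambda proxy integration format."""
    return {
        "statusCode": status_code,
        "headers": {
            "Content-Type": "text/plain",
            "x-amzn-ErrorType": error_code,
        },
        "isBase64Encoded": False,
        "body": f"{error_code}: {message}",
    }

# Example: a handler can catch an exception and return a formatted 404.
print(format_error("NotFound", 404, "The requested resource was not found"))
```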

API Gateway converts this response into an HTTP error with a custom status code and body. In the trace map, the function node is green because it handled the error.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/tracemap-apig-404.png)


# Select a method to invoke your Lambda function using an HTTP request
API Gateway vs function URLs

Many common use cases for Lambda involve invoking your function using an HTTP request. For example, you might want a web application to invoke your function through a browser request. Lambda functions can also be used to create full REST APIs, handle user interactions from mobile apps, process data from external services via HTTP calls, or create custom webhooks.

The following sections explain what your choices are for invoking Lambda through HTTP and provide information to help you make the right decision for your particular use case.

## What are your choices when selecting an HTTP invoke method?
What are your choices

Lambda offers two main methods to invoke a function using an HTTP request - [function URLs](urls-configuration.md) and [API Gateway](services-apigateway.md). The key differences between these two options are as follows:
+ **Lambda function URLs** provide a simple, direct HTTP endpoint for a Lambda function. They are optimized for simplicity and cost-effectiveness and provide the fastest path to expose a Lambda function via HTTP.
+ **API Gateway** is a more advanced service for building fully-featured APIs. API Gateway is optimized for building and managing production APIs at scale and provides comprehensive tools for security, monitoring, and traffic management.

## Recommendations if you already know your requirements
Basic recommendations

If you're already clear on your requirements, here are our basic recommendations:

We recommend **[function URLs](urls-configuration.md)** for simple applications or prototyping where you only need basic authentication methods and request/response handling and where you want to keep costs and complexity to a minimum.

**[API Gateway](services-apigateway.md)** is a better choice for production applications at scale or for cases where you need more advanced features like [OpenAPI Description](https://www.openapis.org/) support, a choice of authentication options, custom domain names, or rich request/response handling including throttling, caching, and request/response transformation.

## What to consider when selecting a method to invoke your Lambda function
What to consider

When selecting between function URLs and API Gateway, you need to consider the following factors:
+ Your authentication needs, such as whether you require OAuth or Amazon Cognito to authenticate users
+ Your scaling requirements and the complexity of the API you want to implement
+ Whether you need advanced features such as request validation and request/response formatting
+ Your monitoring requirements
+ Your cost goals

By understanding these factors, you can select the option that best balances your security, complexity, and cost requirements.

The following information summarizes the main differences between the two options.

### Authentication

+ **Function URLs** provide basic authentication options through Amazon Identity and Access Management (IAM). You can configure your endpoints to be either public (no authentication) or to require IAM authentication. With IAM authentication, you can use standard Amazon credentials or IAM roles to control access. While straightforward to set up, this approach provides limited options compared with other authentication methods.
+ **API Gateway** provides access to a more comprehensive range of authentication options. As well as IAM authentication, you can use [Lambda authorizers](https://docs.amazonaws.cn/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html) (custom authentication logic), [Amazon Cognito](https://docs.amazonaws.cn/cognito/latest/developerguide/what-is-amazon-cognito.html) user pools, and OAuth2.0 flows. This flexibility allows you to implement complex authentication schemes, including third-party authentication providers, token-based authentication, and multi-factor authentication.

### Request/response handling

+ **Function URLs** provide basic HTTP request and response handling. They support standard HTTP methods and include built-in cross-origin resource sharing (CORS) support. While they can handle JSON payloads and query parameters naturally, they don't offer request transformation or validation capabilities. Response handling is similarly straightforward – the client receives the response from your Lambda function exactly as Lambda returns it.
+ **API Gateway** provides sophisticated request and response handling capabilities. You can define request validators, transform requests and responses using mapping templates, set up request/response headers, and implement response caching. API Gateway also supports binary payloads and custom domain names and can modify responses before they reach the client. You can set up models for request/response validation and transformation using JSON Schema.

### Scaling

+ **Function URLs** scale directly with your Lambda function's concurrency limits and handle traffic spikes by scaling your function up to its maximum configured concurrency limit. Once that limit is reached, Lambda responds to additional requests with HTTP 429 responses. There's no built-in queuing mechanism, so handling scaling is entirely dependent on your Lambda function's configuration. By default, Lambda functions have a limit of 1,000 concurrent executions per Amazon Web Services Region.
+ **API Gateway** provides additional scaling capabilities on top of Lambda's own scaling. It includes built-in request queuing and throttling controls, allowing you to manage traffic spikes more gracefully. API Gateway can handle up to 10,000 requests per second per region by default, with a burst capacity of 5,000 requests per second. It also provides tools to throttle requests at different levels (API, stage, or method) to protect your backend.

### Monitoring

+ **Function URLs** offer basic monitoring through Amazon CloudWatch metrics, including request count, latency, and error rates. You get access to standard Lambda metrics and logs, which show the raw requests coming into your function. While this provides essential operational visibility, the metrics are focused mainly on function execution.
+ **API Gateway** provides comprehensive monitoring capabilities including detailed metrics, logging, and tracing options. You can monitor API calls, latency, error rates, and cache hit/miss rates through CloudWatch. API Gateway also integrates with Amazon X-Ray for distributed tracing and provides customizable logging formats.

### Cost

+ **Function URLs** follow the standard Lambda pricing model – you only pay for function invocations and compute time. There are no additional charges for the URL endpoint itself. This makes it a cost-effective choice for simple APIs or low-traffic applications if you don't need the additional features of API Gateway.
+ **API Gateway** offers a [free tier](https://www.amazonaws.cn/api-gateway/pricing/#Free_Tier) that includes one million API calls received for REST APIs and one million API calls received for HTTP APIs. After this, API Gateway charges for API calls, data transfer, and caching (if enabled). Refer to the API Gateway [pricing page](https://www.amazonaws.cn/api-gateway/pricing/) to understand the costs for your own use case.

### Other features

+ **Function URLs** are designed for simplicity and direct Lambda integration. They provide HTTPS endpoints, offer built-in CORS support, and provide dual-stack (IPv4 and IPv6) endpoints. While they lack advanced features, they excel in scenarios where you need a quick, straightforward way to expose Lambda functions via HTTP.
+ **API Gateway** includes numerous additional features such as API versioning, stage management, API keys for usage plans, API documentation through Swagger/OpenAPI, WebSocket APIs, private APIs within a VPC, and WAF integration for additional security. It also supports canary deployments, mock integrations for testing, and integration with other Amazon Web Services services beyond Lambda.

## Select a method to invoke your Lambda function
Select an invoke method

Now that you've reviewed the selection criteria and key differences between Lambda function URLs and API Gateway, select the option that best meets your needs and use the following resources to get started.

------
#### [ Function URLs ]

**Get started with function URLs with the following resources**
+ Follow the tutorial [Creating a Lambda function with a function URL](urls-webhook-tutorial.md)
+ Learn more about function URLs in the [Creating and managing Lambda function URLs](urls-configuration.md) chapter of this guide
+ Try the in-console guided tutorial **Create a simple web app** by doing the following:

1. Open the [functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Open the help panel by choosing the icon in the top right corner of the screen.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/console_help_screenshot.png)

1. Select **Tutorials**.

1. In **Create a simple web app**, choose **Start tutorial**.

------
#### [ API Gateway ]

**Get started with Lambda and API Gateway with the following resources**
+ Follow the tutorial [Using Lambda with API Gateway](services-apigateway-tutorial.md) to create a REST API integrated with a backend Lambda function.
+ Learn more about the different kinds of API offered by API Gateway in the following sections of the *Amazon API Gateway Developer Guide*:
  + [API Gateway REST APIs](https://docs.amazonaws.cn/apigateway/latest/developerguide/apigateway-rest-api.html)
  + [API Gateway HTTP APIs](https://docs.amazonaws.cn/apigateway/latest/developerguide/http-api.html)
  + [API Gateway WebSocket APIs](https://docs.amazonaws.cn/apigateway/latest/developerguide/apigateway-websocket-api.html)
+ Try one or more of the examples in the [Tutorials and workshops](https://docs.amazonaws.cn/apigateway/latest/developerguide/api-gateway-tutorials.html) section of the *Amazon API Gateway Developer Guide*.

------

# Using Amazon Lambda with Amazon Infrastructure Composer
Infrastructure Composer

Amazon Infrastructure Composer is a visual builder for designing modern applications on Amazon. You design your application architecture by dragging, grouping, and connecting Amazon Web Services services in a visual canvas. Infrastructure Composer creates infrastructure as code (IaC) templates from your design that you can deploy using [Amazon SAM](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/what-is-sam.html) or [Amazon CloudFormation](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/Welcome.html).

## Exporting a Lambda function to Infrastructure Composer


You can get started using Infrastructure Composer by creating a new project based on the configuration of an existing Lambda function using the Lambda console. To export your function's configuration and code to Infrastructure Composer to create a new project, do the following:

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function you want to use as a basis for your Infrastructure Composer project.

1. In the **Function overview** pane, choose **Export to Infrastructure Composer**.

   To export your function's configuration and code to Infrastructure Composer, Lambda creates an Amazon S3 bucket in your account to temporarily store this data.

1. In the dialog box, choose **Confirm and create project** to accept the default name for this bucket and export your function's configuration and code to Infrastructure Composer.

1. (Optional) To choose another name for the Amazon S3 bucket that Lambda creates, enter a new name and choose **Confirm and create project**. Amazon S3 bucket names must be globally unique and follow the [bucket naming rules](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucketnamingrules.html).

1. To save your project and function files in Infrastructure Composer, activate [local sync mode](https://docs.amazonaws.cn/application-composer/latest/dg/reference-features-local-sync.html).

**Note**  
If you've used the **Export to Application Composer** feature before and created an Amazon S3 bucket using the default name, Lambda can reuse this bucket if it still exists. Accept the default bucket name in the dialog box to reuse the existing bucket.

### Amazon S3 transfer bucket configuration


The Amazon S3 bucket that Lambda creates to transfer your function's configuration automatically encrypts objects using the AES 256 encryption standard. Lambda also configures the bucket to use the [bucket owner condition](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucket-owner-condition.html) to ensure that only your Amazon Web Services account is able to add objects to the bucket.

Lambda configures the bucket to automatically delete objects 10 days after they are uploaded. However, Lambda doesn't automatically delete the bucket itself. To delete the bucket from your Amazon Web Services account, follow the instructions in [Deleting a bucket](https://docs.amazonaws.cn/AmazonS3/latest/userguide/delete-bucket.html). The default bucket name uses the prefix `lambdasam`, a 10-digit alphanumeric string, and the Amazon Web Services Region you created your function in:

```
lambdasam-06f22da95b-us-east-1
```

To avoid additional charges being added to your Amazon Web Services account, we recommend that you delete the Amazon S3 bucket as soon as you have finished exporting your function to Infrastructure Composer.

Standard [Amazon S3 pricing](https://www.amazonaws.cn/s3/pricing/) applies.
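
The automatic 10-day object expiration described above corresponds to an Amazon S3 lifecycle rule along the lines of the following sketch (the rule ID is illustrative, and the exact rule that Lambda creates may differ):

```
{
    "Rules": [
        {
            "ID": "delete-transfer-objects",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": {
                "Days": 10
            }
        }
    ]
}
```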

### Required permissions


To use the Lambda integration with Infrastructure Composer feature, you need certain permissions to download an Amazon SAM template and to write your function's configuration to Amazon S3.

To download an Amazon SAM template, you must have permission to use the following API actions:
+ [lambda:GetPolicy](https://docs.amazonaws.cn/lambda/latest/api/API_GetPolicy.html)
+ [iam:GetPolicyVersion](https://docs.amazonaws.cn/IAM/latest/APIReference/API_GetPolicyVersion.html)
+ [iam:GetRole](https://docs.amazonaws.cn/IAM/latest/APIReference/API_GetRole.html)
+ [iam:GetRolePolicy](https://docs.amazonaws.cn/IAM/latest/APIReference/API_GetRolePolicy.html)
+ [iam:ListAttachedRolePolicies](https://docs.amazonaws.cn/IAM/latest/APIReference/API_ListAttachedRolePolicies.html)
+ [iam:ListRolePolicies](https://docs.amazonaws.cn/IAM/latest/APIReference/API_ListRolePolicies.html)
+ [iam:ListRoles](https://docs.amazonaws.cn/IAM/latest/APIReference/API_ListRoles.html)

You can grant permission to use all of these actions by adding the [AWSLambda_ReadOnlyAccess](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambda_ReadOnlyAccess.html) Amazon managed policy to your IAM user role.

For Lambda to write your function's configuration to Amazon S3, you must have permission to use the following API actions:
+ [S3:PutObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html)
+ [S3:CreateBucket](https://docs.amazonaws.cn/AmazonS3/latest/API/API_CreateBucket.html)
+ [S3:PutBucketEncryption](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutBucketEncryption.html)
+ [S3:PutBucketLifecycleConfiguration](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html)
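
Together, these actions could be granted with an identity-based policy along these lines (a sketch; scoping the resources to the `lambdasam-` prefix is an assumption based on the default bucket name described earlier):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:PutBucketEncryption",
                "s3:PutBucketLifecycleConfiguration"
            ],
            "Resource": "arn:aws-cn:s3:::lambdasam-*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws-cn:s3:::lambdasam-*/*"
        }
    ]
}
```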

If you are unable to export your function's configuration to Infrastructure Composer, check that your account has the required permissions for these operations. If you have the required permissions, but still cannot export your function's configuration, check for any [resource-based policies](access-control-resource-based.md) that might limit access to Amazon S3.

## Other resources


For a more detailed tutorial on how to design a serverless application in Infrastructure Composer based on an existing Lambda function, see [Using Lambda with infrastructure as code (IaC)](foundation-iac.md).

To use Infrastructure Composer and Amazon SAM to design and deploy a complete serverless application using Lambda, you can also follow the [Amazon Infrastructure Composer tutorial](https://catalog.workshops.aws/serverless-patterns/en-US/dive-deeper/module1a) in the [Amazon Serverless Patterns Workshop](https://catalog.workshops.aws/serverless-patterns/en-US).

# Using Amazon Lambda with Amazon CloudFormation
CloudFormation

In an Amazon CloudFormation template, you can specify a Lambda function as the target of a custom resource. Use custom resources to process parameters, retrieve configuration values, or call other Amazon Web Services services during stack lifecycle events.

The following example invokes a function that's defined elsewhere in the template.

**Example – Custom resource definition**  

```
Resources:
  primerinvoke:
    Type: [AWS::CloudFormation::CustomResource](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-cfn-customresource.html)
    Version: "1.0"
    Properties:
      ServiceToken: !GetAtt primer.Arn
      FunctionName: !Ref randomerror
```

The service token is the Amazon Resource Name (ARN) of the function that Amazon CloudFormation invokes when you create, update, or delete the stack. You can also include additional properties like `FunctionName`, which Amazon CloudFormation passes to your function as is.

Amazon CloudFormation invokes your Lambda function [asynchronously](invocation-async.md) with an event that includes a callback URL.

**Example – Amazon CloudFormation message event**  

```
{
    "RequestType": "Create",
    "ServiceToken": "arn:aws-cn:lambda:cn-north-1:123456789012:function:lambda-error-processor-primer-14ROR2T3JKU66",
    "ResponseURL": "https://cloudformation-custom-resource-response-cnnorth1.s3.cn-north-1.amazonaws.com.cn/arn%3Aaws%3Acloudformation%3Acn-north-1%3A123456789012%3Astack/lambda-error-processor/1134083a-2608-1e91-9897-022501a2c456%7Cprimerinvoke%7C5d478078-13e9-baf0-464a-7ef285ecc786?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Expires=1555451971&Signature=28UijZePE5I4dvukKQqM%2F9Rf1o4%3D",
    "StackId": "arn:aws-cn:cloudformation:cn-north-1:123456789012:stack/lambda-error-processor/1134083a-2608-1e91-9897-022501a2c456",
    "RequestId": "5d478078-13e9-baf0-464a-7ef285ecc786",
    "LogicalResourceId": "primerinvoke",
    "ResourceType": "AWS::CloudFormation::CustomResource",
    "ResourceProperties": {
        "ServiceToken": "arn:aws-cn:lambda:cn-north-1:123456789012:function:lambda-error-processor-primer-14ROR2T3JKU66",
        "FunctionName": "lambda-error-processor-randomerror-ZWUC391MQAJK"
    }
}
```

The function is responsible for returning a response to the callback URL that indicates success or failure. For the full response syntax, see [Custom resource response objects](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/crpg-ref-responses.html).

**Example – Amazon CloudFormation custom resource response**  

```
{
    "Status": "SUCCESS",
    "PhysicalResourceId": "2019/04/18/[$LATEST]b3d1bfc65f19ec610654e4d9b9de47a0",
    "StackId": "arn:aws-cn:cloudformation:cn-north-1:123456789012:stack/lambda-error-processor/1134083a-2608-1e91-9897-022501a2c456",
    "RequestId": "5d478078-13e9-baf0-464a-7ef285ecc786",
    "LogicalResourceId": "primerinvoke"
}
```

Amazon CloudFormation provides a library called `cfn-response` that handles sending the response. If you define your function within a template, you can require the library by name. Amazon CloudFormation then adds the library to the deployment package that it creates for the function.
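
If your function isn't defined inline in a template (for example, a Python function), you can assemble and send the response yourself instead of relying on `cfn-response`. The following is a minimal sketch; the helper names are illustrative:

```python
import json
import urllib.request

def build_response(event, status, physical_resource_id, data=None, reason=""):
    """Assemble the JSON body that CloudFormation expects at the callback URL."""
    return json.dumps({
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": physical_resource_id,
        # Echo identifiers from the request so CloudFormation can match
        # the response to the pending custom resource operation.
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    })

def send_response(event, body):
    """PUT the response body to the pre-signed callback URL in the event."""
    request = urllib.request.Request(
        event["ResponseURL"],
        data=body.encode(),
        method="PUT",
        headers={"Content-Type": ""},  # pre-signed S3 URLs expect an empty content type
    )
    urllib.request.urlopen(request)
```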

If the function that a custom resource uses has an [Elastic Network Interface](configuration-vpc.md#configuration-vpc-enis) attached to it, add the following resources to the VPC endpoint policy, where **region** is the Region that the function is in, without the dashes. For example, `us-east-1` becomes `useast1`. This allows the custom resource to respond to the callback URL that sends a signal back to the Amazon CloudFormation stack.

```
"arn:aws:s3:::cloudformation-custom-resource-response-region",
"arn:aws:s3:::cloudformation-custom-resource-response-region/*"
```
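
In a VPC endpoint policy, these resources might appear in a statement like the following sketch (assuming the function runs in `us-east-1`, so the Region segment becomes `useast1`):

```
{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": [
        "arn:aws:s3:::cloudformation-custom-resource-response-useast1",
        "arn:aws:s3:::cloudformation-custom-resource-response-useast1/*"
    ]
}
```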

The following example function invokes a second function. If the call succeeds, the function sends a success response to Amazon CloudFormation, and the stack update continues. The template uses the [AWS::Serverless::Function](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/sam-resource-function.html) resource type provided by Amazon Serverless Application Model.

**Example – Custom resource function**  

```
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  primer:
    Type: [AWS::Serverless::Function](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/sam-resource-function.html)
    Properties:
      Handler: index.handler
      Runtime: nodejs16.x
      InlineCode: |
        var aws = require('aws-sdk');
        var response = require('cfn-response');
        exports.handler = function(event, context) {
            // For Delete requests, immediately send a SUCCESS response.
            if (event.RequestType == "Delete") {
                response.send(event, context, "SUCCESS");
                return;
            }
            var responseStatus = "FAILED";
            var responseData = {};
            var functionName = event.ResourceProperties.FunctionName
            var lambda = new aws.Lambda();
            lambda.invoke({ FunctionName: functionName }, function(err, invokeResult) {
                if (err) {
                    responseData = {Error: "Invoke call failed"};
                    console.log(responseData.Error + ":\n", err);
                }
                else responseStatus = "SUCCESS";
                response.send(event, context, responseStatus, responseData);
            });
        };
      Description: Invoke a function to create a log stream.
      MemorySize: 128
      Timeout: 8
      Role: !GetAtt role.Arn
      Tracing: Active
```

If the function that the custom resource invokes isn't defined in a template, you can get the source code for `cfn-response` from [cfn-response module](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/cfn-lambda-function-code-cfnresponsemodule.html) in the Amazon CloudFormation User Guide.

For more information about custom resources, see [Custom resources](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/template-custom-resources.html) in the *Amazon CloudFormation User Guide*.

# Process Amazon DocumentDB events with Lambda
Amazon DocumentDB

You can use a Lambda function to process events in an [Amazon DocumentDB (with MongoDB compatibility) change stream](https://docs.amazonaws.cn/documentdb/latest/developerguide/change_streams.html) by configuring an Amazon DocumentDB cluster as an event source. Then, you can automate event-driven workloads by invoking your Lambda function each time that data changes within your Amazon DocumentDB cluster.

**Note**  
Lambda supports version 4.0 and 5.0 of Amazon DocumentDB only. Lambda doesn't support version 3.6.  
Also, for event source mappings, Lambda supports instance-based clusters and regional clusters only. Lambda doesn't support [elastic clusters](https://docs.amazonaws.cn/documentdb/latest/developerguide/docdb-using-elastic-clusters.html) or [global clusters](https://docs.amazonaws.cn/documentdb/latest/developerguide/global-clusters.html). This limitation doesn't apply when using Lambda as a client to connect to Amazon DocumentDB. Lambda can connect to all cluster types to perform CRUD operations.

Lambda processes events from Amazon DocumentDB change streams sequentially in the order in which they arrive. Because of this, your function can handle only one concurrent invocation from Amazon DocumentDB at a time. To monitor your function, you can track its [concurrency metrics](https://docs.amazonaws.cn/lambda/latest/dg/monitoring-concurrency.html).

**Warning**  
Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see [How do I make my Lambda function idempotent](https://repost.aws/knowledge-center/lambda-function-idempotent) in the Amazon Knowledge Center.

**Topics**
+ [Example Amazon DocumentDB event](#docdb-sample-event)
+ [Prerequisites and permissions](#docdb-prereqs)
+ [Configure network security](#docdb-network)
+ [Creating an Amazon DocumentDB event source mapping (console)](#docdb-configuration)
+ [Creating an Amazon DocumentDB event source mapping (SDK or CLI)](#docdb-api)
+ [Polling and stream starting positions](#docdb-stream-polling)
+ [Monitoring your Amazon DocumentDB event source](#docdb-monitoring)
+ [Tutorial: Using Amazon Lambda with Amazon DocumentDB Streams](with-documentdb-tutorial.md)

## Example Amazon DocumentDB event


```
{
    "eventSourceArn": "arn:aws:rds:us-east-1:123456789012:cluster:canaryclusterb2a659a2-qo5tcmqkcl03",
    "events": [
        {
            "event": {
                "_id": {
                    "_data": "0163eeb6e7000000090100000009000041e1"
                },
                "clusterTime": {
                    "$timestamp": {
                        "t": 1676588775,
                        "i": 9
                    }
                },
                "documentKey": {
                    "_id": {
                        "$oid": "63eeb6e7d418cd98afb1c1d7"
                    }
                },
                "fullDocument": {
                    "_id": {
                        "$oid": "63eeb6e7d418cd98afb1c1d7"
                    },
                    "anyField": "sampleValue"
                },
                "ns": {
                    "db": "test_database",
                    "coll": "test_collection"
                },
                "operationType": "insert"
            }
        }
    ],
    "eventSource": "aws:docdb"
}
```

For more information about the events in this example and their shapes, see [Change Events](https://www.mongodb.com/docs/manual/reference/change-events/) on the MongoDB Documentation website.
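
A handler for this event shape might iterate over `events` and branch on `operationType`. The following Python sketch assumes the structure shown in the example above:

```python
def lambda_handler(event, context):
    """Sketch of a handler for Amazon DocumentDB change stream events."""
    processed = []
    for record in event.get("events", []):
        change = record["event"]
        operation = change["operationType"]  # e.g. "insert", "update", "delete"
        namespace = change["ns"]             # source database and collection
        if operation == "insert":
            # For inserts, fullDocument contains the complete new document.
            document = change.get("fullDocument", {})
            processed.append((namespace["db"], namespace["coll"], document.get("_id")))
        # Handle other operation types (update, delete, ...) as needed.
    return {"processedCount": len(processed)}
```

For update operations, what arrives in `fullDocument` depends on the full document configuration of your event source mapping.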

## Prerequisites and permissions


Before you can use Amazon DocumentDB as an event source for your Lambda function, note the following prerequisites. You must:
+ **Have an existing Amazon DocumentDB cluster in the same Amazon Web Services account and Amazon Web Services Region as your function.** If you don't have an existing cluster, you can create one by following the steps in [Get Started with Amazon DocumentDB](https://docs.amazonaws.cn/documentdb/latest/developerguide/get-started-guide.html) in the *Amazon DocumentDB Developer Guide*. Alternatively, the first set of steps in [Tutorial: Using Amazon Lambda with Amazon DocumentDB Streams](with-documentdb-tutorial.md) guides you through creating an Amazon DocumentDB cluster with all the necessary prerequisites.
+ **Allow Lambda to access the Amazon Virtual Private Cloud (Amazon VPC) resources associated with your Amazon DocumentDB cluster.** For more information, see [Configure network security](#docdb-network).
+ **Enable TLS on your Amazon DocumentDB cluster.** This is the default setting. If you disable TLS, then Lambda cannot communicate with your cluster.
+ **Activate change streams on your Amazon DocumentDB cluster.** For more information, see [Using Change Streams with Amazon DocumentDB](https://docs.amazonaws.cn/documentdb/latest/developerguide/change_streams.html) in the *Amazon DocumentDB Developer Guide*.
+ **Provide Lambda with credentials to access your Amazon DocumentDB cluster.** When setting up the event source, provide the [Amazon Secrets Manager](https://docs.amazonaws.cn/secretsmanager/latest/userguide/intro.html) key that contains the authentication details (username and password) required to access your cluster. To provide this key during setup, do either of the following:
  + If you're using the Lambda console for setup, then provide the key in the **Secrets manager key** field.
  + If you're using the Amazon Command Line Interface (Amazon CLI) for setup, then provide this key in the `source-access-configurations` option. You can include this option with either the [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) command or the [update-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-event-source-mapping.html) command. For example:

    ```
    aws lambda create-event-source-mapping \
        ...
        --source-access-configurations  '[{"Type":"BASIC_AUTH","URI":"arn:aws:secretsmanager:us-west-2:123456789012:secret:DocDBSecret-AbC4E6"}]' \
        ...
    ```
+ **Grant Lambda permissions to manage resources related to your Amazon DocumentDB stream.** Manually add the following permissions to your function's [execution role](lambda-intro-execution-role.md):
  + [rds:DescribeDBClusters](https://docs.amazonaws.cn/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html)
  + [rds:DescribeDBClusterParameters](https://docs.amazonaws.cn/AmazonRDS/latest/APIReference/API_DescribeDBClusterParameters.html)
  + [rds:DescribeDBSubnetGroups](https://docs.amazonaws.cn/AmazonRDS/latest/APIReference/API_DescribeDBSubnetGroups.html)
  + [ec2:CreateNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_CreateNetworkInterface.html)
  + [ec2:DescribeNetworkInterfaces](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeNetworkInterfaces.html)
  + [ec2:DescribeVpcs](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
  + [ec2:DeleteNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DeleteNetworkInterface.html)
  + [ec2:DescribeSubnets](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
  + [ec2:DescribeSecurityGroups](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)
  + [kms:Decrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_Decrypt.html)
  + [secretsmanager:GetSecretValue](https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_GetSecretValue.html)
+ **Keep the size of Amazon DocumentDB change stream events that you send to Lambda under 6 MB.** Lambda supports payload sizes of up to 6 MB. If your change stream tries to send Lambda an event larger than 6 MB, then Lambda drops the message and emits the `OversizedRecordCount` metric. Lambda emits all metrics on a best-effort basis.
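
The execution role permissions listed above could be collected into a policy similar to the following sketch (the key and secret ARNs are placeholders; scope them to your own resources):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBClusters",
                "rds:DescribeDBClusterParameters",
                "rds:DescribeDBSubnetGroups",
                "ec2:CreateNetworkInterface",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcs",
                "ec2:DeleteNetworkInterface",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "secretsmanager:GetSecretValue"
            ],
            "Resource": [
                "arn:aws-cn:kms:cn-north-1:123456789012:key/your-key-id",
                "arn:aws-cn:secretsmanager:cn-north-1:123456789012:secret:DocDBSecret-*"
            ]
        }
    ]
}
```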

**Note**  
While Lambda functions typically have a maximum timeout limit of 15 minutes, event source mappings for Amazon MSK, self-managed Apache Kafka, Amazon DocumentDB, and Amazon MQ for ActiveMQ and RabbitMQ only support functions with maximum timeout limits of 14 minutes. This constraint ensures that the event source mapping can properly handle function errors and retries.

## Configure network security


To give Lambda full access to Amazon DocumentDB through your event source mapping, either your cluster must use a public endpoint (public IP address), or you must provide access to the Amazon VPC you created the cluster in.

When you use Amazon DocumentDB with Lambda, create [Amazon PrivateLink VPC endpoints](https://docs.amazonaws.cn/vpc/latest/privatelink/create-interface-endpoint.html) that provide your function access to the resources in your Amazon VPC.

**Note**  
Amazon PrivateLink VPC endpoints are required for functions with event source mappings that use the default (on-demand) mode for event pollers. If your event source mapping uses [provisioned mode](invocation-eventsourcemapping.md#invocation-eventsourcemapping-provisioned-mode), you don't need to configure Amazon PrivateLink VPC endpoints.

Create an endpoint to provide access to the following resources:
+ Lambda — Create an endpoint for the Lambda service principal.
+ Amazon STS — Create an endpoint for Amazon STS so that the Lambda service principal can assume a role on your behalf.
+ Secrets Manager — If your cluster uses Secrets Manager to store credentials, create an endpoint for Secrets Manager.

Alternatively, configure a NAT gateway on each public subnet in the Amazon VPC. For more information, see [Enable internet access for VPC-connected Lambda functions](configuration-vpc-internet.md).

When you create an event source mapping for Amazon DocumentDB, Lambda checks whether Elastic Network Interfaces (ENIs) are already present for the subnets and security groups configured for your Amazon VPC. If Lambda finds existing ENIs, it attempts to reuse them. Otherwise, Lambda creates new ENIs to connect to the event source and invoke your function.

**Note**  
Lambda functions always run inside VPCs owned by the Lambda service. Your function's VPC configuration does not affect the event source mapping. Only the networking configuration of the event source determines how Lambda connects to your event source.

Configure the security groups for the Amazon VPC containing your cluster. By default, Amazon DocumentDB uses port `27017`.
+ Inbound rules – Allow all traffic on the default broker port for the security group associated with your event source. Alternatively, you can use a self-referencing security group rule to allow access from instances within the same security group.
+ Outbound rules – Allow all traffic on port `443` for external destinations if your function needs to communicate with Amazon services. Alternatively, you can also use a self-referencing security group rule to limit access to the broker if you don't need to communicate with other Amazon services.
+ Amazon VPC endpoint inbound rules — If you are using an Amazon VPC endpoint, the security group associated with your Amazon VPC endpoint must allow inbound traffic on port `443` from the cluster security group.

If your cluster uses authentication, you can also restrict the endpoint policy for the Secrets Manager endpoint. To call the Secrets Manager API, Lambda uses your function role, not the Lambda service principal.

**Example VPC endpoint policy — Secrets Manager endpoint**  

```
{
      "Statement": [
          {
              "Action": "secretsmanager:GetSecretValue",
              "Effect": "Allow",
              "Principal": {
                  "AWS": [
                      "arn:aws-cn::iam::123456789012:role/my-role"
                  ]
              },
              "Resource": "arn:aws-cn::secretsmanager:us-west-2:123456789012:secret:my-secret"
          }
      ]
  }
```

When you use Amazon VPC endpoints, Amazon routes your API calls to invoke your function using the endpoint's Elastic Network Interface (ENI). The Lambda service principal needs to call `lambda:InvokeFunction` on any roles and functions that use those ENIs.

By default, Amazon VPC endpoints have open IAM policies that allow broad access to resources. Best practice is to restrict these policies to perform the needed actions using that endpoint. To ensure that your event source mapping is able to invoke your Lambda function, the VPC endpoint policy must allow the Lambda service principal to call `sts:AssumeRole` and `lambda:InvokeFunction`. Restricting your VPC endpoint policies to allow only API calls originating within your organization prevents the event source mapping from functioning properly, so `"Resource": "*"` is required in these policies.

The following example VPC endpoint policies show how to grant the required access to the Lambda service principal for the Amazon STS and Lambda endpoints.

**Example VPC Endpoint policy — Amazon STS endpoint**  

```
{
      "Statement": [
          {
              "Action": "sts:AssumeRole",
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "lambda.amazonaws.com"
                  ]
              },
              "Resource": "*"
          }
      ]
    }
```

**Example VPC Endpoint policy — Lambda endpoint**  

```
{
      "Statement": [
          {
              "Action": "lambda:InvokeFunction",
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "lambda.amazonaws.com"
                  ]
              },
              "Resource": "*"
          }
      ]
  }
```

## Creating an Amazon DocumentDB event source mapping (console)


For a Lambda function to read from an Amazon DocumentDB cluster's change stream, create an [event source mapping](invocation-eventsourcemapping.md). This section describes how to do this from the Lambda console. For Amazon SDK and Amazon CLI instructions, see [Creating an Amazon DocumentDB event source mapping (SDK or CLI)](#docdb-api).

**To create an Amazon DocumentDB event source mapping (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function.

1. Under **Function overview**, choose **Add trigger**.

1. Under **Trigger configuration**, in the dropdown list, choose **DocumentDB**.

1. Configure the required options, and then choose **Add**.

Lambda supports the following options for Amazon DocumentDB event sources:
+ **DocumentDB cluster** – Select an Amazon DocumentDB cluster.
+ **Activate trigger** – Choose whether you want to activate the trigger immediately. If you select this check box, then your function immediately starts receiving traffic from the specified Amazon DocumentDB change stream upon creation of the event source mapping. We recommend that you clear the check box to create the event source mapping in a deactivated state for testing. After creation, you can activate the event source mapping at any time.
+ **Database name** – Enter the name of a database within the cluster to consume.
+ (Optional) **Collection name** – Enter the name of a collection within the database to consume. If you don't specify a collection, then Lambda listens to all events from each collection in the database.
+ **Batch size** – Set the maximum number of messages to retrieve in a single batch, up to 10,000. The default batch size is 100.
+ **Starting position** – Choose the position in the stream to start reading records from.
  + **Latest** – Process only new records that are added to the stream. Your function starts processing records only after Lambda finishes creating your event source. This means that some records may be missed while your event source is being created.
  + **Trim horizon** – Process all records in the stream. Lambda uses the log retention duration of your cluster to determine where to start reading events from. Specifically, Lambda starts reading from `current_time - log_retention_duration`. Your change stream must already be active before this timestamp for Lambda to read all events properly.
  + **At timestamp** – Process records starting from a specific time. Your change stream must already be active before the specified timestamp for Lambda to read all events properly.
+ **Authentication** – Choose the authentication method for accessing the brokers in your cluster.
  + **BASIC\_AUTH** – With basic authentication, you must provide the Secrets Manager key that contains the credentials to access your cluster.
+ **Secrets Manager key** – Choose the Secrets Manager key that contains the authentication details (username and password) required to access your Amazon DocumentDB cluster.
+ (Optional) **Batch window** – Set the maximum amount of time in seconds to gather records before invoking your function, up to 300.
+ (Optional) **Full document configuration** – For document update operations, choose what you want to send to the stream. The default value is `Default`, which means that for each change stream event, Amazon DocumentDB sends only a delta describing the changes made. For more information about this field, see [FullDocument](https://mongodb.github.io/mongo-java-driver/3.9/javadoc/com/mongodb/client/model/changestream/FullDocument.html#DEFAULT) in the MongoDB Javadoc API documentation.
  + **Default** – Lambda sends only a partial document describing the changes made.
  + **UpdateLookup** – Lambda sends a delta describing the changes, along with a copy of the entire document.

## Creating an Amazon DocumentDB event source mapping (SDK or CLI)


To create or manage an Amazon DocumentDB event source mapping with an [Amazon SDK](https://aws.amazon.com/developer/tools/), you can use the following API operations:
+ [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html)
+ [ListEventSourceMappings](https://docs.amazonaws.cn/lambda/latest/api/API_ListEventSourceMappings.html)
+ [GetEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_GetEventSourceMapping.html)
+ [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html)
+ [DeleteEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteEventSourceMapping.html)

To create the event source mapping with the Amazon CLI, use the [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) command. The following example maps a function named `my-function` to an Amazon DocumentDB change stream. The event source is specified by an Amazon Resource Name (ARN), with a batch size of 500, starting from a timestamp in Unix time. The command also specifies the Secrets Manager key that Lambda uses to connect to Amazon DocumentDB, and includes `document-db-event-source-config` parameters that specify the database and the collection to read from.

```
aws lambda create-event-source-mapping --function-name my-function \
    --event-source-arn arn:aws:rds:us-west-2:123456789012:cluster:privatecluster7de2-epzcyvu4pjoy \
    --batch-size 500 \
    --starting-position AT_TIMESTAMP \
    --starting-position-timestamp 1541139109 \
    --source-access-configurations '[{"Type":"BASIC_AUTH","URI":"arn:aws:secretsmanager:us-east-1:123456789012:secret:DocDBSecret-BAtjxi"}]' \
    --document-db-event-source-config '{"DatabaseName":"test_database", "CollectionName": "test_collection"}'
```

You should see output that looks like this:

```
{
    "UUID": "2b733gdc-8ac3-cdf5-af3a-1827b3b11284",
    "BatchSize": 500,
    "DocumentDBEventSourceConfig": {
        "CollectionName": "test_collection",
        "DatabaseName": "test_database",
        "FullDocument": "Default"
    },
    "MaximumBatchingWindowInSeconds": 0,
    "EventSourceArn": "arn:aws:rds:us-west-2:123456789012:cluster:privatecluster7de2-epzcyvu4pjoy",
    "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
    "LastModified": 1541348195.412,
    "LastProcessingResult": "No records processed",
    "State": "Creating",
    "StateTransitionReason": "User action"
}
```

After creation, you can use the [update-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-event-source-mapping.html) command to update the settings for your Amazon DocumentDB event source. The following example updates the batch size to 1,000 and the batch window to 10 seconds. For this command, you need the UUID of your event source mapping, which you can retrieve using the `list-event-source-mappings` command or the Lambda console.

```
aws lambda update-event-source-mapping --function-name my-function \
    --uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b \
    --batch-size 1000 \
    --batch-window 10
```

You should see output that looks like this:

```
{
    "UUID": "2b733gdc-8ac3-cdf5-af3a-1827b3b11284",
    "BatchSize": 500,
    "DocumentDBEventSourceConfig": {
        "CollectionName": "test_collection",
        "DatabaseName": "test_database",
        "FullDocument": "Default"
    },
    "MaximumBatchingWindowInSeconds": 0,
    "EventSourceArn": "arn:aws:rds:us-west-2:123456789012:cluster:privatecluster7de2-epzcyvu4pjoy",
    "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
    "LastModified": 1541359182.919,
    "LastProcessingResult": "OK",
    "State": "Updating",
    "StateTransitionReason": "User action"
}
```

Lambda updates settings asynchronously, so you may not see these changes in the output until the process completes. To view the current settings of your event source mapping, use the [get-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/get-event-source-mapping.html) command.

```
aws lambda get-event-source-mapping --uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b
```

You should see output that looks like this:

```
{
    "UUID": "2b733gdc-8ac3-cdf5-af3a-1827b3b11284",
    "DocumentDBEventSourceConfig": {
        "CollectionName": "test_collection",
        "DatabaseName": "test_database",
        "FullDocument": "Default"
    },
    "BatchSize": 1000,
    "MaximumBatchingWindowInSeconds": 10,
    "EventSourceArn": "arn:aws:rds:us-west-2:123456789012:cluster:privatecluster7de2-epzcyvu4pjoy",
    "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
    "LastModified": 1541359182.919,
    "LastProcessingResult": "OK",
    "State": "Enabled",
    "StateTransitionReason": "User action"
}
```

To delete your Amazon DocumentDB event source mapping, use the [delete-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/delete-event-source-mapping.html) command.

```
aws lambda delete-event-source-mapping \
    --uuid 2b733gdc-8ac3-cdf5-af3a-1827b3b11284
```

## Polling and stream starting positions


Be aware that stream polling during event source mapping creation and updates is eventually consistent.
+ During event source mapping creation, it may take several minutes to start polling events from the stream.
+ During event source mapping updates, it may take several minutes to stop and restart polling events from the stream.

This behavior means that if you specify `LATEST` as the starting position for the stream, the event source mapping could miss events during creation or updates. To ensure that no events are missed, specify the stream starting position as `TRIM_HORIZON` or `AT_TIMESTAMP`.
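As a sketch of the guidance above, the following Python helper builds `CreateEventSourceMapping` parameters that pin the starting position to `TRIM_HORIZON` so that no events are missed during creation. The function name, ARNs, and database names are placeholders; the commented-out call shows how the parameters might be passed to boto3's Lambda client.

```python
def docdb_esm_params(function_name, cluster_arn, secret_arn,
                     database, collection, batch_size=100):
    """Build CreateEventSourceMapping parameters for an Amazon DocumentDB
    event source. TRIM_HORIZON is used so that events arriving while
    polling starts up are not missed (see the note above about LATEST)."""
    return {
        "FunctionName": function_name,
        "EventSourceArn": cluster_arn,
        "StartingPosition": "TRIM_HORIZON",
        "BatchSize": batch_size,
        "SourceAccessConfigurations": [
            {"Type": "BASIC_AUTH", "URI": secret_arn},
        ],
        "DocumentDBEventSourceConfig": {
            "DatabaseName": database,
            "CollectionName": collection,
        },
    }

# Hypothetical usage (requires boto3, valid credentials, and real ARNs):
# import boto3
# boto3.client("lambda").create_event_source_mapping(
#     **docdb_esm_params("my-function", cluster_arn, secret_arn,
#                        "test_database", "test_collection"))
```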

## Monitoring your Amazon DocumentDB event source


To help you monitor your Amazon DocumentDB event source, Lambda emits the `IteratorAge` metric when your function finishes processing a batch of records. *Iterator age* is the difference between the timestamp of the most recent event and the current timestamp. Essentially, the `IteratorAge` metric indicates how old the last processed record in the batch is. If your function is currently processing new events, then you can use the iterator age to estimate the latency between when a record is added and when your function processes it. An increasing trend in `IteratorAge` can indicate issues with your function. For more information, see [Using CloudWatch metrics with Lambda](monitoring-metrics.md).
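A rough client-side complement to the CloudWatch metric is to compute the age of each record from its `clusterTime` field, which carries the epoch second at which the change was recorded on the cluster (the field shape follows the handler examples later on this page). This is an estimate only, not the `IteratorAge` metric itself:

```python
import time

def iterator_age_seconds(record, now=None):
    """Estimate how old a change stream record is, using the epoch second
    stored in record["event"]["clusterTime"]["$timestamp"]["t"]."""
    now = time.time() if now is None else now
    return now - record["event"]["clusterTime"]["$timestamp"]["t"]

# Example: a record stamped 90 seconds before "now" has an age of 90 seconds.
record = {"event": {"clusterTime": {"$timestamp": {"t": 1680126165, "i": 1}}}}
print(iterator_age_seconds(record, now=1680126255))  # → 90
```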

Amazon DocumentDB change streams aren't optimized to handle large time gaps between events. If your Amazon DocumentDB event source doesn't receive any events for an extended period of time, Lambda may disable the event source mapping. The length of this time period can vary from a few weeks to a few months depending on cluster size and other workloads.

Lambda supports payloads of up to 6 MB. However, Amazon DocumentDB change stream events can be up to 16 MB in size. If your change stream tries to send Lambda a change stream event larger than 6 MB, then Lambda drops the message and emits the `OversizedRecordCount` metric. Lambda emits all metrics on a best-effort basis.
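One way to reduce the risk of dropped messages is to check document sizes on the writing side before inserting into Amazon DocumentDB. The sketch below estimates a document's JSON-encoded size; the 0.8 margin leaves headroom for change stream metadata and is an assumption, not an AWS-documented threshold:

```python
import json

LAMBDA_PAYLOAD_LIMIT = 6 * 1024 * 1024  # 6 MB

def fits_lambda_payload(document, margin=0.8):
    """Rough guard: return False for documents whose JSON encoding could
    push a change stream event past Lambda's 6 MB payload limit."""
    size = len(json.dumps(document, default=str).encode("utf-8"))
    return size <= LAMBDA_PAYLOAD_LIMIT * margin

print(fits_lambda_payload({"hello": "world"}))         # → True
print(fits_lambda_payload({"blob": "x" * (7 << 20)}))  # → False (about 7 MB)
```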

# Tutorial: Using Amazon Lambda with Amazon DocumentDB Streams
Tutorial

 In this tutorial, you create a basic Lambda function that consumes events from an Amazon DocumentDB (with MongoDB compatibility) change stream. To complete this tutorial, you will go through the following stages: 
+ Set up your Amazon DocumentDB cluster, connect to it, and activate change streams on it.
+ Create your Lambda function, and configure your Amazon DocumentDB cluster as an event source for your function.
+ Test the setup by inserting items into your Amazon DocumentDB database.

## Create the Amazon DocumentDB cluster
Create the cluster

1. Open the [Amazon DocumentDB console](https://console.aws.amazon.com/docdb/home#). Under **Clusters**, choose **Create**.

1. Create a cluster with the following configuration:
   + For **Cluster type**, choose **Instance-based cluster**. This is the default option.
   + Under **Cluster configuration**, make sure that **Engine version** 5.0.0 is selected. This is the default option.
   + Under **Instance configuration**:
     + For **DB instance class**, select **Memory optimized classes**. This is the default option.
     + For **Number of regular replica instances**, choose 1.
     + For **Instance class**, use the default selection.
   + Under **Authentication**, enter a username for the primary user, and then choose **Self managed**. Enter a password, then confirm it.
   + Keep all other default settings.

1. Choose **Create cluster**.

## Create the secret in Secrets Manager


While Amazon DocumentDB is creating your cluster, create an Amazon Secrets Manager secret to store your database credentials. You'll provide this secret when you create the Lambda event source mapping in a later step.

**To create the secret in Secrets Manager**

1. Open the [Secrets Manager](https://console.aws.amazon.com/secretsmanager/home#) console and choose **Store a new secret**.

1. For **Choose secret type**, choose the following options:
   + Under **Basic details**:
     + **Secret type**: Credentials for your Amazon DocumentDB database
     + Under **Credentials**, enter the same username and password that you used to create your Amazon DocumentDB cluster.
     + **Database**: Choose your Amazon DocumentDB cluster.
     + Choose **Next**.

1. For **Configure secret**, choose the following options:
   + **Secret name**: `DocumentDBSecret`
   + Choose **Next**.

1. Choose **Next**.

1. Choose **Store**.

1. Refresh the console to verify that you successfully stored the `DocumentDBSecret` secret.

Note the **Secret ARN**. You’ll need it in a later step.

## Connect to the cluster


**Connect to your Amazon DocumentDB cluster using Amazon CloudShell**

1. On the Amazon DocumentDB management console, under **Clusters**, locate the cluster you created. Choose your cluster by clicking the check box next to it.

1. Choose **Connect to cluster**. The CloudShell **Run command** screen appears.

1. In the **New environment name** field, enter a unique name, such as "test", and then choose **Create and run**.

1. When prompted, enter your password. When the prompt becomes `rs0 [direct: primary] <env-name>>`, you are successfully connected to your Amazon DocumentDB cluster.

## Activate change streams


For this tutorial, you’ll track changes to the `products` collection of the `docdbdemo` database in your Amazon DocumentDB cluster. You do this by activating [change streams](https://docs.aws.amazon.com/documentdb/latest/developerguide/change_streams.html).

**To create a new database within your cluster**

1. Run the following command to create a new database called `docdbdemo`:

   ```
   use docdbdemo
   ```

1. In the terminal window, use the following command to insert a record into `docdbdemo`:

   ```
   db.products.insertOne({"hello":"world"})
   ```

   You should see an output like this:

   ```
   {
     acknowledged: true,
     insertedId: ObjectId('67f85066ca526410fd531d59')
   }
   ```

1. Next, activate change streams on the `products` collection of the `docdbdemo` database using the following command:

   ```
   db.adminCommand({modifyChangeStreams: 1,
       database: "docdbdemo",
       collection: "products", 
       enable: true});
   ```

    You should see output that looks like this: 

   ```
   { "ok" : 1, "operationTime" : Timestamp(1680126165, 1) }
   ```

## Create interface VPC endpoints


Next, create [interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint-aws) to ensure that Lambda and Secrets Manager (which stores your cluster access credentials) can connect to your default VPC.

**To create interface VPC endpoints**

1. Open the [VPC console](https://console.aws.amazon.com/vpc/home#). In the left menu, under **Virtual private cloud**, choose **Endpoints**.

1. Choose **Create endpoint**. Create an endpoint with the following configuration:
   + For **Name tag**, enter `lambda-default-vpc`.
   + For **Service category**, choose Amazon services.
   + For **Services**, enter `lambda` in the search box. Choose the service with format `com.amazonaws.<region>.lambda`.
   + For **VPC**, choose the VPC that your Amazon DocumentDB cluster is in. This is typically the [default VPC](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html).
   + For **Subnets**, check the boxes next to each availability zone. Choose the correct subnet ID for each availability zone.
   + For **IP address type**, select IPv4.
   + For **Security groups**, choose the security group that your Amazon DocumentDB cluster uses. This is typically the `default` security group.
   + Keep all other default settings.
   + Choose **Create endpoint**.

1. Again, choose **Create endpoint**. Create an endpoint with the following configuration:
   + For **Name tag**, enter `secretsmanager-default-vpc`.
   + For **Service category**, choose Amazon services.
   + For **Services**, enter `secretsmanager` in the search box. Choose the service with format `com.amazonaws.<region>.secretsmanager`.
   + For **VPC**, choose the VPC that your Amazon DocumentDB cluster is in. This is typically the [default VPC](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html).
   + For **Subnets**, check the boxes next to each availability zone. Choose the correct subnet ID for each availability zone.
   + For **IP address type**, select IPv4.
   + For **Security groups**, choose the security group that your Amazon DocumentDB cluster uses. This is typically the `default` security group.
   + Keep all other default settings.
   + Choose **Create endpoint**.

 This completes the cluster setup portion of this tutorial. 

## Create the execution role


 In the next set of steps, you’ll create your Lambda function. First, you need to create the execution role that gives your function permission to access your cluster. You do this by creating an IAM policy first, then attaching this policy to an IAM role. 

**To create IAM policy**

1. Open the [Policies page](https://console.aws.amazon.com/iam/home#/policies) in the IAM console and choose **Create policy**.

1. Choose the **JSON** tab. In the following policy, replace the Secrets Manager resource ARN in the final line of the statement with your secret ARN from earlier, and copy the policy into the editor.

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "LambdaESMNetworkingAccess",
               "Effect": "Allow",
               "Action": [
                   "ec2:CreateNetworkInterface",
                   "ec2:DescribeNetworkInterfaces",
                   "ec2:DescribeVpcs",
                   "ec2:DeleteNetworkInterface",
                   "ec2:DescribeSubnets",
                   "ec2:DescribeSecurityGroups",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           },
           {
               "Sid": "LambdaDocDBESMAccess",
               "Effect": "Allow",
               "Action": [
                   "rds:DescribeDBClusters",
                   "rds:DescribeDBClusterParameters",
                   "rds:DescribeDBSubnetGroups"
               ],
               "Resource": "*"
           },
           {
               "Sid": "LambdaDocDBESMGetSecretValueAccess",
               "Effect": "Allow",
               "Action": [
                   "secretsmanager:GetSecretValue"
               ],
               "Resource": "arn:aws-cn:secretsmanager:us-east-1:123456789012:secret:DocumentDBSecret"
           }
       ]
   }
   ```

------

1. Choose **Next: Tags**, then choose **Next: Review**.

1. For **Name**, enter `AWSDocumentDBLambdaPolicy`.

1. Choose **Create policy**.

**To create the IAM role**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home#/roles) in the IAM console and choose **Create role**.

1. For **Select trusted entity**, choose the following options:
   + **Trusted entity type**: Amazon service
   + **Service or use case**: Lambda
   + Choose **Next**.

1. For **Add permissions**, choose the `AWSDocumentDBLambdaPolicy` policy you just created, as well as the `AWSLambdaBasicExecutionRole` to give your function permissions to write to Amazon CloudWatch Logs.

1. Choose **Next**.

1. For **Role name**, enter `AWSDocumentDBLambdaExecutionRole`.

1. Choose **Create role**.

## Create the Lambda function


This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in.

The code receives an Amazon DocumentDB event input and processes the message that it contains.
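For reference, the event document that Lambda passes to the handler looks roughly like the following sketch. The field names are taken from the handler examples below; the values (ARN, IDs, resume token) are illustrative placeholders only:

```python
# Abridged example of the payload delivered for a single insert event.
example_event = {
    "eventSourceArn": "arn:aws:rds:us-west-2:123456789012:cluster:privatecluster7de2-epzcyvu4pjoy",
    "events": [
        {
            "event": {
                "_id": {"_data": "placeholder-resume-token"},
                "clusterTime": {"$timestamp": {"t": 1680126165, "i": 1}},
                "documentKey": {"_id": {"$oid": "63eeb6e7d418cd98afb1c1d7"}},
                "fullDocument": {"_id": {"$oid": "63eeb6e7d418cd98afb1c1d7"}, "hello": "world"},
                "ns": {"db": "docdbdemo", "coll": "products"},
                "operationType": "insert",
            }
        }
    ],
}

# Each handler below iterates example_event["events"] and reads these nested fields:
record = example_event["events"][0]["event"]
print(record["operationType"], record["ns"]["db"], record["ns"]["coll"])
# → insert docdbdemo products
```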

**To create the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose **Create function**.

1. Choose **Author from scratch**.

1. Under **Basic information**, do the following:

   1. For **Function name**, enter `ProcessDocumentDBRecords`.

   1. For **Runtime**, choose **Python 3.14**.

   1. For **Architecture**, choose **x86\_64**.

1. In the **Change default execution role** tab, do the following:

   1. Expand the tab, then choose **Use an existing role**.

   1. Select the `AWSDocumentDBLambdaExecutionRole` you created earlier.

1. Choose **Create function**.

**To deploy the function code**

1. Choose the **Python** tab in the following box and copy the code.

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using .NET.  

   ```
   using Amazon.Lambda.Core;
   using System.Text.Json;
   using System;
   using System.Collections.Generic;
   using System.Text.Json.Serialization;
   //Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
   [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]
   
   namespace LambdaDocDb;
   
   public class Function
   {
       
        /// <summary>
       /// Lambda function entry point to process Amazon DocumentDB events.
       /// </summary>
       /// <param name="event">The Amazon DocumentDB event.</param>
       /// <param name="context">The Lambda context object.</param>
       /// <returns>A string to indicate successful processing.</returns>
       public string FunctionHandler(Event evnt, ILambdaContext context)
       {
           
           foreach (var record in evnt.Events)
           {
               ProcessDocumentDBEvent(record, context);
           }
   
           return "OK";
       }
   
        private void ProcessDocumentDBEvent(DocumentDBEventRecord record, ILambdaContext context)
       {
           
           var eventData = record.Event;
           var operationType = eventData.OperationType;
           var databaseName = eventData.Ns.Db;
           var collectionName = eventData.Ns.Coll;
           var fullDocument = JsonSerializer.Serialize(eventData.FullDocument, new JsonSerializerOptions { WriteIndented = true });
   
           context.Logger.LogLine($"Operation type: {operationType}");
           context.Logger.LogLine($"Database: {databaseName}");
           context.Logger.LogLine($"Collection: {collectionName}");
           context.Logger.LogLine($"Full document:\n{fullDocument}");
       }
   
   
   
       public class Event
       {
           [JsonPropertyName("eventSourceArn")]
           public string EventSourceArn { get; set; }
   
           [JsonPropertyName("events")]
           public List<DocumentDBEventRecord> Events { get; set; }
   
           [JsonPropertyName("eventSource")]
           public string EventSource { get; set; }
       }
   
       public class DocumentDBEventRecord
       {
           [JsonPropertyName("event")]
           public EventData Event { get; set; }
       }
   
       public class EventData
       {
           [JsonPropertyName("_id")]
           public IdData Id { get; set; }
   
           [JsonPropertyName("clusterTime")]
           public ClusterTime ClusterTime { get; set; }
   
           [JsonPropertyName("documentKey")]
           public DocumentKey DocumentKey { get; set; }
   
           [JsonPropertyName("fullDocument")]
           public Dictionary<string, object> FullDocument { get; set; }
   
           [JsonPropertyName("ns")]
           public Namespace Ns { get; set; }
   
           [JsonPropertyName("operationType")]
           public string OperationType { get; set; }
       }
   
       public class IdData
       {
           [JsonPropertyName("_data")]
           public string Data { get; set; }
       }
   
       public class ClusterTime
       {
           [JsonPropertyName("$timestamp")]
           public Timestamp Timestamp { get; set; }
       }
   
       public class Timestamp
       {
           [JsonPropertyName("t")]
           public long T { get; set; }
   
           [JsonPropertyName("i")]
           public int I { get; set; }
       }
   
       public class DocumentKey
       {
           [JsonPropertyName("_id")]
           public Id Id { get; set; }
       }
   
       public class Id
       {
           [JsonPropertyName("$oid")]
           public string Oid { get; set; }
       }
   
       public class Namespace
       {
           [JsonPropertyName("db")]
           public string Db { get; set; }
   
           [JsonPropertyName("coll")]
           public string Coll { get; set; }
       }
   }
   ```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using Go.  

   ```
   package main
   
   import (
   	"context"
   	"encoding/json"
   	"fmt"
   
   	"github.com/aws/aws-lambda-go/lambda"
   )
   
   type Event struct {
   	Events []Record `json:"events"`
   }
   
   type Record struct {
   	Event struct {
   		OperationType string `json:"operationType"`
   		NS            struct {
   			DB   string `json:"db"`
   			Coll string `json:"coll"`
   		} `json:"ns"`
   		FullDocument interface{} `json:"fullDocument"`
   	} `json:"event"`
   }
   
   func main() {
   	lambda.Start(handler)
   }
   
   func handler(ctx context.Context, event Event) (string, error) {
   	fmt.Println("Loading function")
   	for _, record := range event.Events {
   		logDocumentDBEvent(record)
   	}
   
   	return "OK", nil
   }
   
   func logDocumentDBEvent(record Record) {
   	fmt.Printf("Operation type: %s\n", record.Event.OperationType)
   	fmt.Printf("db: %s\n", record.Event.NS.DB)
   	fmt.Printf("collection: %s\n", record.Event.NS.Coll)
   	docBytes, _ := json.MarshalIndent(record.Event.FullDocument, "", "  ")
   	fmt.Printf("Full document: %s\n", string(docBytes))
   }
   ```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using Java.  

   ```
   import java.util.List;
   import java.util.Map;
   
   import com.amazonaws.services.lambda.runtime.Context;
   import com.amazonaws.services.lambda.runtime.RequestHandler;
   
   public class Example implements RequestHandler<Map<String, Object>, String> {
   
       @SuppressWarnings("unchecked")
       @Override
       public String handleRequest(Map<String, Object> event, Context context) {
           List<Map<String, Object>> events = (List<Map<String, Object>>) event.get("events");
           for (Map<String, Object> record : events) {
               Map<String, Object> eventData = (Map<String, Object>) record.get("event");
               processEventData(eventData);
           }
   
           return "OK";
       }
   
       @SuppressWarnings("unchecked")
       private void processEventData(Map<String, Object> eventData) {
           String operationType = (String) eventData.get("operationType");
           System.out.println("operationType: %s".formatted(operationType));
   
           Map<String, Object> ns = (Map<String, Object>) eventData.get("ns");
   
           String db = (String) ns.get("db");
           System.out.println("db: %s".formatted(db));
           String coll = (String) ns.get("coll");
           System.out.println("coll: %s".formatted(coll));
   
           Map<String, Object> fullDocument = (Map<String, Object>) eventData.get("fullDocument");
           System.out.println("fullDocument: %s".formatted(fullDocument));
       }
   
   }
   ```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using JavaScript.  

   ```
   console.log('Loading function');
   exports.handler = async (event, context) => {
       event.events.forEach(record => {
           logDocumentDBEvent(record);
       });
       return 'OK';
   };
   
   const logDocumentDBEvent = (record) => {
       console.log('Operation type: ' + record.event.operationType);
       console.log('db: ' + record.event.ns.db);
       console.log('collection: ' + record.event.ns.coll);
       console.log('Full document:', JSON.stringify(record.event.fullDocument, null, 2));
   };
   ```
Consuming an Amazon DocumentDB event with Lambda using TypeScript.  

   ```
   import { DocumentDBEventRecord, DocumentDBEventSubscriptionContext } from 'aws-lambda';
   
   console.log('Loading function');
   
   export const handler = async (
     event: DocumentDBEventSubscriptionContext,
     context: any
   ): Promise<string> => {
     event.events.forEach((record: DocumentDBEventRecord) => {
       logDocumentDBEvent(record);
     });
     return 'OK';
   };
   
   const logDocumentDBEvent = (record: DocumentDBEventRecord): void => {
     console.log('Operation type: ' + record.event.operationType);
     console.log('db: ' + record.event.ns.db);
     console.log('collection: ' + record.event.ns.coll);
     console.log('Full document:', JSON.stringify(record.event.fullDocument, null, 2));
   };
   ```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using PHP.  

   ```
   <?php
   
   require __DIR__.'/vendor/autoload.php';
   
   use Bref\Context\Context;
   use Bref\Event\Handler;
   
   class DocumentDBEventHandler implements Handler
   {
       public function handle($event, Context $context): string
       {
   
           $events = $event['events'] ?? [];
           foreach ($events as $record) {
               $this->logDocumentDBEvent($record['event']);
           }
           return 'OK';
       }
   
       private function logDocumentDBEvent($event): void
       {
           // Extract information from the event record
   
           $operationType = $event['operationType'] ?? 'Unknown';
           $db = $event['ns']['db'] ?? 'Unknown';
           $collection = $event['ns']['coll'] ?? 'Unknown';
           $fullDocument = $event['fullDocument'] ?? [];
   
           // Log the event details
   
           echo "Operation type: $operationType\n";
           echo "Database: $db\n";
           echo "Collection: $collection\n";
           echo "Full document: " . json_encode($fullDocument, JSON_PRETTY_PRINT) . "\n";
       }
   }
   return new DocumentDBEventHandler();
   ```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using Python.  

   ```
   import json
   
   def lambda_handler(event, context):
       for record in event.get('events', []):
           log_document_db_event(record)
       return 'OK'
   
   def log_document_db_event(record):
       event_data = record.get('event', {})
       operation_type = event_data.get('operationType', 'Unknown')
       db = event_data.get('ns', {}).get('db', 'Unknown')
       collection = event_data.get('ns', {}).get('coll', 'Unknown')
       full_document = event_data.get('fullDocument', {})
   
       print(f"Operation type: {operation_type}")
       print(f"db: {db}")
       print(f"collection: {collection}")
       print("Full document:", json.dumps(full_document, indent=2))
   ```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using Ruby.  

   ```
   require 'json'
   
   def lambda_handler(event:, context:)
     event['events'].each do |record|
       log_document_db_event(record)
     end
     'OK'
   end
   
   def log_document_db_event(record)
     event_data = record['event'] || {}
     operation_type = event_data['operationType'] || 'Unknown'
     db = event_data.dig('ns', 'db') || 'Unknown'
     collection = event_data.dig('ns', 'coll') || 'Unknown'
     full_document = event_data['fullDocument'] || {}
   
     puts "Operation type: #{operation_type}"
     puts "db: #{db}"
     puts "collection: #{collection}"
     puts "Full document: #{JSON.pretty_generate(full_document)}"
   end
   ```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-docdb-to-lambda) repository. 
Consuming an Amazon DocumentDB event with Lambda using Rust.  

   ```
    use aws_lambda_events::event::documentdb::{DocumentDbEvent, DocumentDbInnerEvent};
    use lambda_runtime::{service_fn, tracing, Error, LambdaEvent};
    
    // Built with the following dependencies:
    // lambda_runtime = "0.11.1"
    // serde_json = "1.0"
    // tokio = { version = "1", features = ["macros"] }
    // tracing = { version = "0.1", features = ["log"] }
    // tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }
    // aws_lambda_events = "0.15.0"
    
    async fn function_handler(event: LambdaEvent<DocumentDbEvent>) -> Result<(), Error> {
        tracing::info!("Event Source ARN: {:?}", event.payload.event_source_arn);
        tracing::info!("Event Source: {:?}", event.payload.event_source);
    
        let records = &event.payload.events;
    
        if records.is_empty() {
            tracing::info!("No records found. Exiting.");
            return Ok(());
        }
    
        for record in records {
            log_document_db_event(record);
        }
    
        tracing::info!("DocumentDB records processed");
        Ok(())
    }
    
    fn log_document_db_event(record: &DocumentDbInnerEvent) {
        tracing::info!("Change Event: {:?}", record.event);
    }
    
    #[tokio::main]
    async fn main() -> Result<(), Error> {
        tracing_subscriber::fmt()
            .with_max_level(tracing::Level::INFO)
            .with_target(false)
            .without_time()
            .init();
    
        lambda_runtime::run(service_fn(function_handler)).await
    }
   ```

------

1. In the **Code source** pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created.

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

## Create the Lambda event source mapping


 Create the event source mapping that associates your Amazon DocumentDB change stream with your Lambda function. After you create this event source mapping, Amazon Lambda immediately starts polling the stream. 

**To create the event source mapping**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) in the Lambda console.

1. Choose the `ProcessDocumentDBRecords` function you created earlier.

1. Choose the **Configuration** tab, then choose **Triggers** in the left menu.

1. Choose **Add trigger**.

1. Under **Trigger configuration**, for the source, select **Amazon DocumentDB**.

1. Create the event source mapping with the following configuration:
   + **Amazon DocumentDB cluster**: Choose the cluster you created earlier.
   + **Database name**: `docdbdemo`
   + **Collection name**: `products`
   + **Batch size**: 1
   + **Starting position**: Latest
   + **Authentication**: BASIC_AUTH
   + **Secrets Manager key**: Choose the secret for your Amazon DocumentDB cluster. It will be called something like `rds!cluster-12345678-a6f0-52c0-b290-db4aga89274f`.
   + **Batch window**: 1
   + **Full document configuration**: UpdateLookup

1. Choose **Add**. Creating your event source mapping can take a few minutes.

## Test your function


Wait for the event source mapping to reach the **Enabled** state. This can take several minutes. Then, test the end-to-end setup by inserting, updating, and deleting database records. Before you begin:

1. [Reconnect to your Amazon DocumentDB cluster](#docdb-connect-to-cluster) in your CloudShell environment.

1. Run the following command to ensure that you’re using the `docdbdemo` database:

   ```
   use docdbdemo
   ```

### Insert a record


Insert a record into the `products` collection of the `docdbdemo` database:

```
db.products.insertOne({"name":"Pencil", "price": 1.00})
```

Verify that your function successfully processed this event by [checking CloudWatch Logs](monitoring-cloudwatchlogs-view.md#monitoring-cloudwatchlogs-console). You should see a log entry like this:

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/documentdb-insert-log.png)


### Update a record


Update the record you just inserted with the following command:

```
db.products.updateOne(
    { "name": "Pencil" },
    { $set: { "price": 0.50 }}
)
```

Verify that your function successfully processed this event by [checking CloudWatch Logs](monitoring-cloudwatchlogs-view.md#monitoring-cloudwatchlogs-console). You should see a log entry like this:

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/documentdb-update-log.png)


### Delete a record


Delete the record that you just updated with the following command:

```
db.products.deleteOne( { "name": "Pencil" } )
```

Verify that your function successfully processed this event by [checking CloudWatch Logs](monitoring-cloudwatchlogs-view.md#monitoring-cloudwatchlogs-console). You should see a log entry like this:

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/documentdb-delete-log.png)


## Troubleshooting


If you don't see any database events in your function's CloudWatch logs, check the following:
+ Make sure that the Lambda event source mapping (also known as a trigger) is in the **Enabled** state. Event source mappings can take several minutes to create.
+ If the event source mapping is **Enabled** but you still don't see database events in CloudWatch:
  + Make sure that the **Database name** in the event source mapping is set to `docdbdemo`.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/documentdb-trigger.png)
  + Check the event source mapping **Last processing result** field for the following message "PROBLEM: Connection error. Your VPC must be able to connect to Lambda and STS, as well as Secrets Manager if authentication is required." If you see this error, make sure that you [created the Lambda and Secrets Manager VPC interface endpoints](#docdb-create-interface-vpc-endpoints), and that the endpoints use the same VPC and subnets that your Amazon DocumentDB cluster uses.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/documentdb-lastprocessingresult.png)

## Clean up your resources


 You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account. 

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the VPC endpoints**

1. Open the [VPC console](https://console.aws.amazon.com/vpc/home#). In the left menu, under **Virtual private cloud**, choose **Endpoints**.

1. Select the endpoints you created.

1. Choose **Actions**, **Delete VPC endpoints**.

1. Enter **delete** in the text input field.

1. Choose **Delete**.

**To delete the Amazon DocumentDB cluster**

1. Open the [Amazon DocumentDB console](https://console.aws.amazon.com/docdb/home#).

1. Choose the Amazon DocumentDB cluster you created for this tutorial, and disable deletion protection.

1. In the main **Clusters** page, choose your Amazon DocumentDB cluster again.

1. Choose **Actions**, **Delete**.

1. For **Create final cluster snapshot**, select **No**.

1. Enter **delete** in the text input field.

1. Choose **Delete**.

**To delete the secret in Secrets Manager**

1. Open the [Secrets Manager](https://console.aws.amazon.com/secretsmanager/home#) console.

1. Choose the secret you created for this tutorial.

1. Choose **Actions**, **Delete secret**.

1. Choose **Schedule deletion**.

# Using Amazon Lambda with Amazon DynamoDB
DynamoDB

**Note**  
If you want to send data to a target other than a Lambda function or enrich the data before sending it, see [ Amazon EventBridge Pipes](https://docs.amazonaws.cn/eventbridge/latest/userguide/eb-pipes.html).

You can use an Amazon Lambda function to process records in an [Amazon DynamoDB stream](https://docs.amazonaws.cn/amazondynamodb/latest/developerguide/Streams.html). With DynamoDB Streams, you can trigger a Lambda function to perform additional work each time a DynamoDB table is updated.

When processing DynamoDB streams, you need to implement partial batch response logic to prevent successfully processed records from being retried when some records in a batch fail. The [Batch Processor utility](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/) from Powertools for Amazon Lambda is available in Python, TypeScript, .NET, and Java and simplifies this implementation by automatically handling partial batch response logic, reducing development time and improving reliability.

**Topics**
+ [Polling and batching streams](#dynamodb-polling-and-batching)
+ [Polling and stream starting positions](#dyanmo-db-stream-poll)
+ [Simultaneous readers of a shard in DynamoDB Streams](#events-dynamodb-simultaneous-readers)
+ [Example event](#events-sample-dynamodb)
+ [Process DynamoDB records with Lambda](services-dynamodb-eventsourcemapping.md)
+ [Configuring partial batch response with DynamoDB and Lambda](services-ddb-batchfailurereporting.md)
+ [Retain discarded records for a DynamoDB event source in Lambda](services-dynamodb-errors.md)
+ [Implementing stateful DynamoDB stream processing in Lambda](services-ddb-windows.md)
+ [Lambda parameters for Amazon DynamoDB event source mappings](services-ddb-params.md)
+ [Using event filtering with a DynamoDB event source](with-ddb-filtering.md)
+ [Tutorial: Using Amazon Lambda with Amazon DynamoDB streams](with-ddb-example.md)

## Polling and batching streams


Lambda polls shards in your DynamoDB stream for records at a base rate of 4 times per second. When records are available, Lambda invokes your function and waits for the result. If processing succeeds, Lambda resumes polling until it receives more records.

By default, Lambda invokes your function as soon as records are available. If the batch that Lambda reads from the event source has only one record in it, Lambda sends only one record to the function. To avoid invoking the function with a small number of records, you can tell the event source to buffer records for up to 5 minutes by configuring a *batching window*. Before invoking the function, Lambda continues to read records from the event source until it has gathered a full batch, the batching window expires, or the batch reaches the payload limit of 6 MB. For more information, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

**Warning**  
Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see [How do I make my Lambda function idempotent](https://repost.aws/knowledge-center/lambda-function-idempotent) in the Amazon Knowledge Center.

Lambda doesn't wait for any configured [extensions](lambda-extensions.md) to complete before sending the next batch for processing. In other words, your extensions may continue to run as Lambda processes the next batch of records. This can cause throttling issues if you breach any of your account's [concurrency](lambda-concurrency.md) settings or limits. To detect whether this is a potential issue, monitor your functions and check whether you're seeing higher [concurrency metrics](monitoring-concurrency.md#general-concurrency-metrics) than expected for your event source mapping. Due to short times in between invokes, Lambda may briefly report higher concurrency usage than the number of shards. This can be true even for Lambda functions without extensions.

Configure the [ParallelizationFactor](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-ParallelizationFactor) setting to process one shard of a DynamoDB stream with more than one Lambda invocation simultaneously. You can specify the number of concurrent batches that Lambda polls from a shard via a parallelization factor from 1 (default) to 10. For example, when you set `ParallelizationFactor` to 2, you can have 200 concurrent Lambda invocations at maximum to process 100 DynamoDB stream shards (though in practice, you may see different values for the `ConcurrentExecutions` metric). This helps scale up the processing throughput when the data volume is volatile and the [IteratorAge](monitoring-metrics-types.md#performance-metrics) is high. When you increase the number of concurrent batches per shard, Lambda still ensures in-order processing at the item (partition and sort key) level.
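The relationship between shard count, parallelization factor, and peak concurrency described above can be sketched as a simple calculation. This is an illustrative helper, not part of any Lambda API:

```python
def max_concurrent_invocations(shard_count: int, parallelization_factor: int) -> int:
    """Upper bound on concurrent Lambda invocations polling a DynamoDB stream."""
    if not 1 <= parallelization_factor <= 10:
        raise ValueError("ParallelizationFactor must be between 1 and 10")
    # Each shard can be processed by up to `parallelization_factor` concurrent batches.
    return shard_count * parallelization_factor

# 100 shards with a parallelization factor of 2 allows up to 200 concurrent invocations.
print(max_concurrent_invocations(100, 2))  # prints 200
```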

## Polling and stream starting positions


Be aware that stream polling during event source mapping creation and updates is eventually consistent.
+ During event source mapping creation, it may take several minutes to start polling events from the stream.
+ During event source mapping updates, it may take several minutes to stop and restart polling events from the stream.

This behavior means that if you specify `LATEST` as the starting position for the stream, the event source mapping could miss events during creation or updates. To ensure that no events are missed, specify the stream starting position as `TRIM_HORIZON`.

## Simultaneous readers of a shard in DynamoDB Streams
Simultaneous readers

For single-Region tables that are not global tables, you can design for up to two Lambda functions to read from the same DynamoDB Streams shard at the same time. Exceeding this limit can result in request throttling. For global tables, we recommend you limit the number of simultaneous functions to one to avoid request throttling.

## Example event


**Example**  

```
{
  "Records": [
    {
      "eventID": "1",
      "eventVersion": "1.0",
      "dynamodb": {
        "Keys": {
          "Id": {
            "N": "101"
          }
        },
        "NewImage": {
          "Message": {
            "S": "New item!"
          },
          "Id": {
            "N": "101"
          }
        },
        "StreamViewType": "NEW_AND_OLD_IMAGES",
        "SequenceNumber": "111",
        "SizeBytes": 26
      },
      "awsRegion": "us-west-2",
      "eventName": "INSERT",
      "eventSourceARN": "arn:aws-cn:dynamodb:us-west-2:123456789012:table/my-table/stream/2024-06-10T19:26:16.525",
      "eventSource": "aws:dynamodb"
    },
    {
      "eventID": "2",
      "eventVersion": "1.0",
      "dynamodb": {
        "OldImage": {
          "Message": {
            "S": "New item!"
          },
          "Id": {
            "N": "101"
          }
        },
        "SequenceNumber": "222",
        "Keys": {
          "Id": {
            "N": "101"
          }
        },
        "SizeBytes": 59,
        "NewImage": {
          "Message": {
            "S": "This item has changed"
          },
          "Id": {
            "N": "101"
          }
        },
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "awsRegion": "us-west-2",
      "eventName": "MODIFY",
      "eventSourceARN": "arn:aws-cn:dynamodb:us-west-2:123456789012:table/my-table/stream/2024-06-10T19:26:16.525",
      "eventSource": "aws:dynamodb"
    }
  ]}
```
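A handler that consumes this event shape only needs to walk the `Records` array and unwrap the DynamoDB attribute-value maps (`N` for numbers, `S` for strings). A minimal Python sketch, with field names taken from the example event above; `summarize_records` is a hypothetical helper:

```python
def summarize_records(event: dict) -> list:
    """Extract (eventName, key Id, new Message) tuples from a DynamoDB stream event."""
    summaries = []
    for record in event.get("Records", []):
        ddb = record.get("dynamodb", {})
        key_id = ddb.get("Keys", {}).get("Id", {}).get("N")            # number attribute
        message = ddb.get("NewImage", {}).get("Message", {}).get("S")  # string attribute
        summaries.append((record.get("eventName"), key_id, message))
    return summaries
```

For the first record in the example event, this yields `("INSERT", "101", "New item!")`.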

# Process DynamoDB records with Lambda
Create mapping

Create an event source mapping to tell Lambda to send records from your stream to a Lambda function. You can create multiple event source mappings to process the same data with multiple Lambda functions, or to process items from multiple streams with a single function.

You can configure event source mappings to process records from a stream in a different Amazon Web Services account. To learn more, see [Creating a cross-account event source mapping](#services-dynamodb-eventsourcemapping-cross-account).

To configure your function to read from DynamoDB Streams, attach the [AWSLambdaDynamoDBExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaDynamoDBExecutionRole.html) Amazon managed policy to your execution role and then create a **DynamoDB** trigger.

**To add permissions and create a trigger**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function.

1. Choose the **Configuration** tab, and then choose **Permissions**.

1. Under **Role name**, choose the link to your execution role. This link opens the role in the IAM console.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/execution-role.png)

1. Choose **Add permissions**, and then choose **Attach policies**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/attach-policies.png)

1. In the search field, enter `AWSLambdaDynamoDBExecutionRole`. Add this policy to your execution role. This is an Amazon managed policy that contains the permissions your function needs to read from the DynamoDB stream. For more information about this policy, see [AWSLambdaDynamoDBExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaDynamoDBExecutionRole.html) in the *Amazon Managed Policy Reference*.

1. Go back to your function in the Lambda console. Under **Function overview**, choose **Add trigger**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/add-trigger.png)

1. Choose a trigger type.

1. Configure the required options, and then choose **Add**.

Lambda supports the following options for DynamoDB event sources:

**Event source options**
+ **DynamoDB table** – The DynamoDB table to read records from.
+ **Batch size** – The number of records to send to the function in each batch, up to 10,000. Lambda passes all of the records in the batch to the function in a single call, as long as the total size of the events doesn't exceed the [payload limit](gettingstarted-limits.md) for synchronous invocation (6 MB).
+ **Batch window** – Specify the maximum amount of time to gather records before invoking the function, in seconds.
+ **Starting position** – Process only new records, or all existing records.
  + **Latest** – Process new records that are added to the stream.
  + **Trim horizon** – Process all records in the stream.

  After processing any existing records, the function is caught up and continues to process new records.
+ **On-failure destination** – A standard SQS queue or standard SNS topic for records that can't be processed. When Lambda discards a batch of records that's too old or has exhausted all retries, Lambda sends details about the batch to the queue or topic.
+ **Retry attempts** – The maximum number of times that Lambda retries when the function returns an error. This doesn't apply to service errors or throttles where the batch didn't reach the function.
+ **Maximum age of record** – The maximum age of a record that Lambda sends to your function.
+ **Split batch on error** – When the function returns an error, split the batch into two before retrying. Your original batch size setting remains unchanged.
+ **Concurrent batches per shard** – Concurrently process multiple batches from the same shard.
+ **Enabled** – Set to true to enable the event source mapping. Set to false to stop processing records. Lambda keeps track of the last record processed and resumes processing from that point when the mapping is reenabled.
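The console options above correspond to parameters of the Lambda `CreateEventSourceMapping` API operation. A hedged sketch of assembling the request as a plain dictionary (parameter names are from the Lambda API reference; the ARNs, values, and helper name are placeholders):

```python
def build_esm_config(function_name: str, stream_arn: str, dlq_arn: str) -> dict:
    """Assemble CreateEventSourceMapping parameters for a DynamoDB stream source."""
    return {
        "FunctionName": function_name,
        "EventSourceArn": stream_arn,
        "StartingPosition": "TRIM_HORIZON",      # process all existing records
        "BatchSize": 100,                        # up to 10,000 records per batch
        "MaximumBatchingWindowInSeconds": 5,     # batch window
        "MaximumRetryAttempts": 3,               # retry attempts
        "MaximumRecordAgeInSeconds": 3600,       # maximum age of record
        "BisectBatchOnFunctionError": True,      # split batch on error
        "ParallelizationFactor": 2,              # concurrent batches per shard
        "DestinationConfig": {"OnFailure": {"Destination": dlq_arn}},
        "Enabled": True,
    }

# With the boto3 SDK (assumed available in your environment), pass the dict to the API:
# boto3.client("lambda").create_event_source_mapping(**build_esm_config(...))
```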

**Note**  
You are not charged for GetRecords API calls invoked by Lambda as part of DynamoDB triggers.

To manage the event source configuration later, choose the trigger in the designer.

## Creating a cross-account event source mapping
Cross-account mappings

Amazon DynamoDB now supports [resource-based policies](https://docs.amazonaws.cn/amazondynamodb/latest/developerguide/access-control-resource-based.html). With this capability, you can process data from a DynamoDB stream in one Amazon Web Services account with a Lambda function in another account.

To create an event source mapping for your Lambda function using a DynamoDB stream in a different Amazon Web Services account, you must configure the stream using a resource-based policy to give your Lambda function permission to read records. To learn how to configure your stream for cross-account access, see [Share access with cross-account Lambda functions](https://docs.amazonaws.cn/amazondynamodb/latest/developerguide/rbac-cross-account-access.html#rbac-analyze-cross-account-lambda-access) in the *Amazon DynamoDB Developer Guide*.

Once you have configured your stream with a resource-based policy that gives your Lambda function the required permissions, create the event source mapping with your cross-account stream ARN. You can find the stream ARN under the table's **Exports and streams** tab in the cross-account DynamoDB console. 

When using the Lambda console, paste the stream ARN directly into the DynamoDB table input field in the event source mapping creation page.

**Note**  
Cross-region triggers are not supported.

# Configuring partial batch response with DynamoDB and Lambda
Batch item failures

When consuming and processing streaming data from an event source, by default Lambda checkpoints to the highest sequence number of a batch only when the batch is a complete success. Lambda treats all other results as a complete failure and retries processing the batch up to the retry limit. To allow for partial successes while processing batches from a stream, turn on `ReportBatchItemFailures`. Allowing partial successes can help to reduce the number of retries on a record, though it doesn't entirely eliminate the possibility of a successfully processed record being retried.

To turn on `ReportBatchItemFailures`, include the enum value **ReportBatchItemFailures** in the [FunctionResponseTypes](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-FunctionResponseTypes) list. This list indicates which response types are enabled for your function. You can configure this list when you [create](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) or [update](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) an event source mapping.
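For example, with the boto3 SDK the response type is enabled by including `ReportBatchItemFailures` in `FunctionResponseTypes` when creating or updating the mapping. A sketch under those assumptions; the function name and ARN are placeholders, and the API call itself is shown commented out:

```python
def enable_report_batch_item_failures(params: dict) -> dict:
    """Add the ReportBatchItemFailures response type to event source mapping parameters."""
    types = set(params.get("FunctionResponseTypes", []))
    types.add("ReportBatchItemFailures")
    params["FunctionResponseTypes"] = sorted(types)
    return params

params = {
    "FunctionName": "ProcessDynamoDBRecords",  # placeholder
    "EventSourceArn": "arn:aws:dynamodb:us-west-2:123456789012:table/my-table/stream/2024-06-10T19:26:16.525",
    "StartingPosition": "TRIM_HORIZON",
}
params = enable_report_batch_item_failures(params)
# boto3.client("lambda").create_event_source_mapping(**params)  # assumes boto3 is available
```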

**Note**  
Even when your function code returns partial batch failure responses, these responses will not be processed by Lambda unless the `ReportBatchItemFailures` feature is explicitly turned on for your event source mapping.

## Report syntax


When configuring reporting on batch item failures, the `StreamsEventResponse` class is returned with a list of batch item failures. You can use a `StreamsEventResponse` object to return the sequence number of the first failed record in the batch. You can also create your own custom class using the correct response syntax. The following JSON structure shows the required response syntax:

```
{ 
  "batchItemFailures": [ 
        {
            "itemIdentifier": "<SequenceNumber>"
        }
    ]
}
```

**Note**  
If the `batchItemFailures` array contains multiple items, Lambda uses the record with the lowest sequence number as the checkpoint. Lambda then retries all records starting from that checkpoint.
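In Python, producing a syntactically valid response amounts to wrapping the failed sequence numbers, and the checkpointing behavior described in the note can be modeled as taking the numerically lowest identifier. A minimal sketch with illustrative helper names:

```python
def build_partial_batch_response(failed_sequence_numbers: list) -> dict:
    """Wrap failed sequence numbers in the required batchItemFailures shape."""
    return {
        "batchItemFailures": [
            {"itemIdentifier": seq} for seq in failed_sequence_numbers
        ]
    }

def checkpoint(response: dict):
    """Lambda retries all records starting from the lowest sequence number reported."""
    failures = response.get("batchItemFailures") or []
    ids = [f["itemIdentifier"] for f in failures]
    return min(ids, key=int) if ids else None
```

Given failures `["222", "111"]`, Lambda would checkpoint at `"111"` and retry from there.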

## Success and failure conditions


Lambda treats a batch as a complete success if you return any of the following:
+ An empty `batchItemFailures` list
+ A null `batchItemFailures` list
+ An empty `EventResponse`
+ A null `EventResponse`

Lambda treats a batch as a complete failure if you return any of the following:
+ An empty string `itemIdentifier`
+ A null `itemIdentifier`
+ An `itemIdentifier` with a bad key name

Lambda retries failures based on your retry strategy.

## Bisecting a batch


If your invocation fails and `BisectBatchOnFunctionError` is turned on, the batch is bisected regardless of your `ReportBatchItemFailures` setting.

When a partial batch success response is received and both `BisectBatchOnFunctionError` and `ReportBatchItemFailures` are turned on, the batch is bisected at the returned sequence number and Lambda retries only the remaining records.

To simplify the implementation of partial batch response logic, consider using the [Batch Processor utility](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/) from Powertools for Amazon Lambda, which automatically handles these complexities for you.

Here are some examples of function code that return the list of failed message IDs in the batch:

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using .NET.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Text.Json;
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace AWSLambda_DDB;

public class Function
{
    public StreamsEventResponse FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context)

    {
        context.Logger.LogInformation($"Beginning to process {dynamoEvent.Records.Count} records...");
        List<StreamsEventResponse.BatchItemFailure> batchItemFailures = new List<StreamsEventResponse.BatchItemFailure>();
        StreamsEventResponse streamsEventResponse = new StreamsEventResponse();

        foreach (var record in dynamoEvent.Records)
        {
            try
            {
                var sequenceNumber = record.Dynamodb.SequenceNumber;
                context.Logger.LogInformation(sequenceNumber);
            }
            catch (Exception ex)
            {
                context.Logger.LogError(ex.Message);
                batchItemFailures.Add(new StreamsEventResponse.BatchItemFailure() { ItemIdentifier = record.Dynamodb.SequenceNumber });
            }
        }

        if (batchItemFailures.Count > 0)
        {
            streamsEventResponse.BatchItemFailures = batchItemFailures;
        }

        context.Logger.LogInformation("Stream processing complete.");
        return streamsEventResponse;
    }
}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using Go.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

type BatchItemFailure struct {
	ItemIdentifier string `json:"ItemIdentifier"`
}

type BatchResult struct {
	BatchItemFailures []BatchItemFailure `json:"BatchItemFailures"`
}

func HandleRequest(ctx context.Context, event events.DynamoDBEvent) (*BatchResult, error) {
	var batchItemFailures []BatchItemFailure
	curRecordSequenceNumber := ""

	for _, record := range event.Records {
		// Process your record
		curRecordSequenceNumber = record.Change.SequenceNumber
	}

	if curRecordSequenceNumber != "" {
		batchItemFailures = append(batchItemFailures, BatchItemFailure{ItemIdentifier: curRecordSequenceNumber})
	}
	
	batchResult := BatchResult{
		BatchItemFailures: batchItemFailures,
	}

	return &batchResult, nil
}

func main() {
	lambda.Start(HandleRequest)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using Java.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse;
import com.amazonaws.services.lambda.runtime.events.models.dynamodb.StreamRecord;

import java.util.ArrayList;
import java.util.List;

public class ProcessDynamodbRecords implements RequestHandler<DynamodbEvent, StreamsEventResponse> {

    @Override
    public StreamsEventResponse handleRequest(DynamodbEvent input, Context context) {

        List<StreamsEventResponse.BatchItemFailure> batchItemFailures = new ArrayList<>();
        String curRecordSequenceNumber = "";

        for (DynamodbEvent.DynamodbStreamRecord dynamodbStreamRecord : input.getRecords()) {
            try {
                // Process your record
                StreamRecord dynamodbRecord = dynamodbStreamRecord.getDynamodb();
                curRecordSequenceNumber = dynamodbRecord.getSequenceNumber();
            } catch (Exception e) {
                /* Since we are working with streams, we can return the failed item immediately.
                   Lambda will immediately begin to retry processing from this failed item onwards. */
                batchItemFailures.add(new StreamsEventResponse.BatchItemFailure(curRecordSequenceNumber));
                return new StreamsEventResponse(batchItemFailures);
            }
        }

        return new StreamsEventResponse();
    }
}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using JavaScript.  

```
export const handler = async (event) => {
  const records = event.Records;
  let curRecordSequenceNumber = "";

  for (const record of records) {
    try {
      // Process your record
      curRecordSequenceNumber = record.dynamodb.SequenceNumber;
    } catch (e) {
      // Return failed record's sequence number
      return { batchItemFailures: [{ itemIdentifier: curRecordSequenceNumber }] };
    }
  }

  return { batchItemFailures: [] };
};
```
Reporting DynamoDB batch item failures with Lambda using TypeScript.  

```
import {
  DynamoDBBatchResponse,
  DynamoDBBatchItemFailure,
  DynamoDBStreamEvent,
} from "aws-lambda";

export const handler = async (
  event: DynamoDBStreamEvent
): Promise<DynamoDBBatchResponse> => {
  const batchItemFailures: DynamoDBBatchItemFailure[] = [];
  let curRecordSequenceNumber = "";

  for (const record of event.Records) {
    try {
      // Process your record
      curRecordSequenceNumber = record.dynamodb?.SequenceNumber ?? "";
    } catch (e) {
      // Return the failed record's sequence number
      batchItemFailures.push({ itemIdentifier: curRecordSequenceNumber });
      return { batchItemFailures };
    }
  }

  return { batchItemFailures };
};
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using PHP.  

```
<?php

# using bref/bref and bref/logger for simplicity

use Bref\Context\Context;
use Bref\Event\DynamoDb\DynamoDbEvent;
use Bref\Event\Handler as StdHandler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler implements StdHandler
{
    private StderrLogger $logger;
    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    /**
     * @throws JsonException
     * @throws \Bref\Event\InvalidLambdaEvent
     */
    public function handle(mixed $event, Context $context): array
    {
        $dynamoDbEvent = new DynamoDbEvent($event);
        $this->logger->info("Processing records");

        $records = $dynamoDbEvent->getRecords();
        $failedRecords = [];
        foreach ($records as $record) {
            try {
                $data = $record->getData();
                $this->logger->info(json_encode($data));
                // TODO: Do interesting work based on the new data
            } catch (Exception $e) {
                $this->logger->error($e->getMessage());
                // failed processing the record
                $failedRecords[] = $record->getSequenceNumber();
            }
        }
        $totalRecords = count($records);
        $this->logger->info("Successfully processed $totalRecords records");

        // change format for the response
        $failures = array_map(
            fn(string $sequenceNumber) => ['itemIdentifier' => $sequenceNumber],
            $failedRecords
        );

        return [
            'batchItemFailures' => $failures
        ];
    }
}

$logger = new StderrLogger();
return new Handler($logger);
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using Python.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
def handler(event, context):
    records = event.get("Records")
    curRecordSequenceNumber = ""
    
    for record in records:
        try:
            # Process your record
            curRecordSequenceNumber = record["dynamodb"]["SequenceNumber"]
        except Exception as e:
            # Return failed record's sequence number
            return {"batchItemFailures":[{"itemIdentifier": curRecordSequenceNumber}]}

    return {"batchItemFailures":[]}
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using Ruby.  

```
def lambda_handler(event:, context:)
  records = event["Records"]
  cur_record_sequence_number = ""

  records.each do |record|
    begin
      # Process your record
      cur_record_sequence_number = record["dynamodb"]["SequenceNumber"]
    rescue StandardError => e
      # Return failed record's sequence number
      return {"batchItemFailures" => [{"itemIdentifier" => cur_record_sequence_number}]}
    end
  end

  {"batchItemFailures" => []}
end
```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda-with-batch-item-handling) repository. 
Reporting DynamoDB batch item failures with Lambda using Rust.  

```
use aws_lambda_events::{
    event::dynamodb::{Event, EventRecord, StreamRecord},
    streams::{DynamoDbBatchItemFailure, DynamoDbEventResponse},
};
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Process the stream record
fn process_record(record: &EventRecord) -> Result<(), Error> {
    let stream_record: &StreamRecord = &record.change;

    // process your stream record here...
    tracing::info!("Data: {:?}", stream_record);

    Ok(())
}

/// Main Lambda handler here...
async fn function_handler(event: LambdaEvent<Event>) -> Result<DynamoDbEventResponse, Error> {
    let mut response = DynamoDbEventResponse {
        batch_item_failures: vec![],
    };

    let records = &event.payload.records;

    if records.is_empty() {
        tracing::info!("No records found. Exiting.");
        return Ok(response);
    }

    for record in records {
        tracing::info!("EventId: {}", record.event_id);

        // Couldn't find a sequence number
        if record.change.sequence_number.is_none() {
            response.batch_item_failures.push(DynamoDbBatchItemFailure {
                item_identifier: Some("".to_string()),
            });
            return Ok(response);
        }

        // Process your record here...
        if process_record(record).is_err() {
            response.batch_item_failures.push(DynamoDbBatchItemFailure {
                item_identifier: record.change.sequence_number.clone(),
            });
            /* Since we are working with streams, we can return the failed item immediately.
            Lambda will immediately begin to retry processing from this failed item onwards. */
            return Ok(response);
        }
    }

    tracing::info!("Successfully processed {} record(s)", records.len());

    Ok(response)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
```

------

## Using Powertools for Amazon Lambda batch processor


The batch processor utility from Powertools for Amazon Lambda automatically handles partial batch response logic, reducing the complexity of implementing batch failure reporting. Here are examples using the batch processor:

**Python**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/).
Processing DynamoDB stream records with Amazon Lambda batch processor.  

```
import json
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType, process_partial_response
from aws_lambda_powertools.utilities.data_classes import DynamoDBStreamEvent
from aws_lambda_powertools.utilities.typing import LambdaContext

processor = BatchProcessor(event_type=EventType.DynamoDBStreams)
logger = Logger()

def record_handler(record):
    logger.info(record)
    # Your business logic here
    # Raise an exception to mark this record as failed
    
def lambda_handler(event, context: LambdaContext):
    return process_partial_response(
        event=event, 
        record_handler=record_handler, 
        processor=processor,
        context=context
    )
```

**TypeScript**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.aws.amazon.com/powertools/typescript/latest/features/batch/).
Processing DynamoDB stream records with Amazon Lambda batch processor.  

```
import { BatchProcessor, EventType, processPartialResponse } from '@aws-lambda-powertools/batch';
import { Logger } from '@aws-lambda-powertools/logger';
import type { DynamoDBStreamEvent, Context } from 'aws-lambda';

const processor = new BatchProcessor(EventType.DynamoDBStreams);
const logger = new Logger();

const recordHandler = async (record: any): Promise<void> => {
    logger.info('Processing record', { record });
    // Your business logic here
    // Throw an error to mark this record as failed
};

export const handler = async (event: DynamoDBStreamEvent, context: Context) => {
    return processPartialResponse(event, recordHandler, processor, {
        context,
    });
};
```

**Java**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.powertools.aws.dev/lambda/java/latest/utilities/batch/).
Processing DynamoDB stream records with Amazon Lambda batch processor.  

```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse;
import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder;
import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler;

public class DynamoDBStreamBatchHandler implements RequestHandler<DynamodbEvent, StreamsEventResponse> {

    private final BatchMessageHandler<DynamodbEvent, StreamsEventResponse> handler;

    public DynamoDBStreamBatchHandler() {
        handler = new BatchMessageHandlerBuilder()
                .withDynamoDbBatchHandler()
                .buildWithRawMessageHandler(this::processMessage);
    }

    @Override
    public StreamsEventResponse handleRequest(DynamodbEvent ddbEvent, Context context) {
        return handler.processBatch(ddbEvent, context);
    }

    private void processMessage(DynamodbEvent.DynamodbStreamRecord dynamodbStreamRecord, Context context) {
        // Process the change record
    }
}
```

**.NET**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.aws.amazon.com/powertools/dotnet/utilities/batch-processing/).
Processing DynamoDB stream records with Amazon Lambda batch processor.  

```
using System;
using System.Threading;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;
using Amazon.Lambda.Serialization.SystemTextJson;
using AWS.Lambda.Powertools.BatchProcessing;

[assembly: LambdaSerializer(typeof(DefaultLambdaJsonSerializer))]

namespace HelloWorld;

public class Customer
{
    public string? CustomerId { get; set; }
    public string? Name { get; set; }
    public string? Email { get; set; }
    public DateTime CreatedAt { get; set; }
}

internal class TypedDynamoDbRecordHandler : ITypedRecordHandler<Customer> 
{
    public async Task<RecordHandlerResult> HandleAsync(Customer customer, CancellationToken cancellationToken)
    {
        if (string.IsNullOrEmpty(customer.Email)) 
        {
            throw new ArgumentException("Customer email is required");
        }

        return await Task.FromResult(RecordHandlerResult.None); 
    }
}

public class Function
{
    [BatchProcessor(TypedRecordHandler = typeof(TypedDynamoDbRecordHandler))]
    public BatchItemFailuresResponse HandlerUsingTypedAttribute(DynamoDBEvent _)
    {
        return TypedDynamoDbStreamBatchProcessor.Result.BatchItemFailuresResponse; 
    }
}
```

# Retain discarded records for a DynamoDB event source in Lambda
Error handling

Error handling for DynamoDB event source mappings depends on whether the error occurs before the function is invoked or during function invocation:
+ **Before invocation:** If a Lambda event source mapping is unable to invoke the function due to throttling or other issues, it retries until the records expire or exceed the maximum age configured on the event source mapping ([MaximumRecordAgeInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRecordAgeInSeconds)).
+ **During invocation:** If the function is invoked but returns an error, Lambda retries until the records expire, exceed the maximum age ([MaximumRecordAgeInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRecordAgeInSeconds)), or reach the configured retry quota ([MaximumRetryAttempts](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRetryAttempts)). For function errors, you can also configure [BisectBatchOnFunctionError](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-response-BisectBatchOnFunctionError), which splits a failed batch into two smaller batches, isolating bad records and avoiding timeouts. Splitting batches doesn't consume the retry quota.

If the error handling measures fail, Lambda discards the records and continues processing batches from the stream. With the default settings, this means that a bad record can block processing on the affected shard for up to one day. To avoid this, configure your function's event source mapping with a reasonable number of retries and a maximum record age that fits your use case.
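
The effect of bisecting on function error can be sketched as follows. This is an illustration of the splitting strategy only, not Lambda's internal implementation; `process` is a hypothetical per-batch handler that raises an error when its batch contains a bad record:

```python
def bisect_and_process(batch, process):
    """Recursively split a failing batch to isolate bad records.

    `process` raises an exception if any record in the batch is bad.
    Returns the list of records that could not be processed.
    """
    try:
        process(batch)
        return []
    except Exception:
        if len(batch) == 1:
            return batch  # the bad record is now isolated
        mid = len(batch) // 2
        return (bisect_and_process(batch[:mid], process)
                + bisect_and_process(batch[mid:], process))

def process(batch):
    # Hypothetical handler: fails when the batch contains a bad record.
    if any(record == "bad" for record in batch):
        raise ValueError("bad record in batch")

failed = bisect_and_process(["a", "b", "bad", "c"], process)
print(failed)  # ['bad']
```

Each split retries the two halves independently, so a single bad record is isolated in a logarithmic number of retries while the healthy records still get processed.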

## Configuring destinations for failed invocations
On-failure destinations

To retain records of failed event source mapping invocations, add a destination to your function's event source mapping. Each record sent to the destination is a JSON document containing metadata about the failed invocation. For Amazon S3 destinations, Lambda also sends the entire invocation record along with the metadata. You can configure an Amazon SNS topic, Amazon SQS queue, Amazon S3 bucket, or Kafka topic as a destination.

With Amazon S3 destinations, you can use the [Amazon S3 Event Notifications](https://docs.amazonaws.cn/) feature to receive notifications when objects are uploaded to your destination S3 bucket. You can also configure S3 Event Notifications to invoke another Lambda function to perform automated processing on failed batches.

Your execution role must have permissions for the destination:
+ **For an SQS destination:** [sqs:SendMessage](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html)
+ **For an SNS destination:** [sns:Publish](https://docs.amazonaws.cn/sns/latest/api/API_Publish.html)
+ **For an S3 destination:** [s3:PutObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html) and [s3:ListBucket](https://docs.amazonaws.cn/AmazonS3/latest/API/ListObjectsV2.html)
+ **For a Kafka destination:** [kafka-cluster:WriteData](https://docs.aws.amazon.com/msk/latest/developerguide/kafka-actions.html)

You can configure a Kafka topic as an on-failure destination for your Kafka event source mappings. When Lambda can't process records after exhausting retry attempts or when records exceed the maximum age, Lambda sends the failed records to the specified Kafka topic for later processing. Refer to [Using a Kafka topic as an on-failure destination](kafka-on-failure-destination.md).

If you've enabled encryption with your own KMS key for an S3 destination, your function's execution role must also have permission to call [kms:GenerateDataKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_GenerateDataKey.html). If the KMS key and S3 bucket destination are in a different account from your Lambda function and execution role, configure the KMS key to trust the execution role to allow kms:GenerateDataKey.
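
For example, the following policy statement grants the execution role the required permission on the destination bucket's key. The key ARN shown here is a placeholder; replace it with your own key's ARN:

```
{
    "Effect": "Allow",
    "Action": "kms:GenerateDataKey",
    "Resource": "arn:aws-cn:kms:us-west-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}
```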

To configure an on-failure destination using the console, follow these steps:

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Under **Function overview**, choose **Add destination**.

1. For **Source**, choose **Event source mapping invocation**.

1. For **Event source mapping**, choose an event source that's configured for this function.

1. For **Condition**, select **On failure**. For event source mapping invocations, this is the only accepted condition.

1. For **Destination type**, choose the destination type that Lambda sends invocation records to.

1. For **Destination**, choose a resource.

1. Choose **Save**.

You can also configure an on-failure destination using the Amazon Command Line Interface (Amazon CLI). For example, the following [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) command adds an event source mapping with an SQS on-failure destination to `MyFunction`:

```
aws lambda create-event-source-mapping \
--function-name "MyFunction" \
--event-source-arn arn:aws-cn:dynamodb:us-west-2:123456789012:table/my-table/stream/2024-06-10T19:26:16.525 \
--destination-config '{"OnFailure": {"Destination": "arn:aws-cn:sqs:us-east-1:123456789012:dest-queue"}}'
```

The following [update-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-event-source-mapping.html) command updates an event source mapping to send failed invocation records to an SNS destination after two retry attempts, or if the records are more than an hour old.

```
aws lambda update-event-source-mapping \
--uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b \
--maximum-retry-attempts 2 \
--maximum-record-age-in-seconds 3600 \
--destination-config '{"OnFailure": {"Destination": "arn:aws-cn:sns:us-east-1:123456789012:dest-topic"}}'
```

Updated settings are applied asynchronously and aren't reflected in the output until the process completes. Use the [get-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/get-event-source-mapping.html) command to view the current status.
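
For example, run the following command with your mapping's UUID and check the `State` and `LastProcessingResult` fields in the output:

```
aws lambda get-event-source-mapping \
--uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b
```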

To remove a destination, supply an empty string as the argument to the `destination-config` parameter:

```
aws lambda update-event-source-mapping \
--uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b \
--destination-config '{"OnFailure": {"Destination": ""}}'
```

### Security best practices for Amazon S3 destinations


Deleting an S3 bucket that's configured as a destination without removing the destination from your function's configuration can create a security risk. If another user knows your destination bucket's name, they can recreate the bucket in their Amazon Web Services account. Records of failed invocations will be sent to their bucket, potentially exposing data from your function.

**Warning**  
To ensure that invocation records from your function can't be sent to an S3 bucket in another Amazon Web Services account, add a condition to your function's execution role that limits `s3:PutObject` permissions to buckets in your account. 

The following example shows an IAM policy that limits your function's `s3:PutObject` permissions to buckets in your account. This policy also gives Lambda the `s3:ListBucket` permission it needs to use an S3 bucket as a destination.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "S3BucketResourceAccountWrite",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*/*",
                "arn:aws:s3:::*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:ResourceAccount": "111122223333"
                }
            }
        }
    ]
}
```

To add a permissions policy to your function's execution role using the Amazon Web Services Management Console or Amazon CLI, refer to the instructions in the following procedures:

------
#### [ Console ]

**To add a permissions policy to a function's execution role (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the Lambda function whose execution role you want to modify.

1. In the **Configuration** tab, select **Permissions**.

1. In the **Execution role** tab, select your function's **Role name** to open the role's IAM console page.

1. Add a permissions policy to the role by doing the following:

   1. In the **Permissions policies** pane, choose **Add permissions** and select **Create inline policy**.

   1. In **Policy editor**, select **JSON**.

   1. Paste the policy you want to add into the editor (replacing the existing JSON), and then choose **Next**.

   1. Under **Policy details**, enter a **Policy name**.

   1. Choose **Create policy**.

------
#### [ Amazon CLI ]

**To add a permissions policy to a function's execution role (CLI)**

1. Create a JSON policy document with the required permissions and save it in a local directory.

1. Use the IAM `put-role-policy` CLI command to add the permissions to your function's execution role. Run the following command from the directory you saved your JSON policy document in and replace the role name, policy name, and policy document with your own values.

   ```
   aws iam put-role-policy \
   --role-name my_lambda_role \
   --policy-name LambdaS3DestinationPolicy \
   --policy-document file://my_policy.json
   ```

------

### Example Amazon SNS and Amazon SQS invocation record


The following example shows an invocation record Lambda sends to an SQS or SNS destination for a DynamoDB stream.

```
{
    "requestContext": {
        "requestId": "316aa6d0-8154-xmpl-9af7-85d5f4a6bc81",
        "functionArn": "arn:aws-cn:lambda:us-west-2:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:13:49.717Z",
    "DDBStreamBatchInfo": {
        "shardId": "shardId-00000001573689847184-864758bb",
        "startSequenceNumber": "800000000003126276362",
        "endSequenceNumber": "800000000003126276362",
        "approximateArrivalOfFirstRecord": "2019-11-14T00:13:19Z",
        "approximateArrivalOfLastRecord": "2019-11-14T00:13:19Z",
        "batchSize": 1,
        "streamArn": "arn:aws-cn:dynamodb:us-west-2:123456789012:table/mytable/stream/2019-11-14T00:04:06.388"
    }
}
```

You can use this information to retrieve the affected records from the stream for troubleshooting. The actual records aren't included, so you must process this record and retrieve them from the stream before they expire and are lost.
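
As a sketch of that recovery step, you can build a `GetShardIterator` request from the `DDBStreamBatchInfo` fields shown above. The helper below is an illustration; the commented portion assumes the boto3 DynamoDB Streams client and AWS credentials:

```python
def shard_iterator_request(invocation_record):
    """Build GetShardIterator parameters from a failed-invocation record."""
    info = invocation_record["DDBStreamBatchInfo"]
    return {
        "StreamArn": info["streamArn"],
        "ShardId": info["shardId"],
        # Start at the first record of the failed batch
        "ShardIteratorType": "AT_SEQUENCE_NUMBER",
        "SequenceNumber": info["startSequenceNumber"],
    }

# Usage sketch (requires boto3 and AWS credentials):
# import boto3
# streams = boto3.client("dynamodbstreams")
# params = shard_iterator_request(invocation_record)
# iterator = streams.get_shard_iterator(**params)["ShardIterator"]
# records = streams.get_records(ShardIterator=iterator)["Records"]
```

Remember that the stream retains records for at most 24 hours, so this processing must happen before the records expire.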

### Example Amazon S3 invocation record


The following example shows an invocation record Lambda sends to an S3 bucket for a DynamoDB stream. In addition to all of the fields from the previous example for SQS and SNS destinations, the `payload` field contains the original invocation record as an escaped JSON string.

```
{
    "requestContext": {
        "requestId": "316aa6d0-8154-xmpl-9af7-85d5f4a6bc81",
        "functionArn": "arn:aws-cn:lambda:us-west-2:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:13:49.717Z",
    "DDBStreamBatchInfo": {
        "shardId": "shardId-00000001573689847184-864758bb",
        "startSequenceNumber": "800000000003126276362",
        "endSequenceNumber": "800000000003126276362",
        "approximateArrivalOfFirstRecord": "2019-11-14T00:13:19Z",
        "approximateArrivalOfLastRecord": "2019-11-14T00:13:19Z",
        "batchSize": 1,
        "streamArn": "arn:aws-cn:dynamodb:us-west-2:123456789012:table/mytable/stream/2019-11-14T00:04:06.388"
    },
    "payload": "<Whole Event>" // Only available in S3
}
```

The S3 object containing the invocation record uses the following naming convention:

```
aws/lambda/<ESM-UUID>/<shardID>/YYYY/MM/DD/YYYY-MM-DDTHH.MM.SS-<Random UUID>
```
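
For example, a small helper can recover the event source mapping UUID and shard ID from an object key. This is a sketch that assumes keys follow the format above:

```python
def parse_destination_key(key):
    """Extract fields from an invocation-record object key.

    Expected format:
    aws/lambda/<ESM-UUID>/<shardID>/YYYY/MM/DD/YYYY-MM-DDTHH.MM.SS-<Random UUID>
    """
    parts = key.split("/")
    return {
        "esm_uuid": parts[2],
        "shard_id": parts[3],
        "date": "-".join(parts[4:7]),
        "object_name": parts[7],
    }

# Hypothetical example key for illustration
key = ("aws/lambda/f89f8514-cdd9-4602-9e1f-01a5b77d449b/shardId-000000000001/"
       "2024/06/10/2024-06-10T19.26.16-0f6fbc52-4fbd-4135-a543-32ee2f6dbd2e")
print(parse_destination_key(key)["shard_id"])  # shardId-000000000001
```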

# Implementing stateful DynamoDB stream processing in Lambda
Stateful processing

Lambda functions can run continuous stream processing applications. A stream represents unbounded data that flows continuously through your application. To analyze information from this continuously updating input, you can bound the included records using a window defined in terms of time.

Tumbling windows are distinct time windows that open and close at regular intervals. By default, Lambda invocations are stateless—you cannot use them for processing data across multiple continuous invocations without an external database. However, with tumbling windows, you can maintain your state across invocations. This state contains the aggregate result of the messages previously processed for the current window. Your state can be a maximum of 1 MB per shard. If it exceeds that size, Lambda terminates the window early.

Each record in a stream belongs to a specific window. Lambda will process each record at least once, but doesn't guarantee that each record will be processed only once. In rare cases, such as error handling, some records might be processed more than once. Records are always processed in order the first time. If records are processed more than once, they might be processed out of order.
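
Because delivery is at-least-once, record processing should be idempotent. The following sketch illustrates the idea with an in-memory set keyed by sequence number; a real function would use a durable store such as a DynamoDB table, since execution environments don't persist state between invocations:

```python
processed = set()  # in production, use a durable store instead

def handle_record(record):
    """Process a stream record at most once, keyed by sequence number."""
    seq = record["dynamodb"]["SequenceNumber"]
    if seq in processed:
        return False  # duplicate delivery; skip side effects
    # ... perform side effects here ...
    processed.add(seq)
    return True

record = {"dynamodb": {"SequenceNumber": "111"}}
print(handle_record(record))  # True
print(handle_record(record))  # False (duplicate skipped)
```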

## Aggregation and processing


Your user-managed function is invoked both to aggregate records and to process the final results of that aggregation. Lambda aggregates all records received in the window. You can receive these records in multiple batches, each as a separate invocation. Each invocation receives the current state. When you use tumbling windows, your Lambda function response must therefore contain a `state` property; if it doesn't, Lambda considers the invocation failed. To satisfy this condition, your function can return a `TimeWindowEventResponse` object, which has the following JSON shape:

**Example `TimeWindowEventResponse` values**  

```
{
    "state": {
        "1": 282,
        "2": 715
    },
    "batchItemFailures": []
}
```

**Note**  
For Java functions, we recommend using a `Map<String, String>` to represent the state.

At the end of the window, Lambda sets the `isFinalInvokeForWindow` flag to `true` to indicate that this is the final state and that it's ready for processing. After the final invocation completes, the window closes and Lambda drops the state.

Lambda invokes this final processing synchronously so that you can act on the aggregation results. After a successful invocation, your function checkpoints the sequence number and stream processing continues. If the invocation is unsuccessful, Lambda suspends further processing until an invocation succeeds.

**Example DynamodbTimeWindowEvent**  

```
{
   "Records":[
      {
         "eventID":"1",
         "eventName":"INSERT",
         "eventVersion":"1.0",
         "eventSource":"aws:dynamodb",
         "awsRegion":"us-east-1",
         "dynamodb":{
            "Keys":{
               "Id":{
                  "N":"101"
               }
            },
            "NewImage":{
               "Message":{
                  "S":"New item!"
               },
               "Id":{
                  "N":"101"
               }
            },
            "SequenceNumber":"111",
            "SizeBytes":26,
            "StreamViewType":"NEW_AND_OLD_IMAGES"
         },
         "eventSourceARN":"stream-ARN"
      },
      {
         "eventID":"2",
         "eventName":"MODIFY",
         "eventVersion":"1.0",
         "eventSource":"aws:dynamodb",
         "awsRegion":"us-east-1",
         "dynamodb":{
            "Keys":{
               "Id":{
                  "N":"101"
               }
            },
            "NewImage":{
               "Message":{
                  "S":"This item has changed"
               },
               "Id":{
                  "N":"101"
               }
            },
            "OldImage":{
               "Message":{
                  "S":"New item!"
               },
               "Id":{
                  "N":"101"
               }
            },
            "SequenceNumber":"222",
            "SizeBytes":59,
            "StreamViewType":"NEW_AND_OLD_IMAGES"
         },
         "eventSourceARN":"stream-ARN"
      },
      {
         "eventID":"3",
         "eventName":"REMOVE",
         "eventVersion":"1.0",
         "eventSource":"aws:dynamodb",
         "awsRegion":"us-east-1",
         "dynamodb":{
            "Keys":{
               "Id":{
                  "N":"101"
               }
            },
            "OldImage":{
               "Message":{
                  "S":"This item has changed"
               },
               "Id":{
                  "N":"101"
               }
            },
            "SequenceNumber":"333",
            "SizeBytes":38,
            "StreamViewType":"NEW_AND_OLD_IMAGES"
         },
         "eventSourceARN":"stream-ARN"
      }
   ],
    "window": {
        "start": "2020-07-30T17:00:00Z",
        "end": "2020-07-30T17:05:00Z"
    },
    "state": {
        "1": "state1"
    },
    "shardId": "shard123456789",
    "eventSourceARN": "stream-ARN",
    "isFinalInvokeForWindow": false,
    "isWindowTerminatedEarly": false
}
```

## Configuration


You can configure tumbling windows when you create or update an event source mapping. To configure a tumbling window, specify the window in seconds ([TumblingWindowInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-TumblingWindowInSeconds)). The following example Amazon Command Line Interface (Amazon CLI) command creates a streaming event source mapping that has a tumbling window of 120 seconds. The Lambda function defined for aggregation and processing is named `tumbling-window-example-function`.

```
aws lambda create-event-source-mapping \
--event-source-arn arn:aws-cn:dynamodb:us-west-2:123456789012:table/my-table/stream/2024-06-10T19:26:16.525 \
--function-name tumbling-window-example-function \
--starting-position TRIM_HORIZON \
--tumbling-window-in-seconds 120
```

Lambda determines tumbling window boundaries based on the time when records were inserted into the stream. All records have an approximate timestamp available that Lambda uses in boundary determinations.

Tumbling window aggregations do not support resharding. When the shard ends, Lambda considers the window closed, and the child shards start their own window in a fresh state.

Tumbling windows fully support the existing retry policies `maxRetryAttempts` and `maxRecordAge`.

**Example Handler.py – Aggregation and processing**  
The following Python function demonstrates how to aggregate and then process your final state:  

```
def lambda_handler(event, context):
    print('Incoming event: ', event)
    print('Incoming state: ', event['state'])

    # Check whether this is the final invocation for the window,
    # to either aggregate or process.
    if event['isFinalInvokeForWindow']:
        # Logic to handle the final state of the window
        print('Destination invoke')
    else:
        print('Aggregate invoke')

    # Check for early terminations
    if event['isWindowTerminatedEarly']:
        print('Window terminated early')

    # Aggregation logic: count the records seen for each Id
    state = event['state']
    for record in event['Records']:
        record_id = record['dynamodb']['NewImage']['Id']['N']
        state[record_id] = state.get(record_id, 0) + 1

    print('Returning state: ', state)
    return {'state': state}
```
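Because the state returned by one invocation becomes the input state of the next invocation in the same window, the aggregation step can be exercised locally. The following sketch uses simplified record shapes to show state accumulating across two batches:

```python
def aggregate(state, records):
    # Count the records seen for each Id, mirroring the aggregation
    # step in the example handler (record shapes are simplified).
    for record in records:
        record_id = record['dynamodb']['NewImage']['Id']['N']
        state[record_id] = state.get(record_id, 0) + 1
    return state

# Lambda passes the state returned by one invocation as the input
# state of the next invocation in the same window.
batch1 = [{'dynamodb': {'NewImage': {'Id': {'N': '101'}}}}]
batch2 = [{'dynamodb': {'NewImage': {'Id': {'N': '101'}}}},
          {'dynamodb': {'NewImage': {'Id': {'N': '102'}}}}]

state = aggregate({}, batch1)
state = aggregate(state, batch2)
print(state)  # -> {'101': 2, '102': 1}
```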

# Lambda parameters for Amazon DynamoDB event source mappings
Parameters

All Lambda event source types share the same [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) and [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) API operations. However, only some of the parameters apply to DynamoDB Streams.


| Parameter | Required | Default | Notes | 
| --- | --- | --- | --- | 
|  BatchSize  |  N  |  100  |  Maximum: 10,000  | 
|  BisectBatchOnFunctionError  |  N  |  false  | none  | 
|  DestinationConfig  |  N  | N/A  |  Standard Amazon SQS queue or standard Amazon SNS topic destination for discarded records  | 
|  Enabled  |  N  |  true  | none  | 
|  EventSourceArn  |  Y  | N/A |  ARN of the data stream or a stream consumer  | 
|  FilterCriteria  |  N  | N/A  |  [Control which events Lambda sends to your function](invocation-eventfiltering.md)  | 
|  FunctionName  |  Y  | N/A  | none  | 
|  FunctionResponseTypes  |  N  | N/A |  To let your function report specific failures in a batch, include the value `ReportBatchItemFailures` in `FunctionResponseTypes`. For more information, see [Configuring partial batch response with DynamoDB and Lambda](services-ddb-batchfailurereporting.md).  | 
|  MaximumBatchingWindowInSeconds  |  N  |  0  | none  | 
|  MaximumRecordAgeInSeconds  |  N  |  -1  |  -1 means infinite: failed records are retried until the record expires. The [data retention limit for DynamoDB Streams](https://docs.amazonaws.cn/amazondynamodb/latest/developerguide/Streams.html#Streams.DataRetention) is 24 hours. Minimum: -1 Maximum: 604,800  | 
|  MaximumRetryAttempts  |  N  |  -1  |  -1 means infinite: failed records are retried until the record expires Minimum: 0 Maximum: 10,000  | 
|  ParallelizationFactor  |  N  |  1  |  Maximum: 10  | 
|  StartingPosition  |  Y  | N/A  |  TRIM_HORIZON or LATEST  | 
|  TumblingWindowInSeconds  |  N  | N/A  |  Minimum: 0 Maximum: 900  | 

# Using event filtering with a DynamoDB event source
Event filtering

You can use event filtering to control which records from a stream or queue Lambda sends to your function. For general information about how event filtering works, see [Control which events Lambda sends to your function](invocation-eventfiltering.md).

This section focuses on event filtering for DynamoDB event sources.

**Note**  
DynamoDB event source mappings only support filtering on the `dynamodb` key.

**Topics**
+ [DynamoDB event](#filtering-ddb)
+ [Filtering with table attributes](#filtering-ddb-attributes)
+ [Filtering with Boolean expressions](#filtering-ddb-boolean)
+ [Using the Exists operator](#filtering-ddb-exists)
+ [JSON format for DynamoDB filtering](#filtering-ddb-JSON-format)

## DynamoDB event


Suppose you have a DynamoDB table with the primary key `CustomerName` and attributes `AccountManager` and `PaymentTerms`. The following shows an example record from your DynamoDB table’s stream.

```
{
      "eventID": "1",
      "eventVersion": "1.0",
      "dynamodb": {
          "ApproximateCreationDateTime": "1678831218.0",
          "Keys": {
              "CustomerName": {
                  "S": "AnyCompany Industries"
              }
          },
          "NewImage": {
              "AccountManager": {
                  "S": "Pat Candella"
              },
              "PaymentTerms": {
                  "S": "60 days"
              },
              "CustomerName": {
                  "S": "AnyCompany Industries"
              }
          },
          "SequenceNumber": "111",
          "SizeBytes": 26,
          "StreamViewType": "NEW_IMAGE"
      }
  }
```

To filter based on the key and attribute values in your DynamoDB table, use the `dynamodb` key in the record. The following sections provide examples for different filter types.

### Filtering with table keys


Suppose you want your function to process only those records where the primary key `CustomerName` is “AnyCompany Industries.” The `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"dynamodb\" : { \"Keys\" : { \"CustomerName\" : { \"S\" : [ \"AnyCompany Industries\" ] } } } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON. 

```
{
    "dynamodb": {
        "Keys": {
            "CustomerName": {
                "S": [ "AnyCompany Industries" ]
            }
        }
    }
}
```
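Note that `Pattern` is itself a JSON string embedded inside the `FilterCriteria` object, which is why the inner quotation marks are escaped. If you build filter criteria programmatically, you can avoid hand-escaping by serializing the pattern first. The following Python sketch illustrates the idea:

```python
import json

# The filter pattern, written as a plain dictionary.
pattern = {
    "dynamodb": {
        "Keys": {
            "CustomerName": {"S": ["AnyCompany Industries"]}
        }
    }
}

# Pattern must be a JSON *string*, so serialize it before embedding it.
filter_criteria = {"Filters": [{"Pattern": json.dumps(pattern)}]}

print(json.dumps(filter_criteria))
```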

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "dynamodb" : { "Keys" : { "CustomerName" : { "S" : [ "AnyCompany Industries" ] } } } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:dynamodb:us-east-2:123456789012:table/my-table \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"Keys\" : { \"CustomerName\" : { \"S\" : [ \"AnyCompany Industries\" ] } } } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"Keys\" : { \"CustomerName\" : { \"S\" : [ \"AnyCompany Industries\" ] } } } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
   Filters:
     - Pattern: '{ "dynamodb" : { "Keys" : { "CustomerName" : { "S" : [ "AnyCompany Industries" ] } } } }'
```

------

## Filtering with table attributes


With DynamoDB, you can also use the `NewImage` and `OldImage` keys to filter for attribute values. Suppose you want to filter records where the `AccountManager` attribute in the latest table image is “Pat Candella” or "Shirley Rodriguez." The `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\", \"Shirley Rodriguez\" ] } } } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON.

```
{
    "dynamodb": {
        "NewImage": {
            "AccountManager": {
                "S": [ "Pat Candella", "Shirley Rodriguez" ]
            }
        }
    }
}
```

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella", "Shirley Rodriguez" ] } } } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:dynamodb:us-east-2:123456789012:table/my-table \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\", \"Shirley Rodriguez\" ] } } } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\", \"Shirley Rodriguez\" ] } } } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella", "Shirley Rodriguez" ] } } } }'
```

------

## Filtering with Boolean expressions


You can also create filters using Boolean AND expressions. These expressions can include both your table's key and attribute parameters. Suppose you want to filter records where the `NewImage` value of `AccountManager` is "Pat Candella" and the `OldImage` value is "Terry Whitlock". The `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\" ] } }, \"OldImage\" : { \"AccountManager\" : { \"S\" : [ \"Terry Whitlock\" ] } } } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON.

```
{
    "dynamodb": {
        "NewImage": {
            "AccountManager": {
                "S": [ "Pat Candella" ]
            }
        },
        "OldImage": {
            "AccountManager": {
                "S": [ "Terry Whitlock" ]
            }
        }
    }
}
```

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella" ] } }, "OldImage" : { "AccountManager" : { "S" : [ "Terry Whitlock" ] } } } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:dynamodb:us-east-2:123456789012:table/my-table \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\" ] } }, \"OldImage\" : { \"AccountManager\" : { \"S\" : [ \"Terry Whitlock\" ] } } } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"dynamodb\" : { \"NewImage\" : { \"AccountManager\" : { \"S\" : [ \"Pat Candella\" ] } }, \"OldImage\" : { \"AccountManager\" : { \"S\" : [ \"Terry Whitlock\" ] } } } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "dynamodb" : { "NewImage" : { "AccountManager" : { "S" : [ "Pat Candella" ] } }, "OldImage" : { "AccountManager" : { "S" : [ "Terry Whitlock" ] } } } }'
```

------

**Note**  
DynamoDB event filtering doesn’t support the use of numeric operators (numeric equals and numeric range). Even if items in your table are stored as numbers, these parameters are converted to strings in the JSON record object.

## Using the Exists operator


Because of the way that JSON event objects from DynamoDB are structured, using the Exists operator requires special care. The Exists operator only works on leaf nodes in the event JSON, so if your filter pattern uses Exists to test for an intermediate node, it won't work. Consider the following DynamoDB table item:

```
{
  "UserID": {"S": "12345"},
  "Name": {"S": "John Doe"},
  "Organizations": {"L": [
      {"S":"Sales"},
      {"S":"Marketing"},
      {"S":"Support"}
    ]
  }
}
```

You might want to create a filter pattern like the following that would test for events containing `"Organizations"`:

```
{ "dynamodb" : { "NewImage" : { "Organizations" : [ { "exists": true } ] } } }
```

However, this filter pattern would never return a match because `"Organizations"` is not a leaf node. The following example shows how to properly use the Exists operator to construct the desired filter pattern:

```
{ "dynamodb" : { "NewImage" : {"Organizations": {"L": {"S": [ {"exists": true } ] } } } } }
```
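To see why the leaf-node restriction matters, consider a deliberately simplified matcher that, like the event-filtering engine, only applies rules at leaf values. This is an illustration of the semantics, not Lambda's implementation:

```python
def leaf_matches(rules, data, key):
    # Rules only apply at leaf values: "exists" matches only when the
    # key resolves to a scalar, not to a nested object or list.
    is_leaf = key in data and not isinstance(data[key], (dict, list))
    for rule in rules:
        if isinstance(rule, dict) and "exists" in rule:
            if rule["exists"] == is_leaf:
                return True
        elif is_leaf and data[key] == rule:
            return True
    return False

def matches(pattern, data):
    # Nested objects are ANDed together; a list at a leaf is an OR of rules.
    for key, condition in pattern.items():
        if isinstance(condition, dict):
            if not isinstance(data.get(key), dict):
                return False
            if not matches(condition, data[key]):
                return False
        elif not leaf_matches(condition, data, key):
            return False
    return True

item = {"UserID": {"S": "12345"}, "Name": {"S": "John Doe"}}

# "UserID" maps to an object, not a leaf, so exists never matches here...
print(matches({"UserID": [{"exists": True}]}, item))         # -> False
# ...but "S" under "UserID" is a leaf, so this pattern matches.
print(matches({"UserID": {"S": [{"exists": True}]}}, item))  # -> True
```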

## JSON format for DynamoDB filtering


To properly filter events from DynamoDB sources, both the data field and your filter criteria for the data field (`dynamodb`) must be in valid JSON format. If either field isn't in a valid JSON format, Lambda drops the record or throws an exception. The following table summarizes the specific behavior: 


| Incoming data format | Filter pattern format for data properties | Resulting action | 
| --- | --- | --- | 
|  Valid JSON  |  Valid JSON  |  Lambda filters based on your filter criteria.  | 
|  Valid JSON  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  Non-JSON  |  Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.  | 
|  Non-JSON  |  Valid JSON  |  Lambda drops the record.  | 
|  Non-JSON  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Non-JSON  |  Non-JSON  |  Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.  | 
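You can catch the pattern-format failure modes in this table before calling the API by checking that your filter pattern parses as JSON. The following sketch mirrors that check locally; Lambda still performs its own validation when you create or update the mapping:

```python
import json

def is_valid_json(text):
    """Return True if text parses as JSON, mirroring the check Lambda
    applies to the filter pattern when a mapping is created or updated."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json('{ "dynamodb": { "Keys": { "Id": { "N": ["101"] } } } }'))  # -> True
print(is_valid_json('not json'))  # -> False
```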

# Tutorial: Using Amazon Lambda with Amazon DynamoDB streams
Tutorial

 In this tutorial, you create a Lambda function to consume events from an Amazon DynamoDB stream.

## Prerequisites


### Install the Amazon Command Line Interface


If you have not yet installed the Amazon Command Line Interface, follow the steps at [Installing or updating the latest version of the Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

**Note**  
In Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). 

## Create the execution role


Create the [execution role](lambda-intro-execution-role.md) that gives your function permission to access Amazon resources.

**To create an execution role**

1. Open the [roles page](https://console.amazonaws.cn/iam/home#/roles) in the IAM console.

1. Choose **Create role**.

1. Create a role with the following properties.
   + **Trusted entity** – Lambda.
   + **Permissions** – **AWSLambdaDynamoDBExecutionRole**.
   + **Role name** – **lambda-dynamodb-role**.

The **AWSLambdaDynamoDBExecutionRole** has the permissions that the function needs to read items from DynamoDB and write logs to CloudWatch Logs.

## Create the function


Create a Lambda function that processes your DynamoDB events. The function code writes some of the incoming event data to CloudWatch Logs.

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using .NET.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Text.Json;
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace AWSLambda_DDB;

public class Function
{
    public void FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context)
    {
        context.Logger.LogInformation($"Beginning to process {dynamoEvent.Records.Count} records...");

        foreach (var record in dynamoEvent.Records)
        {
            context.Logger.LogInformation($"Event ID: {record.EventID}");
            context.Logger.LogInformation($"Event Name: {record.EventName}");

            context.Logger.LogInformation(JsonSerializer.Serialize(record));
        }

        context.Logger.LogInformation("Stream processing complete.");
    }
}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using Go.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-lambda-go/events"
	"fmt"
)

func HandleRequest(ctx context.Context, event events.DynamoDBEvent) (*string, error) {
	if len(event.Records) == 0 {
		return nil, fmt.Errorf("received empty event")
	}

	for _, record := range event.Records {
		LogDynamoDBRecord(record)
	}

	message := fmt.Sprintf("Records processed: %d", len(event.Records))
	return &message, nil
}

func main() {
	lambda.Start(HandleRequest)
}

func LogDynamoDBRecord(record events.DynamoDBEventRecord){
	fmt.Println(record.EventID)
	fmt.Println(record.EventName)
	fmt.Printf("%+v\n", record.Change)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using Java.  

```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent.DynamodbStreamRecord;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public class example implements RequestHandler<DynamodbEvent, Void> {

    private static final Gson GSON = new GsonBuilder().setPrettyPrinting().create();

    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        System.out.println(GSON.toJson(event));
        event.getRecords().forEach(this::logDynamoDBRecord);
        return null;
    }

    private void logDynamoDBRecord(DynamodbStreamRecord record) {
        System.out.println(record.getEventID());
        System.out.println(record.getEventName());
        System.out.println("DynamoDB Record: " + GSON.toJson(record.getDynamodb()));
    }
}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using JavaScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
exports.handler = async (event, context) => {
    console.log(JSON.stringify(event, null, 2));
    event.Records.forEach(record => {
        logDynamoDBRecord(record);
    });
};

const logDynamoDBRecord = (record) => {
    console.log(record.eventID);
    console.log(record.eventName);
    console.log(`DynamoDB Record: ${JSON.stringify(record.dynamodb)}`);
};
```
Consuming a DynamoDB event with Lambda using TypeScript.  

```
import { DynamoDBStreamEvent, DynamoDBRecord, Context } from "aws-lambda";

export const handler = async (event: DynamoDBStreamEvent, context: Context): Promise<void> => {
    console.log(JSON.stringify(event, null, 2));
    event.Records.forEach((record) => {
        logDynamoDBRecord(record);
    });
};

const logDynamoDBRecord = (record: DynamoDBRecord): void => {
    console.log(record.eventID);
    console.log(record.eventName);
    console.log(`DynamoDB Record: ${JSON.stringify(record.dynamodb)}`);
};
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using PHP.  

```
<?php

# using bref/bref and bref/logger for simplicity

use Bref\Context\Context;
use Bref\Event\DynamoDb\DynamoDbEvent;
use Bref\Event\DynamoDb\DynamoDbHandler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends DynamoDbHandler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    /**
     * @throws JsonException
     * @throws \Bref\Event\InvalidLambdaEvent
     */
    public function handleDynamoDb(DynamoDbEvent $event, Context $context): void
    {
        $this->logger->info("Processing DynamoDb table items");
        $records = $event->getRecords();

        foreach ($records as $record) {
            $eventName = $record->getEventName();
            $keys = $record->getKeys();
            $old = $record->getOldImage();
            $new = $record->getNewImage();
            
            $this->logger->info("Event Name:".$eventName."\n");
            $this->logger->info("Keys:". json_encode($keys)."\n");
            $this->logger->info("Old Image:". json_encode($old)."\n");
            $this->logger->info("New Image:". json_encode($new));
            
            // TODO: Do interesting work based on the new data

            // Any exception thrown will be logged and the invocation will be marked as failed
        }

        $totalRecords = count($records);
        $this->logger->info("Successfully processed $totalRecords items");
    }
}

$logger = new StderrLogger();
return new Handler($logger);
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using Python.  

```
import json

def lambda_handler(event, context):
    print(json.dumps(event, indent=2))

    for record in event['Records']:
        log_dynamodb_record(record)

def log_dynamodb_record(record):
    print(record['eventID'])
    print(record['eventName'])
    print(f"DynamoDB Record: {json.dumps(record['dynamodb'])}")
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using Ruby.  

```
require 'json'

def lambda_handler(event:, context:)
  return 'received empty event' if event['Records'].empty?

  event['Records'].each do |record|
    log_dynamodb_record(record)
  end

  "Records processed: #{event['Records'].length}"
end

def log_dynamodb_record(record)
  puts record['eventID']
  puts record['eventName']
  puts "DynamoDB Record: #{JSON.generate(record['dynamodb'])}"
end

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-ddb-to-lambda) repository. 
Consuming a DynamoDB event with Lambda using Rust.  

```
use lambda_runtime::{service_fn, tracing, Error, LambdaEvent};
use aws_lambda_events::event::dynamodb::{Event, EventRecord};

// Built with the following dependencies:
// lambda_runtime = "0.11.1"
// serde_json = "1.0"
// tokio = { version = "1", features = ["macros"] }
// tracing = { version = "0.1", features = ["log"] }
// tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }
// aws_lambda_events = "0.15.0"

async fn function_handler(event: LambdaEvent<Event>) -> Result<(), Error> {
    let records = &event.payload.records;
    tracing::info!("event payload: {:?}", records);

    if records.is_empty() {
        tracing::info!("No records found. Exiting.");
        return Ok(());
    }

    for record in records {
        log_dynamodb_record(record);
    }

    tracing::info!("DynamoDB records processed");
    Ok(())
}

fn log_dynamodb_record(record: &EventRecord) {
    tracing::info!("EventId: {}", record.event_id);
    tracing::info!("EventName: {}", record.event_name);
    tracing::info!("DynamoDB Record: {:?}", record.change);
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    lambda_runtime::run(service_fn(function_handler)).await
}
```

------

**To create the function**

1. Copy the JavaScript sample code into a file named `example.js`.

1. Create a deployment package.

   ```
   zip function.zip example.js
   ```

1. Create a Lambda function with the `create-function` command.

   ```
   aws lambda create-function --function-name ProcessDynamoDBRecords \
       --zip-file fileb://function.zip --handler example.handler --runtime nodejs24.x \
       --role arn:aws-cn:iam::111122223333:role/lambda-dynamodb-role
   ```

## Test the Lambda function


In this step, you invoke your Lambda function manually using the Amazon CLI `invoke` command and the following sample DynamoDB event. Copy the following into a file named `input.txt`.

**Example input.txt**  

```
{
   "Records":[
      {
         "eventID":"1",
         "eventName":"INSERT",
         "eventVersion":"1.0",
         "eventSource":"aws:dynamodb",
         "awsRegion":"us-east-1",
         "dynamodb":{
            "Keys":{
               "Id":{
                  "N":"101"
               }
            },
            "NewImage":{
               "Message":{
                  "S":"New item!"
               },
               "Id":{
                  "N":"101"
               }
            },
            "SequenceNumber":"111",
            "SizeBytes":26,
            "StreamViewType":"NEW_AND_OLD_IMAGES"
         },
         "eventSourceARN":"stream-ARN"
      },
      {
         "eventID":"2",
         "eventName":"MODIFY",
         "eventVersion":"1.0",
         "eventSource":"aws:dynamodb",
         "awsRegion":"us-east-1",
         "dynamodb":{
            "Keys":{
               "Id":{
                  "N":"101"
               }
            },
            "NewImage":{
               "Message":{
                  "S":"This item has changed"
               },
               "Id":{
                  "N":"101"
               }
            },
            "OldImage":{
               "Message":{
                  "S":"New item!"
               },
               "Id":{
                  "N":"101"
               }
            },
            "SequenceNumber":"222",
            "SizeBytes":59,
            "StreamViewType":"NEW_AND_OLD_IMAGES"
         },
         "eventSourceARN":"stream-ARN"
      },
      {
         "eventID":"3",
         "eventName":"REMOVE",
         "eventVersion":"1.0",
         "eventSource":"aws:dynamodb",
         "awsRegion":"us-east-1",
         "dynamodb":{
            "Keys":{
               "Id":{
                  "N":"101"
               }
            },
            "OldImage":{
               "Message":{
                  "S":"This item has changed"
               },
               "Id":{
                  "N":"101"
               }
            },
            "SequenceNumber":"333",
            "SizeBytes":38,
            "StreamViewType":"NEW_AND_OLD_IMAGES"
         },
         "eventSourceARN":"stream-ARN"
      }
   ]
}
```

Run the following `invoke` command. 

```
aws lambda invoke --function-name ProcessDynamoDBRecords \
    --cli-binary-format raw-in-base64-out \
    --payload file://input.txt outputfile.txt
```

The **cli-binary-format** option is required if you're using Amazon CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [Amazon CLI supported global command line options](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *Amazon Command Line Interface User Guide for Version 2*.

The function returns the string `message` in the response body. 

Verify the output in the `outputfile.txt` file.

## Create a DynamoDB table with a stream enabled


Create an Amazon DynamoDB table with a stream enabled.

**To create a DynamoDB table**

1. Open the [DynamoDB console](https://console.amazonaws.cn/dynamodb).

1. Choose **Create table**.

1. Create a table with the following settings.
   + **Table name** – **lambda-dynamodb-stream**
   + **Primary key** – **id** (string)

1. Choose **Create**.

**To enable streams**

1. Open the [DynamoDB console](https://console.amazonaws.cn/dynamodb).

1. Choose **Tables**.

1. Choose the **lambda-dynamodb-stream** table.

1. Under **Exports and streams**, choose **DynamoDB stream details**.

1. Choose **Turn on**.

1. For **View type**, choose **Key attributes only**.

1. Choose **Turn on stream**.

Write down the stream ARN. You need this in the next step when you associate the stream with your Lambda function. For more information on enabling streams, see [Capturing table activity with DynamoDB Streams](https://docs.amazonaws.cn/amazondynamodb/latest/developerguide/Streams.html).

## Add an event source in Amazon Lambda


Create an event source mapping in Amazon Lambda. This event source mapping associates the DynamoDB stream with your Lambda function. After you create this event source mapping, Amazon Lambda starts polling the stream.

Run the following Amazon CLI `create-event-source-mapping` command. After the command runs, note the UUID in the output. You need this UUID to refer to the event source mapping in other commands, for example, when you delete it.

```
aws lambda create-event-source-mapping --function-name ProcessDynamoDBRecords \
    --batch-size 100 --starting-position LATEST --event-source-arn DynamoDB-stream-arn
```

 This creates a mapping between the specified DynamoDB stream and the Lambda function. You can associate a DynamoDB stream with multiple Lambda functions, and associate the same Lambda function with multiple streams. However, the Lambda functions will share the read throughput for the stream they share. 
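For reference, a minimal Python handler for the records that Lambda passes from the stream might look like the following sketch. It assumes the **Key attributes only** view type configured earlier, so each record carries only the key attributes; your tutorial function may differ.

```python
def lambda_handler(event, context):
    """Log the event name and key attributes of each DynamoDB stream record."""
    for record in event["Records"]:
        event_name = record["eventName"]       # INSERT, MODIFY, or REMOVE
        keys = record["dynamodb"]["Keys"]      # key attributes only (KEYS_ONLY view)
        print(f"{event_name}: {keys}")
    return f"Successfully processed {len(event['Records'])} records."
```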

You can get the list of event source mappings by running the following command.

```
aws lambda list-event-source-mappings
```

The list returns all of the event source mappings you created. For each mapping, it shows the `LastProcessingResult`, among other fields. This field provides an informative message if there are any problems. Values such as `No records processed` (Lambda hasn't started polling, or the stream contains no records) and `OK` (Lambda successfully read records from the stream and invoked your Lambda function) indicate that there are no issues. If there are issues, you receive an error message.

If you have a lot of event source mappings, use the function name parameter to narrow down the results.

```
aws lambda list-event-source-mappings --function-name ProcessDynamoDBRecords
```

## Test the setup


Test the end-to-end experience. As you perform table updates, DynamoDB writes event records to the stream. As Amazon Lambda polls the stream, it detects new records in the stream and invokes your Lambda function on your behalf by passing events to the function. 

1. In the DynamoDB console, add, update, and delete items to the table. DynamoDB writes records of these actions to the stream.

1. Amazon Lambda polls the stream and when it detects updates to the stream, it invokes your Lambda function by passing in the event data it finds in the stream.

1. Your function runs and creates logs in Amazon CloudWatch. You can verify the logs reported in the Amazon CloudWatch console.

## Next steps


This tutorial showed you the basics of processing DynamoDB stream events with Lambda. For production workloads, consider implementing partial batch response logic to handle individual record failures more efficiently. The [batch processor utility](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/) from Powertools for Amazon Lambda is available in Python, TypeScript, .NET, and Java and provides a robust solution for this, automatically handling the complexity of partial batch responses and reducing the number of retries for successfully processed records.

## Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the DynamoDB table**

1. Open the [Tables page](https://console.amazonaws.cn//dynamodb/home#tables:) of the DynamoDB console.

1. Select the table you created.

1. Choose **Delete**.

1. Enter **delete** in the text box.

1. Choose **Delete table**.

# Process Amazon EC2 lifecycle events with a Lambda function
EC2

You can use Amazon Lambda to process lifecycle events from Amazon Elastic Compute Cloud and manage Amazon EC2 resources. Amazon EC2 sends events to [Amazon EventBridge (CloudWatch Events)](https://docs.amazonaws.cn/eventbridge/latest/userguide/eb-what-is.html) for [lifecycle events](https://docs.amazonaws.cn/autoscaling/ec2/userguide/ec2-auto-scaling-lifecycle.html) such as when an instance changes state, when an Amazon Elastic Block Store volume snapshot completes, or when a spot instance is scheduled to be terminated. You configure EventBridge (CloudWatch Events) to forward those events to a Lambda function for processing.

EventBridge (CloudWatch Events) invokes your Lambda function asynchronously with the event document from Amazon EC2.

**Example instance lifecycle event**  

```
{
    "version": "0",
    "id": "b6ba298a-7732-2226-xmpl-976312c1a050",
    "detail-type": "EC2 Instance State-change Notification",
    "source": "aws.ec2",
    "account": "111122223333",
    "time": "2019-10-02T17:59:30Z",
    "region": "us-east-1",
    "resources": [
        "arn:aws-cn:ec2:us-east-1:111122223333:instance/i-0c314xmplcd5b8173"
    ],
    "detail": {
        "instance-id": "i-0c314xmplcd5b8173",
        "state": "running"
    }
}
```
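A handler for this event might extract the instance ID and state from the `detail` field, as in the following sketch (the returned structure and any downstream action are illustrative, not part of the EC2 event contract):

```python
def lambda_handler(event, context):
    """Extract the instance ID and new state from an EC2 state-change event."""
    detail = event["detail"]
    instance_id = detail["instance-id"]
    state = detail["state"]
    print(f"Instance {instance_id} is now {state}")
    # Act on the state change here, for example by calling the Amazon EC2 API
    # with the Amazon SDK (requires EC2 permissions on the execution role).
    return {"instance": instance_id, "state": state}
```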

For details on configuring events, see [Invoke a Lambda function on a schedule](with-eventbridge-scheduler.md). For an example function that processes Amazon EBS snapshot notifications, see [EventBridge Scheduler for Amazon EBS](https://docs.amazonaws.cn/ebs/latest/userguide/ebs-cloud-watch-events.html).

You can also use the Amazon SDK to manage instances and other resources with the Amazon EC2 API. 

## Granting permissions to EventBridge (CloudWatch Events)


To process lifecycle events from Amazon EC2, EventBridge (CloudWatch Events) needs permission to invoke your function. This permission comes from the function's [resource-based policy](access-control-resource-based.md). If you use the EventBridge (CloudWatch Events) console to configure an event trigger, the console updates the resource-based policy on your behalf. Otherwise, add a statement like the following:

**Example resource-based policy statement for Amazon EC2 lifecycle notifications**  

```
{
  "Sid": "ec2-events",
  "Effect": "Allow",
  "Principal": {
    "Service": "events.amazonaws.com.cn"
  },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws-cn:lambda:cn-north-1:123456789012:function:my-function",
  "Condition": {
    "ArnLike": {
      "AWS:SourceArn": "arn:aws-cn:events:cn-north-1:123456789012:rule/*"
    }
  }
}
```

To add a statement, use the `add-permission` Amazon CLI command.

```
aws lambda add-permission --action lambda:InvokeFunction --statement-id ec2-events \
--principal events.amazonaws.com.cn --function-name my-function --source-arn 'arn:aws-cn:events:cn-north-1:123456789012:rule/*'
```

If your function uses the Amazon SDK to manage Amazon EC2 resources, add Amazon EC2 permissions to the function's [execution role](lambda-intro-execution-role.md).

# Process Application Load Balancer requests with Lambda
Elastic Load Balancing (Application Load Balancer)

You can use a Lambda function to process requests from an Application Load Balancer. Elastic Load Balancing supports Lambda functions as a target for an Application Load Balancer. Use load balancer rules to route HTTP requests to a function, based on path or header values. Process the request and return an HTTP response from your Lambda function.

Elastic Load Balancing invokes your Lambda function synchronously with an event that contains the request body and metadata.

**Example Application Load Balancer request event**  

```
{
    "requestContext": {
        "elb": {
            "targetGroupArn": "arn:aws-cn:elasticloadbalancing:cn-north-1:123456789012:targetgroup/lambda-279XGJDqGZ5rsrHC2Fjr/49e9d65c45c6791a"
        }
    },
    "httpMethod": "GET",
    "path": "/lambda",
    "queryStringParameters": {
        "query": "1234ABCD"
    },
    "headers": {
        "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "accept-encoding": "gzip",
        "accept-language": "zh-CN,zh;q=0.9",
        "connection": "keep-alive",
        "host": "lambda-alb-123578498.cn-north-1.elb.amazonaws.com.cn",
        "upgrade-insecure-requests": "1",
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36",
        "x-amzn-trace-id": "Root=1-5c536348-3d683b8b04734faae651f476",
        "x-forwarded-for": "72.12.164.125",
        "x-forwarded-port": "80",
        "x-forwarded-proto": "http",
        "x-imforwards": "20"
    },
    "body": "",
    "isBase64Encoded": false
}
```

Your function processes the event and returns a response document to the load balancer in JSON. Elastic Load Balancing converts the document to an HTTP success or error response and returns it to the user.

**Example response document format**  

```
{
    "statusCode": 200,
    "statusDescription": "200 OK",
    "isBase64Encoded": false
    "headers": {
        "Content-Type": "text/html"
    },
    "body": "<h1>Hello from Lambda!</h1>"
}
```
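A minimal Python handler that produces a response document in this format might look like the following sketch (the use of the `query` parameter is illustrative):

```python
def lambda_handler(event, context):
    """Return an HTTP response document for an Application Load Balancer."""
    # Read an optional query string parameter; queryStringParameters may be absent.
    query = (event.get("queryStringParameters") or {}).get("query", "")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/html"},
        "body": f"<h1>Hello from Lambda! {query}</h1>",
    }
```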

To configure an Application Load Balancer as a function trigger, grant Elastic Load Balancing permission to run the function, create a target group that routes requests to the function, and add a rule to the load balancer that sends requests to the target group.

Use the `add-permission` command to add a permission statement to your function's resource-based policy.

```
aws lambda add-permission --function-name alb-function \
--statement-id load-balancer --action "lambda:InvokeFunction" \
--principal elasticloadbalancing.amazonaws.com.cn
```

You should see the following output:

```
{
    "Statement": "{\"Sid\":\"load-balancer\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"elasticloadbalancing.amazonaws.com.cn\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws-cn:lambda:cn-north-1:123456789012:function:alb-function\"}"
}
```

For instructions on configuring the Application Load Balancer listener and target group, see [Lambda functions as a target](https://docs.amazonaws.cn/elasticloadbalancing/latest/application/lambda-functions.html) in the *User Guide for Application Load Balancers*.

## Event Handler from Powertools for Amazon Lambda


The event handler from the Powertools for Amazon Lambda toolkit provides routing, middleware, CORS configuration, OpenAPI spec generation, request validation, error handling, and other useful features when writing Lambda functions invoked by an Application Load Balancer. The Event Handler utility is available for Python. For more information, see [Event Handler REST API](https://docs.powertools.aws.dev/lambda/python/latest/core/event_handler/api_gateway/) in the *Powertools for Amazon Lambda (Python) documentation*.

### Python


```
import requests
from requests import Response

from aws_lambda_powertools import Logger, Tracer
from aws_lambda_powertools.event_handler import ALBResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.utilities.typing import LambdaContext

tracer = Tracer()
logger = Logger()
app = ALBResolver()


@app.get("/todos")
@tracer.capture_method
def get_todos():
    todos: Response = requests.get("https://jsonplaceholder.typicode.com/todos")
    todos.raise_for_status()

    # for brevity, we'll limit to the first 10 only
    return {"todos": todos.json()[:10]}


# You can continue to use other utilities just as before
@logger.inject_lambda_context(correlation_id_path=correlation_paths.APPLICATION_LOAD_BALANCER)
@tracer.capture_lambda_handler
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)
```

# Invoke a Lambda function on a schedule
Invoke using an EventBridge Scheduler

[Amazon EventBridge Scheduler](https://docs.amazonaws.cn/scheduler/latest/UserGuide/what-is-scheduler.html) is a serverless scheduler that allows you to create, run, and manage tasks from one central, managed service. With EventBridge Scheduler, you can create schedules using cron and rate expressions for recurring patterns, or configure one-time invocations. You can set up flexible time windows for delivery, define retry limits, and set the maximum retention time for unprocessed events.
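For example, schedule expressions take one of these forms (the specific times shown are illustrative):

```
rate(5 minutes)            # invoke every five minutes
cron(0 12 * * ? *)         # invoke every day at 12:00 UTC
at(2025-01-01T00:00:00)    # invoke once at the specified time
```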

When you set up EventBridge Scheduler with Lambda, EventBridge Scheduler invokes your Lambda function asynchronously. This page explains how to use EventBridge Scheduler to invoke a Lambda function on a schedule.

## Set up the execution role


 When you create a new schedule, EventBridge Scheduler must have permission to invoke its target API operation on your behalf. You grant these permissions to EventBridge Scheduler using an *execution role*. The permission policy you attach to your schedule's execution role defines the required permissions. These permissions depend on the target API you want EventBridge Scheduler to invoke.

 When you use the EventBridge Scheduler console to create a schedule, as in the following procedure, EventBridge Scheduler automatically sets up an execution role based on your selected target. If you want to create a schedule using one of the EventBridge Scheduler SDKs, the Amazon CLI, or Amazon CloudFormation, you must have an existing execution role that grants the permissions EventBridge Scheduler requires to invoke a target. For more information about manually setting up an execution role for your schedule, see [Setting up an execution role](https://docs.amazonaws.cn/scheduler/latest/UserGuide/setting-up.html#setting-up-execution-role) in the *EventBridge Scheduler User Guide*. 

## Create a schedule


**To create a schedule by using the console**

1. Open the Amazon EventBridge Scheduler console at [https://console.amazonaws.cn/scheduler/home](https://console.amazonaws.cn/scheduler/home/).

1.  On the **Schedules** page, choose **Create schedule**. 

1.  On the **Specify schedule detail** page, in the **Schedule name and description** section, do the following: 

   1. For **Schedule name**, enter a name for your schedule. For example, **MyTestSchedule**. 

   1. (Optional) For **Description**, enter a description for your schedule. For example, **My first schedule**.

   1. For **Schedule group**, choose a schedule group from the dropdown list. If you don't have a group, choose **default**. To create a schedule group, choose **create your own schedule**. 

      You use schedule groups to add tags to groups of schedules. 

1. In the **Schedule pattern** section, choose your schedule options. You can configure a one-time schedule or a recurring schedule that uses a cron or rate expression.

1. (Optional) If you chose **Recurring schedule** in the previous step, in the **Timeframe** section, do the following: 

   1. For **Timezone**, choose a timezone. 

   1. For **Start date and time**, enter a valid date in `YYYY/MM/DD` format, and then specify a timestamp in 24-hour `hh:mm` format. 

   1. For **End date and time**, enter a valid date in `YYYY/MM/DD` format, and then specify a timestamp in 24-hour `hh:mm` format. 

1. Choose **Next**. 

1. On the **Select target** page, choose the Amazon API operation that EventBridge Scheduler invokes: 

   1. Choose **Amazon Lambda Invoke**.

   1. In the **Invoke** section, select a function or choose **Create new Lambda function**.

   1. (Optional) Enter a JSON payload. If you don't enter a payload, EventBridge Scheduler uses an empty event to invoke the function.

1. Choose **Next**. 

1. On the **Settings** page, do the following: 

   1. To turn on the schedule, under **Schedule state**, toggle **Enable schedule**. 

   1. To configure a retry policy for your schedule, under **Retry policy and dead-letter queue (DLQ)**, do the following:
      + Toggle **Retry**.
      + For **Maximum age of event**, enter the maximum **hour(s)** and **min(s)** that EventBridge Scheduler must keep an unprocessed event. The maximum time is 24 hours.
      + For **Maximum retries**, enter the maximum number of times EventBridge Scheduler retries the schedule if the target returns an error. The maximum value is 185 retries.

      With retry policies, if a schedule fails to invoke its target, EventBridge Scheduler re-runs the schedule. If you configure a retry policy, you must also set the maximum retention time and the maximum number of retries for the schedule.

   1. Choose where EventBridge Scheduler stores undelivered events.

   1. To use a customer managed key to encrypt your target input, under **Encryption**, choose **Customize encryption settings (advanced)**. 

      If you choose this option, enter an existing KMS key ARN or choose **Create an Amazon KMS key** to navigate to the Amazon KMS console. For more information about how EventBridge Scheduler encrypts your data at rest, see [Encryption at rest](https://docs.amazonaws.cn/scheduler/latest/UserGuide/encryption-rest.html) in the *Amazon EventBridge Scheduler User Guide*. 

   1. To have EventBridge Scheduler create a new execution role for you, choose **Create new role for this schedule**. Then, enter a name for **Role name**. If you choose this option, EventBridge Scheduler attaches the required permissions necessary for your templated target to the role.

1. Choose **Next**. 

1.  In the **Review and create schedule** page, review the details of your schedule. In each section, choose **Edit** to go back to that step and edit its details. 

1. Choose **Create schedule**. 

   You can view a list of your new and existing schedules on the **Schedules** page. Under the **Status** column, verify that your new schedule is **Enabled**. 

To confirm that EventBridge Scheduler invoked the function, [check the function's Amazon CloudWatch logs](monitoring-cloudwatchlogs-view.md#monitoring-cloudwatchlogs-console).

## Related resources


 For more information about EventBridge Scheduler, see the following: 
+ [EventBridge Scheduler User Guide](https://docs.amazonaws.cn/scheduler/latest/UserGuide/what-is-scheduler.html)
+ [EventBridge Scheduler API Reference](https://docs.amazonaws.cn/scheduler/latest/APIReference/Welcome.html)
+ [EventBridge Scheduler Pricing](https://www.amazonaws.cn/eventbridge/pricing/#Scheduler)

# Using Amazon Lambda with Amazon IoT
IoT

Amazon IoT provides secure communication between internet-connected devices (such as sensors) and the Amazon Cloud. This makes it possible for you to collect, store, and analyze telemetry data from multiple devices.

You can create Amazon IoT rules for your devices to interact with Amazon Web Services services. The Amazon IoT [Rules Engine](https://docs.amazonaws.cn/iot/latest/developerguide/iot-rules.html) provides a SQL-based language to select data from message payloads and send the data to other services, such as Amazon S3, Amazon DynamoDB, and Amazon Lambda. You define a rule to invoke a Lambda function when you want to invoke another Amazon service or a third-party service. 

When an incoming IoT message triggers the rule, Amazon IoT invokes your Lambda function [asynchronously](invocation-async.md) and passes data from the IoT message to the function. 

The following example shows a moisture reading from a greenhouse sensor. The **row** and **pos** values identify the location of the sensor. This example event is based on the greenhouse type in the [Amazon IoT Rules tutorials](https://docs.amazonaws.cn/iot/latest/developerguide/iot-rules-tutorial.html). 

**Example Amazon IoT message event**  

```
{
    "row" : "10",
    "pos" : "23",
    "moisture" : "75"
}
```

For asynchronous invocation, Lambda queues the message and [retries](invocation-retries.md) if your function returns an error. Configure your function with a [destination](invocation-async-retain-records.md#invocation-async-destinations) to retain events that your function could not process.
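A function that receives this message might check the moisture reading and report an alert, as in the following sketch (the threshold and return structure are illustrative):

```python
MOISTURE_THRESHOLD = 50  # illustrative threshold, not part of the IoT message

def lambda_handler(event, context):
    """Check a greenhouse sensor reading passed by an Amazon IoT rule."""
    moisture = int(event["moisture"])
    location = f"row {event['row']}, pos {event['pos']}"
    if moisture < MOISTURE_THRESHOLD:
        print(f"Low moisture ({moisture}) at {location}")
        return {"alert": True, "location": location}
    return {"alert": False, "location": location}
```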

You need to grant permission for the Amazon IoT service to invoke your Lambda function. Use the `add-permission` command to add a permission statement to your function's resource-based policy.

```
aws lambda add-permission --function-name my-function \
--statement-id iot-events --action "lambda:InvokeFunction" --principal iot.amazonaws.com.cn
```

You should see the following output:

```
{
    "Statement": "{\"Sid\":\"iot-events\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"iot.amazonaws.com.cn\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws-cn:lambda:cn-north-1:123456789012:function:my-function\"}"
}
```

For more information about how to use Lambda with Amazon IoT, see [Creating an Amazon Lambda rule](https://docs.amazonaws.cn/iot/latest/developerguide/iot-lambda-rule.html). 

# Using Lambda to process records from Amazon Kinesis Data Streams
Kinesis Data Streams

You can use a Lambda function to process records in an [Amazon Kinesis data stream](https://docs.amazonaws.cn/streams/latest/dev/introduction.html). You can map a Lambda function to a Kinesis Data Streams shared-throughput consumer (standard iterator), or to a dedicated-throughput consumer with [enhanced fan-out](https://docs.amazonaws.cn/kinesis/latest/dev/enhanced-consumers.html). For standard iterators, Lambda polls each shard in your Kinesis stream for records over HTTP. The event source mapping shares read throughput with other consumers of the shard.

 For details about Kinesis data streams, see [Reading Data from Amazon Kinesis Data Streams](https://docs.amazonaws.cn/kinesis/latest/dev/building-consumers.html).

**Note**  
Kinesis charges for each shard and, for enhanced fan-out, data read from the stream. For pricing details, see [Amazon Kinesis pricing](http://www.amazonaws.cn/kinesis/data-streams/pricing).

## Polling and batching streams


Lambda reads records from the data stream and invokes your function [synchronously](invocation-sync.md) with an event that contains stream records. Lambda reads records in batches and invokes your function to process records from the batch. Each batch contains records from a single shard of a single data stream.

Your Lambda function is a consumer application for your data stream. It processes one batch of records at a time from each shard. You can map a Lambda function to a shared-throughput consumer (standard iterator), or to a dedicated-throughput consumer with enhanced fan-out.
+ **Standard iterator:** Lambda polls each shard in your Kinesis stream for records at a base rate of once per second. When more records are available, Lambda keeps processing batches until the function catches up with the stream. The event source mapping shares read throughput with other consumers of the shard.
+ **Enhanced fan-out:** To minimize latency and maximize read throughput, create a data stream consumer with [enhanced fan-out](https://docs.amazonaws.cn/streams/latest/dev/enhanced-consumers.html). Enhanced fan-out consumers get a dedicated connection to each shard that doesn't impact other applications reading from the stream. Stream consumers use HTTP/2 to reduce latency by pushing records to Lambda over a long-lived connection and by compressing request headers. You can create a stream consumer with the Kinesis [RegisterStreamConsumer](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_RegisterStreamConsumer.html) API.

```
aws kinesis register-stream-consumer \
--consumer-name con1 \
--stream-arn arn:aws-cn:kinesis:us-west-2:123456789012:stream/lambda-stream
```

You should see the following output:

```
{
    "Consumer": {
        "ConsumerName": "con1",
        "ConsumerARN": "arn:aws-cn:kinesis:us-west-2:123456789012:stream/lambda-stream/consumer/con1:1540591608",
        "ConsumerStatus": "CREATING",
        "ConsumerCreationTimestamp": 1540591608.0
    }
}
```

To increase the speed at which your function processes records, [add shards to your data stream](https://repost.aws/knowledge-center/kinesis-data-streams-open-shards). Lambda processes records in each shard in order. It stops processing additional records in a shard if your function returns an error. With more shards, there are more batches being processed at once, which lowers the impact of errors on concurrency.

If your function can't scale up to handle the total number of concurrent batches, [request a quota increase](https://docs.amazonaws.cn/servicequotas/latest/userguide/request-quota-increase.html) or [reserve concurrency](configuration-concurrency.md) for your function.

By default, Lambda invokes your function as soon as records are available. If the batch that Lambda reads from the event source has only one record in it, Lambda sends only one record to the function. To avoid invoking the function with a small number of records, you can tell the event source to buffer records for up to 5 minutes by configuring a *batching window*. Before invoking the function, Lambda continues to read records from the event source until it has gathered a full batch, the batching window expires, or the batch reaches the payload limit of 6 MB. For more information, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

**Warning**  
Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see [How do I make my Lambda function idempotent](https://repost.aws/knowledge-center/lambda-function-idempotent) in the Amazon Knowledge Center.
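One common way to make a handler idempotent is to record each processed event ID and skip IDs you have already seen. The following sketch uses an in-memory set purely for illustration; a production function would use a durable store, such as a DynamoDB table with a conditional write, because in-memory state doesn't survive across execution environments.

```python
processed_ids = set()  # illustration only; use a durable store in production

def process_record(record):
    """Process a Kinesis record at most once per event ID."""
    event_id = record["eventID"]
    if event_id in processed_ids:
        return False  # duplicate delivery; skip
    # ... do the real work for the record here ...
    processed_ids.add(event_id)
    return True

def lambda_handler(event, context):
    handled = sum(process_record(r) for r in event["Records"])
    return {"processed": handled}
```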

Lambda doesn't wait for any configured [extensions](lambda-extensions.md) to complete before sending the next batch for processing. In other words, your extensions may continue to run as Lambda processes the next batch of records. This can cause throttling issues if you breach any of your account's [concurrency](lambda-concurrency.md) settings or limits. To detect whether this is a potential issue, monitor your functions and check whether you're seeing higher [concurrency metrics](monitoring-concurrency.md#general-concurrency-metrics) than expected for your event source mapping. Due to short times in between invokes, Lambda may briefly report higher concurrency usage than the number of shards. This can be true even for Lambda functions without extensions.

Configure the [ParallelizationFactor](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-ParallelizationFactor) setting to process one shard of a Kinesis data stream with more than one Lambda invocation simultaneously. You can specify the number of concurrent batches that Lambda polls from a shard via a parallelization factor from 1 (default) to 10. For example, when you set `ParallelizationFactor` to 2, you can have 200 concurrent Lambda invocations at maximum to process 100 Kinesis data shards (though in practice, you may see different values for the `ConcurrentExecutions` metric). This helps scale up the processing throughput when the data volume is volatile and the `IteratorAge` is high. When you increase the number of concurrent batches per shard, Lambda still ensures in-order processing at the partition-key level.

You can also use `ParallelizationFactor` with Kinesis aggregation. The behavior of the event source mapping depends on whether you're using [enhanced fan-out](https://docs.amazonaws.cn/streams/latest/dev/enhanced-consumers.html):
+ **Without enhanced fan-out**: All of the events inside an aggregated event must have the same partition key. The partition key must also match that of the aggregated event. If the events inside the aggregated event have different partition keys, Lambda cannot guarantee in-order processing of the events by partition key.
+ **With enhanced fan-out**: First, Lambda decodes the aggregated event into its individual events. The aggregated event can have a different partition key than events it contains. However, events that don't correspond to the partition key are [dropped and lost](https://github.com/awslabs/kinesis-aggregation/blob/master/potential_data_loss.md). Lambda doesn't process these events, and doesn't send them to a configured failure destination.

## Example event


**Example**  

```
{
    "Records": [
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "1",
                "sequenceNumber": "49590338271490256608559692538361571095921575989136588898",
                "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
                "approximateArrivalTimestamp": 1545084650.987
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000006:49590338271490256608559692538361571095921575989136588898",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws-cn:iam::123456789012:role/lambda-role",
            "awsRegion": "us-east-2",
            "eventSourceARN": "arn:aws-cn:kinesis:us-west-2:123456789012:stream/lambda-stream"
        },
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "1",
                "sequenceNumber": "49590338271490256608559692540925702759324208523137515618",
                "data": "VGhpcyBpcyBvbmx5IGEgdGVzdC4=",
                "approximateArrivalTimestamp": 1545084711.166
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000006:49590338271490256608559692540925702759324208523137515618",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws-cn:iam::123456789012:role/lambda-role",
            "awsRegion": "us-east-2",
            "eventSourceARN": "arn:aws-cn:kinesis:us-west-2:123456789012:stream/lambda-stream"
        }
    ]
}
```
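The `data` field of each record is base64-encoded. Your handler decodes it before processing, as in the following sketch:

```python
import base64

def lambda_handler(event, context):
    """Decode and print the payload of each Kinesis record."""
    payloads = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"]).decode("utf-8")
        payloads.append(payload)
        print(f"Decoded payload: {payload}")
    return payloads
```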

# Process Amazon Kinesis Data Streams records with Lambda
Create mapping

To process Amazon Kinesis Data Streams records with Lambda, create a Lambda event source mapping. You can map a Lambda function to a standard iterator or enhanced fan-out consumer. For more information, see [Polling and batching streams](with-kinesis.md#kinesis-polling-and-batching).

## Create a Kinesis event source mapping
Create an event source mapping

To invoke your Lambda function with records from your data stream, create an [event source mapping](invocation-eventsourcemapping.md). You can create multiple event source mappings to process the same data with multiple Lambda functions, or to process items from multiple data streams with a single function. When processing items from multiple streams, each batch contains records from only a single shard or stream.

You can configure event source mappings to process records from a stream in a different Amazon Web Services account. To learn more, see [Creating a cross-account event source mapping](#services-kinesis-eventsourcemapping-cross-account).

Before you create an event source mapping, you need to give your Lambda function permission to read from a Kinesis data stream. Lambda needs the following permissions to manage resources related to your Kinesis data stream:
+ [kinesis:DescribeStream](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_DescribeStream.html)
+ [kinesis:DescribeStreamSummary](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_DescribeStreamSummary.html)
+ [kinesis:GetRecords](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_GetRecords.html)
+ [kinesis:GetShardIterator](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_GetShardIterator.html)
+ [kinesis:ListShards](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_ListShards.html)
+ [kinesis:SubscribeToShard](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_SubscribeToShard.html)

The Amazon managed policy [AWSLambdaKinesisExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaKinesisExecutionRole.html) includes these permissions. Add this managed policy to your function as described in the following procedure.

**Note**  
You don't need the `kinesis:ListStreams` permission to create and manage event source mappings for Kinesis. However, if you create an event source mapping in the console and you don't have this permission, you won't be able to select a Kinesis stream from a dropdown list and the console will display an error. To create the event source mapping, you'll need to manually enter the Amazon Resource Name (ARN) of your stream.
Lambda makes `kinesis:GetRecords` and `kinesis:GetShardIterator` API calls when retrying failed invocations.

------
#### [ Amazon Web Services Management Console ]

**To add Kinesis permissions to your function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console and select your function.

1. In the **Configuration** tab, select **Permissions**.

1. In the **Execution role** pane, under **Role name**, choose the link to your function’s execution role. This link opens the page for that role in the IAM console.

1. In the **Permissions policies** pane, choose **Add permissions**, then select **Attach policies**.

1. In the search field, enter **AWSLambdaKinesisExecutionRole**.

1. Select the checkbox next to the policy and choose **Add permissions**.

------
#### [ Amazon CLI ]

**To add Kinesis permissions to your function**
+ Run the following CLI command to add the `AWSLambdaKinesisExecutionRole` policy to your function’s execution role:

  ```
  aws iam attach-role-policy \
  --role-name MyFunctionRole \
  --policy-arn arn:aws-cn:iam::aws:policy/service-role/AWSLambdaKinesisExecutionRole
  ```

------
#### [ Amazon SAM ]

**To add Kinesis permissions to your function**
+ In your function’s definition, add the `Policies` property as shown in the following example:

  ```
  Resources:
    MyFunction:
      Type: AWS::Serverless::Function
      Properties:
        CodeUri: ./my-function/
        Handler: index.handler
        Runtime: nodejs24.x
        Policies:
          - AWSLambdaKinesisExecutionRole
  ```

------

After configuring the required permissions, create the event source mapping.

------
#### [ Amazon Web Services Management Console ]

**To create the Kinesis event source mapping**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console and select your function.

1. In the **Function overview** pane, choose **Add trigger**.

1. Under **Trigger configuration**, for the source, select **Kinesis**.

1. Select the Kinesis stream you want to create the event source mapping for and, optionally, a consumer of your stream.

1. (Optional) Edit the **Batch size**, **Starting position**, and **Batch window** for your event source mapping.

1. Choose **Add**.

When creating your event source mapping from the console, your IAM role must have the [kinesis:ListStreams](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_ListStreams.html) and [kinesis:ListStreamConsumers](https://docs.amazonaws.cn/kinesis/latest/APIReference/API_ListStreamConsumers.html) permissions.

------
#### [ Amazon CLI ]

**To create the Kinesis event source mapping**
+ Run the following CLI command to create a Kinesis event source mapping. Choose your own batch size and starting position according to your use case.

  ```
  aws lambda create-event-source-mapping \
  --function-name MyFunction \
  --event-source-arn arn:aws-cn:kinesis:us-west-2:123456789012:stream/lambda-stream \
  --starting-position LATEST \
  --batch-size 100
  ```

To specify a batching window, add the `--maximum-batching-window-in-seconds` option. For more information about using this and other parameters, see [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) in the *Amazon CLI Command Reference*.

------
#### [ Amazon SAM ]

**To create the Kinesis event source mapping**
+ In your function’s definition, add the `KinesisEvent` property as shown in the following example:

  ```
  Resources:
    MyFunction:
      Type: AWS::Serverless::Function
      Properties:
        CodeUri: ./my-function/
        Handler: index.handler
        Runtime: nodejs24.x
        Policies:
          - AWSLambdaKinesisExecutionRole
        Events:
          KinesisEvent:
            Type: Kinesis
            Properties:
              Stream: !GetAtt MyKinesisStream.Arn
              StartingPosition: LATEST
              BatchSize: 100
  
    MyKinesisStream:
      Type: AWS::Kinesis::Stream
      Properties:
        ShardCount: 1
  ```

To learn more about creating an event source mapping for Kinesis Data Streams in Amazon SAM, see [Kinesis](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/sam-property-function-kinesis.html) in the *Amazon Serverless Application Model Developer Guide*.

------

## Polling and stream starting position


Be aware that stream polling during event source mapping creation and updates is eventually consistent.
+ During event source mapping creation, it may take several minutes to start polling events from the stream.
+ During event source mapping updates, it may take several minutes to stop and restart polling events from the stream.

This behavior means that if you specify `LATEST` as the starting position for the stream, the event source mapping could miss events during creation or updates. To ensure that no events are missed, specify the stream starting position as `TRIM_HORIZON` or `AT_TIMESTAMP`.
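For example, you can set the starting position explicitly when creating the mapping. The following sketch uses the Amazon SDK for Python (Boto3); the function name and stream ARN are placeholder values, and the actual API call is kept in a separate helper because it requires credentials and an existing stream:

```python
# Sketch: build a CreateEventSourceMapping request with an explicit
# starting position. Defaults to TRIM_HORIZON; pass start_time to use
# AT_TIMESTAMP instead. Function name and stream ARN are placeholders.
from datetime import datetime, timezone

def build_mapping_request(function_name, stream_arn, start_time=None):
    params = {
        "FunctionName": function_name,
        "EventSourceArn": stream_arn,
        "BatchSize": 100,
        "MaximumBatchingWindowInSeconds": 5,
    }
    if start_time is not None:
        # AT_TIMESTAMP also requires StartingPositionTimestamp
        params["StartingPosition"] = "AT_TIMESTAMP"
        params["StartingPositionTimestamp"] = start_time
    else:
        # TRIM_HORIZON starts from the oldest record still in the stream
        params["StartingPosition"] = "TRIM_HORIZON"
    return params

def create_mapping(params):
    import boto3  # deferred so the request can be built without the SDK
    return boto3.client("lambda").create_event_source_mapping(**params)

params = build_mapping_request(
    "MyFunction",
    "arn:aws-cn:kinesis:us-west-2:123456789012:stream/lambda-stream",
    start_time=datetime(2025, 1, 1, tzinfo=timezone.utc),
)
```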

## Creating a cross-account event source mapping
Cross-account mappings

Amazon Kinesis Data Streams supports [resource-based policies](https://docs.amazonaws.cn/IAM/latest/UserGuide/access_policies_identity-vs-resource.html). Because of this, you can process data ingested into a stream in one Amazon Web Services account with a Lambda function in another account.

To create an event source mapping for your Lambda function using a Kinesis stream in a different Amazon Web Services account, you must configure the stream using a resource-based policy to give your Lambda function permission to read items. To learn how to configure your stream to allow cross-account access, see [Sharing access with cross-account Amazon Lambda functions](https://docs.amazonaws.cn/streams/latest/dev/resource-based-policy-examples.html#Resource-based-policy-examples-lambda) in the *Amazon Kinesis Data Streams Developer Guide*.

Once you’ve configured your stream with a resource-based policy that gives your Lambda function the required permissions, create the event source mapping using any of the methods described in the previous section.

If you choose to create your event source mapping using the Lambda console, paste the ARN of your stream directly into the input field. If you want to specify a consumer for your stream, pasting the ARN of the consumer automatically populates the stream field.
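As an illustrative sketch only, the stream-owning account might attach a resource-based policy like the following with Boto3. The account IDs, role ARN, and action list here are placeholders; see the linked Kinesis guide for the authoritative policy examples.

```python
# Sketch: grant a Lambda execution role in another account read access
# to this account's stream. All ARNs below are made-up placeholders.
import json

STREAM_ARN = "arn:aws-cn:kinesis:us-west-2:111122223333:stream/lambda-stream"
CONSUMER_ROLE_ARN = "arn:aws-cn:iam::444455556666:role/MyFunctionRole"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountLambdaRead",
            "Effect": "Allow",
            "Principal": {"AWS": CONSUMER_ROLE_ARN},
            "Action": [
                "kinesis:DescribeStream",
                "kinesis:DescribeStreamSummary",
                "kinesis:GetRecords",
                "kinesis:GetShardIterator",
                "kinesis:ListShards",
            ],
            "Resource": STREAM_ARN,
        }
    ],
}

def attach_policy(stream_arn, policy_doc):
    import boto3  # run by the stream-owning account; requires credentials
    kinesis = boto3.client("kinesis")
    kinesis.put_resource_policy(ResourceARN=stream_arn, Policy=json.dumps(policy_doc))
```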

# Configuring partial batch response with Kinesis Data Streams and Lambda
Batch item failures

When consuming and processing streaming data from an event source, by default Lambda checkpoints to the highest sequence number of a batch only when the batch is a complete success. Lambda treats all other results as a complete failure and retries processing the batch up to the retry limit. To allow for partial successes while processing batches from a stream, turn on `ReportBatchItemFailures`. Allowing partial successes can help reduce the number of retries on a record, though it doesn't entirely eliminate the possibility of retries for a successfully processed record.

To turn on `ReportBatchItemFailures`, include the enum value **ReportBatchItemFailures** in the [FunctionResponseTypes](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-FunctionResponseTypes) list. This list indicates which response types are enabled for your function. You can configure this list when you [create](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) or [update](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) an event source mapping.

**Note**  
Even when your function code returns partial batch failure responses, these responses will not be processed by Lambda unless the `ReportBatchItemFailures` feature is explicitly turned on for your event source mapping.
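As an example with the Amazon SDK for Python (Boto3), you can turn on this response type on an existing mapping with `update_event_source_mapping`. The mapping UUID below is a placeholder, and the actual API call sits in a separate helper because it requires credentials and an existing mapping:

```python
# Sketch: enable partial batch responses on an existing event source
# mapping. The mapping UUID is a placeholder value.
def build_update_request(mapping_uuid):
    return {
        "UUID": mapping_uuid,
        "FunctionResponseTypes": ["ReportBatchItemFailures"],
    }

def apply_update(params):
    import boto3  # requires credentials and an existing mapping
    return boto3.client("lambda").update_event_source_mapping(**params)

params = build_update_request("00000000-0000-0000-0000-000000000000")
```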

## Report syntax


When you configure reporting on batch item failures, return a `StreamsEventResponse` object containing a list of batch item failures. You can use a `StreamsEventResponse` object to return the sequence number of the first failed record in the batch, or create your own custom class using the correct response syntax. The following JSON structure shows the required response syntax:

```
{ 
  "batchItemFailures": [ 
        {
            "itemIdentifier": "<SequenceNumber>"
        }
    ]
}
```

**Note**  
If the `batchItemFailures` array contains multiple items, Lambda uses the record with the lowest sequence number as the checkpoint. Lambda then retries all records starting from that checkpoint.
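To make the checkpointing behavior concrete, here is a small standalone sketch. The sequence numbers are invented, and in reality Lambda performs this selection itself from your response; your code never computes the checkpoint.

```python
# Illustration: when batchItemFailures contains multiple items, Lambda
# checkpoints at the failure with the lowest sequence number and retries
# every record from that point onward. Sequence numbers are made up.
def records_to_retry(batch, batch_item_failures):
    failed = {f["itemIdentifier"] for f in batch_item_failures}
    # Checkpoint: the reported failure with the lowest sequence number
    checkpoint = min(seq for seq in batch if seq in failed)
    # Everything from the checkpoint onward is retried
    return [seq for seq in batch if seq >= checkpoint]

batch = ["49590338100", "49590338101", "49590338102", "49590338103"]
failures = [
    {"itemIdentifier": "49590338102"},
    {"itemIdentifier": "49590338101"},
]
print(records_to_retry(batch, failures))
# → ['49590338101', '49590338102', '49590338103']
```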

## Success and failure conditions


Lambda treats a batch as a complete success if you return any of the following:
+ An empty `batchItemFailures` list
+ A null `batchItemFailures` list
+ An empty `EventResponse`
+ A null `EventResponse`

Lambda treats a batch as a complete failure if you return any of the following:
+ An empty string `itemIdentifier`
+ A null `itemIdentifier`
+ An `itemIdentifier` with a bad key name

Lambda retries failures based on your retry strategy.

## Bisecting a batch


If your invocation fails and `BisectBatchOnFunctionError` is turned on, the batch is bisected regardless of your `ReportBatchItemFailures` setting.

When a partial batch success response is received and both `BisectBatchOnFunctionError` and `ReportBatchItemFailures` are turned on, the batch is bisected at the returned sequence number and Lambda retries only the remaining records.

To simplify the implementation of partial batch response logic, consider using the [Batch Processor utility](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/) from Powertools for Amazon Lambda, which automatically handles these complexities for you.

Here are some examples of function code that return the list of failed message IDs in the batch:

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using .NET.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Text;
using System.Text.Json.Serialization;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisEvents;
using AWS.Lambda.Powertools.Logging;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace KinesisIntegration;

public class Function
{
    // Powertools Logger requires the POWERTOOLS_SERVICE_NAME
    // environment variable to be set on your function.
    [Logging(LogEvent = true)]
    public async Task<StreamsEventResponse> FunctionHandler(KinesisEvent evnt, ILambdaContext context)
    {
        if (evnt.Records.Count == 0)
        {
            Logger.LogInformation("Empty Kinesis Event received");
            return new StreamsEventResponse();
        }

        foreach (var record in evnt.Records)
        {
            try
            {
                Logger.LogInformation($"Processed Event with EventId: {record.EventId}");
                string data = await GetRecordDataAsync(record.Kinesis, context);
                Logger.LogInformation($"Data: {data}");
                // TODO: Do interesting work based on the new data
            }
            catch (Exception ex)
            {
                Logger.LogError($"An error occurred {ex.Message}");
                /* Since we are working with streams, we can return the failed item immediately.
                   Lambda will immediately begin to retry processing from this failed item onwards. */
                return new StreamsEventResponse
                {
                    BatchItemFailures = new List<StreamsEventResponse.BatchItemFailure>
                    {
                        new StreamsEventResponse.BatchItemFailure { ItemIdentifier = record.Kinesis.SequenceNumber }
                    }
                };
            }
        }
        Logger.LogInformation($"Successfully processed {evnt.Records.Count} records.");
        return new StreamsEventResponse();
    }

    private async Task<string> GetRecordDataAsync(KinesisEvent.Record record, ILambdaContext context)
    {
        byte[] bytes = record.Data.ToArray();
        string data = Encoding.UTF8.GetString(bytes);
        await Task.CompletedTask; //Placeholder for actual async work
        return data;
    }
}

public class StreamsEventResponse
{
    [JsonPropertyName("batchItemFailures")]
    public IList<BatchItemFailure> BatchItemFailures { get; set; }
    public class BatchItemFailure
    {
        [JsonPropertyName("itemIdentifier")]
        public string ItemIdentifier { get; set; }
    }
}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using Go.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"fmt"
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, kinesisEvent events.KinesisEvent) (map[string]interface{}, error) {
	batchItemFailures := []map[string]interface{}{}

	for _, record := range kinesisEvent.Records {
		// Process your record; report its sequence number if processing fails
		if err := processRecord(record); err != nil {
			batchItemFailures = append(batchItemFailures, map[string]interface{}{"itemIdentifier": record.Kinesis.SequenceNumber})
		}
	}

	kinesisBatchResponse := map[string]interface{}{
		"batchItemFailures": batchItemFailures,
	}
	return kinesisBatchResponse, nil
}

// processRecord is a placeholder for your own record-processing logic.
func processRecord(record events.KinesisEventRecord) error {
	fmt.Printf("Processing record with sequence number %s\n", record.Kinesis.SequenceNumber)
	return nil
}

func main() {
	lambda.Start(handler)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using Java.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class ProcessKinesisRecords implements RequestHandler<KinesisEvent, StreamsEventResponse> {

    @Override
    public StreamsEventResponse handleRequest(KinesisEvent input, Context context) {

        List<StreamsEventResponse.BatchItemFailure> batchItemFailures = new ArrayList<>();
        String curRecordSequenceNumber = "";

        for (KinesisEvent.KinesisEventRecord kinesisEventRecord : input.getRecords()) {
            try {
                //Process your record
                KinesisEvent.Record kinesisRecord = kinesisEventRecord.getKinesis();
                curRecordSequenceNumber = kinesisRecord.getSequenceNumber();

            } catch (Exception e) {
                /* Since we are working with streams, we can return the failed item immediately.
                   Lambda will immediately begin to retry processing from this failed item onwards. */
                batchItemFailures.add(new StreamsEventResponse.BatchItemFailure(curRecordSequenceNumber));
                return new StreamsEventResponse(batchItemFailures);
            }
        }
       
       return new StreamsEventResponse(batchItemFailures);   
    }
}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/blob/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using JavaScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
exports.handler = async (event, context) => {
  for (const record of event.Records) {
    try {
      console.log(`Processed Kinesis Event - EventID: ${record.eventID}`);
      const recordData = await getRecordDataAsync(record.kinesis);
      console.log(`Record Data: ${recordData}`);
      // TODO: Do interesting work based on the new data
    } catch (err) {
      console.error(`An error occurred ${err}`);
      /* Since we are working with streams, we can return the failed item immediately.
            Lambda will immediately begin to retry processing from this failed item onwards. */
      return {
        batchItemFailures: [{ itemIdentifier: record.kinesis.sequenceNumber }],
      };
    }
  }
  console.log(`Successfully processed ${event.Records.length} records.`);
  return { batchItemFailures: [] };
};

async function getRecordDataAsync(payload) {
  var data = Buffer.from(payload.data, "base64").toString("utf-8");
  await Promise.resolve(1); //Placeholder for actual async work
  return data;
}
```
Reporting Kinesis batch item failures with Lambda using TypeScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import {
  KinesisStreamEvent,
  Context,
  KinesisStreamHandler,
  KinesisStreamRecordPayload,
  KinesisStreamBatchResponse,
} from "aws-lambda";
import { Buffer } from "buffer";
import { Logger } from "@aws-lambda-powertools/logger";

const logger = new Logger({
  logLevel: "INFO",
  serviceName: "kinesis-stream-handler-sample",
});

export const functionHandler: KinesisStreamHandler = async (
  event: KinesisStreamEvent,
  context: Context
): Promise<KinesisStreamBatchResponse> => {
  for (const record of event.Records) {
    try {
      logger.info(`Processed Kinesis Event - EventID: ${record.eventID}`);
      const recordData = await getRecordDataAsync(record.kinesis);
      logger.info(`Record Data: ${recordData}`);
      // TODO: Do interesting work based on the new data
    } catch (err) {
      logger.error(`An error occurred ${err}`);
      /* Since we are working with streams, we can return the failed item immediately.
            Lambda will immediately begin to retry processing from this failed item onwards. */
      return {
        batchItemFailures: [{ itemIdentifier: record.kinesis.sequenceNumber }],
      };
    }
  }
  logger.info(`Successfully processed ${event.Records.length} records.`);
  return { batchItemFailures: [] };
};

async function getRecordDataAsync(
  payload: KinesisStreamRecordPayload
): Promise<string> {
  var data = Buffer.from(payload.data, "base64").toString("utf-8");
  await Promise.resolve(1); //Placeholder for actual async work
  return data;
}
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using PHP.  

```
<?php
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

# using bref/bref and bref/logger for simplicity

use Bref\Context\Context;
use Bref\Event\Kinesis\KinesisEvent;
use Bref\Event\Handler as StdHandler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler implements StdHandler
{
    private StderrLogger $logger;
    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    /**
     * @throws JsonException
     * @throws \Bref\Event\InvalidLambdaEvent
     */
    public function handle(mixed $event, Context $context): array
    {
        $kinesisEvent = new KinesisEvent($event);
        $this->logger->info("Processing records");
        $records = $kinesisEvent->getRecords();

        $failedRecords = [];
        foreach ($records as $record) {
            try {
                $data = $record->getData();
                $this->logger->info(json_encode($data));
                // TODO: Do interesting work based on the new data
            } catch (Exception $e) {
                $this->logger->error($e->getMessage());
                // failed processing the record
                $failedRecords[] = $record->getSequenceNumber();
            }
        }
        $totalRecords = count($records);
        $this->logger->info("Successfully processed $totalRecords records");

        // change format for the response
        $failures = array_map(
            fn(string $sequenceNumber) => ['itemIdentifier' => $sequenceNumber],
            $failedRecords
        );

        return [
            'batchItemFailures' => $failures
        ];
    }
}

$logger = new StderrLogger();
return new Handler($logger);
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using Python.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
def handler(event, context):
    records = event.get("Records")
    curRecordSequenceNumber = ""
    
    for record in records:
        try:
            # Process your record
            curRecordSequenceNumber = record["kinesis"]["sequenceNumber"]
        except Exception as e:
            # Return failed record's sequence number
            return {"batchItemFailures":[{"itemIdentifier": curRecordSequenceNumber}]}

    return {"batchItemFailures":[]}
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using Ruby.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
require 'aws-sdk'
require 'base64'

def lambda_handler(event:, context:)
  batch_item_failures = []

  event['Records'].each do |record|
    begin
      puts "Processed Kinesis Event - EventID: #{record['eventID']}"
      record_data = get_record_data_async(record['kinesis'])
      puts "Record Data: #{record_data}"
      # TODO: Do interesting work based on the new data
    rescue StandardError => err
      puts "An error occurred #{err}"
      # Since we are working with streams, we can return the failed item immediately.
      # Lambda will immediately begin to retry processing from this failed item onwards.
      return { batchItemFailures: [{ itemIdentifier: record['kinesis']['sequenceNumber'] }] }
    end
  end

  puts "Successfully processed #{event['Records'].length} records."
  { batchItemFailures: batch_item_failures }
end

def get_record_data_async(payload)
  data = Base64.decode64(payload['data']).force_encoding('utf-8')
  # Placeholder for actual async work
  sleep(1)
  data
end
```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda-with-batch-item-handling) repository. 
Reporting Kinesis batch item failures with Lambda using Rust.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::{
    event::kinesis::KinesisEvent,
    kinesis::KinesisEventRecord,
    streams::{KinesisBatchItemFailure, KinesisEventResponse},
};
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

async fn function_handler(event: LambdaEvent<KinesisEvent>) -> Result<KinesisEventResponse, Error> {
    let mut response = KinesisEventResponse {
        batch_item_failures: vec![],
    };

    if event.payload.records.is_empty() {
        tracing::info!("No records found. Exiting.");
        return Ok(response);
    }

    for record in &event.payload.records {
        tracing::info!(
            "EventId: {}",
            record.event_id.as_deref().unwrap_or_default()
        );

        let record_processing_result = process_record(record);

        if record_processing_result.is_err() {
            response.batch_item_failures.push(KinesisBatchItemFailure {
                item_identifier: record.kinesis.sequence_number.clone(),
            });
            /* Since we are working with streams, we can return the failed item immediately.
            Lambda will immediately begin to retry processing from this failed item onwards. */
            return Ok(response);
        }
    }

    tracing::info!(
        "Successfully processed {} records",
        event.payload.records.len()
    );

    Ok(response)
}

fn process_record(record: &KinesisEventRecord) -> Result<(), Error> {
    let record_data = std::str::from_utf8(record.kinesis.data.as_slice());

    if let Some(err) = record_data.err() {
        tracing::error!("Error: {}", err);
        return Err(Error::from(err));
    }

    let record_data = record_data.unwrap_or_default();

    // do something interesting with the data
    tracing::info!("Data: {}", record_data);

    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
```

------

## Using Powertools for Amazon Lambda batch processor


The batch processor utility from Powertools for Amazon Lambda automatically handles partial batch response logic, reducing the complexity of implementing batch failure reporting. Here are examples using the batch processor:

**Python**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/).
Processing Kinesis Data Streams stream records with Amazon Lambda batch processor.  

```
import json
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType, process_partial_response
from aws_lambda_powertools.utilities.data_classes import KinesisEvent
from aws_lambda_powertools.utilities.typing import LambdaContext

processor = BatchProcessor(event_type=EventType.KinesisDataStreams)
logger = Logger()

def record_handler(record):
    logger.info(record)
    # Your business logic here
    # Raise an exception to mark this record as failed
    
def lambda_handler(event, context: LambdaContext):
    return process_partial_response(
        event=event, 
        record_handler=record_handler, 
        processor=processor,
        context=context
    )
```

**TypeScript**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.aws.amazon.com/powertools/typescript/latest/features/batch/).
Processing Kinesis Data Streams stream records with Amazon Lambda batch processor.  

```
import { BatchProcessor, EventType, processPartialResponse } from '@aws-lambda-powertools/batch';
import { Logger } from '@aws-lambda-powertools/logger';
import type { KinesisStreamEvent, KinesisStreamRecord, Context } from 'aws-lambda';

const processor = new BatchProcessor(EventType.KinesisDataStreams);
const logger = new Logger();

const recordHandler = async (record: KinesisStreamRecord): Promise<void> => {
    logger.info('Processing record', { record });
    // Your business logic here
    // Throw an error to mark this record as failed
};

export const handler = async (event: KinesisStreamEvent, context: Context) => {
    return processPartialResponse(event, recordHandler, processor, {
        context,
    });
};
```

**Java**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.powertools.aws.dev/lambda/java/latest/utilities/batch/).
Processing Kinesis Data Streams stream records with Amazon Lambda batch processor.  

```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
import com.amazonaws.services.lambda.runtime.events.StreamsEventResponse;
import software.amazon.lambda.powertools.batch.BatchMessageHandlerBuilder;
import software.amazon.lambda.powertools.batch.handler.BatchMessageHandler;

public class KinesisStreamBatchHandler implements RequestHandler<KinesisEvent, StreamsEventResponse> {

    private final BatchMessageHandler<KinesisEvent, StreamsEventResponse> handler;

    public KinesisStreamBatchHandler() {
        handler = new BatchMessageHandlerBuilder()
                .withKinesisBatchHandler()
                .buildWithRawMessageHandler(this::processMessage);
    }

    @Override
    public StreamsEventResponse handleRequest(KinesisEvent kinesisEvent, Context context) {
        return handler.processBatch(kinesisEvent, context);
    }

    private void processMessage(KinesisEvent.KinesisEventRecord kinesisEventRecord, Context context) {
        // Process the stream record
    }
}
```

**.NET**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.aws.amazon.com/powertools/dotnet/utilities/batch-processing/).
Processing Kinesis Data Streams stream records with Amazon Lambda batch processor.  

```
using System;
using System.Threading;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisEvents;
using Amazon.Lambda.Serialization.SystemTextJson;
using AWS.Lambda.Powertools.BatchProcessing;

[assembly: LambdaSerializer(typeof(DefaultLambdaJsonSerializer))]

namespace HelloWorld;

public class OrderEvent
{
    public string? OrderId { get; set; }
    public string? CustomerId { get; set; }
    public decimal Amount { get; set; }
    public DateTime OrderDate { get; set; }
}

internal class TypedKinesisRecordHandler : ITypedRecordHandler<OrderEvent> 
{
    public async Task<RecordHandlerResult> HandleAsync(OrderEvent orderEvent, CancellationToken cancellationToken)
    {
        if (string.IsNullOrEmpty(orderEvent.OrderId)) 
        {
            throw new ArgumentException("Order ID is required");
        }

        return await Task.FromResult(RecordHandlerResult.None); 
    }
}

public class Function
{
    [BatchProcessor(TypedRecordHandler = typeof(TypedKinesisRecordHandler))]
    public BatchItemFailuresResponse HandlerUsingTypedAttribute(KinesisEvent _)
    {
        return TypedKinesisStreamBatchProcessor.Result.BatchItemFailuresResponse; 
    }
}
```

# Retain discarded batch records for a Kinesis Data Streams event source in Lambda
Error handling

Error handling for Kinesis event source mappings depends on whether the error occurs before the function is invoked or during function invocation:
+ **Before invocation:** If a Lambda event source mapping is unable to invoke the function due to throttling or other issues, it retries until the records expire or exceed the maximum age configured on the event source mapping ([MaximumRecordAgeInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRecordAgeInSeconds)).
+ **During invocation:** If the function is invoked but returns an error, Lambda retries until the records expire, exceed the maximum age ([MaximumRecordAgeInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRecordAgeInSeconds)), or reach the configured retry quota ([MaximumRetryAttempts](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRetryAttempts)). For function errors, you can also configure [BisectBatchOnFunctionError](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-response-BisectBatchOnFunctionError), which splits a failed batch into two smaller batches, isolating bad records and avoiding timeouts. Splitting batches doesn't consume the retry quota.

If the error handling measures fail, Lambda discards the records and continues processing batches from the stream. With the default settings, this means that a bad record can block processing on the affected shard for up to one week. To avoid this, configure your function's event source mapping with a reasonable number of retries and a maximum record age that fits your use case.
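The interaction between the maximum record age and the retry quota can be sketched in Python. This is a simplified illustration of the discard decision, not Lambda's internal implementation; `should_discard` is a hypothetical helper.

```python
def should_discard(record_age_seconds, retry_attempts,
                   maximum_record_age=-1, maximum_retry_attempts=-1):
    """Simplified illustration of when Lambda gives up on a failing record.

    -1 means "no limit" for both settings, matching the event source
    mapping defaults.
    """
    too_old = (maximum_record_age != -1
               and record_age_seconds > maximum_record_age)
    # Retries used so far have reached the configured quota
    out_of_retries = (maximum_retry_attempts != -1
                      and retry_attempts >= maximum_retry_attempts)
    return too_old or out_of_retries

# With the defaults (-1/-1), a failing record is retried until it
# expires from the stream itself:
assert should_discard(86_400, 1_000) is False

# With a one-hour maximum record age, the same record is discarded:
assert should_discard(86_400, 1_000, maximum_record_age=3600) is True
```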

## Configuring destinations for failed invocations
On-failure destinations

To retain records of failed event source mapping invocations, add a destination to your function's event source mapping. Each record sent to the destination is a JSON document containing metadata about the failed invocation. For Amazon S3 destinations, Lambda also sends the entire invocation record along with the metadata. You can configure an Amazon SNS topic, Amazon SQS queue, Amazon S3 bucket, or Kafka topic as a destination.

With Amazon S3 destinations, you can use the [Amazon S3 Event Notifications](https://docs.amazonaws.cn/) feature to receive notifications when objects are uploaded to your destination S3 bucket. You can also configure S3 Event Notifications to invoke another Lambda function to perform automated processing on failed batches.

Your execution role must have permissions for the destination:
+ **For an SQS destination:** [sqs:SendMessage](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html)
+ **For an SNS destination:** [sns:Publish](https://docs.amazonaws.cn/sns/latest/api/API_Publish.html)
+ **For an S3 destination:** [s3:PutObject](https://docs.amazonaws.cn/AmazonS3/latest/API/API_PutObject.html) and [s3:ListBucket](https://docs.amazonaws.cn/AmazonS3/latest/API/ListObjectsV2.html)
+ **For a Kafka destination:** [kafka-cluster:WriteData](https://docs.aws.amazon.com/msk/latest/developerguide/kafka-actions.html)

You can configure a Kafka topic as an on-failure destination for your Kafka event source mappings. When Lambda can't process records after exhausting retry attempts, or when records exceed the maximum age, Lambda sends the failed records to the specified Kafka topic for later processing. For more information, see [Using a Kafka topic as an on-failure destination](kafka-on-failure-destination.md).

If you've enabled encryption with your own KMS key for an S3 destination, your function's execution role must also have permission to call [kms:GenerateDataKey](https://docs.amazonaws.cn/kms/latest/APIReference/API_GenerateDataKey.html). If the KMS key and S3 bucket destination are in a different account from your Lambda function and execution role, configure the KMS key to trust the execution role to allow kms:GenerateDataKey.

To configure an on-failure destination using the console, follow these steps:

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Under **Function overview**, choose **Add destination**.

1. For **Source**, choose **Event source mapping invocation**.

1. For **Event source mapping**, choose an event source that's configured for this function.

1. For **Condition**, select **On failure**. For event source mapping invocations, this is the only accepted condition.

1. For **Destination type**, choose the destination type that Lambda sends invocation records to.

1. For **Destination**, choose a resource.

1. Choose **Save**.

You can also configure an on-failure destination using the Amazon Command Line Interface (Amazon CLI). For example, the following [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) command adds an event source mapping with an SQS on-failure destination to `MyFunction`:

```
aws lambda create-event-source-mapping \
--function-name "MyFunction" \
--event-source-arn arn:aws-cn:kinesis:us-west-2:123456789012:stream/lambda-stream \
--destination-config '{"OnFailure": {"Destination": "arn:aws-cn:sqs:us-east-1:123456789012:dest-queue"}}'
```

The following [update-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-event-source-mapping.html) command updates an event source mapping to send failed invocation records to an SNS destination after two retry attempts, or if the records are more than an hour old.

```
aws lambda update-event-source-mapping \
--uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b \
--maximum-retry-attempts 2 \
--maximum-record-age-in-seconds 3600 \
--destination-config '{"OnFailure": {"Destination": "arn:aws-cn:sns:us-east-1:123456789012:dest-topic"}}'
```

Updated settings are applied asynchronously and aren't reflected in the output until the process completes. Use the [get-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/get-event-source-mapping.html) command to view the current status.

To remove a destination, supply an empty string as the argument to the `destination-config` parameter:

```
aws lambda update-event-source-mapping \
--uuid f89f8514-cdd9-4602-9e1f-01a5b77d449b \
--destination-config '{"OnFailure": {"Destination": ""}}'
```

### Security best practices for Amazon S3 destinations


Deleting an S3 bucket that's configured as a destination without removing the destination from your function's configuration can create a security risk. If another user knows your destination bucket's name, they can recreate the bucket in their Amazon Web Services account. Records of failed invocations will be sent to their bucket, potentially exposing data from your function.

**Warning**  
To ensure that invocation records from your function can't be sent to an S3 bucket in another Amazon Web Services account, add a condition to your function's execution role that limits `s3:PutObject` permissions to buckets in your account. 

The following example shows an IAM policy that limits your function's `s3:PutObject` permissions to buckets in your account. This policy also gives Lambda the `s3:ListBucket` permission it needs to use an S3 bucket as a destination.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3BucketResourceAccountWrite",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*/*",
                "arn:aws:s3:::*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:ResourceAccount": "111122223333"
                }
            }
        }
    ]
}
```

To add a permissions policy to your function's execution role using the Amazon Web Services Management Console or Amazon CLI, refer to the instructions in the following procedures:

------
#### [ Console ]

**To add a permissions policy to a function's execution role (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the Lambda function whose execution role you want to modify.

1. In the **Configuration** tab, select **Permissions**.

1. In the **Execution role** tab, select your function's **Role name** to open the role's IAM console page.

1. Add a permissions policy to the role by doing the following:

   1. In the **Permissions policies** pane, choose **Add permissions** and select **Create inline policy**.

   1. In **Policy editor**, select **JSON**.

   1. Paste the policy you want to add into the editor (replacing the existing JSON), and then choose **Next**.

   1. Under **Policy details**, enter a **Policy name**.

   1. Choose **Create policy**.

------
#### [ Amazon CLI ]

**To add a permissions policy to a function's execution role (CLI)**

1. Create a JSON policy document with the required permissions and save it in a local directory.

1. Use the IAM `put-role-policy` CLI command to add the permissions to your function's execution role. Run the following command from the directory you saved your JSON policy document in and replace the role name, policy name, and policy document with your own values.

   ```
   aws iam put-role-policy \
   --role-name my_lambda_role \
   --policy-name LambdaS3DestinationPolicy \
   --policy-document file://my_policy.json
   ```

------

### Example Amazon SNS and Amazon SQS invocation record


The following example shows what Lambda sends to an SQS queue or SNS topic for a failed Kinesis event source invocation. Because Lambda sends only the metadata for these destination types, use the `streamArn`, `shardId`, `startSequenceNumber`, and `endSequenceNumber` fields to obtain the full original record. All of the fields shown in the `KinesisBatchInfo` property will always be present.

```
{
    "requestContext": {
        "requestId": "c9b8fa9f-5a7f-xmpl-af9c-0c604cde93a5",
        "functionArn": "arn:aws-cn:lambda:us-west-2:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:38:06.021Z",
    "KinesisBatchInfo": {
        "shardId": "shardId-000000000001",
        "startSequenceNumber": "49601189658422359378836298521827638475320189012309704722",
        "endSequenceNumber": "49601189658422359378836298522902373528957594348623495186",
        "approximateArrivalOfFirstRecord": "2019-11-14T00:38:04.835Z",
        "approximateArrivalOfLastRecord": "2019-11-14T00:38:05.580Z",
        "batchSize": 500,
        "streamArn": "arn:aws-cn:kinesis:us-west-2:123456789012:stream/mystream"
    }
}
```

You can use this information to retrieve the affected records from the stream for troubleshooting. The actual records aren't included, so you must process this record and retrieve them from the stream before they expire and are lost.
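The metadata fields map naturally onto the parameters of the Kinesis `GetShardIterator` API, which is the first step in retrieving the affected records. The following sketch builds those parameters from an invocation record; `shard_iterator_args` is a hypothetical helper, and error handling is omitted.

```python
def shard_iterator_args(invocation_record):
    """Map a failed-invocation record's metadata onto GetShardIterator
    parameters, positioned at the first record of the failed batch."""
    info = invocation_record["KinesisBatchInfo"]
    return {
        # The stream name is the last path segment of the stream ARN
        "StreamName": info["streamArn"].rsplit("/", 1)[-1],
        "ShardId": info["shardId"],
        "ShardIteratorType": "AT_SEQUENCE_NUMBER",
        "StartingSequenceNumber": info["startSequenceNumber"],
    }

# A trimmed version of the invocation record shown above
record = {
    "KinesisBatchInfo": {
        "shardId": "shardId-000000000001",
        "startSequenceNumber": "49601189658422359378836298521827638475320189012309704722",
        "streamArn": "arn:aws-cn:kinesis:us-west-2:123456789012:stream/mystream",
    }
}
args = shard_iterator_args(record)
assert args["StreamName"] == "mystream"
```

You could then pass these keyword arguments to a Kinesis client's `get_shard_iterator` call and page through `get_records` until you pass `endSequenceNumber`.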

### Example Amazon S3 invocation record


The following example shows what Lambda sends to an Amazon S3 bucket for a failed Kinesis event source invocation. In addition to all of the fields from the previous example for SQS and SNS destinations, the `payload` field contains the original invocation record as an escaped JSON string.

```
{
    "requestContext": {
        "requestId": "c9b8fa9f-5a7f-xmpl-af9c-0c604cde93a5",
        "functionArn": "arn:aws:lambda:us-west-2:123456789012:function:myfunction",
        "condition": "RetryAttemptsExhausted",
        "approximateInvokeCount": 1
    },
    "responseContext": {
        "statusCode": 200,
        "executedVersion": "$LATEST",
        "functionError": "Unhandled"
    },
    "version": "1.0",
    "timestamp": "2019-11-14T00:38:06.021Z",
    "KinesisBatchInfo": {
        "shardId": "shardId-000000000001",
        "startSequenceNumber": "49601189658422359378836298521827638475320189012309704722",
        "endSequenceNumber": "49601189658422359378836298522902373528957594348623495186",
        "approximateArrivalOfFirstRecord": "2019-11-14T00:38:04.835Z",
        "approximateArrivalOfLastRecord": "2019-11-14T00:38:05.580Z",
        "batchSize": 500,
        "streamArn": "arn:aws:kinesis:us-west-2:123456789012:stream/mystream"
    },
    "payload": "<Whole Event>" // Only available in S3
}
```
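Because the `payload` field is an escaped JSON string rather than a nested object, reading it back takes two parses: one for the wrapper document and one for the payload itself. A minimal sketch, using a placeholder payload body:

```python
import json

# A trimmed S3 invocation record; payload holds the original invocation
# event as an escaped JSON string (placeholder body shown here).
raw = '{"version": "1.0", "payload": "{\\"Records\\": []}"}'

invocation_record = json.loads(raw)                        # first parse: the wrapper
original_event = json.loads(invocation_record["payload"])  # second parse: the payload
assert original_event == {"Records": []}
```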

The S3 object containing the invocation record uses the following naming convention:

```
aws/lambda/<ESM-UUID>/<shardID>/YYYY/MM/DD/YYYY-MM-DDTHH.MM.SS-<Random UUID>
```
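The convention can be reproduced with standard datetime formatting, which is useful when building S3 key prefixes to list or filter failed batches. `build_destination_key` is a hypothetical helper for illustration:

```python
from datetime import datetime, timezone

def build_destination_key(esm_uuid, shard_id, ts, random_uuid):
    """Reproduce the S3 destination key layout shown above."""
    return (f"aws/lambda/{esm_uuid}/{shard_id}/"
            f"{ts:%Y/%m/%d}/{ts:%Y-%m-%dT%H.%M.%S}-{random_uuid}")

key = build_destination_key(
    "f89f8514-cdd9-4602-9e1f-01a5b77d449b",   # event source mapping UUID
    "shardId-000000000001",
    datetime(2019, 11, 14, 0, 38, 6, tzinfo=timezone.utc),
    "6f4d3a91-0000-0000-0000-000000000000",   # random suffix (illustrative)
)
assert "/2019/11/14/2019-11-14T00.38.06-" in key
```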

# Implementing stateful Kinesis Data Streams processing in Lambda
Stateful processing

Lambda functions can run continuous stream processing applications. A stream represents unbounded data that flows continuously through your application. To analyze information from this continuously updating input, you can bound the included records using a window defined in terms of time.

Tumbling windows are distinct time windows that open and close at regular intervals. By default, Lambda invocations are stateless: you cannot use them for processing data across multiple continuous invocations without an external database. However, with tumbling windows, you can maintain your state across invocations. This state contains the aggregate result of the messages previously processed for the current window. Your state can be a maximum of 1 MB per shard. If it exceeds that size, Lambda terminates the window early.

Each record in a stream belongs to a specific window. Lambda will process each record at least once, but doesn't guarantee that each record will be processed only once. In rare cases, such as error handling, some records might be processed more than once. Records are always processed in order the first time. If records are processed more than once, they might be processed out of order.

## Aggregation and processing


Your user-managed function is invoked both for aggregation and for processing the final results of that aggregation. Lambda aggregates all records received in the window. You can receive these records in multiple batches, each as a separate invocation. Each invocation receives a state. Thus, when using tumbling windows, your Lambda function response must contain a `state` property. If the response does not contain a `state` property, Lambda considers this a failed invocation. To satisfy this condition, your function can return a `TimeWindowEventResponse` object, which has the following JSON shape:

**Example `TimeWindowEventResponse` values**  

```
{
    "state": {
        "1": 282,
        "2": 715
    },
    "batchItemFailures": []
}
```

**Note**  
For Java functions, we recommend using a `Map<String, String>` to represent the state.

At the end of the window, Lambda sets the `isFinalInvokeForWindow` flag to `true` to indicate that this is the final state and that it's ready for processing. After the final invocation completes, the window closes and the state is dropped.

At the end of your window, Lambda invokes your function one final time to act on the aggregation results. This final invocation is synchronous. After a successful invocation, your function checkpoints the sequence number and stream processing continues. If the invocation is unsuccessful, your Lambda function suspends further processing until a successful invocation occurs.

**Example KinesisTimeWindowEvent**  

```
{
    "Records": [
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "1",
                "sequenceNumber": "49590338271490256608559692538361571095921575989136588898",
                "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
                "approximateArrivalTimestamp": 1607497475.000
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000006:49590338271490256608559692538361571095921575989136588898",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws-cn:iam::123456789012:role/lambda-kinesis-role",
            "awsRegion": "us-east-1",
            "eventSourceARN": "arn:aws-cn:kinesis:us-east-1:123456789012:stream/lambda-stream"
        }
    ],
    "window": {
        "start": "2020-12-09T07:04:00Z",
        "end": "2020-12-09T07:06:00Z"
    },
    "state": {
        "1": 282,
        "2": 715
    },
    "shardId": "shardId-000000000006",
    "eventSourceARN": "arn:aws-cn:kinesis:us-east-1:123456789012:stream/lambda-stream",
    "isFinalInvokeForWindow": false,
    "isWindowTerminatedEarly": false
}
```

## Configuration


You can configure tumbling windows when you create or update an event source mapping. To configure a tumbling window, specify the window in seconds ([TumblingWindowInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-TumblingWindowInSeconds)). The following example Amazon Command Line Interface (Amazon CLI) command creates a streaming event source mapping that has a tumbling window of 120 seconds. The Lambda function defined for aggregation and processing is named `tumbling-window-example-function`.

```
aws lambda create-event-source-mapping \
--event-source-arn arn:aws-cn:kinesis:us-east-1:123456789012:stream/lambda-stream \
--function-name tumbling-window-example-function \
--starting-position TRIM_HORIZON \
--tumbling-window-in-seconds 120
```

Lambda determines tumbling window boundaries based on the time when records were inserted into the stream. All records have an approximate timestamp available that Lambda uses in boundary determinations.
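One way to picture the alignment: window boundaries fall on multiples of the window length, so a record's window can be derived from its approximate arrival timestamp. The following sketch mirrors, but is not, Lambda's internal computation:

```python
def window_for(arrival_ts, window_seconds):
    """Return the (start, end) epoch seconds of the tumbling window a
    record falls into, assuming windows align to multiples of the
    window length. Illustration only, not Lambda's internal logic."""
    start = int(arrival_ts) - (int(arrival_ts) % window_seconds)
    return start, start + window_seconds

# The example record above arrived at ~1607497475 (2020-12-09T07:04:35Z);
# with a 120-second window it falls into the 07:04:00-07:06:00 window.
start, end = window_for(1607497475.000, 120)
assert (start, end) == (1607497440, 1607497560)
```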

Tumbling window aggregations do not support resharding. When a shard ends, Lambda considers the current window to be closed, and any child shards will start their own window in a fresh state. When no new records are being added to the current window, Lambda waits for up to 2 minutes before assuming that the window is over. This helps ensure that the function reads all records in the current window, even if the records are added intermittently.

Tumbling windows fully support the existing retry policies `maxRetryAttempts` and `maxRecordAge`.

**Example Handler.py – Aggregation and processing**  
The following Python function demonstrates how to aggregate and then process your final state:  

```
def lambda_handler(event, context):
    print('Incoming event: ', event)
    print('Incoming state: ', event['state'])

    # Check if this is the end of the window to either aggregate or process.
    if event['isFinalInvokeForWindow']:
        # Logic to handle the final state of the window
        print('Destination invoke')
    else:
        print('Aggregate invoke')

    # Check for early terminations
    if event['isWindowTerminatedEarly']:
        print('Window terminated early')

    # Aggregation logic: count records per partition key
    state = event['state']
    for record in event['Records']:
        state[record['kinesis']['partitionKey']] = state.get(record['kinesis']['partitionKey'], 0) + 1

    print('Returning state: ', state)
    return {'state': state}
```

# Lambda parameters for Amazon Kinesis Data Streams event source mappings
Parameters

All Lambda event source mappings share the same [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) and [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) API operations. However, only some of the parameters apply to Kinesis.


| Parameter | Required | Default | Notes | 
| --- | --- | --- | --- | 
|  [BatchSize](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-BatchSize)  |  N  |  100  |  Maximum: 10,000  | 
|  [BisectBatchOnFunctionError](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-BisectBatchOnFunctionError)  |  N  |  false  |  none | 
|  [DestinationConfig](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-DestinationConfig)  |  N  | N/A |  Amazon SQS queue or Amazon SNS topic destination for discarded records. For more information, see [Configuring destinations for failed invocations](kinesis-on-failure-destination.md#kinesis-on-failure-destination-console).  | 
|  [Enabled](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-Enabled)  |  N  |  true  |  none | 
|  [EventSourceArn](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-EventSourceArn)  |  Y  | N/A |  ARN of the data stream or a stream consumer  | 
|  [FunctionName](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-FunctionName)  |  Y  | N/A |  none | 
|  [FunctionResponseTypes](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-FunctionResponseTypes)  |  N  |  N/A |  To let your function report specific failures in a batch, include the value `ReportBatchItemFailures` in `FunctionResponseTypes`. For more information, see [Configuring partial batch response with Kinesis Data Streams and Lambda](services-kinesis-batchfailurereporting.md).  | 
|  [MaximumBatchingWindowInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumBatchingWindowInSeconds)  |  N  |  0  |  none | 
|  [MaximumRecordAgeInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRecordAgeInSeconds)  |  N  |  -1  |  -1 means infinite: Lambda doesn't discard records ([Kinesis Data Streams data retention settings](https://docs.amazonaws.cn/streams/latest/dev/kinesis-extended-retention.html) still apply) Minimum: -1 Maximum: 604,800  | 
|  [MaximumRetryAttempts](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-MaximumRetryAttempts)  |  N  |  -1  |  -1 means infinite: failed records are retried until the record expires Minimum: -1 Maximum: 10,000  | 
|  [ParallelizationFactor](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-ParallelizationFactor)  |  N  |  1  |  Maximum: 10  | 
|  [StartingPosition](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-StartingPosition)  |  Y  |  N/A |  AT\_TIMESTAMP, TRIM\_HORIZON, or LATEST  | 
|  [StartingPositionTimestamp](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-StartingPositionTimestamp)  |  N  |  N/A |  Only valid if StartingPosition is set to AT\_TIMESTAMP. The time from which to start reading, in Unix time seconds  | 
|  [TumblingWindowInSeconds](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html#lambda-CreateEventSourceMapping-request-TumblingWindowInSeconds)  |  N  |  N/A |  Minimum: 0 Maximum: 900  | 

# Using event filtering with a Kinesis event source
Event filtering

You can use event filtering to control which records from a stream or queue Lambda sends to your function. For general information about how event filtering works, see [Control which events Lambda sends to your function](invocation-eventfiltering.md).

This section focuses on event filtering for Kinesis event sources.

**Note**  
Kinesis event source mappings only support filtering on the `data` key.

**Topics**
+ [Kinesis event filtering basics](#filtering-kinesis)
+ [Filtering Kinesis aggregated records](#filtering-kinesis-efo)

## Kinesis event filtering basics


Suppose a producer is putting JSON formatted data into your Kinesis data stream. An example record would look like the following, with the JSON data converted to a Base64 encoded string in the `data` field.

```
{
    "kinesis": {
        "kinesisSchemaVersion": "1.0",
        "partitionKey": "1",
        "sequenceNumber": "49590338271490256608559692538361571095921575989136588898",
        "data": "eyJSZWNvcmROdW1iZXIiOiAiMDAwMSIsICJUaW1lU3RhbXAiOiAieXl5eS1tbS1kZFRoaDptbTpzcyIsICJSZXF1ZXN0Q29kZSI6ICJBQUFBIn0=",
        "approximateArrivalTimestamp": 1545084650.987
    },
    "eventSource": "aws:kinesis",
    "eventVersion": "1.0",
    "eventID": "shardId-000000000006:49590338271490256608559692538361571095921575989136588898",
    "eventName": "aws:kinesis:record",
    "invokeIdentityArn": "arn:aws:iam::123456789012:role/lambda-role",
    "awsRegion": "us-east-2",
    "eventSourceARN": "arn:aws:kinesis:us-east-2:123456789012:stream/lambda-stream"
}
```
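To see what the `data` field in this example carries, decode it from Base64 and parse the result as JSON. A minimal sketch of how a consumer (or your own filter testing) would do this:

```python
import base64
import json

record = {
    "kinesis": {
        "data": "eyJSZWNvcmROdW1iZXIiOiAiMDAwMSIsICJUaW1lU3RhbXAiOiAieXl5eS1tbS1kZFRoaDptbTpzcyIsICJSZXF1ZXN0Q29kZSI6ICJBQUFBIn0=",
    }
}

# Base64-decode the data field, then parse the producer's JSON
payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
assert payload == {
    "RecordNumber": "0001",
    "TimeStamp": "yyyy-mm-ddThh:mm:ss",
    "RequestCode": "AAAA",
}
```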

As long as the data the producer puts into the stream is valid JSON, you can use event filtering to filter records using the `data` key. Suppose a producer is putting records into your Kinesis stream in the following JSON format.

```
{
    "record": 12345,
    "order": {
        "type": "buy",
        "stock": "ANYCO",
        "quantity": 1000
        }
}
```

To filter only those records where the order type is “buy,” the `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"data\" : { \"order\" : { \"type\" : [ \"buy\" ] } } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON. 

```
{
    "data": {
        "order": {
            "type": [ "buy" ]
        }
    }
}
```
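Rather than hand-escaping the quotes in the `Pattern` string, you can build the pattern from a plain data structure and let a JSON serializer produce the escaped form. A sketch in Python:

```python
import json

# The filter pattern as a plain dict; json.dumps produces the escaped
# string form that FilterCriteria expects.
pattern = {"data": {"order": {"type": ["buy"]}}}
filter_criteria = {"Filters": [{"Pattern": json.dumps(pattern)}]}

assert filter_criteria["Filters"][0]["Pattern"] == \
    '{"data": {"order": {"type": ["buy"]}}}'
```

The resulting `filter_criteria` dict matches the shape of the `FilterCriteria` parameter shown in the CLI examples that follow.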

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "data" : { "order" : { "type" : [ "buy" ] } } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:kinesis:us-east-2:123456789012:stream/my-stream \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"order\" : { \"type\" : [ \"buy\" ] } } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"order\" : { \"type\" : [ \"buy\" ] } } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "data" : { "order" : { "type" : [ "buy" ] } } }'
```

------

To properly filter events from Kinesis sources, both the data field and your filter criteria for the data field must be in valid JSON format. If either field isn't in a valid JSON format, Lambda drops the message or throws an exception. The following table summarizes the specific behavior: 


| Incoming data format | Filter pattern format for data properties | Resulting action | 
| --- | --- | --- | 
|  Valid JSON  |  Valid JSON  |  Lambda filters based on your filter criteria.  | 
|  Valid JSON  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  Non-JSON  |  Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.  | 
|  Non-JSON  |  Valid JSON  |  Lambda drops the record.  | 
|  Non-JSON  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Non-JSON  |  Non-JSON  |  Lambda throws an exception at the time of the event source mapping creation or update. The filter pattern for data properties must be in a valid JSON format.  | 
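The table's decision logic can be summarized in a few lines of Python. This is an illustration of the documented behavior, not Lambda's implementation; `data_filter_behavior` is a hypothetical helper.

```python
import json

def data_filter_behavior(data_str, data_pattern_str=None):
    """Return the action from the table above for a given combination of
    record data and data-property filter pattern."""
    if data_pattern_str is not None:
        try:
            json.loads(data_pattern_str)
        except ValueError:
            # Rejected when the event source mapping is created or updated
            return "exception"
    if data_pattern_str is None:
        return "filter on metadata only"
    try:
        json.loads(data_str)
    except ValueError:
        return "drop record"
    return "filter on data"

assert data_filter_behavior('{"order": {}}', '{"order": {"type": ["buy"]}}') == "filter on data"
assert data_filter_behavior('not json', '{"order": {"type": ["buy"]}}') == "drop record"
```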

## Filtering Kinesis aggregated records


With Kinesis, you can aggregate multiple records into a single Kinesis Data Streams record to increase your data throughput. Lambda can only apply filter criteria to aggregated records when you use Kinesis [enhanced fan-out](https://docs.amazonaws.cn/streams/latest/dev/enhanced-consumers.html). Filtering aggregated records with standard Kinesis isn't supported. When using enhanced fan-out, you configure a Kinesis dedicated-throughput consumer to act as the trigger for your Lambda function. Lambda then filters the aggregated records and passes only those records that meet your filter criteria.

To learn more about Kinesis record aggregation, refer to the [Aggregation](https://docs.amazonaws.cn/streams/latest/dev/kinesis-kpl-concepts.html#kinesis-kpl-concepts-aggretation) section on the Kinesis Producer Library (KPL) Key Concepts page. To learn more about using Lambda with Kinesis enhanced fan-out, see [Increasing real-time stream processing performance with Amazon Kinesis Data Streams enhanced fan-out and Amazon Lambda](https://amazonaws-china.com/blogs/compute/increasing-real-time-stream-processing-performance-with-amazon-kinesis-data-streams-enhanced-fan-out-and-aws-lambda/) on the Amazon compute blog.

# Tutorial: Using Lambda with Kinesis Data Streams
Tutorial

In this tutorial, you create a Lambda function to consume events from an Amazon Kinesis data stream.

1. A custom app writes records to the stream.

1. Amazon Lambda polls the stream and, when it detects new records in the stream, invokes your Lambda function.

1. Amazon Lambda runs the Lambda function by assuming the execution role you specified at the time you created the Lambda function.

## Prerequisites


### Install the Amazon Command Line Interface


If you have not yet installed the Amazon Command Line Interface, follow the steps at [Installing or updating the latest version of the Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. On Linux and macOS, use your preferred shell and package manager.

**Note**  
On Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). 

## Create the execution role


Create the [execution role](lambda-intro-execution-role.md) that gives your function permission to access Amazon resources.

**To create an execution role**

1. Open the [roles page](https://console.amazonaws.cn/iam/home#/roles) in the IAM console.

1. Choose **Create role**.

1. Create a role with the following properties.
   + **Trusted entity** – **Amazon Lambda**.
   + **Permissions** – **AWSLambdaKinesisExecutionRole**.
   + **Role name** – **lambda-kinesis-role**.

The **AWSLambdaKinesisExecutionRole** policy has the permissions that the function needs to read items from Kinesis and write logs to CloudWatch Logs.
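When you choose **Amazon Lambda** as the trusted entity, the console also attaches a trust policy that allows the Lambda service to assume the role. For reference, the following Python sketch builds and prints that standard trust policy document (shown purely for illustration; the console creates it for you):

```python
import json

# The trust policy the console attaches when you choose "Amazon Lambda" as
# the trusted entity. It allows the Lambda service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=4))
```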

## Create the function


Create a Lambda function that processes your Kinesis messages. The function code logs the event ID and event data of the Kinesis record to CloudWatch Logs.

This tutorial uses the Node.js 24 runtime, but we've also provided example code in other runtime languages. You can select the tab in the following box to see code for the runtime you're interested in. The JavaScript code you'll use in this step is in the first example shown in the **JavaScript** tab.

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using .NET.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisEvents;
using AWS.Lambda.Powertools.Logging;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace KinesisIntegrationSampleCode;

public class Function
{
    // Powertools Logger requires the POWERTOOLS_SERVICE_NAME environment
    // variable to be set on your function.
    [Logging(LogEvent = true)]
    public async Task FunctionHandler(KinesisEvent evnt, ILambdaContext context)
    {
        if (evnt.Records.Count == 0)
        {
            Logger.LogInformation("Empty Kinesis Event received");
            return;
        }

        foreach (var record in evnt.Records)
        {
            try
            {
                Logger.LogInformation($"Processed Event with EventId: {record.EventId}");
                string data = await GetRecordDataAsync(record.Kinesis, context);
                Logger.LogInformation($"Data: {data}");
                // TODO: Do interesting work based on the new data
            }
            catch (Exception ex)
            {
                Logger.LogError($"An error occurred {ex.Message}");
                throw;
            }
        }
        Logger.LogInformation($"Successfully processed {evnt.Records.Count} records.");
    }

    private async Task<string> GetRecordDataAsync(KinesisEvent.Record record, ILambdaContext context)
    {
        byte[] bytes = record.Data.ToArray();
        string data = Encoding.UTF8.GetString(bytes);
        await Task.CompletedTask; //Placeholder for actual async work
        return data;
    }
}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using Go.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, kinesisEvent events.KinesisEvent) error {
	if len(kinesisEvent.Records) == 0 {
		log.Printf("empty Kinesis event received")
		return nil
	}

	for _, record := range kinesisEvent.Records {
		log.Printf("processed Kinesis event with EventId: %v", record.EventID)
		recordDataBytes := record.Kinesis.Data
		recordDataText := string(recordDataBytes)
		log.Printf("record data: %v", recordDataText)
		// TODO: Do interesting work based on the new data
	}
	log.Printf("successfully processed %v records", len(kinesisEvent.Records))
	return nil
}

func main() {
	lambda.Start(handler)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using Java.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KinesisEvent;

public class Handler implements RequestHandler<KinesisEvent, Void> {
    @Override
    public Void handleRequest(final KinesisEvent event, final Context context) {
        LambdaLogger logger = context.getLogger();
        if (event.getRecords().isEmpty()) {
            logger.log("Empty Kinesis Event received");
            return null;
        }
        for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
            try {
                logger.log("Processed Event with EventId: "+record.getEventID());
                String data = new String(record.getKinesis().getData().array());
                logger.log("Data:"+ data);
                // TODO: Do interesting work based on the new data
            }
            catch (Exception ex) {
                logger.log("An error occurred:"+ex.getMessage());
                throw ex;
            }
        }
        logger.log("Successfully processed:"+event.getRecords().size()+" records");
        return null;
    }

}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/blob/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using JavaScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
exports.handler = async (event, context) => {
  for (const record of event.Records) {
    try {
      console.log(`Processed Kinesis Event - EventID: ${record.eventID}`);
      const recordData = await getRecordDataAsync(record.kinesis);
      console.log(`Record Data: ${recordData}`);
      // TODO: Do interesting work based on the new data
    } catch (err) {
      console.error(`An error occurred ${err}`);
      throw err;
    }
  }
  console.log(`Successfully processed ${event.Records.length} records.`);
};

async function getRecordDataAsync(payload) {
  var data = Buffer.from(payload.data, "base64").toString("utf-8");
  await Promise.resolve(1); //Placeholder for actual async work
  return data;
}
```
Consuming a Kinesis event with Lambda using TypeScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import {
  KinesisStreamEvent,
  Context,
  KinesisStreamHandler,
  KinesisStreamRecordPayload,
} from "aws-lambda";
import { Buffer } from "buffer";
import { Logger } from "@aws-lambda-powertools/logger";

const logger = new Logger({
  logLevel: "INFO",
  serviceName: "kinesis-stream-handler-sample",
});

export const functionHandler: KinesisStreamHandler = async (
  event: KinesisStreamEvent,
  context: Context
): Promise<void> => {
  for (const record of event.Records) {
    try {
      logger.info(`Processed Kinesis Event - EventID: ${record.eventID}`);
      const recordData = await getRecordDataAsync(record.kinesis);
      logger.info(`Record Data: ${recordData}`);
      // TODO: Do interesting work based on the new data
    } catch (err) {
      logger.error(`An error occurred ${err}`);
      throw err;
    }
  }
  logger.info(`Successfully processed ${event.Records.length} records.`);
};

async function getRecordDataAsync(
  payload: KinesisStreamRecordPayload
): Promise<string> {
  var data = Buffer.from(payload.data, "base64").toString("utf-8");
  await Promise.resolve(1); //Placeholder for actual async work
  return data;
}
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using PHP.  

```
<?php
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

# using bref/bref and bref/logger for simplicity

use Bref\Context\Context;
use Bref\Event\Kinesis\KinesisEvent;
use Bref\Event\Kinesis\KinesisHandler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends KinesisHandler
{
    private StderrLogger $logger;
    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    /**
     * @throws JsonException
     * @throws \Bref\Event\InvalidLambdaEvent
     */
    public function handleKinesis(KinesisEvent $event, Context $context): void
    {
        $this->logger->info("Processing records");
        $records = $event->getRecords();
        foreach ($records as $record) {
            $data = $record->getData();
            $this->logger->info(json_encode($data));
            // TODO: Do interesting work based on the new data

            // Any exception thrown will be logged and the invocation will be marked as failed
        }
        $totalRecords = count($records);
        $this->logger->info("Successfully processed $totalRecords records");
    }
}

$logger = new StderrLogger();
return new Handler($logger);
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using Python.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import base64
def lambda_handler(event, context):

    for record in event['Records']:
        try:
            print(f"Processed Kinesis Event - EventID: {record['eventID']}")
            record_data = base64.b64decode(record['kinesis']['data']).decode('utf-8')
            print(f"Record Data: {record_data}")
            # TODO: Do interesting work based on the new data
        except Exception as e:
            print(f"An error occurred {e}")
            raise e
    print(f"Successfully processed {len(event['Records'])} records.")
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using Ruby.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
require 'aws-sdk'
require 'base64'

def lambda_handler(event:, context:)
  event['Records'].each do |record|
    begin
      puts "Processed Kinesis Event - EventID: #{record['eventID']}"
      record_data = get_record_data_async(record['kinesis'])
      puts "Record Data: #{record_data}"
      # TODO: Do interesting work based on the new data
    rescue => err
      $stderr.puts "An error occurred #{err}"
      raise err
    end
  end
  puts "Successfully processed #{event['Records'].length} records."
end

def get_record_data_async(payload)
  data = Base64.decode64(payload['data']).force_encoding('UTF-8')
  # Placeholder for actual asynchronous work
  # (for example, using Ruby threads or fibers).
  return data
end
```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-kinesis-to-lambda) repository. 
Consuming a Kinesis event with Lambda using Rust.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::kinesis::KinesisEvent;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

async fn function_handler(event: LambdaEvent<KinesisEvent>) -> Result<(), Error> {
    if event.payload.records.is_empty() {
        tracing::info!("No records found. Exiting.");
        return Ok(());
    }

    event.payload.records.iter().for_each(|record| {
        tracing::info!("EventId: {}",record.event_id.as_deref().unwrap_or_default());

        let record_data = std::str::from_utf8(&record.kinesis.data);

        match record_data {
            Ok(data) => {
                // log the record data
                tracing::info!("Data: {}", data);
            }
            Err(e) => {
                tracing::error!("Error: {}", e);
            }
        }
    });

    tracing::info!(
        "Successfully processed {} records",
        event.payload.records.len()
    );

    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
```

------

**To create the function**

1. Create a directory for the project, and then switch to that directory.

   ```
   mkdir kinesis-tutorial
   cd kinesis-tutorial
   ```

1. Copy the sample JavaScript code into a new file named `index.js`.

1. Create a deployment package.

   ```
   zip function.zip index.js
   ```

1. Create a Lambda function with the `create-function` command.

   ```
   aws lambda create-function --function-name ProcessKinesisRecords \
   --zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
   --role arn:aws-cn:iam::111122223333:role/lambda-kinesis-role
   ```

## Test the Lambda function


Invoke your Lambda function manually using the Amazon Lambda CLI `invoke` command and a sample Kinesis event.

**To test the Lambda function**

1. Copy the following JSON into a file and save it as `input.txt`. 

   ```
   {
       "Records": [
           {
               "kinesis": {
                   "kinesisSchemaVersion": "1.0",
                   "partitionKey": "1",
                   "sequenceNumber": "49590338271490256608559692538361571095921575989136588898",
                   "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
                   "approximateArrivalTimestamp": 1545084650.987
               },
               "eventSource": "aws:kinesis",
               "eventVersion": "1.0",
               "eventID": "shardId-000000000006:49590338271490256608559692538361571095921575989136588898",
               "eventName": "aws:kinesis:record",
               "invokeIdentityArn": "arn:aws-cn:iam::111122223333:role/lambda-kinesis-role",
               "awsRegion": "us-east-2",
               "eventSourceARN": "arn:aws-cn:kinesis:us-east-2:111122223333:stream/lambda-stream"
           }
       ]
   }
   ```

1. Use the `invoke` command to send the event to the function.

   ```
   aws lambda invoke --function-name ProcessKinesisRecords \
   --cli-binary-format raw-in-base64-out \
   --payload file://input.txt outputfile.txt
   ```

   The **cli-binary-format** option is required if you're using Amazon CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [Amazon CLI supported global command line options](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *Amazon Command Line Interface User Guide for Version 2*.

   The response is saved to `outputfile.txt`.
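The `data` field in the sample event is base64-encoded, just as it would be for records arriving from a real stream. You can confirm what the function handler will see by decoding it yourself (a standalone Python check, independent of the tutorial steps):

```python
import base64

# The base64-encoded "data" field from the sample Kinesis event.
encoded = "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg=="

# Decode it the same way the function handler does.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # Hello, this is a test.
```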

## Create a Kinesis stream


Use the `create-stream` command to create a stream.

```
aws kinesis create-stream --stream-name lambda-stream --shard-count 1
```

Run the following `describe-stream` command to get the stream ARN.

```
aws kinesis describe-stream --stream-name lambda-stream
```

You should see the following output:

```
{
    "StreamDescription": {
        "Shards": [
            {
                "ShardId": "shardId-000000000000",
                "HashKeyRange": {
                    "StartingHashKey": "0",
                    "EndingHashKey": "340282366920746074317682119384634633455"
                },
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "49591073947768692513481539594623130411957558361251844610"
                }
            }
        ],
        "StreamARN": "arn:aws-cn:kinesis:us-east-1:111122223333:stream/lambda-stream",
        "StreamName": "lambda-stream",
        "StreamStatus": "ACTIVE",
        "RetentionPeriodHours": 24,
        "EnhancedMonitoring": [
            {
                "ShardLevelMetrics": []
            }
        ],
        "EncryptionType": "NONE",
        "KeyId": null,
        "StreamCreationTimestamp": 1544828156.0
    }
}
```

You use the stream ARN in the next step to associate the stream with your Lambda function.
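If you're scripting the setup, you can extract the ARN from the `describe-stream` JSON output instead of copying it by hand. The following Python sketch parses an abbreviated version of the sample response above (in a real script, you would read the full response from the command's stdout):

```python
import json

# Abbreviated describe-stream response; in a script, read this from the
# aws kinesis describe-stream command's stdout instead.
response = json.loads("""
{
    "StreamDescription": {
        "StreamARN": "arn:aws-cn:kinesis:us-east-1:111122223333:stream/lambda-stream",
        "StreamName": "lambda-stream",
        "StreamStatus": "ACTIVE"
    }
}
""")

stream_arn = response["StreamDescription"]["StreamARN"]
print(stream_arn)
```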

## Add an event source in Amazon Lambda


Run the following Amazon CLI `create-event-source-mapping` command.

```
aws lambda create-event-source-mapping --function-name ProcessKinesisRecords \
--event-source-arn arn:aws-cn:kinesis:us-east-1:111122223333:stream/lambda-stream \
--batch-size 100 --starting-position LATEST
```

Note the mapping ID for later use. You can get a list of event source mappings by running the `list-event-source-mappings` command.

```
aws lambda list-event-source-mappings --function-name ProcessKinesisRecords \
--event-source-arn arn:aws-cn:kinesis:us-east-1:111122223333:stream/lambda-stream
```

In the response, you can verify that the `State` value is `Enabled`. Event source mappings can be disabled to pause polling temporarily without losing any records.

## Test the setup


To test the event source mapping, add event records to your Kinesis stream. The `--data` value is a string that the CLI encodes to base64 before sending it to Kinesis. You can run the same command more than once to add multiple records to the stream.

```
aws kinesis put-record --stream-name lambda-stream --partition-key 1 \
--data "Hello, this is a test."
```

Lambda uses the execution role to read records from the stream. Then it invokes your Lambda function, passing in batches of records. The function decodes data from each record and logs it, sending the output to CloudWatch Logs. View the logs in the [CloudWatch console](https://console.amazonaws.cn/cloudwatch).
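The CLI handles the base64 encoding of the `--data` string for you. The following Python sketch shows the same transformation, confirming that the test record above matches the `data` field used in the earlier sample event:

```python
import base64

# The CLI encodes the --data string to base64 before sending it to Kinesis.
payload = "Hello, this is a test."
encoded = base64.b64encode(payload.encode("utf-8")).decode("ascii")
print(encoded)  # SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==
```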

## Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the Kinesis stream**

1. Sign in to the Amazon Web Services Management Console and open the Kinesis console at [https://console.amazonaws.cn/kinesis](https://console.amazonaws.cn/kinesis).

1. Select the stream you created.

1. Choose **Actions**, **Delete**.

1. Enter **delete** in the text input field.

1. Choose **Delete**.

# Using Lambda with Kubernetes
Kubernetes

You can deploy and manage Lambda functions with the Kubernetes API using [Amazon Controllers for Kubernetes (ACK)](https://aws-controllers-k8s.github.io/community/docs/community/overview/) or [Crossplane](https://docs.crossplane.io/latest/packages/providers/).

## Amazon Controllers for Kubernetes (ACK)


You can use ACK to deploy and manage Amazon resources from the Kubernetes API. Through ACK, Amazon provides open-source custom controllers for Amazon services such as Lambda, Amazon Elastic Container Registry (Amazon ECR), Amazon Simple Storage Service (Amazon S3), and Amazon SageMaker AI. Each supported Amazon service has its own custom controller. In your Kubernetes cluster, install a controller for each Amazon service that you want to use. Then, create a [Custom Resource Definition (CRD)](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) to define the Amazon resources.

We recommend that you use [Helm 3.8 or later](https://helm.sh/docs/intro/install/) to install ACK controllers. Every ACK controller comes with its own Helm chart, which installs the controller, CRDs, and Kubernetes RBAC rules. For more information, see [Install an ACK Controller](https://aws-controllers-k8s.github.io/community/docs/user-docs/install/) in the ACK documentation.

After you create the ACK custom resource, you can use it like any other built-in Kubernetes object. For example, you can deploy and manage Lambda functions with your preferred Kubernetes toolchains, including [kubectl](https://kubernetes.io/docs/reference/kubectl/).

Here are some example use cases for provisioning Lambda functions through ACK:
+ Your organization uses [role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) and [IAM roles for service accounts](https://docs.amazonaws.cn/eks/latest/userguide/iam-roles-for-service-accounts.html) to create permissions boundaries. With ACK, you can reuse this security model for Lambda without having to create new users and policies.
+ Your organization has a DevOps process to deploy resources into an Amazon Elastic Kubernetes Service (Amazon EKS) cluster using Kubernetes manifests. With ACK, you can use a manifest to provision Lambda functions without creating separate infrastructure as code templates.

For more information about using ACK, see the [Lambda tutorial in the ACK documentation](https://aws-controllers-k8s.github.io/community/docs/tutorials/lambda-oci-example/).

## Crossplane


[Crossplane](https://docs.crossplane.io/latest/packages/providers/) is an open-source Cloud Native Computing Foundation (CNCF) project that uses Kubernetes to manage cloud infrastructure resources. With Crossplane, developers can request infrastructure without needing to understand its complexities. Platform teams retain control over how the infrastructure is provisioned and managed.

Using Crossplane, you can deploy and manage Lambda functions with your preferred Kubernetes toolchains such as [kubectl](https://kubernetes.io/docs/reference/kubectl/), and any CI/CD pipeline that can deploy manifests to Kubernetes. Here are some example use cases for provisioning Lambda functions through Crossplane:
+ Your organization wants to enforce compliance by ensuring that Lambda functions have the correct [tags](configuration-tags.md). Platform teams can use [Crossplane Compositions](https://docs.crossplane.io/latest/get-started/get-started-with-composition/) to define this policy through API abstractions. Developers can then use these abstractions to deploy Lambda functions with tags.
+ Your project uses GitOps with Kubernetes. In this model, Kubernetes continuously reconciles the git repository (desired state) with the resources running inside the cluster (current state). If there are differences, the GitOps process automatically makes changes to the cluster. You can use GitOps with Kubernetes for deploying and managing Lambda functions through Crossplane, using familiar Kubernetes tools and concepts such as [CRDs](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) and [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/).

To learn more about using Crossplane with Lambda, see the following:
+ [Amazon Blueprints for Crossplane](https://github.com/awslabs/crossplane-on-eks/blob/main/examples/upbound-aws-provider/README.md#deploy-the-examples): This repository includes examples of how to use Crossplane to deploy Amazon resources, including Lambda functions.
**Note**  
Amazon Blueprints for Crossplane are under active development and should not be used in production.
+ [Deploying Lambda with Amazon EKS and Crossplane](https://www.youtube.com/watch?v=m-9KLq29K4k): This video demonstrates an advanced example of deploying an Amazon serverless architecture with Crossplane, exploring the design from both the developer and platform perspectives.

# Using Lambda with Amazon MQ
MQ

**Note**  
If you want to send data to a target other than a Lambda function or enrich the data before sending it, see [ Amazon EventBridge Pipes](https://docs.amazonaws.cn/eventbridge/latest/userguide/eb-pipes.html).

Amazon MQ is a managed message broker service for [Apache ActiveMQ](https://activemq.apache.org/) and [RabbitMQ](https://www.rabbitmq.com). A *message broker* enables software applications and components to communicate using various programming languages, operating systems, and formal messaging protocols through either topic or queue event destinations.

Amazon MQ can also manage Amazon Elastic Compute Cloud (Amazon EC2) instances on your behalf by installing ActiveMQ or RabbitMQ brokers, and by providing different network topologies and other infrastructure as needed.

You can use a Lambda function to process records from your Amazon MQ message broker. Lambda invokes your function through an [event source mapping](invocation-eventsourcemapping.md), a Lambda resource that reads messages from your broker and invokes the function [synchronously](invocation-sync.md).

**Warning**  
Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see [How do I make my Lambda function idempotent](https://repost.aws/knowledge-center/lambda-function-idempotent) in the Amazon Knowledge Center.

The Amazon MQ event source mapping has the following configuration restrictions:
+ Concurrency – Lambda functions that use an Amazon MQ event source mapping have a default maximum [concurrency](lambda-concurrency.md) setting. For ActiveMQ, the Lambda service limits the number of concurrent execution environments to five per Amazon MQ event source mapping. For RabbitMQ, the number of concurrent execution environments is limited to 1 per Amazon MQ event source mapping. Even if you change your function's reserved or provisioned concurrency settings, the Lambda service won't make more execution environments available. To request an increase in the default maximum concurrency for a single Amazon MQ event source mapping, contact Amazon Web Services Support with the event source mapping UUID, as well as the region. Because increases are applied at the specific event source mapping level, not the account or region level, you need to manually request a scaling increase for each event source mapping.
+ Cross account – Lambda does not support cross-account processing. You cannot use Lambda to process records from an Amazon MQ message broker that is in a different Amazon Web Services account.
+ Authentication – For ActiveMQ, only the ActiveMQ [SimpleAuthenticationPlugin](https://activemq.apache.org/security#simple-authentication-plugin) is supported. For RabbitMQ, only the [PLAIN](https://www.rabbitmq.com/access-control.html#mechanisms) authentication mechanism is supported. Users must use Amazon Secrets Manager to manage their credentials. For more information about ActiveMQ authentication, see [Integrating ActiveMQ brokers with LDAP](https://docs.amazonaws.cn/amazon-mq/latest/developer-guide/security-authentication-authorization.html) in the *Amazon MQ Developer Guide*.
+ Connection quota – Brokers have a maximum number of allowed connections per wire-level protocol. This quota is based on the broker instance type. For more information, see the [Brokers](https://docs.amazonaws.cn/amazon-mq/latest/developer-guide/amazon-mq-limits.html#broker-limits) section of **Quotas in Amazon MQ** in the *Amazon MQ Developer Guide*.
+ Connectivity – You can create brokers in a public or private virtual private cloud (VPC). For private VPCs, your Lambda function needs access to the VPC to receive messages. For more information, see [Configure network security](process-mq-messages-with-lambda.md#process-mq-messages-with-lambda-networkconfiguration) later in this section.
+ Event destinations – Only queue destinations are supported. However, you can use a virtual topic, which behaves as a topic internally while interacting with Lambda as a queue. For more information, see [Virtual Destinations](https://activemq.apache.org/virtual-destinations) on the Apache ActiveMQ website, and [Virtual Hosts](https://www.rabbitmq.com/vhosts.html) on the RabbitMQ website.
+ Network topology – For ActiveMQ, only one single-instance or standby broker is supported per event source mapping. For RabbitMQ, only one single-instance broker or cluster deployment is supported per event source mapping. Single-instance brokers require a failover endpoint. For more information about these broker deployment modes, see [Active MQ Broker Architecture](https://docs.amazonaws.cn/amazon-mq/latest/developer-guide/amazon-mq-broker-architecture.html) and [Rabbit MQ Broker Architecture](https://docs.amazonaws.cn/amazon-mq/latest/developer-guide/rabbitmq-broker-architecture.html) in the *Amazon MQ Developer Guide*.
+ Protocols – Supported protocols depend on the type of Amazon MQ integration.
  + For ActiveMQ integrations, Lambda consumes messages using the OpenWire/Java Message Service (JMS) protocol. No other protocols are supported for consuming messages. Within the JMS protocol, only [TextMessage](https://activemq.apache.org/components/cms/api_docs/activemqcpp-3.6.0/html/classactivemq_1_1commands_1_1_active_m_q_text_message.html) and [BytesMessage](https://activemq.apache.org/components/cms/api_docs/activemqcpp-3.9.0/html/classactivemq_1_1commands_1_1_active_m_q_bytes_message.html) are supported. Lambda also supports JMS custom properties. For more information about the OpenWire protocol, see [OpenWire](https://activemq.apache.org/openwire.html) on the Apache ActiveMQ website.
  + For RabbitMQ integrations, Lambda consumes messages using the AMQP 0-9-1 protocol. No other protocols are supported for consuming messages. For more information about RabbitMQ's implementation of the AMQP 0-9-1 protocol, see [AMQP 0-9-1 Complete Reference Guide](https://www.rabbitmq.com/amqp-0-9-1-reference.html) on the RabbitMQ website.

Lambda automatically supports the latest versions of ActiveMQ and RabbitMQ that Amazon MQ supports. For the latest supported versions, see [Amazon MQ release notes](https://docs.amazonaws.cn/amazon-mq/latest/developer-guide/amazon-mq-release-notes.html) in the *Amazon MQ Developer Guide*.

**Note**  
By default, Amazon MQ has a weekly maintenance window for brokers. During that window of time, brokers are unavailable. For brokers without standby, Lambda cannot process any messages during that window.

**Topics**
+ [Understanding the Lambda consumer group for Amazon MQ](#services-mq-configure)
+ [Configuring Amazon MQ event source for Lambda](process-mq-messages-with-lambda.md)
+ [Event source mapping parameters](services-mq-params.md)
+ [Filter events from an Amazon MQ event source](with-mq-filtering.md)
+ [Troubleshoot Amazon MQ event source mapping errors](services-mq-errors.md)

## Understanding the Lambda consumer group for Amazon MQ


To interact with Amazon MQ, Lambda creates a consumer group that can read from your Amazon MQ brokers. The consumer group is created with the same ID as the event source mapping UUID.

For Amazon MQ event sources, Lambda batches records together and sends them to your function in a single payload. To control behavior, you can configure the batching window and batch size. Lambda pulls messages until it processes the payload size maximum of 6 MB, the batching window expires, or the number of records reaches the full batch size. For more information, see [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching).

The consumer group retrieves the messages as a BLOB of bytes, base64-encodes them into a single JSON payload, and then invokes your function. If your function returns an error for any of the messages in a batch, Lambda retries the whole batch of messages until processing succeeds or the messages expire.
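
Because Lambda delivers the message body as a base64-encoded string in the `data` field, your handler must decode it before processing. The following is a minimal Python handler sketch; the handler name and return shape are illustrative, not part of the event contract:

```python
import base64

def lambda_handler(event, context):
    """Process a batch of ActiveMQ messages from an Amazon MQ event."""
    decoded = []
    for message in event["messages"]:
        # The message body arrives as a base64-encoded string in "data".
        body = base64.b64decode(message["data"]).decode("utf-8")
        queue = message["destination"]["physicalName"]
        decoded.append((queue, body))
    # Raising an exception here causes Lambda to retry the entire batch,
    # so handle per-message failures deliberately.
    return {"processed": len(decoded)}
```

For example, the `data` value `"QUJDOkFBQUE="` from the sample event below decodes to the plain string `ABC:AAAA`.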

**Note**  
While Lambda functions typically have a maximum timeout limit of 15 minutes, event source mappings for Amazon MSK, self-managed Apache Kafka, Amazon DocumentDB, and Amazon MQ for ActiveMQ and RabbitMQ only support functions with maximum timeout limits of 14 minutes. This constraint ensures that the event source mapping can properly handle function errors and retries.

You can monitor a given function's concurrency usage using the `ConcurrentExecutions` metric in Amazon CloudWatch. For more information about concurrency, see [Configuring reserved concurrency for a function](configuration-concurrency.md).

**Example Amazon MQ record events**  

```
{
   "eventSource": "aws:mq",
   "eventSourceArn": "arn:aws-cn:mq:cn-north-1:111122223333:broker:test:b-9bcfa592-423a-4942-879d-eb284b418fc8",
   "messages": [
      { 
        "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.cn-north-1.amazonaws.com.cn-37557-1234520418293-4:1:1:1:1", 
        "messageType": "jms/text-message",
        "deliveryMode": 1,
        "replyTo": null,
        "type": null,
        "expiration": "60000",
        "priority": 1,
        "correlationId": "myJMSCoID",
        "redelivered": false,
        "destination": { 
          "physicalName": "testQueue" 
        },
        "data":"QUJDOkFBQUE=",
        "timestamp": 1598827811958,
        "brokerInTime": 1598827811958, 
        "brokerOutTime": 1598827811959, 
        "properties": {
          "index": "1",
          "doAlarm": "false",
          "myCustomProperty": "value"
        }
      },
      { 
        "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.cn-north-1.amazonaws.com.cn-37557-1234520418293-4:1:1:1:1",
        "messageType": "jms/bytes-message",
        "deliveryMode": 1,
        "replyTo": null,
        "type": null,
        "expiration": "60000",
        "priority": 2,
        "correlationId": "myJMSCoID1",
        "redelivered": false,
        "destination": { 
          "physicalName": "testQueue" 
        },
        "data":"LQaGQ82S48k=",
        "timestamp": 1598827811958,
        "brokerInTime": 1598827811958, 
        "brokerOutTime": 1598827811959, 
        "properties": {
          "index": "1",
          "doAlarm": "false",
          "myCustomProperty": "value"
        }
      }
   ]
}
```

```
{
  "eventSource": "aws:rmq",
  "eventSourceArn": "arn:aws-cn:mq:cn-north-1:111122223333:broker:pizzaBroker:b-9bcfa592-423a-4942-879d-eb284b418fc8",
  "rmqMessagesByQueue": {
    "pizzaQueue::/": [
      {
        "basicProperties": {
          "contentType": "text/plain",
          "contentEncoding": null,
          "headers": {
            "header1": {
              "bytes": [
                118,
                97,
                108,
                117,
                101,
                49
              ]
            },
            "header2": {
              "bytes": [
                118,
                97,
                108,
                117,
                101,
                50
              ]
            },
            "numberInHeader": 10
          },
          "deliveryMode": 1,
          "priority": 34,
          "correlationId": null,
          "replyTo": null,
          "expiration": "60000",
          "messageId": null,
          "timestamp": "Jan 1, 1970, 12:33:41 AM",
          "type": null,
          "userId": "AIDACKCEVSQ6C2EXAMPLE",
          "appId": null,
          "clusterId": null,
          "bodySize": 80
        },
        "redelivered": false,
        "data": "eyJ0aW1lb3V0IjowLCJkYXRhIjoiQ1pybWYwR3c4T3Y0YnFMUXhENEUifQ=="
      }
    ]
  }
}
```
In the RabbitMQ example, `pizzaQueue` is the name of the RabbitMQ queue, and `/` is the name of the virtual host. When receiving messages, the event source lists messages under `pizzaQueue::/`.
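
Note that RabbitMQ string header values arrive as arrays of byte values rather than strings, while the body is base64-encoded. A hedged Python sketch of unpacking one record (the function name is illustrative):

```python
import base64

def unpack_rmq_message(message):
    """Convert one RabbitMQ record into plain Python values."""
    headers = {}
    for name, value in message["basicProperties"]["headers"].items():
        if isinstance(value, dict) and "bytes" in value:
            # String header values arrive as arrays of byte values.
            headers[name] = bytes(value["bytes"]).decode("utf-8")
        else:
            # Numeric headers arrive as plain numbers.
            headers[name] = value
    body = base64.b64decode(message["data"]).decode("utf-8")
    return {"headers": headers, "body": body}
```

Applied to the sample event, `header1` unpacks to the string `value1`.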

# Configuring Amazon MQ event source for Lambda

**Topics**
+ [Configure network security](#process-mq-messages-with-lambda-networkconfiguration)
+ [Create the event source mapping](#services-mq-eventsourcemapping)

## Configure network security


To give Lambda full access to Amazon MQ through your event source mapping, either your broker must use a public endpoint (public IP address), or you must provide access to the Amazon VPC you created the broker in.

When you use Amazon MQ with Lambda, create [Amazon PrivateLink VPC endpoints](https://docs.amazonaws.cn/vpc/latest/privatelink/create-interface-endpoint.html) that provide your function access to the resources in your Amazon VPC.

**Note**  
Amazon PrivateLink VPC endpoints are required for functions with event source mappings that use the default (on-demand) mode for event pollers. If your event source mapping uses [provisioned mode](invocation-eventsourcemapping.md#invocation-eventsourcemapping-provisioned-mode), you don't need to configure Amazon PrivateLink VPC endpoints.

Create an endpoint to provide access to the following resources:
+  Lambda — Create an endpoint for the Lambda service principal. 
+  Amazon STS — Create an endpoint for Amazon STS so that a service principal can assume a role on your behalf. 
+  Secrets Manager — If your broker uses Secrets Manager to store credentials, create an endpoint for Secrets Manager. 

Alternatively, configure a NAT gateway on each public subnet in the Amazon VPC. For more information, see [Enable internet access for VPC-connected Lambda functions](configuration-vpc-internet.md).

When you create an event source mapping for Amazon MQ, Lambda checks whether Elastic Network Interfaces (ENIs) are already present for the subnets and security groups configured for your Amazon VPC. If Lambda finds existing ENIs, it attempts to re-use them. Otherwise, Lambda creates new ENIs to connect to the event source and invoke your function.

**Note**  
Lambda functions always run inside VPCs owned by the Lambda service. Your function's VPC configuration does not affect the event source mapping. Only the networking configuration of the event source determines how Lambda connects to your event source.

Configure the security groups for the Amazon VPC containing your broker. By default, Amazon MQ uses the following ports: `61617` (Amazon MQ for ActiveMQ), and `5671` (Amazon MQ for RabbitMQ).
+ Inbound rules – Allow all traffic on the default broker port for the security group associated with your event source. Alternatively, you can use a self-referencing security group rule to allow access from instances within the same security group.
+ Outbound rules – Allow all traffic on port `443` for external destinations if your function needs to communicate with Amazon services. Alternatively, you can also use a self-referencing security group rule to limit access to the broker if you don't need to communicate with other Amazon services.
+ Amazon VPC endpoint inbound rules — If you are using an Amazon VPC endpoint, the security group associated with your Amazon VPC endpoint must allow inbound traffic on port `443` from the broker security group.

If your broker uses authentication, you can also restrict the endpoint policy for the Secrets Manager endpoint. To call the Secrets Manager API, Lambda uses your function role, not the Lambda service principal.

**Example VPC endpoint policy — Secrets Manager endpoint**  

```
{
      "Statement": [
          {
              "Action": "secretsmanager:GetSecretValue",
              "Effect": "Allow",
              "Principal": {
                  "AWS": [
                    "arn:aws-cn:iam::123456789012:role/my-role"
                  ]
              },
              "Resource": "arn:aws-cn:secretsmanager:cn-north-1:123456789012:secret:my-secret"
          }
      ]
  }
```

When you use Amazon VPC endpoints, Amazon routes your API calls to invoke your function using the endpoint's Elastic Network Interface (ENI). The Lambda service principal needs to call `lambda:InvokeFunction` on any roles and functions that use those ENIs.

By default, Amazon VPC endpoints have open IAM policies that allow broad access to resources. Best practice is to restrict these policies to only the actions needed through that endpoint. To ensure that your event source mapping can invoke your Lambda function, the VPC endpoint policy must allow the Lambda service principal to call `sts:AssumeRole` and `lambda:InvokeFunction`. Restricting your VPC endpoint policies to allow only API calls originating within your organization prevents the event source mapping from functioning properly, so `"Resource": "*"` is required in these policies.

The following example VPC endpoint policies show how to grant the required access to the Lambda service principal for the Amazon STS and Lambda endpoints.

**Example VPC Endpoint policy — Amazon STS endpoint**  

```
{
      "Statement": [
          {
              "Action": "sts:AssumeRole",
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "lambda.amazonaws.com"
                  ]
              },
              "Resource": "*"
          }
      ]
    }
```

**Example VPC Endpoint policy — Lambda endpoint**  

```
{
      "Statement": [
          {
              "Action": "lambda:InvokeFunction",
              "Effect": "Allow",
              "Principal": {
                  "Service": [
                      "lambda.amazonaws.com"
                  ]
              },
              "Resource": "*"
          }
      ]
  }
```

## Create the event source mapping

Create an [event source mapping](invocation-eventsourcemapping.md) to tell Lambda to send records from an Amazon MQ broker to a Lambda function. You can create multiple event source mappings to process the same data with multiple functions, or to process items from multiple sources with a single function.

To configure your function to read from Amazon MQ, add the required permissions and create an **MQ** trigger in the Lambda console.

To read records from an Amazon MQ broker, your Lambda function needs the following permissions. You grant Lambda permission to interact with your Amazon MQ broker and its underlying resources by adding permission statements to your function [execution role](lambda-intro-execution-role.md):
+ [mq:DescribeBroker](https://docs.amazonaws.cn/amazon-mq/latest/api-reference/brokers-broker-id.html#brokers-broker-id-http-methods)
+ [secretsmanager:GetSecretValue](https://docs.amazonaws.cn/secretsmanager/latest/apireference/API_GetSecretValue.html)
+ [ec2:CreateNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_CreateNetworkInterface.html)
+ [ec2:DeleteNetworkInterface](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DeleteNetworkInterface.html)
+ [ec2:DescribeNetworkInterfaces](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeNetworkInterfaces.html)
+ [ec2:DescribeSecurityGroups](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html)
+ [ec2:DescribeSubnets](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeSubnets.html)
+ [ec2:DescribeVpcs](https://docs.amazonaws.cn/AWSEC2/latest/APIReference/API_DescribeVpcs.html)
+ [logs:CreateLogGroup](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html)
+ [logs:CreateLogStream](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogStream.html)
+ [logs:PutLogEvents](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html)

**Note**  
When using an encrypted customer managed key, add the [kms:Decrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_Decrypt.html) permission as well.

**To add permissions and create a trigger**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function.

1. Choose the **Configuration** tab, and then choose **Permissions**.

1. Under **Role name**, choose the link to your execution role. This link opens the role in the IAM console.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/execution-role.png)

1. Choose **Add permissions**, and then choose **Create inline policy**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/inline-policy.png)

1. In the **Policy editor**, choose **JSON**. Enter the following policy. Your function needs these permissions to read from an Amazon MQ broker.

------
#### [ JSON ]


   ```
   {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": [
             "mq:DescribeBroker",
             "secretsmanager:GetSecretValue",
             "ec2:CreateNetworkInterface",
             "ec2:DeleteNetworkInterface",
             "ec2:DescribeNetworkInterfaces", 
             "ec2:DescribeSecurityGroups",
             "ec2:DescribeSubnets",
             "ec2:DescribeVpcs",
             "logs:CreateLogGroup",
             "logs:CreateLogStream", 
             "logs:PutLogEvents"		
           ],
           "Resource": "*"
         }
       ]
     }
   ```

------
**Note**  
When using an encrypted customer managed key, you must also add the `kms:Decrypt` permission.

1. Choose **Next**. Enter a policy name and then choose **Create policy**.

1. Go back to your function in the Lambda console. Under **Function overview**, choose **Add trigger**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/add-trigger.png)

1. Choose the **MQ** trigger type.

1. Configure the required options, and then choose **Add**.

Lambda supports the following options for Amazon MQ event sources:
+ **MQ broker** – Select an Amazon MQ broker.
+ **Batch size** – Set the maximum number of messages to retrieve in a single batch.
+ **Queue name** – Enter the Amazon MQ queue to consume.
+ **Source access configuration** – Enter virtual host information and the Secrets Manager secret that stores your broker credentials.
+ **Enable trigger** – Disable the trigger to stop processing records.

To enable or disable the trigger (or delete it), choose the **MQ** trigger in the designer. To reconfigure the trigger, use the event source mapping API operations.

# Event source mapping parameters

All Lambda event source types share the same [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) and [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) API operations. However, only some of the parameters apply to Amazon MQ and RabbitMQ.


| Parameter | Required | Default | Notes | 
| --- | --- | --- | --- | 
|  BatchSize  |  N  |  100  |  Maximum: 10,000  | 
|  Enabled  |  N  |  true  | none | 
|  FunctionName  |  Y  | N/A  | none | 
|  FilterCriteria  |  N  |  N/A   |  [Control which events Lambda sends to your function](invocation-eventfiltering.md)  | 
|  MaximumBatchingWindowInSeconds  |  N  |  500 ms  |  [Batching behavior](invocation-eventsourcemapping.md#invocation-eventsourcemapping-batching)  | 
|  Queues  |  N  | N/A |  The name of the Amazon MQ broker destination queue to consume.  | 
|  SourceAccessConfigurations  |  N  | N/A  |  For ActiveMQ, BASIC_AUTH credentials. For RabbitMQ, can contain both BASIC_AUTH credentials and VIRTUAL_HOST information.  | 
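
As a hedged sketch of how these parameters fit together, the following Python dictionary assembles a `CreateEventSourceMapping` request for a RabbitMQ broker. All ARNs, the queue name, and the virtual host are placeholders, not real resources, and no API call is made here:

```python
# Illustrative CreateEventSourceMapping parameters for a RabbitMQ broker.
# The ARNs, queue name, and virtual host are placeholders.
mapping_request = {
    "FunctionName": "my-function",
    "EventSourceArn": "arn:aws-cn:mq:cn-north-1:111122223333:broker:my-broker:b-1234",
    "BatchSize": 100,          # default; maximum is 10,000
    "Queues": ["pizzaQueue"],  # the broker queue to consume
    "SourceAccessConfigurations": [
        # Secrets Manager secret that stores the broker credentials
        {"Type": "BASIC_AUTH",
         "URI": "arn:aws-cn:secretsmanager:cn-north-1:111122223333:secret:my-broker-secret"},
        # RabbitMQ only: the virtual host to consume from
        {"Type": "VIRTUAL_HOST", "URI": "/"},
    ],
}
```

You would pass these parameters to the `CreateEventSourceMapping` API, for example through an Amazon SDK.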

# Filter events from an Amazon MQ event source

You can use event filtering to control which records from a stream or queue Lambda sends to your function. For general information about how event filtering works, see [Control which events Lambda sends to your function](invocation-eventfiltering.md).

This section focuses on event filtering for Amazon MQ event sources.

**Note**  
Amazon MQ event source mappings only support filtering on the `data` key.

**Topics**
+ [Amazon MQ event filtering basics](#filtering-AMQ)

## Amazon MQ event filtering basics


Suppose your Amazon MQ message queue contains messages either in valid JSON format or as plain strings. An example record would look like the following, with the message data converted to a Base64-encoded string in the `data` field.

------
#### [ ActiveMQ ]

```
{ 
    "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.cn-north-1.amazonaws.com.cn-37557-1234520418293-4:1:1:1:1", 
    "messageType": "jms/text-message",
    "deliveryMode": 1,
    "replyTo": null,
    "type": null,
    "expiration": "60000",
    "priority": 1,
    "correlationId": "myJMSCoID",
    "redelivered": false,
    "destination": { 
      "physicalName": "testQueue" 
    },
    "data":"QUJDOkFBQUE=",
    "timestamp": 1598827811958,
    "brokerInTime": 1598827811958, 
    "brokerOutTime": 1598827811959, 
    "properties": {
      "index": "1",
      "doAlarm": "false",
      "myCustomProperty": "value"
    }
}
```

------
#### [ RabbitMQ ]

```
{
    "basicProperties": {
        "contentType": "text/plain",
        "contentEncoding": null,
        "headers": {
            "header1": {
                "bytes": [
                  118,
                  97,
                  108,
                  117,
                  101,
                  49
                ]
            },
            "header2": {
                "bytes": [
                  118,
                  97,
                  108,
                  117,
                  101,
                  50
                ]
            },
            "numberInHeader": 10
        },
        "deliveryMode": 1,
        "priority": 34,
        "correlationId": null,
        "replyTo": null,
        "expiration": "60000",
        "messageId": null,
        "timestamp": "Jan 1, 1970, 12:33:41 AM",
        "type": null,
        "userId": "AIDACKCEVSQ6C2EXAMPLE",
        "appId": null,
        "clusterId": null,
        "bodySize": 80
        },
    "redelivered": false,
    "data": "eyJ0aW1lb3V0IjowLCJkYXRhIjoiQ1pybWYwR3c4T3Y0YnFMUXhENEUifQ=="
}
```

------

For both ActiveMQ and RabbitMQ brokers, you can use event filtering to filter records using the `data` key. Suppose your Amazon MQ queue contains messages in the following JSON format.

```
{
    "timeout": 0,
    "IPAddress": "203.0.113.254"
}
```

To filter only those records where the `timeout` field is greater than 0, the `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"data\" : { \"timeout\" : [ { \"numeric\": [ \">\", 0 ] } ] } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON.

```
{
    "data": {
        "timeout": [ { "numeric": [ ">", 0 ] } ]
    }
}
```
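
To see what this pattern selects, here is a hedged Python sketch of how such a numeric filter could be evaluated against a decoded message body. This illustrates the matching logic only; it is not Lambda's actual implementation:

```python
import base64
import json

def matches_timeout_filter(record):
    """Return True if the decoded body is JSON with timeout > 0."""
    try:
        body = json.loads(base64.b64decode(record["data"]).decode("utf-8"))
    except (ValueError, UnicodeDecodeError):
        # Non-JSON or non-UTF-8 bodies can't satisfy a JSON data filter.
        return False
    if not isinstance(body, dict):
        return False
    timeout = body.get("timeout")
    return isinstance(timeout, (int, float)) and timeout > 0
```

A record whose body decodes to `{"timeout": 5}` matches; one with `{"timeout": 0}` does not.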

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "data" : { "timeout" : [ { "numeric": [ ">", 0 ] } ] } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws-cn:mq:cn-north-1:123456789012:broker:my-broker:b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"timeout\" : [ { \"numeric\": [ \">\", 0 ] } ] } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : { \"timeout\" : [ { \"numeric\": [ \">\", 0 ] } ] } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "data" : { "timeout" : [ { "numeric": [ ">", 0 ] } ] } }'
```

------

With Amazon MQ, you can also filter records where the message is a plain string. Suppose you want to process only records where the message begins with "Result: ". The `FilterCriteria` object would look as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"data\" : [ { \"prefix\": \"Result: \" } ] }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON.

```
{
    "data": [
        {
            "prefix": "Result: "
        }
    ]
}
```

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "data" : [ { "prefix": "Result: " } ] }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws-cn:mq:cn-north-1:123456789012:broker:my-broker:b-8ac7cc01-5898-482d-be2f-a6b596050ea8 \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : [ { \"prefix\": \"Result: \" } ] }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"data\" : [ { \"prefix\": \"Result: \" } ] }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "data" : [ { "prefix": "Result: " } ] }'
```

------

Amazon MQ messages must be UTF-8 encoded strings, either plain strings or in JSON format. That's because Lambda decodes Amazon MQ byte arrays into UTF-8 before applying filter criteria. If your messages use another encoding, such as UTF-16 or ASCII, or if the message format doesn't match the `FilterCriteria` format, Lambda processes metadata filters only. The following table summarizes the specific behavior:


| Incoming message format | Filter pattern format for message properties | Resulting action | 
| --- | --- | --- | 
|  Plain string  |  Plain string  |  Lambda filters based on your filter criteria.  | 
|  Plain string  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Plain string  |  Valid JSON  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  Plain string  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  Valid JSON  |  Lambda filters based on your filter criteria.  | 
|  Non-UTF-8 encoded string  |  JSON, plain string, or no pattern  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
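
The behavior above hinges on whether the payload decodes as UTF-8 and, when the pattern expects JSON, whether the decoded body parses as JSON. A hedged Python sketch of that classification step, for illustration only:

```python
import json

def data_filter_applies(raw_bytes, pattern_expects_json):
    """Decide whether data-level filter criteria can be applied."""
    try:
        text = raw_bytes.decode("utf-8")
    except UnicodeDecodeError:
        # Non-UTF-8 payload: Lambda filters on metadata properties only.
        return False
    if not pattern_expects_json:
        # Plain-string pattern against a UTF-8 string body.
        return True
    try:
        json.loads(text)
    except ValueError:
        # Pattern expects JSON but the body isn't valid JSON.
        return False
    return True
```

For example, a UTF-16 encoded body fails the UTF-8 decode, so only metadata filters would apply to it.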

# Troubleshoot Amazon MQ event source mapping errors

When a Lambda function encounters an unrecoverable error, your Amazon MQ consumer stops processing records. Any other consumers can continue processing, provided that they do not encounter the same error. To determine the potential cause of a stopped consumer, check the `StateTransitionReason` field in the return details of your `EventSourceMapping` for one of the following codes:

**`ESM_CONFIG_NOT_VALID`**  
The event source mapping configuration is not valid.

**`EVENT_SOURCE_AUTHN_ERROR`**  
Lambda failed to authenticate the event source.

**`EVENT_SOURCE_AUTHZ_ERROR`**  
Lambda does not have the required permissions to access the event source.

**`FUNCTION_CONFIG_NOT_VALID`**  
The function's configuration is not valid.

Records also go unprocessed if Lambda drops them due to their size. The size limit for Lambda records is 6 MB. To redeliver messages upon function error, you can use a dead-letter queue (DLQ). For more information, see [Message Redelivery and DLQ Handling](https://activemq.apache.org/message-redelivery-and-dlq-handling) on the Apache ActiveMQ website and [Reliability Guide](https://www.rabbitmq.com/reliability.html) on the RabbitMQ website.

**Note**  
Lambda does not support custom redelivery policies. Instead, Lambda uses a policy with the default values from the [Redelivery Policy](https://activemq.apache.org/redelivery-policy) page on the Apache ActiveMQ website, with `maximumRedeliveries` set to 6.

# Using Amazon Lambda with Amazon RDS

You can connect a Lambda function to an Amazon Relational Database Service (Amazon RDS) database directly and through an Amazon RDS Proxy. Direct connections are useful in simple scenarios, and proxies are recommended for production. A database proxy manages a pool of shared database connections, which enables your function to reach high concurrency levels without exhausting database connections.

We recommend using Amazon RDS Proxy for Lambda functions that make frequent short database connections, or open and close large numbers of database connections. For more information, see [Automatically connecting a Lambda function and a DB instance](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/lambda-rds-connect.html) in the *Amazon Relational Database Service Developer Guide*.

**Tip**  
To quickly connect a Lambda function to an Amazon RDS database, you can use the in-console guided wizard. To open the wizard, do the following:  
Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.
Select the function you want to connect a database to.
On the **Configuration** tab, select **RDS databases**.
Choose **Connect to RDS database**.
After you've connected your function to a database, you can create a proxy by choosing **Add proxy**.

## Configuring your function to work with RDS resources


In the Lambda console, you can provision and configure Amazon RDS database instances and proxy resources by navigating to **RDS databases** under the **Configuration** tab. Alternatively, you can create and configure connections to Lambda functions in the Amazon RDS console. When configuring an RDS database instance to use with Lambda, note the following criteria:
+ To connect to a database, your function must be in the same Amazon VPC where your database runs.
+ You can use Amazon RDS databases with MySQL, MariaDB, PostgreSQL, or Microsoft SQL Server engines.
+ You can also use Aurora DB clusters with MySQL or PostgreSQL engines.
+ You need to provide a Secrets Manager secret for database authentication.
+ An IAM role must provide permission to use the secret, and a trust policy must allow Amazon RDS to assume the role.
+  The IAM principal that uses the console to configure the Amazon RDS resource and connect it to your function must have the following permissions:

### Example permissions policy


**Note**  
 You need the Amazon RDS Proxy permissions only if you configure an Amazon RDS Proxy to manage a pool of your database connections. 

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateSecurityGroup",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "rds-db:connect",
        "rds:CreateDBProxy",
        "rds:CreateDBInstance",
        "rds:CreateDBSubnetGroup",
        "rds:DescribeDBClusters",
        "rds:DescribeDBInstances",
        "rds:DescribeDBSubnetGroups",
        "rds:DescribeDBProxies",
        "rds:DescribeDBProxyTargets",
        "rds:DescribeDBProxyTargetGroups",
        "rds:RegisterDBProxyTargets",
        "rds:ModifyDBInstance",
        "rds:ModifyDBProxy"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "lambda:CreateFunction",
        "lambda:ListFunctions",
        "lambda:UpdateFunctionConfiguration"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:AttachRolePolicy",
        "iam:CreateRole",
        "iam:CreatePolicy"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds",
        "secretsmanager:CreateSecret"
      ],
      "Resource": "*"
    }
  ]
}
```

------

Amazon RDS charges an hourly rate for proxies based on the database instance size. For details, see [RDS Proxy pricing](http://www.amazonaws.cn/rds/proxy/pricing/). For more information on proxy connections in general, see [Using Amazon RDS Proxy](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/rds-proxy.html) in the *Amazon RDS User Guide*.

### SSL/TLS requirements for Amazon RDS connections


To make secure SSL/TLS connections to an Amazon RDS database instance, your Lambda function must verify the database server's identity using a trusted certificate. Lambda handles these certificates differently depending on your deployment package type:
+ [.zip file archives](configuration-function-zip.md): Certificate handling varies by runtime:
  + **Node.js 18 and earlier**: Lambda automatically includes CA certificates and RDS certificates.
  + **Node.js 20 and later**: Lambda no longer loads additional CA certificates by default. Set the `NODE_EXTRA_CA_CERTS` environment variable to `/var/runtime/ca-cert.pem`.

  It might take up to 4 weeks for Amazon RDS certificates for new Amazon Web Services Regions to be added to the Lambda managed runtimes.
+ [Container images](images-create.md): Amazon base images include only CA certificates. If your function connects to an Amazon RDS database instance, you must include the appropriate certificates in your container image. In your Dockerfile, download the [certificate bundle that corresponds to the Amazon Web Services Region where you host your database](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.CertificatesDownload). Example:

  ```
  RUN curl https://truststore.pki.rds.amazonaws.com/us-east-1/us-east-1-bundle.pem -o /us-east-1-bundle.pem
  ```

This command downloads the Amazon RDS certificate bundle and saves it to the absolute path `/us-east-1-bundle.pem` in your container's root directory. When you configure the database connection in your function code, you must reference this exact path. Example:

------
#### [ Node.js ]

The `readFileSync` function is required because Node.js database clients need the actual certificate content in memory, not just the path to the certificate file. Without `readFileSync`, the client interprets the path string as certificate content, resulting in a "self-signed certificate in certificate chain" error.

**Example Node.js connection config for OCI function**  

```
import { readFileSync } from 'fs';

// ...

let connectionConfig = {
    host: process.env.ProxyHostName,
    user: process.env.DBUserName,
    password: token,
    database: process.env.DBName,
    ssl: {
        ca: readFileSync('/us-east-1-bundle.pem') // Load RDS certificate content from file into memory
    }
};
```

------
#### [ Python ]

**Example Python connection config for OCI function**  

```
connection = pymysql.connect(
    host=proxy_host_name,
    user=db_username,
    password=token,
    db=db_name,
    port=port,
    ssl={'ca': '/us-east-1-bundle.pem'}  # Path to the certificate in container
)
```

------
#### [ Java ]

For Java functions using JDBC connections, the connection string must include:
+ `useSSL=true`
+ `requireSSL=true`
+ An `sslCA` parameter that points to the location of the Amazon RDS certificate in the container image

**Example Java connection string for OCI function**  

```
// Define connection string
String connectionString = String.format("jdbc:mysql://%s:%s/%s?useSSL=true&requireSSL=true&sslCA=/us-east-1-bundle.pem", // Path to the certificate in container
        System.getenv("ProxyHostName"),
        System.getenv("Port"),
        System.getenv("DBName"));
```

------
#### [ .NET ]

**Example .NET connection string for MySQL connection in OCI function**  

```
// Build the connection string with the token
string connectionString = $"Server={Environment.GetEnvironmentVariable("RDS_ENDPOINT")};" +
                         $"Port={Environment.GetEnvironmentVariable("RDS_PORT")};" +
                         $"Uid={Environment.GetEnvironmentVariable("RDS_USERNAME")};" +
                         $"Pwd={authToken};" +
                         "SslMode=Required;" +
                         "SslCa=/us-east-1-bundle.pem";  // Path to the certificate in container
```

------
#### [ Go ]

For Go functions using MySQL connections, load the Amazon RDS certificate into a certificate pool and register it with the MySQL driver. The connection string must then reference this configuration using the `tls` parameter.

**Example Go code for MySQL connection in OCI function**  

```
import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "os"
    "github.com/go-sql-driver/mysql"
)

...

// Create certificate pool and register TLS config
rootCertPool := x509.NewCertPool()
pem, err := os.ReadFile("/us-east-1-bundle.pem")  // Path to the certificate in container
if err != nil {
    panic("failed to read certificate file: " + err.Error())
}
if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
    panic("failed to append PEM")
}

mysql.RegisterTLSConfig("custom", &tls.Config{
    RootCAs: rootCertPool,
})

dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?allowCleartextPasswords=true&tls=custom",
    dbUser, authenticationToken, dbEndpoint, dbName,
)
```

------
#### [ Ruby ]

**Example Ruby connection config for OCI function**  

```
conn = Mysql2::Client.new(
    host: endpoint,
    username: user,
    password: token,
    port: port,
    database: db_name,
    sslca: '/us-east-1-bundle.pem',  # Path to the certificate in container
    sslverify: true
)
```

------

## Connecting to an Amazon RDS database in a Lambda function


The following code examples show how to implement a Lambda function that connects to an Amazon RDS database. The function makes a simple database request and returns the result.

**Note**  
These code examples are valid for [.zip deployment packages](configuration-function-zip.md) only. If you're deploying your function using a [container image,](images-create.md) you must specify the Amazon RDS certificate file in your function code, as explained in the [preceding section](#oci-certificate).

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using .NET.  

```
using System.Data;
using System.Text.Json;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;
using MySql.Data.MySqlClient;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace aws_rds;

public class InputModel
{
    public string key1 { get; set; }
    public string key2 { get; set; }
}

public class Function
{
    /// <summary>
    /// Handles the Lambda function execution for connecting to RDS using IAM authentication.
    /// </summary>
    /// <param name="request">The input event data passed to the Lambda function</param>
    /// <param name="context">The Lambda execution context that provides runtime information</param>
    /// <returns>A response object containing the execution result</returns>

    public async Task<APIGatewayProxyResponse> FunctionHandler(APIGatewayProxyRequest request, ILambdaContext context)
    {
        // Sample Input: {"body": "{\"key1\":\"20\", \"key2\":\"25\"}"}
        var input = JsonSerializer.Deserialize<InputModel>(request.Body);

        // Obtain authentication token
        var authToken = RDSAuthTokenGenerator.GenerateAuthToken(
            Environment.GetEnvironmentVariable("RDS_ENDPOINT"),
            Convert.ToInt32(Environment.GetEnvironmentVariable("RDS_PORT")),
            Environment.GetEnvironmentVariable("RDS_USERNAME")
        );

        // Build the connection string with the token
        string connectionString = $"Server={Environment.GetEnvironmentVariable("RDS_ENDPOINT")};" +
                                  $"Port={Environment.GetEnvironmentVariable("RDS_PORT")};" +
                                  $"Uid={Environment.GetEnvironmentVariable("RDS_USERNAME")};" +
                                  $"Pwd={authToken};";


        try
        {
            await using var connection = new MySqlConnection(connectionString);
            await connection.OpenAsync();

            const string sql = "SELECT @param1 + @param2 AS Sum";

            await using var command = new MySqlCommand(sql, connection);
            command.Parameters.AddWithValue("@param1", int.Parse(input.key1 ?? "0"));
            command.Parameters.AddWithValue("@param2", int.Parse(input.key2 ?? "0"));

            await using var reader = await command.ExecuteReaderAsync();
            if (await reader.ReadAsync())
            {
                int result = reader.GetInt32("Sum");

                //Sample Response: {"statusCode":200,"body":"{\"message\":\"The sum is: 45\"}","isBase64Encoded":false}
                return new APIGatewayProxyResponse
                {
                    StatusCode = 200,
                    Body = JsonSerializer.Serialize(new { message = $"The sum is: {result}" })
                };
            }

        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error: {ex.Message}");
        }

        return new APIGatewayProxyResponse
        {
            StatusCode = 500,
            Body = JsonSerializer.Serialize(new { error = "Internal server error" })
        };
    }
}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using Go.  

```
package main

import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/rds/auth"
	_ "github.com/go-sql-driver/mysql"
)

type MyEvent struct {
	Name string `json:"name"`
}

func HandleRequest(event *MyEvent) (map[string]interface{}, error) {

	var dbName string = os.Getenv("DatabaseName")
	var dbUser string = os.Getenv("DatabaseUser")
	var dbHost string = os.Getenv("DBHost") // Add hostname without https
	var dbPort string = os.Getenv("Port")   // Add port number
	var dbEndpoint string = fmt.Sprintf("%s:%s", dbHost, dbPort)
	var region string = os.Getenv("AWS_REGION")

	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		panic("configuration error: " + err.Error())
	}

	authenticationToken, err := auth.BuildAuthToken(
		context.TODO(), dbEndpoint, region, dbUser, cfg.Credentials)
	if err != nil {
		panic("failed to create authentication token: " + err.Error())
	}

	dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?tls=true&allowCleartextPasswords=true",
		dbUser, authenticationToken, dbEndpoint, dbName,
	)

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		panic(err)
	}

	defer db.Close()

	var sum int
	err = db.QueryRow("SELECT ?+? AS sum", 3, 2).Scan(&sum)
	if err != nil {
		panic(err)
	}
	s := fmt.Sprint(sum)
	message := fmt.Sprintf("The selected sum is: %s", s)

	messageBytes, err := json.Marshal(message)
	if err != nil {
		return nil, err
	}

	messageString := string(messageBytes)
	return map[string]interface{}{
		"statusCode": 200,
		"headers":    map[string]string{"Content-Type": "application/json"},
		"body":       messageString,
	}, nil
}

func main() {
	lambda.Start(HandleRequest)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using Java.  

```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;
import software.amazon.awssdk.auth.credentials.DefaultCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.rdsdata.RdsDataClient;
import software.amazon.awssdk.services.rdsdata.model.ExecuteStatementRequest;
import software.amazon.awssdk.services.rdsdata.model.ExecuteStatementResponse;
import software.amazon.awssdk.services.rdsdata.model.Field;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RdsLambdaHandler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
        APIGatewayProxyResponseEvent response = new APIGatewayProxyResponseEvent();

        try {
            // Obtain auth token
            String token = createAuthToken();

            // Define connection configuration
            String connectionString = String.format("jdbc:mysql://%s:%s/%s?useSSL=true&requireSSL=true",
                    System.getenv("ProxyHostName"),
                    System.getenv("Port"),
                    System.getenv("DBName"));

            // Establish a connection to the database
            try (Connection connection = DriverManager.getConnection(connectionString, System.getenv("DBUserName"), token);
                 PreparedStatement statement = connection.prepareStatement("SELECT ? + ? AS sum")) {

                statement.setInt(1, 3);
                statement.setInt(2, 2);

                try (ResultSet resultSet = statement.executeQuery()) {
                    if (resultSet.next()) {
                        int sum = resultSet.getInt("sum");
                        response.setStatusCode(200);
                        response.setBody("The selected sum is: " + sum);
                    }
                }
            }

        } catch (Exception e) {
            response.setStatusCode(500);
            response.setBody("Error: " + e.getMessage());
        }

        return response;
    }

    private String createAuthToken() {
        // Create an RdsUtilities object to generate the IAM authentication token
        software.amazon.awssdk.services.rds.RdsUtilities rdsUtilities =
                software.amazon.awssdk.services.rds.RdsUtilities.builder()
                        .region(Region.of(System.getenv("AWS_REGION")))
                        .credentialsProvider(DefaultCredentialsProvider.create())
                        .build();

        // Build the token request for the database user
        software.amazon.awssdk.services.rds.model.GenerateAuthenticationTokenRequest tokenRequest =
                software.amazon.awssdk.services.rds.model.GenerateAuthenticationTokenRequest.builder()
                        .hostname(System.getenv("ProxyHostName"))
                        .port(Integer.parseInt(System.getenv("Port")))
                        .username(System.getenv("DBUserName"))
                        .build();

        // Generate and return a short-lived IAM authentication token
        return rdsUtilities.generateAuthenticationToken(tokenRequest);
    }
}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using JavaScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
// ES6+ example
import { Signer } from "@aws-sdk/rds-signer";
import mysql from 'mysql2/promise';

async function createAuthToken() {
  // Define connection authentication parameters
  const dbinfo = {
    hostname: process.env.ProxyHostName,
    port: process.env.Port,
    username: process.env.DBUserName,
    region: process.env.AWS_REGION,
  }

  // Create RDS Signer object
  const signer = new Signer(dbinfo);

  // Request authorization token from RDS, specifying the username
  const token = await signer.getAuthToken();
  return token;
}

async function dbOps() {

  // Obtain auth token
  const token = await createAuthToken();
  // Define connection configuration
  let connectionConfig = {
    host: process.env.ProxyHostName,
    user: process.env.DBUserName,
    password: token,
    database: process.env.DBName,
    ssl: 'Amazon RDS'
  }
  // Create the connection to the DB
  const conn = await mysql.createConnection(connectionConfig);
  // Obtain the result of the query
  const [res,] = await conn.execute('select ?+? as sum', [3, 2]);
  return res;

}

export const handler = async (event) => {
  // Execute database flow
  const result = await dbOps();
  // Return result
  return {
    statusCode: 200,
    body: JSON.stringify("The selected sum is: " + result[0].sum)
  }
};
```
Connecting to an Amazon RDS database in a Lambda function using TypeScript.  

```
import { Signer } from "@aws-sdk/rds-signer";
import mysql from 'mysql2/promise';

// RDS settings
// Using '!' (non-null assertion operator) to tell the TypeScript compiler that the DB settings are not null or undefined.
const proxy_host_name = process.env.PROXY_HOST_NAME!
const port = parseInt(process.env.PORT!)
const db_name = process.env.DB_NAME!
const db_user_name = process.env.DB_USER_NAME!
const aws_region = process.env.AWS_REGION!


async function createAuthToken(): Promise<string> {

    // Create RDS Signer object
    const signer = new Signer({
        hostname: proxy_host_name,
        port: port,
        region: aws_region,
        username: db_user_name
    });

    // Request authorization token from RDS, specifying the username
    const token = await signer.getAuthToken();
    return token;
}

async function dbOps(): Promise<mysql.QueryResult | undefined> {
    try {
        // Obtain auth token
        const token = await createAuthToken();
        const conn = await mysql.createConnection({
            host: proxy_host_name,
            user: db_user_name,
            password: token,
            database: db_name,
            ssl: 'Amazon RDS' // Ensure you have the CA bundle for SSL connection
        });
        const [rows, fields] = await conn.execute('SELECT ? + ? AS sum', [3, 2]);
        console.log('result:', rows);
        return rows;
    }
    catch (err) {
        console.log(err);
    }
}

export const lambdaHandler = async (event: any): Promise<{ statusCode: number; body: string }> => {
    // Execute database flow
    const result = await dbOps();

    // Return an error if result is undefined
    if (result == undefined)
        return {
            statusCode: 500,
            body: JSON.stringify(`Error with connection to DB host`)
        }

    // Return result
    return {
        statusCode: 200,
        body: JSON.stringify(`The selected sum is: ${result[0].sum}`)
    };
};
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using PHP.  

```
<?php
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

# using bref/bref and bref/logger for simplicity

use Bref\Context\Context;
use Bref\Event\Handler as StdHandler;
use Bref\Logger\StderrLogger;
use Aws\Rds\AuthTokenGenerator;
use Aws\Credentials\CredentialProvider;

require __DIR__ . '/vendor/autoload.php';

class Handler implements StdHandler
{
    private StderrLogger $logger;
    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }


    private function getAuthToken(): string {
        // Define connection authentication parameters
        $dbConnection = [
            'hostname' => getenv('DB_HOSTNAME'),
            'port' => getenv('DB_PORT'),
            'username' => getenv('DB_USERNAME'),
            'region' => getenv('AWS_REGION'),
        ];

        // Create RDS AuthTokenGenerator object
        $generator = new AuthTokenGenerator(CredentialProvider::defaultProvider());

        // Request authorization token from RDS, specifying the username
        return $generator->createToken(
            $dbConnection['hostname'] . ':' . $dbConnection['port'],
            $dbConnection['region'],
            $dbConnection['username']
        );
    }

    private function getQueryResults() {
        // Obtain auth token
        $token = $this->getAuthToken();

        // Define connection configuration
        $connectionConfig = [
            'host' => getenv('DB_HOSTNAME'),
            'user' => getenv('DB_USERNAME'),
            'password' => $token,
            'database' => getenv('DB_NAME'),
        ];

        // Create the connection to the DB
        $conn = new PDO(
            "mysql:host={$connectionConfig['host']};dbname={$connectionConfig['database']}",
            $connectionConfig['user'],
            $connectionConfig['password'],
            [
                PDO::MYSQL_ATTR_SSL_CA => '/path/to/rds-ca-2019-root.pem',
                PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT => true,
            ]
        );

        // Obtain the result of the query
        $stmt = $conn->prepare('SELECT ?+? AS sum');
        $stmt->execute([3, 2]);

        return $stmt->fetch(PDO::FETCH_ASSOC);
    }

    /**
     * @param mixed $event
     * @param Context $context
     * @return array
     */
    public function handle(mixed $event, Context $context): array
    {
        $this->logger->info("Processing query");

        // Execute database flow
        $result = $this->getQueryResults();

        return [
            'sum' => $result['sum']
        ];
    }
}

$logger = new StderrLogger();
return new Handler($logger);
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using Python.  

```
import json
import os
import boto3
import pymysql

# RDS settings
proxy_host_name = os.environ['PROXY_HOST_NAME']
port = int(os.environ['PORT'])
db_name = os.environ['DB_NAME']
db_user_name = os.environ['DB_USER_NAME']
aws_region = os.environ['AWS_REGION']


# Fetch RDS Auth Token
def get_auth_token():
    client = boto3.client('rds')
    token = client.generate_db_auth_token(
        DBHostname=proxy_host_name,
        Port=port,
        DBUsername=db_user_name,
        Region=aws_region
    )
    return token

def lambda_handler(event, context):
    token = get_auth_token()
    try:
        connection = pymysql.connect(
            host=proxy_host_name,
            user=db_user_name,
            password=token,
            db=db_name,
            port=port,
            ssl={'ca': 'Amazon RDS'}  # Ensure you have the CA bundle for SSL connection
        )
        
        with connection.cursor() as cursor:
            cursor.execute('SELECT %s + %s AS sum', (3, 2))
            result = cursor.fetchone()

        return result
        
    except Exception as e:
        return (f"Error: {str(e)}")  # Return an error message if an exception occurs
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using Ruby.  

```
require 'aws-sdk-rds'
require 'json'
require 'mysql2'

def lambda_handler(event:, context:)
  endpoint = ENV['DBEndpoint'] # Add the endpoint without https
  port = ENV['Port']           # 3306
  user = ENV['DBUser']
  region = ENV['DBRegion']     # 'us-east-1'
  db_name = ENV['DBName']

  credentials = Aws::Credentials.new(
    ENV['AWS_ACCESS_KEY_ID'],
    ENV['AWS_SECRET_ACCESS_KEY'],
    ENV['AWS_SESSION_TOKEN']
  )
  rds_client = Aws::RDS::AuthTokenGenerator.new(
    region: region, 
    credentials: credentials
  )

  token = rds_client.auth_token(
    endpoint: endpoint + ':' + port,
    user_name: user,
    region: region
  )

  begin
    conn = Mysql2::Client.new(
      host: endpoint,
      username: user,
      password: token,
      port: port,
      database: db_name,
      sslca: '/var/task/global-bundle.pem', 
      sslverify: true,
      enable_cleartext_plugin: true
    )
    a = 3
    b = 2
    result = conn.query("SELECT #{a} + #{b} AS sum").first['sum']
    puts result
    conn.close
    {
      statusCode: 200,
      body: result.to_json
    }
  rescue => e
    puts "Database connection failed due to #{e}"
  end
end
```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-connect-rds-iam) repository. 
Connecting to an Amazon RDS database in a Lambda function using Rust.  

```
use aws_config::BehaviorVersion;
use aws_credential_types::provider::ProvideCredentials;
use aws_sigv4::{
    http_request::{sign, SignableBody, SignableRequest, SigningSettings},
    sign::v4,
};
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde_json::{json, Value};
use sqlx::postgres::PgConnectOptions;
use std::env;
use std::time::{Duration, SystemTime};

const RDS_CERTS: &[u8] = include_bytes!("global-bundle.pem");

async fn generate_rds_iam_token(
    db_hostname: &str,
    port: u16,
    db_username: &str,
) -> Result<String, Error> {
    let config = aws_config::load_defaults(BehaviorVersion::v2024_03_28()).await;

    let credentials = config
        .credentials_provider()
        .expect("no credentials provider found")
        .provide_credentials()
        .await
        .expect("unable to load credentials");
    let identity = credentials.into();
    let region = config.region().unwrap().to_string();

    let mut signing_settings = SigningSettings::default();
    signing_settings.expires_in = Some(Duration::from_secs(900));
    signing_settings.signature_location = aws_sigv4::http_request::SignatureLocation::QueryParams;

    let signing_params = v4::SigningParams::builder()
        .identity(&identity)
        .region(&region)
        .name("rds-db")
        .time(SystemTime::now())
        .settings(signing_settings)
        .build()?;

    let url = format!(
        "https://{db_hostname}:{port}/?Action=connect&DBUser={db_user}",
        db_hostname = db_hostname,
        port = port,
        db_user = db_username
    );

    let signable_request =
        SignableRequest::new("GET", &url, std::iter::empty(), SignableBody::Bytes(&[]))
            .expect("signable request");

    let (signing_instructions, _signature) =
        sign(signable_request, &signing_params.into())?.into_parts();

    let mut url = url::Url::parse(&url).unwrap();
    for (name, value) in signing_instructions.params() {
        url.query_pairs_mut().append_pair(name, &value);
    }

    let response = url.to_string().split_off("https://".len());

    Ok(response)
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    run(service_fn(handler)).await
}

async fn handler(_event: LambdaEvent<Value>) -> Result<Value, Error> {
    let db_host = env::var("DB_HOSTNAME").expect("DB_HOSTNAME must be set");
    let db_port = env::var("DB_PORT")
        .expect("DB_PORT must be set")
        .parse::<u16>()
        .expect("PORT must be a valid number");
    let db_name = env::var("DB_NAME").expect("DB_NAME must be set");
    let db_user_name = env::var("DB_USERNAME").expect("DB_USERNAME must be set");

    let token = generate_rds_iam_token(&db_host, db_port, &db_user_name).await?;

    let opts = PgConnectOptions::new()
        .host(&db_host)
        .port(db_port)
        .username(&db_user_name)
        .password(&token)
        .database(&db_name)
        .ssl_root_cert_from_pem(RDS_CERTS.to_vec())
        .ssl_mode(sqlx::postgres::PgSslMode::Require);

    let pool = sqlx::postgres::PgPoolOptions::new()
        .connect_with(opts)
        .await?;

    let result: i32 = sqlx::query_scalar("SELECT $1 + $2")
        .bind(3)
        .bind(2)
        .fetch_one(&pool)
        .await?;

    println!("Result: {:?}", result);

    Ok(json!({
        "statusCode": 200,
        "content-type": "text/plain",
        "body": format!("The selected sum is: {result}")
    }))
}
```

------

## Processing event notifications from Amazon RDS


You can use Lambda to process event notifications from an Amazon RDS database. Amazon RDS sends notifications to an Amazon Simple Notification Service (Amazon SNS) topic, which you can configure to invoke a Lambda function. Amazon SNS wraps the message from Amazon RDS in its own event document and sends it to your function.

For more information about configuring an Amazon RDS database to send notifications, see [Using Amazon RDS event notifications](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/USER_Events.html). 

**Example Amazon RDS message in an Amazon SNS event**  

```
{
        "Records": [
          {
            "EventVersion": "1.0",
            "EventSubscriptionArn": "arn:aws-cn:sns:us-east-2:123456789012:rds-lambda:21be56ed-a058-49f5-8c98-aedd2564c486",
            "EventSource": "aws:sns",
            "Sns": {
              "SignatureVersion": "1",
              "Timestamp": "2023-01-02T12:45:07.000Z",
              "Signature": "tcc6faL2yUC6dgZdmrwh1Y4cGa/ebXEkAi6RibDsvpi+tE/1+82j...65r==",
              "SigningCertUrl": "https://sns.us-east-2.amazonaws.com/SimpleNotificationService-ac565b8b1a6c5d002d285f9598aa1d9b.pem",
              "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
              "Message": "{\"Event Source\":\"db-instance\",\"Event Time\":\"2023-01-02 12:45:06.000\",\"Identifier Link\":\"https://console.amazonaws.cn/rds/home?region=eu-west-1#dbinstance:id=dbinstanceid\",\"Source ID\":\"dbinstanceid\",\"Event ID\":\"http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Events.html#RDS-EVENT-0002\",\"Event Message\":\"Finished DB Instance backup\"}",
              "MessageAttributes": {},
              "Type": "Notification",
              "UnsubscribeUrl": "https://sns.us-east-2.amazonaws.com/?Action=Unsubscribe&amp;SubscriptionArn=arn:aws-cn:sns:us-east-2:123456789012:test-lambda:21be56ed-a058-49f5-8c98-aedd2564c486",
              "TopicArn":"arn:aws-cn:sns:us-east-2:123456789012:sns-lambda",
              "Subject": "RDS Notification Message"
            }
          }
        ]
      }
```
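In the example above, the Amazon RDS notification is a JSON string nested inside the `Sns.Message` field, so the function must parse it separately from the surrounding SNS envelope. A minimal Python handler sketch (not part of the official examples) that extracts the event details from records like this one:

```
import json

def lambda_handler(event, context):
    results = []
    for record in event['Records']:
        # The SNS envelope is already a dict, but the RDS notification
        # inside 'Message' is still a JSON-encoded string
        message = json.loads(record['Sns']['Message'])
        results.append({
            'source_id': message['Source ID'],
            'event_message': message['Event Message'],
        })
    return results
```

With the sample event above, the handler returns `[{'source_id': 'dbinstanceid', 'event_message': 'Finished DB Instance backup'}]`.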

## Complete Lambda and Amazon RDS tutorial

+ [ Using a Lambda function to access an Amazon RDS database](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/rds-lambda-tutorial.html) – From the Amazon RDS User Guide, learn how to use a Lambda function to write data to an Amazon RDS database through an Amazon RDS Proxy. Your Lambda function will read records from an Amazon SQS queue and write new items to a table in your database whenever a message is added.

# Select a database service for your Lambda-based applications
Amazon RDS vs DynamoDB

Many serverless applications need to store and retrieve data. Amazon offers multiple database options that work with Lambda functions. Two of the most popular choices are Amazon DynamoDB, a NoSQL database service, and Amazon RDS, a traditional relational database solution. The following sections explain the key differences between these services when using them with Lambda and help you select the right database service for your serverless application.

To learn more about the other database services offered by Amazon, and to understand their use cases and tradeoffs more generally, see [Choosing an Amazon database service](https://docs.amazonaws.cn/decision-guides/latest/databases-on-aws-how-to-choose/databases-on-aws-how-to-choose.html). All of the Amazon database services are compatible with Lambda, but not all of them may be suited to your particular use case.

## What are your choices when selecting a database service with Lambda?
What are your choices

Amazon offers multiple database services. For serverless applications, two of the most popular choices are DynamoDB and Amazon RDS.
+ **DynamoDB** is a fully managed NoSQL database service optimized for serverless applications. It provides seamless scaling and consistent single-digit millisecond performance at any scale.
+ **Amazon RDS** is a managed relational database service that supports multiple database engines including MySQL and PostgreSQL. It provides familiar SQL capabilities with managed infrastructure.

## Recommendations if you already know your requirements
Basic recommendations

If you're already clear on your requirements, here are our basic recommendations:

We recommend [DynamoDB](with-ddb.md) for serverless applications that need consistent low-latency performance, automatic scaling, and don't require complex joins or transactions. It's particularly well-suited for Lambda-based applications due to its serverless nature.

[Amazon RDS](services-rds.md) is a better choice when you need complex SQL queries, joins, or have existing applications using relational databases. However, be aware that connecting Lambda functions to Amazon RDS requires additional configuration and can impact cold start times.

## What to consider when selecting a database service
What to consider

When selecting between DynamoDB and Amazon RDS for your Lambda applications, consider these factors:
+ Connection management and cold starts
+ Data access patterns
+ Query complexity
+ Data consistency requirements
+ Scaling characteristics
+ Cost model

By understanding these factors, you can select the option that best meets the needs of your particular use case.

### Connection management and cold starts

+ DynamoDB uses an HTTP API for all operations. Lambda functions can make immediate requests without maintaining connections, resulting in better cold start performance. Each request is authenticated using Amazon credentials without connection overhead.
+ Amazon RDS requires managing connection pools since it uses traditional database connections. This can impact cold starts as new Lambda instances need to establish connections. You'll need to implement connection pooling strategies and potentially use [Amazon RDS Proxy](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/rds-proxy.html) to manage connections effectively. Note that using Amazon RDS Proxy incurs additional costs.
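The difference shows up directly in handler code. The following sketch (with a hypothetical `orders` table and key schema) shows the common DynamoDB pattern: create the client once, outside the handler, so every warm invocation reuses it with no connection setup. The injectable `client` parameter is an illustration choice that keeps the handler testable without AWS access.

```python
import json

_client = None  # created once per execution environment, reused on warm starts


def _dynamodb_client():
    """Create the DynamoDB client lazily on first use."""
    global _client
    if _client is None:
        import boto3  # deferred so the module imports without the SDK present
        _client = boto3.client("dynamodb")
    return _client


def lambda_handler(event, context, client=None):
    # No connection to open or pool to manage: each call is an HTTP request.
    client = client or _dynamodb_client()
    result = client.get_item(
        TableName="orders",  # hypothetical table name
        Key={"orderId": {"S": event["orderId"]}},
    )
    return json.dumps(result.get("Item", {}))
```

An equivalent Amazon RDS handler would instead need to open a database connection (or borrow one from a pool) at this point, which is the work that Amazon RDS Proxy offloads.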

### Data access patterns

+ DynamoDB works best with known access patterns and single-table designs. It's ideal for Lambda applications that need consistent low-latency access to data based on primary keys or secondary indexes.
+ Amazon RDS provides flexibility for complex queries and changing access patterns. It's better suited when your Lambda functions need to perform unique, tailored queries or complex joins across multiple tables.

### Query complexity

+ DynamoDB excels at simple, key-based operations and predefined access patterns. Complex queries must be designed around index structures, and joins must be handled in application code.
+ Amazon RDS supports complex SQL queries with joins, subqueries, and aggregations. This can simplify your Lambda function code when complex data operations are needed.
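As an illustration of the gap, the following sketch uses an in-memory SQLite database (standing in for an RDS engine, with made-up `customers` and `orders` tables) to show a join-plus-aggregation that a relational engine performs in one statement, next to the application-side rollup a key-value design would otherwise need:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ana'), (2, 'Ben');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0), (12, 2, 15.0);
""")

# Relational engine: one declarative statement joins and aggregates.
totals = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()

# Without joins, the same rollup happens in application code after
# fetching the raw items.
by_customer = {}
for _id, customer_id, total in conn.execute("SELECT * FROM orders"):
    by_customer[customer_id] = by_customer.get(customer_id, 0.0) + total
```

In DynamoDB, you would typically avoid the application-side loop by designing the table around the access pattern, for example by maintaining a running total on the customer item.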

### Data consistency requirements

+ DynamoDB offers both eventual and strong consistency options, with strong consistency available for single-item reads. Transactions are supported but with some limitations.
+ Amazon RDS provides full atomicity, consistency, isolation, and durability (ACID) compliance and complex transaction support. If your Lambda functions require complex transactions or strong consistency across multiple records, Amazon RDS might be more suitable.

### Scaling characteristics

+ DynamoDB scales automatically with your workload. It can handle sudden spikes in traffic from Lambda functions without pre-provisioning. You can use on-demand capacity mode to pay only for what you use, perfectly matching Lambda's scaling model.
+ Amazon RDS has fixed capacity based on the instance size you choose. If multiple Lambda functions try to connect simultaneously, you may exceed your connection quota. You need to carefully manage connection pools and potentially implement retry logic.
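A minimal sketch of that retry logic follows. It is generic Python: the `connect` callable stands in for whatever your database driver provides, and the parameter defaults are illustrative.

```python
import time


def connect_with_retry(connect, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call `connect` until it succeeds, backing off exponentially."""
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error to Lambda
            sleep(base_delay * (2 ** attempt))
```

Backoff with jitter spreads out reconnection attempts when many concurrent Lambda instances hit the connection quota at once; Amazon RDS Proxy reduces how often this situation occurs in the first place.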

### Cost model

+ DynamoDB's pricing aligns well with serverless applications. With on-demand capacity, you pay only for the actual reads and writes performed by your Lambda functions. There are no charges for idle time.
+ Amazon RDS charges for the running instance regardless of usage. This can be less cost-effective for the sporadic workloads that are typical of serverless applications. However, it might be more economical for high-throughput workloads with consistent usage.

## Getting started with your chosen database service
Select a database

Now that you've read about the criteria for selecting between DynamoDB and Amazon RDS and the key differences between them, you can select the option that best matches your needs and use the following resources to get started using it.

------
#### [ DynamoDB ]

**Get started with DynamoDB with the following resources**
+ For an introduction to the DynamoDB service, read [What is DynamoDB?](https://docs.amazonaws.cn/amazondynamodb/latest/developerguide/Introduction.html) in the *Amazon DynamoDB Developer Guide*.
+ Follow the tutorial [Using Lambda with API Gateway](services-apigateway-tutorial.md) to see an example of using a Lambda function to perform CRUD operations on a DynamoDB table in response to an API request.
+ Read [Programming with DynamoDB and the Amazon SDKs](https://docs.amazonaws.cn/amazondynamodb/latest/developerguide/Programming.html) in the *Amazon DynamoDB Developer Guide* to learn more about how to access DynamoDB from within your Lambda function by using one of the Amazon SDKs.

------
#### [ Amazon RDS ]

**Get started with Amazon RDS with the following resources**
+ For an introduction to the Amazon RDS service, read [What is Amazon Relational Database Service (Amazon RDS)?](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/Welcome.html) in the *Amazon Relational Database Service User Guide*.
+ Follow the tutorial [Using a Lambda function to access an Amazon RDS database](https://docs.amazonaws.cn/AmazonRDS/latest/UserGuide/rds-lambda-tutorial.html) in the *Amazon Relational Database Service User Guide*.
+ Learn more about using Lambda with Amazon RDS by reading [Using Amazon Lambda with Amazon RDS](services-rds.md).

------

# Process Amazon S3 event notifications with Lambda
S3

You can use Lambda to process [event notifications](https://docs.amazonaws.cn/AmazonS3/latest/userguide/NotificationHowTo.html) from Amazon Simple Storage Service. Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket and grant Amazon S3 permission to invoke a function in the function's resource-based permissions policy.

**Warning**  
If your Lambda function uses the same bucket that triggers it, it could cause the function to run in a loop. For example, if the bucket triggers a function each time an object is uploaded, and the function uploads an object to the bucket, then the function indirectly triggers itself. To avoid this, use two buckets, or configure the trigger to only apply to a prefix used for incoming objects.
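One way to scope the trigger to a prefix is through the bucket's notification configuration. The sketch below builds the configuration document that the Boto3 `put_bucket_notification_configuration` call expects; the `incoming/` prefix and the function ARN are placeholders for your own values.

```python
def build_notification_config(function_arn, prefix="incoming/"):
    """Notification configuration that fires only for keys under `prefix`."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": prefix}]}
                },
            }
        ]
    }


def apply_notification_config(bucket, function_arn):
    # Requires the SDK and credentials; shown for completeness.
    import boto3
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=build_notification_config(function_arn),
    )
```

The function can then safely write its output under a different prefix (for example `processed/`) in the same bucket without re-triggering itself.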

Amazon S3 invokes your function [asynchronously](invocation-async.md) with an event that contains details about the object. The following example shows an event that Amazon S3 sent when a deployment package was uploaded to Amazon S3.

**Example Amazon S3 notification event**  

```
{
  "Records": [
    {
      "eventVersion": "2.1",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-2",
      "eventTime": "2019-09-03T19:37:27.192Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:AIDAINPONIXQXHT3IKHL2"
      },
      "requestParameters": {
        "sourceIPAddress": "205.255.255.255"
      },
      "responseElements": {
        "x-amz-request-id": "D82B88E5F771F645",
        "x-amz-id-2": "vlR7PnpV2Ce81l0PRw6jlUpck7Jo5ZsQjryTjKlc5aLWGVHPZLj5NeC6qMa0emYBDXOo6QBU0Wo="
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "828aa6fc-f7b5-4305-8584-487c791949c1",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "A3I5XTEXAMAI3E"
          },
          "arn": "arn:aws-cn:s3:::lambda-artifacts-deafc19498e3f2df"
        },
        "object": {
          "key": "b21b84d653bb07b05b1e6b33684dc11b",
          "size": 1305107,
          "eTag": "b21b84d653bb07b05b1e6b33684dc11b",
          "sequencer": "0C0F6F405D6ED209E1"
        }
      }
    }
  ]
}
```
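Note that the object key in the event is URL-encoded, so a key containing spaces arrives with `+` characters. A handler should decode it before calling Amazon S3, as in this minimal sketch (the helper name is illustrative):

```python
import urllib.parse


def bucket_and_key(event):
    """Extract the bucket name and decoded object key from an S3 event."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return bucket, key
```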

To invoke your function, Amazon S3 needs permission from the function's [resource-based policy](access-control-resource-based.md). When you configure an Amazon S3 trigger in the Lambda console, the console modifies the resource-based policy to allow Amazon S3 to invoke the function if the bucket name and account ID match. If you configure the notification in Amazon S3, you use the Lambda API to update the policy. You can also use the Lambda API to grant permission to another account, or restrict permission to a designated alias.
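With the Amazon SDK for Python, that permission statement can be added with the Lambda `add_permission` API. The sketch below separates building the arguments from making the call; the function name, bucket ARN, statement ID, and account ID are placeholders.

```python
def s3_invoke_permission(function_name, bucket_arn, account_id):
    """Arguments for add_permission that let Amazon S3 invoke the function."""
    return {
        "FunctionName": function_name,
        "StatementId": "s3-invoke-statement",  # any unique statement ID
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",
        "SourceArn": bucket_arn,
        # Restrict to your account so a recreated bucket with the same
        # name in another account cannot invoke the function.
        "SourceAccount": account_id,
    }


def grant_s3_invoke(function_name, bucket_arn, account_id):
    # Requires the SDK and credentials; shown for completeness.
    import boto3
    boto3.client("lambda").add_permission(
        **s3_invoke_permission(function_name, bucket_arn, account_id)
    )
```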

If your function uses the Amazon SDK to manage Amazon S3 resources, it also needs Amazon S3 permissions in its [execution role](lambda-intro-execution-role.md). 

**Topics**
+ [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](with-s3-example.md)
+ [Tutorial: Using an Amazon S3 trigger to create thumbnail images](with-s3-tutorial.md)

# Tutorial: Using an Amazon S3 trigger to invoke a Lambda function
Tutorial: Use an S3 trigger

In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3_tut_config.png)


This tutorial demonstrates how to:

1. Create an Amazon S3 bucket.

1. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket.

1. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket.

1. Test your function, first with a dummy event, and then using the trigger.

By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to an Amazon S3 bucket. You can complete this tutorial using only the Amazon Web Services Management Console.

## Create an Amazon S3 bucket


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps1.png)


**To create an Amazon S3 bucket**

1. Open the [Amazon S3 console](https://console.amazonaws.cn/s3) and select the **General purpose buckets** page.

1. Select the Amazon Web Services Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create bucket**.

1. Under **General configuration**, do the following:

   1. For **Bucket type**, ensure **General purpose** is selected.

   1. For **Bucket name**, enter a globally unique name that meets the Amazon S3 [Bucket naming rules](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucketnamingrules.html). Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-).

1. Leave all other options set to their default values and choose **Create bucket**.

## Upload a test object to your bucket


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps2.png)


**To upload a test object**

1. Open the [Buckets](https://console.amazonaws.cn/s3/buckets) page of the Amazon S3 console and choose the bucket you created during the previous step.

1. Choose **Upload**.

1. Choose **Add files** and select the object that you want to upload. You can select any file (for example, `HappyFace.jpg`).

1. Choose **Open**, then choose **Upload**.

Later in the tutorial, you’ll test your Lambda function using this object.

## Create a permissions policy


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps3.png)


Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. 

**To create the policy**

1. Open the [Policies page](https://console.amazonaws.cn/iam/home#/policies) of the IAM console.

1. Choose **Create Policy**.

1. Choose the **JSON** tab, and then paste the following custom policy into the JSON editor.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents",
                   "logs:CreateLogGroup",
                   "logs:CreateLogStream"
               ],
               "Resource": "arn:aws-cn:logs:*:*:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": "arn:aws-cn:s3:::*/*"
           }
       ]
   }
   ```

------

1. Choose **Next: Tags**.

1. Choose **Next: Review**.

1. Under **Review policy**, for the policy **Name**, enter **s3-trigger-tutorial**.

1. Choose **Create policy**.

## Create an execution role


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps4.png)


An [execution role](lambda-intro-execution-role.md) is an Amazon Identity and Access Management (IAM) role that grants a Lambda function permission to access Amazon Web Services services and resources. In this step, create an execution role using the permissions policy that you created in the previous step.

**To create an execution role and attach your custom permissions policy**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Choose **Create role**.

1. For the type of trusted entity, choose **Amazon service**, then for the use case, choose **Lambda**.

1. Choose **Next**.

1. In the policy search box, enter **s3-trigger-tutorial**.

1. In the search results, select the policy that you created (`s3-trigger-tutorial`), and then choose **Next**.

1. Under **Role details**, for the **Role name**, enter **lambda-s3-trigger-role**, then choose **Create role**.

## Create the Lambda function


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps5.png)


Create a Lambda function in the console using the Python 3.14 runtime.

**To create the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Make sure you're working in the same Amazon Web Services Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create function**.

1. Choose **Author from scratch**.

1. Under **Basic information**, do the following:

   1. For **Function name**, enter `s3-trigger-tutorial`.

   1. For **Runtime**, choose **Python 3.14**.

   1. For **Architecture**, choose **x86\_64**.

1. In the **Change default execution role** tab, do the following:

   1. Expand the tab, then choose **Use an existing role**.

   1. Select the `lambda-s3-trigger-role` you created earlier.

1. Choose **Create function**.

## Deploy the function code


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps6.png)


This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in.

The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the `event` parameter it receives from Amazon S3. The function then uses the [get\_object](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/get_object.html) method from the Amazon SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.

**To deploy the function code**

1. Choose the **Python** tab in the following box and copy the code.

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using .NET.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   using System.Threading.Tasks;
   using Amazon.Lambda.Core;
   using Amazon.S3;
   using System;
   using Amazon.Lambda.S3Events;
   using System.Web;
   
   // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
   [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]
   
   namespace S3Integration
   {
       public class Function
       {
           private static AmazonS3Client _s3Client;
           public Function() : this(null)
           {
           }
   
           internal Function(AmazonS3Client s3Client)
           {
               _s3Client = s3Client ?? new AmazonS3Client();
           }
   
           public async Task<string> Handler(S3Event evt, ILambdaContext context)
           {
               try
               {
                   if (evt.Records.Count <= 0)
                   {
                       context.Logger.LogLine("Empty S3 Event received");
                       return string.Empty;
                   }
   
                   var bucket = evt.Records[0].S3.Bucket.Name;
                   var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);
   
                   context.Logger.LogLine($"Request is for {bucket} and {key}");
   
                   var objectResult = await _s3Client.GetObjectAsync(bucket, key);
   
                   context.Logger.LogLine($"Returning {objectResult.Key}");
   
                   return objectResult.Key;
               }
               catch (Exception e)
               {
                   context.Logger.LogLine($"Error processing request - {e.Message}");
   
                   return string.Empty;
               }
           }
       }
   }
   ```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Go.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   package main
   
   import (
   	"context"
   	"log"
   
   	"github.com/aws/aws-lambda-go/events"
   	"github.com/aws/aws-lambda-go/lambda"
   	"github.com/aws/aws-sdk-go-v2/config"
   	"github.com/aws/aws-sdk-go-v2/service/s3"
   )
   
   func handler(ctx context.Context, s3Event events.S3Event) error {
   	sdkConfig, err := config.LoadDefaultConfig(ctx)
   	if err != nil {
   		log.Printf("failed to load default config: %s", err)
   		return err
   	}
   	s3Client := s3.NewFromConfig(sdkConfig)
   
   	for _, record := range s3Event.Records {
   		bucket := record.S3.Bucket.Name
   		key := record.S3.Object.URLDecodedKey
   		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
   			Bucket: &bucket,
   			Key:    &key,
   		})
   		if err != nil {
   			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
   			return err
   		}
   		log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
   	}
   
   	return nil
   }
   
   func main() {
   	lambda.Start(handler)
   }
   ```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Java.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   package example;
   
   import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
   import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
   import software.amazon.awssdk.services.s3.S3Client;
   
   import com.amazonaws.services.lambda.runtime.Context;
   import com.amazonaws.services.lambda.runtime.RequestHandler;
   import com.amazonaws.services.lambda.runtime.events.S3Event;
   import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
   
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class Handler implements RequestHandler<S3Event, String> {
       private static final Logger logger = LoggerFactory.getLogger(Handler.class);
       @Override
       public String handleRequest(S3Event s3event, Context context) {
           try {
             S3EventNotificationRecord record = s3event.getRecords().get(0);
             String srcBucket = record.getS3().getBucket().getName();
             String srcKey = record.getS3().getObject().getUrlDecodedKey();
   
             S3Client s3Client = S3Client.builder().build();
             HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);
   
             logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType());
   
             return "Ok";
           } catch (Exception e) {
             throw new RuntimeException(e);
           }
       }
   
       private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
           HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                   .bucket(bucket)
                   .key(key)
                   .build();
           return s3Client.headObject(headObjectRequest);
       }
   }
   ```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using JavaScript.  

   ```
   import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";
   
   const client = new S3Client();
   
   export const handler = async (event, context) => {
   
       // Get the object from the event and show its content type
       const bucket = event.Records[0].s3.bucket.name;
       const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
   
       try {
           const { ContentType } = await client.send(new HeadObjectCommand({
               Bucket: bucket,
               Key: key,
           }));
   
           console.log('CONTENT TYPE:', ContentType);
           return ContentType;
   
       } catch (err) {
           console.log(err);
           const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
           console.log(message);
           throw new Error(message);
       }
   };
   ```
Consuming an S3 event with Lambda using TypeScript.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   import { S3Event } from 'aws-lambda';
   import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';
   
   const s3 = new S3Client({ region: process.env.AWS_REGION });
   
   export const handler = async (event: S3Event): Promise<string | undefined> => {
     // Get the object from the event and show its content type
     const bucket = event.Records[0].s3.bucket.name;
     const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
     const params = {
       Bucket: bucket,
       Key: key,
     };
     try {
       const { ContentType } = await s3.send(new HeadObjectCommand(params));
       console.log('CONTENT TYPE:', ContentType);
       return ContentType;
     } catch (err) {
       console.log(err);
       const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
       console.log(message);
       throw new Error(message);
     }
   };
   ```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using PHP.  

   ```
   <?php
   
   use Bref\Context\Context;
   use Bref\Event\S3\S3Event;
   use Bref\Event\S3\S3Handler;
   use Bref\Logger\StderrLogger;
   
   require __DIR__ . '/vendor/autoload.php';
   
   
   class Handler extends S3Handler 
   {
       private StderrLogger $logger;
       public function __construct(StderrLogger $logger)
       {
           $this->logger = $logger;
       }
       
       public function handleS3(S3Event $event, Context $context) : void
       {
           $this->logger->info("Processing S3 records");
   
           // Get the object from the event and show its content type
           $records = $event->getRecords();
           
           foreach ($records as $record) 
           {
               $bucket = $record->getBucket()->getName();
               $key = urldecode($record->getObject()->getKey());
   
               try {
                   $fileSize = urldecode($record->getObject()->getSize());
                   echo "File Size: " . $fileSize . "\n";
                   // TODO: Implement your custom processing logic here
               } catch (Exception $e) {
                   echo $e->getMessage() . "\n";
                   echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                   throw $e;
               }
           }
       }
   }
   
   $logger = new StderrLogger();
   return new Handler($logger);
   ```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Python.  

   ```
   # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   # SPDX-License-Identifier: Apache-2.0
   import json
   import urllib.parse
   import boto3
   
   print('Loading function')
   
   s3 = boto3.client('s3')
   
   
   def lambda_handler(event, context):
       #print("Received event: " + json.dumps(event, indent=2))
   
       # Get the object from the event and show its content type
       bucket = event['Records'][0]['s3']['bucket']['name']
       key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
       try:
           response = s3.get_object(Bucket=bucket, Key=key)
           print("CONTENT TYPE: " + response['ContentType'])
           return response['ContentType']
       except Exception as e:
           print(e)
           print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
           raise e
   ```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Ruby.  

   ```
   require 'json'
   require 'uri'
   require 'aws-sdk'
   
   puts 'Loading function'
   
   def lambda_handler(event:, context:)
     s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
     # puts "Received event: #{JSON.dump(event)}"
   
     # Get the object from the event and show its content type
     bucket = event['Records'][0]['s3']['bucket']['name']
     key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
     begin
       response = s3.get_object(bucket: bucket, key: key)
       puts "CONTENT TYPE: #{response.content_type}"
       return response.content_type
     rescue StandardError => e
       puts e.message
       puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
       raise e
     end
   end
   ```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-s3-to-lambda) repository. 
Consuming an S3 event with Lambda using Rust.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   use aws_lambda_events::event::s3::S3Event;
   use aws_sdk_s3::{Client};
   use lambda_runtime::{run, service_fn, Error, LambdaEvent};
   
   
   /// Main function
   #[tokio::main]
   async fn main() -> Result<(), Error> {
       tracing_subscriber::fmt()
           .with_max_level(tracing::Level::INFO)
           .with_target(false)
           .without_time()
           .init();
   
       // Initialize the AWS SDK for Rust
       let config = aws_config::load_from_env().await;
       let s3_client = Client::new(&config);
   
       let res = run(service_fn(|request: LambdaEvent<S3Event>| {
           function_handler(&s3_client, request)
       })).await;
   
       res
   }
   
   async fn function_handler(
       s3_client: &Client,
       evt: LambdaEvent<S3Event>
   ) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from S3");

    if evt.payload.records.is_empty() {
        tracing::info!("Empty S3 event received");
        return Ok(());
    }
   
       let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
       let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");
   
       tracing::info!("Request is for {} and object {}", bucket, key);
   
       let s3_get_object_result = s3_client
           .get_object()
           .bucket(bucket)
           .key(key)
           .send()
           .await;
   
       match s3_get_object_result {
           Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
           Err(_) => tracing::info!("Failure with S3 Get Object request")
       }
   
       Ok(())
   }
   ```

------

1. In the **Code source** pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created.

1. In the **DEPLOY** section, choose **Deploy** to update your function's code:  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/getting-started-tutorial/deploy-console.png)

## Create the Amazon S3 trigger


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps7.png)


**To create the Amazon S3 trigger**

1. In the **Function overview** pane, choose **Add trigger**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/overview-trigger.png)

1. Select **S3**.

1. Under **Bucket**, select the bucket you created earlier in the tutorial.

1. Under **Event types**, be sure that **All object create events** is selected.

1. Under **Recursive invocation**, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.

1. Choose **Add**.

**Note**  
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an [event notification](https://docs.amazonaws.cn/AmazonS3/latest/userguide/EventNotifications.html) on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these checks on any other event notifications configured for that bucket.  
Because of these checks, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created:  

```
An error occurred when creating the trigger: Unable to validate the following destination configurations.
```
You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies.

## Test your Lambda function with a dummy event

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps8.png)


**To test the Lambda function with a dummy event**

1. In the Lambda console page for your function, choose the **Test** tab.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/test-tab.png)

1. For **Event name**, enter `MyTestEvent`.

1. In the **Event JSON**, paste the following test event. Be sure to replace these values:
   + Replace `us-east-1` with the region you created your Amazon S3 bucket in.
   + Replace both instances of `amzn-s3-demo-bucket` with the name of your own Amazon S3 bucket.
   + Replace `test%2Fkey` with the name of the test object you uploaded to your bucket earlier (for example, `HappyFace.jpg`).

   ```
   {
     "Records": [
       {
         "eventVersion": "2.0",
         "eventSource": "aws:s3",
         "awsRegion": "us-east-1",
         "eventTime": "1970-01-01T00:00:00.000Z",
         "eventName": "ObjectCreated:Put",
         "userIdentity": {
           "principalId": "EXAMPLE"
         },
         "requestParameters": {
           "sourceIPAddress": "127.0.0.1"
         },
         "responseElements": {
           "x-amz-request-id": "EXAMPLE123456789",
           "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
         },
         "s3": {
           "s3SchemaVersion": "1.0",
           "configurationId": "testConfigRule",
           "bucket": {
             "name": "amzn-s3-demo-bucket",
             "ownerIdentity": {
               "principalId": "EXAMPLE"
             },
             "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           "object": {
             "key": "test%2Fkey",
             "size": 1024,
             "eTag": "0123456789abcdef0123456789abcdef",
             "sequencer": "0A1B2C3D4E5F678901"
           }
         }
       }
     ]
   }
   ```

1. Choose **Save**.

1. Choose **Test**.

1. If your function runs successfully, you’ll see output similar to the following in the **Execution results** tab.

   ```
   Response
   "image/jpeg"
   
   Function Logs
   START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
   2021-02-18T21:40:59.280Z    12b3cae7-5f4e-415e-93e6-416b8f8b66e6    INFO    INPUT BUCKET AND KEY:  { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
   2021-02-18T21:41:00.215Z    12b3cae7-5f4e-415e-93e6-416b8f8b66e6    INFO    CONTENT TYPE: image/jpeg
   END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
   REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6    Duration: 976.25 ms    Billed Duration: 977 ms    Memory Size: 128 MB    Max Memory Used: 90 MB    Init Duration: 430.47 ms        
   
   Request ID
   12b3cae7-5f4e-415e-93e6-416b8f8b66e6
   ```
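The `test%2Fkey` value in the sample event illustrates an important detail: Amazon S3 URL-encodes object keys in event notifications. A handler normally decodes the key before calling the S3 API, as the Python version of this tutorial does with `unquote_plus`. A minimal sketch (the event structure is trimmed to the fields used):

```python
from urllib.parse import unquote_plus

def extract_object(event):
    """Pull the bucket name and decoded object key from the first record
    of an S3 event notification. S3 URL-encodes keys, so 'test/key'
    arrives as 'test%2Fkey' and '+' stands in for a space."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = unquote_plus(record["s3"]["object"]["key"])
    return bucket, key

event = {"Records": [{"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                             "object": {"key": "test%2Fkey"}}}]}
print(extract_object(event))  # ('amzn-s3-demo-bucket', 'test/key')
```

Skipping the decoding step is a common source of `NoSuchKey` errors when a function is first tested with real uploads that contain spaces or slashes.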

### Test the Lambda function with the Amazon S3 trigger


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-example/s3trigger_tut_steps9.png)


To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function’s output.

**To upload an object to your Amazon S3 bucket**

1. Open the [Buckets](https://console.amazonaws.cn/s3/buckets) page of the Amazon S3 console and choose the bucket that you created earlier.

1. Choose **Upload**.

1. Choose **Add files** and use the file selector to choose an object you want to upload. This object can be any file you choose.

1. Choose **Open**, then choose **Upload**.

**To verify the function invocation using CloudWatch Logs**

1. Open the [CloudWatch](https://console.amazonaws.cn/cloudwatch/home) console.

1. Make sure you're working in the same Amazon Web Services Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/console_region_select.png)

1. Choose **Logs**, then choose **Log groups**.

1. Choose the log group for your function (`/aws/lambda/s3-trigger-tutorial`).

1. Under **Log streams**, choose the most recent log stream.

1. If your function was invoked correctly in response to your Amazon S3 trigger, you’ll see output similar to the following. The `CONTENT TYPE` you see depends on the type of file you uploaded to your bucket.

   ```
   2022-05-09T23:17:28.702Z	0cae7f5a-b0af-4c73-8563-a3430333cc10	INFO	CONTENT TYPE: image/jpeg
   ```

## Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the S3 bucket**

1. Open the [Amazon S3 console.](https://console.amazonaws.cn//s3/home#)

1. Select the bucket you created.

1. Choose **Delete**.

1. Enter the name of the bucket in the text input field.

1. Choose **Delete bucket**.

## Next steps


In [Tutorial: Using an Amazon S3 trigger to create thumbnail images](with-s3-tutorial.md), the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of Amazon and Lambda domain knowledge. It demonstrates how to create resources using the Amazon Command Line Interface (Amazon CLI) and how to create a .zip file archive deployment package for the function and its dependencies.

# Tutorial: Using an Amazon S3 trigger to create thumbnail images

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_resources.png)


To complete this tutorial, you carry out the following steps:

1. Create source and destination Amazon S3 buckets and upload a sample image.

1. Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket.

1. Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket.

1. Test your function, first with a dummy event, and then by uploading an image to your source bucket.

By completing these steps, you’ll learn how to use Lambda to carry out a file processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the Amazon Command Line Interface (Amazon CLI) or the Amazon Web Services Management Console.

If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try [Tutorial: Using an Amazon S3 trigger to invoke a Lambda function](https://docs.amazonaws.cn/lambda/latest/dg/with-s3-example.html).

**Topics**
+ [Prerequisites](#with-s3-example-prereqs)
+ [Create two Amazon S3 buckets](#with-s3-tutorial-prepare-create-buckets)
+ [Upload a test image to your source bucket](#with-s3-tutorial-test-image)
+ [Create a permissions policy](#with-s3-tutorial-create-policy)
+ [Create an execution role](#with-s3-tutorial-create-execution-role)
+ [Create the function deployment package](#with-s3-tutorial-create-function-package)
+ [Create the Lambda function](#with-s3-tutorial-create-function-createfunction)
+ [Configure Amazon S3 to invoke the function](#with-s3-tutorial-configure-s3-trigger)
+ [Test your Lambda function with a dummy event](#with-s3-tutorial-dummy-test)
+ [Test your function using the Amazon S3 trigger](#with-s3-tutorial-test-s3)
+ [Clean up your resources](#s3-tutorial-cleanup)

## Prerequisites


If you want to use the Amazon CLI to complete the tutorial, install the [latest version of the Amazon Command Line Interface](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html).

For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language that you want to use. 

### Install the Amazon Command Line Interface


If you have not yet installed the Amazon Command Line Interface, follow the steps at [Installing or updating the latest version of the Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

**Note**  
In Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). 

## Create two Amazon S3 buckets


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps1.png)


First create two Amazon S3 buckets. The first bucket is the source bucket you will upload your images to. The second bucket is used by Lambda to save the resized thumbnail when you invoke your function.

------
#### [ Amazon Web Services Management Console ]

**To create the Amazon S3 buckets (console)**

1. Open the [Amazon S3 console](https://console.amazonaws.cn/s3) and select the **General purpose buckets** page.

1. Select the Amazon Web Services Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create bucket**.

1. Under **General configuration**, do the following:

   1. For **Bucket type**, ensure **General purpose** is selected.

   1. For **Bucket name**, enter a globally unique name that meets the Amazon S3 [Bucket naming rules](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucketnamingrules.html). Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-).

1. Leave all other options set to their default values and choose **Create bucket**.

1. Repeat steps 1 to 5 to create your destination bucket. For **Bucket name**, enter `amzn-s3-demo-source-bucket-resized`, where `amzn-s3-demo-source-bucket` is the name of the source bucket you just created.

------
#### [ Amazon CLI ]

**To create the Amazon S3 buckets (Amazon CLI)**

1. Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 [Bucket naming rules](https://docs.amazonaws.cn/AmazonS3/latest/userguide/bucketnamingrules.html). Names can only contain lower case letters, numbers, dots (.), and hyphens (-). For `region` and `LocationConstraint`, choose the [Amazon Web Services Region](https://docs.amazonaws.cn/general/latest/gr/lambda-service.html) closest to your geographical location.

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
   --create-bucket-configuration LocationConstraint=us-east-1
   ```

   Later in the tutorial, you must create your Lambda function in the same Amazon Web Services Region as your source bucket, so make a note of the region you chose.

1. Run the following command to create your destination bucket. For the bucket name, you must use `amzn-s3-demo-source-bucket-resized`, where `amzn-s3-demo-source-bucket` is the name of the source bucket you created in step 1. For `region` and `LocationConstraint`, choose the same Amazon Web Services Region you used to create your source bucket.

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
   --create-bucket-configuration LocationConstraint=us-east-1
   ```

------
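The `-resized` suffix is not arbitrary: later in the tutorial, the function code derives the destination bucket and key from the source values by appending `-resized` to the bucket name and prefixing the key with `resized-`, so the destination bucket must follow this exact convention. A minimal sketch of the mapping (bucket and key values are placeholders):

```python
def destination_for(src_bucket: str, src_key: str) -> tuple[str, str]:
    """Mirror the tutorial's naming convention: thumbnails go to
    '<source-bucket>-resized' under a 'resized-' key prefix."""
    return src_bucket + "-resized", "resized-" + src_key

print(destination_for("amzn-s3-demo-source-bucket", "HappyFace.jpg"))
# ('amzn-s3-demo-source-bucket-resized', 'resized-HappyFace.jpg')
```

If the destination bucket does not exist under that derived name, the function's upload step fails at runtime even though the trigger and permissions are configured correctly.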

## Upload a test image to your source bucket


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps2.png)


Later in the tutorial, you’ll test your Lambda function by invoking it using the Amazon CLI or the Lambda console. To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be any JPG or PNG file you choose.

------
#### [ Amazon Web Services Management Console ]

**To upload a test image to your source bucket (console)**

1. Open the [Buckets](https://console.amazonaws.cn/s3/buckets) page of the Amazon S3 console.

1. Select the source bucket you created in the previous step.

1. Choose **Upload**.

1. Choose **Add files** and use the file selector to choose the object you want to upload.

1. Choose **Open**, then choose **Upload**.

------
#### [ Amazon CLI ]

**To upload a test image to your source bucket (Amazon CLI)**
+ From the directory containing the image you want to upload, run the following CLI command. Replace the `--bucket` parameter with the name of your source bucket. For the `--key` and `--body` parameters, use the filename of your test image.

  ```
  aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg
  ```

------

## Create a permissions policy


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps3.png)


The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other Amazon resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.

------
#### [ Amazon Web Services Management Console ]

**To create the policy (console)**

1. Open the [Policies](https://console.amazonaws.cn/iamv2/home#policies) page of the Amazon Identity and Access Management (IAM) console.

1. Choose **Create policy**.

1. Choose the **JSON** tab, and then paste the following custom policy into the JSON editor.  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents",
                   "logs:CreateLogGroup",
                   "logs:CreateLogStream"
               ],
               "Resource": "arn:aws-cn:logs:*:*:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": "arn:aws-cn:s3:::*/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws-cn:s3:::*/*"
           }
       ]
   }
   ```

1. Choose **Next**.

1. Under **Policy details**, for **Policy name**, enter `LambdaS3Policy`.

1. Choose **Create policy**.

------
#### [ Amazon CLI ]

**To create the policy (Amazon CLI)**

1. Save the following JSON in a file named `policy.json`.  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents",
                   "logs:CreateLogGroup",
                   "logs:CreateLogStream"
               ],
               "Resource": "arn:aws-cn:logs:*:*:*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject"
               ],
               "Resource": "arn:aws-cn:s3:::*/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:PutObject"
               ],
               "Resource": "arn:aws-cn:s3:::*/*"
           }
       ]
   }
   ```

1. From the directory you saved the JSON policy document in, run the following CLI command.

   ```
   aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json
   ```

------

## Create an execution role


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps4.png)


An execution role is an IAM role that grants a Lambda function permission to access Amazon Web Services services and resources. To give your function read and write access to an Amazon S3 bucket, you attach the permissions policy you created in the previous step.

------
#### [ Amazon Web Services Management Console ]

**To create an execution role and attach your permissions policy (console)**

1. Open the [Roles](https://console.amazonaws.cn/iamv2/home#roles) page of the (IAM) console.

1. Choose **Create role**.

1. For **Trusted entity type**, select **Amazon Web Services service**, and for **Use case**, select **Lambda**.

1. Choose **Next**.

1. Add the permissions policy you created in the previous step by doing the following:

   1. In the policy search box, enter `LambdaS3Policy`.

   1. In the search results, select the check box for `LambdaS3Policy`.

   1. Choose **Next**.

1. Under **Role details**, for the **Role name** enter `LambdaS3Role`.

1. Choose **Create role**.

------
#### [ Amazon CLI ]

**To create an execution role and attach your permissions policy (Amazon CLI)**

1. Save the following JSON in a file named `trust-policy.json`. This trust policy allows Lambda to use the role’s permissions by giving the service principal `lambda.amazonaws.com` permission to call the Amazon Security Token Service (Amazon STS) `AssumeRole` action.  

   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Service": "lambda.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   ```

1. From the directory you saved the JSON trust policy document in, run the following CLI command to create the execution role.

   ```
   aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json
   ```

1. To attach the permissions policy you created in the previous step, run the following CLI command. Replace the Amazon Web Services account number in the policy’s ARN with your own account number.

   ```
   aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy
   ```

------
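The policy ARN passed to `attach-role-policy` follows a fixed pattern: customer-managed policy ARNs contain the partition, account ID, and policy name, but no region. A small sketch of that construction (use the `aws-cn` partition in the China Regions; the account ID below is the same placeholder the tutorial uses):

```python
def policy_arn(account_id: str, policy_name: str, partition: str = "aws") -> str:
    """Build a customer-managed IAM policy ARN. IAM is a global service,
    so the region component of the ARN is empty."""
    return f"arn:{partition}:iam::{account_id}:policy/{policy_name}"

print(policy_arn("123456789012", "LambdaS3Policy"))
# arn:aws:iam::123456789012:policy/LambdaS3Policy
```

Constructing the ARN this way makes it easy to script the `attach-role-policy` step for different accounts instead of editing the command by hand.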

## Create the function deployment package


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps5.png)


To create your function, you create a *deployment package* containing your function code and its dependencies. For this `CreateThumbnail` function, the code uses a separate library for image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library.

------
#### [ Node.js ]

**To create the deployment package (Node.js)**

1. Create a directory named `lambda-s3` for your function code and dependencies and navigate into it.

   ```
   mkdir lambda-s3
   cd lambda-s3
   ```

1. Create a new Node.js project with `npm`. To accept the default options provided in the interactive experience, press `Enter`.

   ```
   npm init
   ```

1. Save the following function code in a file named `index.mjs`. Make sure to replace `us-east-1` with the Amazon Web Services Region in which you created your own source and destination buckets.

   ```
   // dependencies
   import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
   import { Readable } from 'stream';
   import sharp from 'sharp';
   import util from 'util';
   
   // create S3 client
   const s3 = new S3Client({region: 'us-east-1'});
   
   // define the handler function
   export const handler = async (event, context) => {
   
     // Read options from the event parameter and get the source bucket
     console.log("Reading options from event:\n", util.inspect(event, {depth: 5}));
     const srcBucket = event.Records[0].s3.bucket.name;
   
     // Object key may have spaces or unicode non-ASCII characters
     const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
     const dstBucket = srcBucket + "-resized";
     const dstKey = "resized-" + srcKey;
   
     // Infer the image type from the file suffix
     const typeMatch = srcKey.match(/\.([^.]*)$/);
     if (!typeMatch) {
       console.log("Could not determine the image type.");
       return;
     }
   
     // Check that the image type is supported
     const imageType = typeMatch[1].toLowerCase();
     if (imageType != "jpg" && imageType != "png") {
       console.log(`Unsupported image type: ${imageType}`);
       return;
     }
   
     // Get the image from the source bucket. GetObjectCommand returns a stream.
     let content_buffer;
     try {
       const params = {
         Bucket: srcBucket,
         Key: srcKey
       };
       const response = await s3.send(new GetObjectCommand(params));
       const stream = response.Body;
   
       // Convert stream to buffer to pass to sharp resize function.
       if (stream instanceof Readable) {
         content_buffer = Buffer.concat(await stream.toArray());
       } else {
         throw new Error('Unknown object stream type');
       }
     } catch (error) {
       console.log(error);
       return;
     }
   
     // Set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
     const width = 200;
   
     // Use the sharp module to resize the image and save in a buffer.
     let output_buffer;
     try {
       output_buffer = await sharp(content_buffer).resize(width).toBuffer();
     } catch (error) {
       console.log(error);
       return;
     }
   
     // Upload the thumbnail image to the destination bucket
     try {
       const destparams = {
         Bucket: dstBucket,
         Key: dstKey,
         Body: output_buffer,
         ContentType: "image"
       };
       await s3.send(new PutObjectCommand(destparams));
     } catch (error) {
       console.log(error);
       return;
     }
   
     console.log('Successfully resized ' + srcBucket + '/' + srcKey +
       ' and uploaded to ' + dstBucket + '/' + dstKey);
   };
   ```

1. In your `lambda-s3` directory, install the sharp library using npm. Note that the latest version of sharp (0.33) isn't compatible with Lambda. Install version 0.32.6 to complete this tutorial.

   ```
   npm install sharp@0.32.6
   ```

   The npm `install` command creates a `node_modules` directory for your modules. After this step, your directory structure should look like the following.

   ```
   lambda-s3
   |- index.mjs
   |- node_modules
   |  |- base64js
   |  |- bl
   |  |- buffer
   ...
   |- package-lock.json
   |- package.json
   ```

1. Create a .zip deployment package containing your function code and its dependencies. On macOS and Linux, run the following command.

   ```
   zip -r function.zip .
   ```

   In Windows, use your preferred zip utility to create a .zip file. Ensure that your `index.mjs`, `package.json`, and `package-lock.json` files and your `node_modules` directory are all at the root of your .zip file.

------
#### [ Python ]

**To create the deployment package (Python)**

1. Save the example code as a file named `lambda_function.py`.

   ```
   import boto3
   import os
   import sys
   import uuid
   from urllib.parse import unquote_plus
   from PIL import Image
   import PIL.Image
               
   s3_client = boto3.client('s3')
               
   def resize_image(image_path, resized_path):
     with Image.open(image_path) as image:
       image.thumbnail(tuple(x / 2 for x in image.size))
       image.save(resized_path)
               
   def lambda_handler(event, context):
     for record in event['Records']:
       bucket = record['s3']['bucket']['name']
       key = unquote_plus(record['s3']['object']['key'])
       tmpkey = key.replace('/', '')
       download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
       upload_path = '/tmp/resized-{}'.format(tmpkey)
       s3_client.download_file(bucket, key, download_path)
       resize_image(download_path, upload_path)
       s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))
   ```

1. In the same directory in which you created your `lambda_function.py` file, create a new directory named `package` and install the [Pillow (PIL)](https://pypi.org/project/Pillow/) library and the Amazon SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see [Runtime dependencies in Python](https://docs.amazonaws.cn/lambda/latest/dg/python-package.html#python-package-dependencies).

   ```
   mkdir package
   pip install \
   --platform manylinux2014_x86_64 \
   --target=package \
   --implementation cp \
   --python-version 3.12 \
   --only-binary=:all: --upgrade \
   pillow boto3
   ```

   The Pillow library contains C/C++ code. By using the `--platform manylinux2014_x86_64` and `--only-binary=:all:` options, pip downloads and installs a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package works in the Lambda execution environment, regardless of the operating system and architecture of your local build machine.

1. Create a .zip file containing your application code and the Pillow and Boto3 libraries. On Linux or macOS, run the following commands from your command line interface.

   ```
   cd package
   zip -r ../lambda_function.zip .
   cd ..
   zip lambda_function.zip lambda_function.py
   ```

    In Windows, use your preferred zip tool to create the `lambda_function.zip` file. Make sure that your `lambda_function.py` file and the folders containing your dependencies are all at the root of the .zip file.

You can also create your deployment package using a Python virtual environment. See [Working with .zip file archives for Python Lambda functions](python-package.md)
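The handler's `/tmp` paths are worth a closer look: it strips `/` from the key so nested keys become flat filenames, and prefixes a UUID so repeated invocations in a warm execution environment don't overwrite each other's downloads. A standalone sketch of that naming logic, separated out for illustration:

```python
import uuid

def tmp_paths(key: str):
    """Mirror the handler's /tmp naming: flatten nested keys into valid
    filenames and add a UUID prefix to avoid collisions between
    invocations that reuse the same execution environment."""
    tmpkey = key.replace("/", "")
    download_path = f"/tmp/{uuid.uuid4()}{tmpkey}"
    upload_path = f"/tmp/resized-{tmpkey}"
    return download_path, upload_path

download_path, upload_path = tmp_paths("images/HappyFace.jpg")
print(upload_path)  # /tmp/resized-imagesHappyFace.jpg
```

`/tmp` is the only writable filesystem path in the Lambda execution environment, which is why both the download and the resized output are staged there before the upload.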

------

## Create the Lambda function


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps6.png)


You can create your Lambda function using either the Amazon CLI or the Lambda console. Follow the instructions for your chosen language to create the function.

------
#### [ Amazon Web Services Management Console ]

**To create the function (console)**

To create your Lambda function using the console, you first create a basic function containing some ‘Hello world’ code. You then replace this code with your own function code by uploading the .zip file you created in the previous step.

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Make sure you're working in the same Amazon Web Services Region you created your Amazon S3 bucket in. You can change your region using the drop-down list at the top of the screen.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/console_region_select.png)

1. Choose **Create function**.

1. Choose **Author from scratch**.

1. Under **Basic information**, do the following:

   1. For **Function name**, enter `CreateThumbnail`.

   1. For **Runtime**, choose either **Node.js 22.x** or **Python 3.12** according to the language you chose for your function.

   1. For **Architecture**, choose **x86\_64**.

1. In the **Change default execution role** tab, do the following:

   1. Expand the tab, then choose **Use an existing role**.

   1. Select the `LambdaS3Role` you created earlier.

1. Choose **Create function**.

**To upload the function code (console)**

1. In the **Code source** pane, choose **Upload from**.

1. Choose **.zip file**. 

1. Choose **Upload**.

1. In the file selector, select your .zip file and choose **Open**.

1. Choose **Save**.

------
#### [ Amazon CLI ]

**To create the function (Amazon CLI)**
+ Run the CLI command for the language you chose. For the `role` parameter, make sure to replace `123456789012` with your own Amazon Web Services account ID. For the `region` parameter, replace `us-east-1` with the region you created your Amazon S3 buckets in.
  + For **Node.js**, run the following command from the directory containing your `function.zip` file.

    ```
    aws lambda create-function --function-name CreateThumbnail \
     --zip-file fileb://function.zip --handler index.handler --runtime nodejs22.x \
    --timeout 10 --memory-size 1024 \
    --role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1
    ```
  + For **Python**, run the following command from the directory containing your `lambda_function.zip` file.

    ```
    aws lambda create-function --function-name CreateThumbnail \
    --zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
     --runtime python3.12 --timeout 10 --memory-size 1024 \
    --role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1
    ```

------

## Configure Amazon S3 to invoke the function


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps7.png)


For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using either the console or the Amazon CLI.

**Important**  
This procedure configures the Amazon S3 bucket to invoke your function every time that an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be [invoked continuously in a loop](https://serverlessland.com/content/service/lambda/guides/aws-lambda-operator-guide/recursive-runaway). This can result in unexpected charges being billed to your Amazon Web Services account.

------
#### [ Amazon Web Services Management Console ]

**To configure the Amazon S3 trigger (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console and choose your function (`CreateThumbnail`).

1. Choose **Add trigger**.

1. Select **S3**.

1. Under **Bucket**, select your source bucket.

1. Under **Event types**, select **All object create events**.

1. Under **Recursive invocation**, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. You can learn more about recursive invocation patterns in Lambda by reading [Recursive patterns that cause run-away Lambda functions](https://serverlessland.com/content/service/lambda/guides/aws-lambda-operator-guide/recursive-runaway) in Serverless Land.

1. Choose **Add**.

   When you create a trigger using the Lambda console, Lambda automatically creates a [resource-based policy](https://docs.amazonaws.cn/lambda/latest/dg/access-control-resource-based.html) to give the service you select permission to invoke your function.

------
#### [ Amazon CLI ]

**To configure the Amazon S3 trigger (Amazon CLI)**

1. For your Amazon S3 source bucket to invoke your function when you add an image file, you first need to configure permissions for your function using a [resource-based policy](https://docs.amazonaws.cn/lambda/latest/dg/access-control-resource-based.html). A resource-based policy statement gives other Amazon Web Services services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the `source-account` parameter with your own Amazon Web Services account ID and to use your own source bucket name.

   ```
   aws lambda add-permission --function-name CreateThumbnail \
   --principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \
   --source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \
   --source-account 123456789012
   ```

   The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket.
**Note**  
Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another Amazon Web Services account to create a bucket with the same Amazon Resource Name (ARN).

1. Save the following JSON in a file named `notification.json`. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the Amazon Web Services account number and Amazon Web Services Region in the Lambda function ARN with your own account number and region.

   ```
   {
   "LambdaFunctionConfigurations": [
       {
         "Id": "CreateThumbnailEventConfiguration",
         "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:CreateThumbnail",
         "Events": [ "s3:ObjectCreated:Put" ]
       }
     ]
   }
   ```

1. Run the following CLI command to apply the notification settings in the JSON file you created to your source bucket. Replace `amzn-s3-demo-source-bucket` with the name of your own source bucket.

   ```
   aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \
   --notification-configuration file://notification.json
   ```

   To learn more about the `put-bucket-notification-configuration` command and the `notification-configuration` option, see [put-bucket-notification-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-notification-configuration.html) in the *Amazon CLI Command Reference*.

------
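If you prefer to generate the `notification.json` file from a script rather than writing it by hand, a minimal Python sketch (reusing the example account ID, Region, and function name from the commands above; substitute your own) could look like this:

```python
import json

# Example values from this tutorial -- replace with your own
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:CreateThumbnail"

config = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "CreateThumbnailEventConfiguration",
            "LambdaFunctionArn": function_arn,
            # Send notifications only for objects created with PUT
            "Events": ["s3:ObjectCreated:Put"],
        }
    ]
}

# Write the file that put-bucket-notification-configuration reads
with open("notification.json", "w") as f:
    json.dump(config, f, indent=2)
```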

## Test your Lambda function with a dummy event


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps8.png)


Before you test your whole setup by adding an image file to your Amazon S3 source bucket, confirm that your Lambda function is working correctly by invoking it with a dummy event. An event in Lambda is a JSON-formatted document that contains data for your function to process. When your function is invoked by Amazon S3, the event sent to your function contains information such as the bucket name, bucket ARN, and object key.
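One detail to be aware of: the object key in an S3 event record is URL-encoded. For example, a key of `test/key` arrives as `test%2Fkey`, which is why the test events below use that form. Function code typically decodes the key before using it in Amazon S3 API calls, for example with Python's standard library:

```python
from urllib.parse import unquote_plus

# Key exactly as it appears in the S3 event record (URL-encoded)
encoded_key = "test%2Fkey"

# Decode before passing the key to S3 API calls
key = unquote_plus(encoded_key)
print(key)  # test/key
```

`unquote_plus` also turns `+` back into a space, which matters for filenames that contain spaces.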

------
#### [ Amazon Web Services Management Console ]

**To test your Lambda function with a dummy event (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console and choose your function (`CreateThumbnail`).

1. Choose the **Test** tab.

1. To create your test event, in the **Test event** pane, do the following:

   1. Under **Test event action**, select **Create new event**.

   1. For **Event name**, enter **myTestEvent**.

   1. For **Template**, select **S3 Put**.

   1. Replace the values for the following parameters with your own values.
      + For `awsRegion`, replace `us-east-1` with the Amazon Web Services Region you created your Amazon S3 buckets in.
      + For `name`, replace `amzn-s3-demo-bucket` with the name of your own Amazon S3 source bucket.
      + For `key`, replace `test%2Fkey` with the filename of the test object you uploaded to your source bucket in the step [Upload a test image to your source bucket](#with-s3-tutorial-test-image).

      ```
      {
        "Records": [
          {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
              "principalId": "EXAMPLE"
            },
            "requestParameters": {
              "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
              "x-amz-request-id": "EXAMPLE123456789",
              "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
              "s3SchemaVersion": "1.0",
              "configurationId": "testConfigRule",
              "bucket": {
                "name": "amzn-s3-demo-bucket",
                "ownerIdentity": {
                  "principalId": "EXAMPLE"
                },
                "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
              },
              "object": {
                "key": "test%2Fkey",
                "size": 1024,
                "eTag": "0123456789abcdef0123456789abcdef",
                "sequencer": "0A1B2C3D4E5F678901"
              }
            }
          }
        ]
      }
      ```

   1. Choose **Save**.

1. In the **Test event** pane, choose **Test**.

1. To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following:

   1. Open the [Buckets page](https://console.amazonaws.cn/s3/buckets) of the Amazon S3 console.

   1. Choose your target bucket and confirm that your resized file is listed in the **Objects** pane.

------
#### [ Amazon CLI ]

**To test your Lambda function with a dummy event (Amazon CLI)**

1. Save the following JSON in a file named `dummyS3Event.json`. Replace the values for the following parameters with your own values:
   + For `awsRegion`, replace `us-east-1` with the Amazon Web Services Region you created your Amazon S3 buckets in.
   + For `name`, replace `amzn-s3-demo-bucket` with the name of your own Amazon S3 source bucket.
   + For `key`, replace `test%2Fkey` with the filename of the test object you uploaded to your source bucket in the step [Upload a test image to your source bucket](#with-s3-tutorial-test-image).

   ```
   {
     "Records": [
       {
         "eventVersion": "2.0",
         "eventSource": "aws:s3",
         "awsRegion": "us-east-1",
         "eventTime": "1970-01-01T00:00:00.000Z",
         "eventName": "ObjectCreated:Put",
         "userIdentity": {
           "principalId": "EXAMPLE"
         },
         "requestParameters": {
           "sourceIPAddress": "127.0.0.1"
         },
         "responseElements": {
           "x-amz-request-id": "EXAMPLE123456789",
           "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
         },
         "s3": {
           "s3SchemaVersion": "1.0",
           "configurationId": "testConfigRule",
           "bucket": {
             "name": "amzn-s3-demo-bucket",
             "ownerIdentity": {
               "principalId": "EXAMPLE"
             },
             "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
           },
           "object": {
             "key": "test%2Fkey",
             "size": 1024,
             "eTag": "0123456789abcdef0123456789abcdef",
             "sequencer": "0A1B2C3D4E5F678901"
           }
         }
       }
     ]
   }
   ```

1. From the directory you saved your `dummyS3Event.json` file in, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying `RequestResponse` as the value of the `--invocation-type` parameter. To learn more about synchronous and asynchronous invocation, see [Invoking Lambda functions](https://docs.amazonaws.cn/lambda/latest/dg/lambda-invocation.html).

   ```
   aws lambda invoke --function-name CreateThumbnail \
   --invocation-type RequestResponse --cli-binary-format raw-in-base64-out \
   --payload file://dummyS3Event.json outputfile.txt
   ```

   The `cli-binary-format` option is required if you are using version 2 of the Amazon CLI. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [Amazon CLI supported global command line options](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list).

1. Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing `amzn-s3-demo-source-bucket-resized` with the name of your own destination bucket.

   ```
   aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized
   ```

   You should see output similar to the following. The `Key` parameter shows the filename of your resized image file.

   ```
   {
       "Contents": [
           {
               "Key": "resized-HappyFace.jpg",
               "LastModified": "2023-06-06T21:40:07+00:00",
               "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"",
               "Size": 2633,
               "StorageClass": "STANDARD"
           }
       ]
   }
   ```

------

## Test your function using the Amazon S3 trigger


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-s3-tutorial/s3thumb_tut_steps9.png)


Now that you’ve confirmed your Lambda function is operating correctly, you’re ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add your image to the source bucket, your Lambda function should be automatically invoked. Your function creates a resized version of the file and stores it in your target bucket.

------
#### [ Amazon Web Services Management Console ]

**To test your Lambda function using the Amazon S3 trigger (console)**

1. To upload an image to your Amazon S3 bucket, do the following:

   1. Open the [Buckets](https://console.amazonaws.cn/s3/buckets) page of the Amazon S3 console and choose your source bucket.

   1. Choose **Upload**.

   1. Choose **Add files** and use the file selector to choose the image file you want to upload. Your image object can be any .jpg or .png file.

   1. Choose **Open**, then choose **Upload**.

1. Verify that Lambda has saved a resized version of your image file in your target bucket by doing the following:

   1. Navigate back to the [Buckets](https://console.amazonaws.cn/s3/buckets) page of the Amazon S3 console and choose your destination bucket.

   1. In the **Objects** pane, you should now see two resized image files, one from each test of your Lambda function. To download your resized image, select the file, then choose **Download**.

------
#### [ Amazon CLI ]

**To test your Lambda function using the Amazon S3 trigger (Amazon CLI)**

1. From the directory containing the image you want to upload, run the following CLI command. Replace the `--bucket` parameter with the name of your source bucket. For the `--key` and `--body` parameters, use the filename of your test image. Your test image can be any .jpg or .png file.

   ```
   aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg
   ```

1. Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing `amzn-s3-demo-source-bucket-resized` with the name of your own destination bucket.

   ```
   aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized
   ```

   If your function runs successfully, you’ll see output similar to the following. Your target bucket should now contain two resized files.

   ```
   {
       "Contents": [
           {
               "Key": "resized-HappyFace.jpg",
               "LastModified": "2023-06-07T00:15:50+00:00",
               "ETag": "\"7781a43e765a8301713f533d70968a1e\"",
               "Size": 2763,
               "StorageClass": "STANDARD"
           },
           {
               "Key": "resized-SmileyFace.jpg",
               "LastModified": "2023-06-07T00:13:18+00:00",
               "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"",
               "Size": 1245,
               "StorageClass": "STANDARD"
           }
       ]
   }
   ```

------

## Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the policy that you created**

1. Open the [Policies page](https://console.amazonaws.cn/iam/home#/policies) of the IAM console.

1. Select the policy that you created (**AWSLambdaS3Policy**).

1. Choose **Policy actions**, **Delete**.

1. Choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the S3 bucket**

1. Open the [Amazon S3 console](https://console.amazonaws.cn//s3/home#).

1. Select the bucket you created.

1. Choose **Delete**.

1. Enter the name of the bucket in the text input field.

1. Choose **Delete bucket**.

# Use Secrets Manager secrets in Lambda functions
Secrets Manager

Amazon Secrets Manager helps you manage credentials, API keys, and other secrets that your Lambda functions need. You have two main approaches for retrieving secrets in your Lambda functions, both offering better performance and lower costs compared to retrieving secrets directly using the Amazon SDK:
+ **Amazon parameters and secrets Lambda extension** - A runtime-agnostic solution that provides a simple HTTP interface for retrieving secrets
+ **Powertools for Amazon Lambda parameters utility** - A code-integrated solution that supports multiple providers (Secrets Manager, Parameter Store, AppConfig) with built-in transformations

Both approaches maintain local caches of secrets, eliminating the need for your function to call Secrets Manager for every invocation. When your function requests a secret, the cache is checked first. If the secret is available and hasn't expired, it's returned immediately. Otherwise, it's retrieved from Secrets Manager, cached, and returned. This caching mechanism results in faster response times and reduced costs by minimizing API calls.
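The check-cache-then-fetch behavior described above can be sketched in a few lines of Python. This is an illustration of the idea only, not the extension's or Powertools' actual implementation:

```python
import time

class SecretCache:
    """Illustrative TTL cache -- not the actual extension implementation."""

    def __init__(self, fetch, ttl_seconds=300):
        self.fetch = fetch        # callable that retrieves a secret by ID
        self.ttl = ttl_seconds    # 300 seconds matches the extension's default TTL
        self._entries = {}        # secret_id -> (value, expiry)

    def get(self, secret_id):
        entry = self._entries.get(secret_id)
        if entry is not None and time.monotonic() < entry[1]:
            return entry[0]       # cache hit: no call to Secrets Manager
        value = self.fetch(secret_id)  # miss or expired: fetch, then cache
        self._entries[secret_id] = (value, time.monotonic() + self.ttl)
        return value
```

Because the cache lives in the execution environment, repeated invocations of a warm function reuse it; a new execution environment starts with an empty cache.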

## Choosing an approach


Consider these factors when choosing between the extension and Powertools:

Use the Amazon parameters and secrets Lambda extension when:  
+ You want a runtime-agnostic solution that works with any Lambda runtime
+ You prefer not to add code dependencies to your function
+ You only need to retrieve secrets from Secrets Manager or Parameter Store

Use Powertools for Amazon Lambda parameters utility when:  
+ You want an integrated development experience with your application code
+ You need support for multiple providers (Secrets Manager, Parameter Store, AppConfig)
+ You want built-in data transformations (JSON parsing, base64 decoding)
+ You're using Python, TypeScript, Java, or .NET runtimes

## When to use Secrets Manager with Lambda
When to use Secrets Manager

Common scenarios for using Secrets Manager with Lambda include:
+ Storing database credentials that your function uses to connect to Amazon RDS or other databases
+ Managing API keys for external services your function calls
+ Storing encryption keys or other sensitive configuration data
+ Rotating credentials automatically without needing to update your function code

## Using the Amazon parameters and secrets Lambda extension


The Amazon parameters and secrets Lambda extension uses a simple HTTP interface compatible with any Lambda runtime. By default, it caches secrets for 300 seconds (5 minutes) and can hold up to 1,000 secrets. You can [customize these settings with environment variables](#lambda-secrets-manager-env-vars).
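All of the runtime examples that follow issue the same HTTP GET against this local interface. As a sketch, a hypothetical Python helper (not part of any AWS library) that builds the request URL might look like this; the port and path come from the extension's documented defaults:

```python
from urllib.parse import quote

def secrets_extension_url(secret_id, version_stage=None, port=2773):
    """Build the local endpoint URL for the parameters and secrets extension."""
    # URL-encode the secret ID, since ARNs contain ':' and '/'
    url = f"http://localhost:{port}/secretsmanager/get?secretId={quote(secret_id, safe='')}"
    if version_stage is not None:
        url += f"&versionStage={version_stage}"
    return url
```

Requests to this URL must also carry the `X-Aws-Parameters-Secrets-Token` header, set to the value of the `AWS_SESSION_TOKEN` environment variable, as the examples below show.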

### Use Secrets Manager in a Lambda function
Use Secrets Manager in a function

This section assumes that you already have a Secrets Manager secret. To create a secret, see [Create an Amazon Secrets Manager secret](https://docs.amazonaws.cn/secretsmanager/latest/userguide/create_secret.html).

#### Create the deployment package


Choose your preferred runtime and follow the steps to create a function that retrieves secrets from Secrets Manager. The example function retrieves a secret from Secrets Manager and can be used to access database credentials, API keys, or other sensitive configuration data in your applications.

------
#### [ Python ]

**To create a Python function**

1. Create and navigate to a new project directory. Example:

   ```
   mkdir my_function
   cd my_function
   ```

1. Create a file named `lambda_function.py` with the following code. For `secret_name`, use the name or Amazon Resource Name (ARN) of your secret.

   ```
   import json
   import os
   import requests
   
   def lambda_handler(event, context):
       try:
           # Replace with the name or ARN of your secret
           secret_name = "arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:SECRET_NAME"
           
           secrets_extension_endpoint = f"http://localhost:2773/secretsmanager/get?secretId={secret_name}"
           headers = {"X-Aws-Parameters-Secrets-Token": os.environ.get('AWS_SESSION_TOKEN')}
           
           response = requests.get(secrets_extension_endpoint, headers=headers)
           print(f"Response status code: {response.status_code}")
           
           secret = json.loads(response.text)["SecretString"]
           print(f"Retrieved secret: {secret}")
           
           return {
               'statusCode': response.status_code,
               'body': json.dumps({
                   'message': 'Successfully retrieved secret',
                   'secretRetrieved': True
               })
           }
       
       except Exception as e:
           print(f"Error: {str(e)}")
           return {
               'statusCode': 500,
               'body': json.dumps({
                   'message': 'Error retrieving secret',
                   'error': str(e)
               })
           }
   ```

1. Create a file named `requirements.txt` with this content:

   ```
   requests
   ```

1. Install the dependencies:

   ```
   pip install -r requirements.txt -t .
   ```

1. Create a .zip file containing all files:

   ```
   zip -r function.zip .
   ```

------
#### [ Node.js ]

**To create a Node.js function**

1. Create and navigate to a new project directory. Example:

   ```
   mkdir my_function
   cd my_function
   ```

1. Create a file named `index.mjs` with the following code. For `secret_name`, use the name or Amazon Resource Name (ARN) of your secret.

   ```
   import http from 'http';
   
   export const handler = async (event) => {
       try {
           // Replace with the name or ARN of your secret
           const secretName = "arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:SECRET_NAME";
           const options = {
               hostname: 'localhost',
               port: 2773,
               path: `/secretsmanager/get?secretId=${secretName}`,
               headers: {
                   'X-Aws-Parameters-Secrets-Token': process.env.AWS_SESSION_TOKEN
               }
           };
   
           const response = await new Promise((resolve, reject) => {
               http.get(options, (res) => {
                   let data = '';
                   res.on('data', (chunk) => { data += chunk; });
                   res.on('end', () => {
                       resolve({ 
                           statusCode: res.statusCode, 
                           body: data 
                       });
                   });
               }).on('error', reject);
           });
   
           const secret = JSON.parse(response.body).SecretString;
           console.log('Retrieved secret:', secret);
   
           return {
               statusCode: response.statusCode,
               body: JSON.stringify({
                   message: 'Successfully retrieved secret',
                   secretRetrieved: true
               })
           };
       } catch (error) {
           console.error('Error:', error);
           return {
               statusCode: 500,
               body: JSON.stringify({
                   message: 'Error retrieving secret',
                   error: error.message
               })
           };
       }
   };
   ```

1. Create a .zip file containing the `index.mjs` file:

   ```
   zip -r function.zip index.mjs
   ```

------
#### [ Java ]

**To create a Java function**

1. Create a Maven project:

   ```
   mvn archetype:generate \
       -DgroupId=example \
       -DartifactId=lambda-secrets-demo \
       -DarchetypeArtifactId=maven-archetype-quickstart \
       -DarchetypeVersion=1.4 \
       -DinteractiveMode=false
   ```

1. Navigate to the project directory:

   ```
   cd lambda-secrets-demo
   ```

1. Open the `pom.xml` and replace the contents with the following:

   ```
   <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>
   
       <groupId>example</groupId>
       <artifactId>lambda-secrets-demo</artifactId>
       <version>1.0-SNAPSHOT</version>
   
       <properties>
           <maven.compiler.source>11</maven.compiler.source>
           <maven.compiler.target>11</maven.compiler.target>
       </properties>
   
       <dependencies>
           <dependency>
               <groupId>com.amazonaws</groupId>
               <artifactId>aws-lambda-java-core</artifactId>
               <version>1.2.1</version>
           </dependency>
       </dependencies>
   
       <build>
           <plugins>
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-shade-plugin</artifactId>
                   <version>3.2.4</version>
                   <executions>
                       <execution>
                           <phase>package</phase>
                           <goals>
                               <goal>shade</goal>
                           </goals>
                           <configuration>
                               <createDependencyReducedPom>false</createDependencyReducedPom>
                               <finalName>function</finalName>
                           </configuration>
                       </execution>
                   </executions>
               </plugin>
           </plugins>
       </build>
   </project>
   ```

1. Rename the `/lambda-secrets-demo/src/main/java/example/App.java` to `Hello.java` to match Lambda's default Java handler name (`example.Hello::handleRequest`):

   ```
   mv src/main/java/example/App.java src/main/java/example/Hello.java
   ```

1. Open the `Hello.java` file and replace its contents with the following. For `secretName`, use the name or Amazon Resource Name (ARN) of your secret. 

   ```
   package example;
   
   import com.amazonaws.services.lambda.runtime.Context;
   import com.amazonaws.services.lambda.runtime.RequestHandler;
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;
   
   public class Hello implements RequestHandler<Object, String> {
       private final HttpClient client = HttpClient.newHttpClient();
   
       @Override
       public String handleRequest(Object input, Context context) {
           try {
               // Replace with the name or ARN of your secret
               String secretName = "arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:SECRET_NAME";
               String endpoint = "http://localhost:2773/secretsmanager/get?secretId=" + secretName;
   
               HttpRequest request = HttpRequest.newBuilder()
                   .uri(URI.create(endpoint))
                   .header("X-Aws-Parameters-Secrets-Token", System.getenv("AWS_SESSION_TOKEN"))
                   .GET()
                   .build();
   
               HttpResponse<String> response = client.send(request, 
                   HttpResponse.BodyHandlers.ofString());
   
            // Extract the value of "SecretString" from the raw JSON response:
            // skip past the key name plus the "": "" separator (15 characters),
            // then read up to the closing quote
            String secret = response.body();
            secret = secret.substring(secret.indexOf("SecretString") + 15);
            secret = secret.substring(0, secret.indexOf("\""));
   
               System.out.println("Retrieved secret: " + secret);
               return String.format(
                   "{\"statusCode\": %d, \"body\": \"%s\"}",
                   response.statusCode(), "Successfully retrieved secret"
               );
   
           } catch (Exception e) {
               e.printStackTrace();
               return String.format(
                   "{\"body\": \"Error retrieving secret: %s\"}", 
                   e.getMessage()
               );
           }
       }
   }
   ```

1. Remove the test directory. Maven creates this by default, but we don't need it for this example.

   ```
   rm -rf src/test
   ```

1. Build the project:

   ```
   mvn package
   ```

1. Download the JAR file (`target/function.jar`) for later use.

------

#### Create the function


1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose **Create function**.

1. Select **Author from scratch**.

1. For **Function name**, enter **secret-retrieval-demo**.

1. Choose your preferred **Runtime**.

1. Choose **Create function**.

**To upload the deployment package**

1. In the function's **Code** tab, choose **Upload from** and select **.zip file** (for Python and Node.js) or **.jar file** (for Java).

1. Upload the deployment package you created earlier.

1. Choose **Save**.

#### Add the extension


**To add the Amazon Parameters and Secrets Lambda extension as a layer**

1. In the function's **Code** tab, scroll down to **Layers**.

1. Choose **Add a layer**.

1. Select **Amazon layers**.

1. Choose **Amazon-Parameters-and-Secrets-Lambda-Extension**.

1. Select the latest version.

1. Choose **Add**.

#### Add permissions


**To add Secrets Manager permissions to your execution role**

1. Choose the **Configuration** tab, and then choose **Permissions**.

1. Under **Role name**, choose the link to your execution role. This link opens the role in the IAM console.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/execution-role-console.png)

1. Choose **Add permissions**, and then choose **Create inline policy**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/create-inline-policy.png)

1. Choose the **JSON** tab and add the following policy. For `Resource`, enter the ARN of your secret.

------
#### [ JSON ]

****  

   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "secretsmanager:GetSecretValue",
               "Resource": "arn:aws-cn:secretsmanager:us-east-1:111122223333:secret:SECRET_NAME"
           }
       ]
   }
   ```

------

1. Choose **Next**.

1. Enter a name for the policy.

1. Choose **Create policy**.

#### Test the function


**To test the function**

1. Return to the Lambda console.

1. Select the **Test** tab.

1. Choose **Test**. You should see the following response:  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/execution-results-secret.png)

### Environment variables


The Amazon Parameters and Secrets Lambda extension uses the following default settings. You can override these settings by creating the corresponding [environment variables](configuration-envvars.md#create-environment-variables). To view the current settings for a function, set `PARAMETERS_SECRETS_EXTENSION_LOG_LEVEL` to `DEBUG`. The extension will log its configuration information to CloudWatch Logs at the start of each function invocation.


| Setting | Default value | Valid values | Environment variable | Details | 
| --- | --- | --- | --- | --- | 
| HTTP port | 2773 | 1 - 65535 | PARAMETERS\_SECRETS\_EXTENSION\_HTTP\_PORT | Port for the local HTTP server | 
| Cache enabled | TRUE | TRUE \| FALSE | PARAMETERS\_SECRETS\_EXTENSION\_CACHE\_ENABLED | Enable or disable the cache | 
| Cache size | 1000 | 0 - 1000 | PARAMETERS\_SECRETS\_EXTENSION\_CACHE\_SIZE | Set to 0 to disable caching | 
| Secrets Manager TTL | 300 seconds | 0 - 300 seconds | SECRETS\_MANAGER\_TTL | Time-to-live for cached secrets. Set to 0 to disable caching. This variable is ignored if the value for PARAMETERS\_SECRETS\_EXTENSION\_CACHE\_SIZE is 0. | 
| Parameter Store TTL | 300 seconds | 0 - 300 seconds | SSM\_PARAMETER\_STORE\_TTL | Time-to-live for cached parameters. Set to 0 to disable caching. This variable is ignored if the value for PARAMETERS\_SECRETS\_EXTENSION\_CACHE\_SIZE is 0. | 
| Log level | INFO | DEBUG \| INFO \| WARN \| ERROR \| NONE | PARAMETERS\_SECRETS\_EXTENSION\_LOG\_LEVEL | The level of detail reported in logs for the extension | 
| Max connections | 3 | 1 or greater | PARAMETERS\_SECRETS\_EXTENSION\_MAX\_CONNECTIONS | Maximum number of HTTP connections for requests to Parameter Store or Secrets Manager | 
| Secrets Manager timeout | 0 (no timeout) | All whole numbers | SECRETS\_MANAGER\_TIMEOUT\_MILLIS | Timeout for requests to Secrets Manager (in milliseconds) | 
| Parameter Store timeout | 0 (no timeout) | All whole numbers | SSM\_PARAMETER\_STORE\_TIMEOUT\_MILLIS | Timeout for requests to Parameter Store (in milliseconds) | 

### Working with secret rotation
Secret rotation

If you rotate secrets frequently, the default 300-second cache duration might cause your function to use outdated secrets. You have two options to ensure your function uses the latest secret value:
+ Reduce the cache TTL by setting the `SECRETS_MANAGER_TTL` environment variable to a lower value (in seconds). For example, setting it to `60` ensures your function will never use a secret that's more than one minute old.
+ Use the `AWSCURRENT` or `AWSPREVIOUS` staging labels in your secret request to ensure you get the specific version you want:

  ```
  secretsmanager/get?secretId=YOUR_SECRET_NAME&versionStage=AWSCURRENT
  ```

Choose the approach that best balances your needs for performance and freshness. A lower TTL means more frequent calls to Secrets Manager but ensures you're working with the most recent secret values.
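Under the hood, both lookups are plain HTTP GET requests to the extension's local endpoint. The following sketch (the helper names are illustrative, not an official AWS client) builds the request path shown above and reads the port from the extension's environment variable. It assumes the documented `X-Aws-Parameters-Secrets-Token` header and only works inside a Lambda execution environment with the extension attached.

```
import json
import os
import urllib.parse
import urllib.request

def build_extension_url(secret_id, version_stage="AWSCURRENT"):
    """Build the local extension URL for a Secrets Manager lookup."""
    port = os.environ.get("PARAMETERS_SECRETS_EXTENSION_HTTP_PORT", "2773")
    query = urllib.parse.urlencode(
        {"secretId": secret_id, "versionStage": version_stage}
    )
    return f"http://localhost:{port}/secretsmanager/get?{query}"

def get_secret(secret_id, version_stage="AWSCURRENT"):
    """Fetch a secret through the extension's local HTTP server."""
    req = urllib.request.Request(build_extension_url(secret_id, version_stage))
    # The extension authenticates callers with the function's session token
    req.add_header("X-Aws-Parameters-Secrets-Token", os.environ["AWS_SESSION_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["SecretString"]
```

Requesting `AWSPREVIOUS` instead of `AWSCURRENT` is then just `get_secret("my-secret", "AWSPREVIOUS")`.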

## Using the parameters utility from Powertools for Amazon Lambda


The parameters utility from Powertools for Amazon Lambda provides a unified interface for retrieving parameters and secrets from multiple providers, including Secrets Manager, Parameter Store, and AppConfig. It handles caching and transformations, and provides a more integrated development experience than the extension approach.

### Benefits of the parameters utility

+ **Multiple providers** - Retrieve parameters from Secrets Manager, Parameter Store, and AppConfig using the same interface
+ **Built-in transformations** - Automatic JSON parsing, base64 decoding, and other data transformations
+ **Integrated caching** - Configurable caching with TTL support to reduce API calls
+ **Type safety** - Strong typing support in TypeScript and other supported runtimes
+ **Error handling** - Built-in retry logic and error handling

### Code examples


The following examples show how to retrieve secrets using the Parameters utility in different runtimes:

**Python**  
For complete examples and setup instructions, see the [Parameters utility documentation](https://docs.powertools.aws.dev/lambda/python/latest/utilities/parameters/).
Retrieving secrets from Secrets Manager with the Powertools for Amazon Lambda Parameters utility.  

```
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities import parameters

logger = Logger()

def lambda_handler(event, context):
    try:
        # Get secret with caching (default TTL: 5 seconds)
        secret_value = parameters.get_secret("my-secret-name")
        
        # Get secret with custom TTL
        secret_with_ttl = parameters.get_secret("my-secret-name", max_age=300)
        
        # Get secret and transform JSON
        secret_json = parameters.get_secret("my-json-secret", transform="json")
        
        logger.info("Successfully retrieved secrets")
        
        return {
            'statusCode': 200,
            'body': 'Successfully retrieved secrets'
        }
        
    except Exception as e:
        logger.error(f"Error retrieving secret: {str(e)}")
        return {
            'statusCode': 500,
            'body': f'Error: {str(e)}'
        }
```

**TypeScript**  
For complete examples and setup instructions, see the [Parameters utility documentation](https://docs.aws.amazon.com/powertools/typescript/2.1.1/utilities/parameters/).
Retrieving secrets from Secrets Manager with the Powertools for Amazon Lambda Parameters utility.  

```
import { Logger } from '@aws-lambda-powertools/logger';
import { getSecret } from '@aws-lambda-powertools/parameters/secrets';
import type { Context } from 'aws-lambda';

const logger = new Logger();

export const handler = async (event: any, context: Context) => {
    try {
        // Get secret with caching (default TTL: 5 seconds)
        const secretValue = await getSecret('my-secret-name');
        
        // Get secret with custom TTL
        const secretWithTtl = await getSecret('my-secret-name', { maxAge: 300 });
        
        // Get secret and transform JSON
        const secretJson = await getSecret('my-json-secret', { transform: 'json' });
        
        logger.info('Successfully retrieved secrets');
        
        return {
            statusCode: 200,
            body: 'Successfully retrieved secrets'
        };
        
    } catch (error) {
        logger.error('Error retrieving secret', { error });
        return {
            statusCode: 500,
            body: `Error: ${error}`
        };
    }
};
```

**Java**  
For complete examples and setup instructions, see the [Parameters utility documentation](https://docs.powertools.aws.dev/lambda/java/latest/utilities/parameters/).
Retrieving secrets from Secrets Manager with the Powertools for Amazon Lambda Parameters utility.  

```
import software.amazon.lambda.powertools.logging.Logging;
import software.amazon.lambda.powertools.parameters.SecretsProvider;
import software.amazon.lambda.powertools.parameters.ParamManager;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.time.temporal.ChronoUnit;

public class SecretHandler implements RequestHandler<Object, String> {
    
    private final SecretsProvider secretsProvider = ParamManager.getSecretsProvider();
    
    @Logging
    @Override
    public String handleRequest(Object input, Context context) {
        try {
            // Get secret with caching (default TTL: 5 seconds)
            String secretValue = secretsProvider.get("my-secret-name");
            
            // Get secret with custom TTL (300 seconds)
            String secretWithTtl = secretsProvider.withMaxAge(300, ChronoUnit.SECONDS).get("my-secret-name");
            
            // Get secret and transform JSON
            MySecret secretJson = secretsProvider.get("my-json-secret", MySecret.class);
            
            return "Successfully retrieved secrets";
            
        } catch (Exception e) {
            return "Error retrieving secret: " + e.getMessage();
        }
    }
    
    public static class MySecret {
        // Define your secret structure here
    }
}
```

**.NET**  
For complete examples and setup instructions, see the [Parameters utility documentation](https://docs.powertools.aws.dev/lambda/dotnet/utilities/parameters/).
Retrieving secrets from Secrets Manager with the Powertools for Amazon Lambda Parameters utility.  

```
using AWS.Lambda.Powertools.Logging;
using AWS.Lambda.Powertools.Parameters;
using Amazon.Lambda.Core;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

public class Function
{
    private readonly ISecretsProvider _secretsProvider;
    
    public Function()
    {
        _secretsProvider = ParametersManager.SecretsProvider;
    }
    
    [Logging]
    public async Task<string> FunctionHandler(object input, ILambdaContext context)
    {
        try
        {
            // Get secret with caching (default TTL: 5 seconds)
            var secretValue = await _secretsProvider.GetAsync("my-secret-name");
            
            // Get secret with custom TTL
            var secretWithTtl = await _secretsProvider.WithMaxAge(TimeSpan.FromMinutes(5))
                .GetAsync("my-secret-name");
            
            // Get secret and transform JSON
            var secretJson = await _secretsProvider.GetAsync<MySecret>("my-json-secret");
            
            return "Successfully retrieved secrets";
        }
        catch (Exception e)
        {
            return $"Error retrieving secret: {e.Message}";
        }
    }
    
    public class MySecret
    {
        // Define your secret structure here
    }
}
```

### Setup and permissions


To use the Parameters utility, you need to:

1. Install Powertools for Amazon Lambda for your runtime. For details, see [Powertools for Amazon Lambda](powertools-for-lambda.md).

1. Add the necessary IAM permissions to your function's execution role. Refer to [Managing permissions in Amazon Lambda](lambda-permissions.md) for details.

1. Configure any optional settings through [environment variables](configuration-envvars.md).

The required IAM permissions are the same as for the extension approach. The utility will automatically handle caching and API calls to Secrets Manager based on your configuration.

# Using Lambda with Amazon SQS
SQS

**Note**  
If you want to send data to a target other than a Lambda function or enrich the data before sending it, see [ Amazon EventBridge Pipes](https://docs.amazonaws.cn/eventbridge/latest/userguide/eb-pipes.html).

You can use a Lambda function to process messages in an Amazon Simple Queue Service (Amazon SQS) queue. Lambda supports both [ standard queues](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html) and [ first-in, first-out (FIFO) queues](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html) for [event source mappings](invocation-eventsourcemapping.md). You can also use provisioned mode to allocate dedicated polling resources for your Amazon SQS event source mappings. The Lambda function and the Amazon SQS queue must be in the same Amazon Web Services Region, although they can be in [different Amazon Web Services accounts](with-sqs-cross-account-example.md).

When processing Amazon SQS messages, you need to implement partial batch response logic to prevent successfully processed messages from being retried when some messages in a batch fail. The [Batch Processor utility](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/) from Powertools for Amazon Lambda simplifies this implementation by automatically handling partial batch response logic, reducing development time and improving reliability.

**Topics**
+ [Understanding polling and batching behavior for Amazon SQS event source mappings](#sqs-polling-behavior)
+ [Using provisioned mode with Amazon SQS event source mappings](#sqs-provisioned-mode)
+ [Configuring provisioned mode for Amazon SQS event source mapping](#sqs-configuring-provisioned-mode)
+ [Example standard queue message event](#example-standard-queue-message-event)
+ [Example FIFO queue message event](#sample-fifo-queues-message-event)
+ [Creating and configuring an Amazon SQS event source mapping](services-sqs-configure.md)
+ [Configuring scaling behavior for SQS event source mappings](services-sqs-scaling.md)
+ [Handling errors for an SQS event source in Lambda](services-sqs-errorhandling.md)
+ [Lambda parameters for Amazon SQS event source mappings](services-sqs-parameters.md)
+ [Using event filtering with an Amazon SQS event source](with-sqs-filtering.md)
+ [Tutorial: Using Lambda with Amazon SQS](with-sqs-example.md)
+ [Tutorial: Using a cross-account Amazon SQS queue as an event source](with-sqs-cross-account-example.md)

## Understanding polling and batching behavior for Amazon SQS event source mappings


With Amazon SQS event source mappings, Lambda polls the queue and invokes your function [ synchronously](invocation-sync.md) with an event. Each event can contain a batch of multiple messages from the queue. Lambda receives these events one batch at a time, and invokes your function once for each batch. When your function successfully processes a batch, Lambda deletes its messages from the queue.

When Lambda receives a batch, the messages stay in the queue but are hidden for the length of the queue's [ visibility timeout](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html). If your function successfully processes all messages in the batch, Lambda deletes the messages from the queue. By default, if your function encounters an error while processing a batch, all messages in that batch become visible in the queue again after the visibility timeout expires. For this reason, your function code must be able to process the same message multiple times without unintended side effects.

**Warning**  
Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see [How do I make my Lambda function idempotent](https://repost.aws/knowledge-center/lambda-function-idempotent) in the Amazon Knowledge Center.

To prevent Lambda from processing a message multiple times, you can either configure your event source mapping to include [batch item failures](services-sqs-errorhandling.md#services-sqs-batchfailurereporting) in your function response, or you can use the [DeleteMessage](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/APIReference/API_DeleteMessage.html) API to remove messages from the queue as your Lambda function successfully processes them.
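When you report batch item failures yourself rather than through a library, your handler returns the IDs of the failed messages in a `batchItemFailures` list, and Lambda returns only those messages to the queue. The following minimal Python sketch illustrates that response shape; `process_record` is a hypothetical placeholder for your own logic, and your event source mapping must have the `ReportBatchItemFailures` function response type enabled for Lambda to honor the response.

```
import json

def process_record(record):
    """Hypothetical per-message work; raise an exception on failure."""
    payload = json.loads(record["body"])
    if "order_id" not in payload:
        raise ValueError("malformed message")

def lambda_handler(event, context):
    # Collect the messageId of each failed record so that SQS
    # retries only those messages, not the whole batch.
    failures = []
    for record in event["Records"]:
        try:
            process_record(record)
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Returning an empty `batchItemFailures` list tells Lambda the whole batch succeeded, so all messages are deleted from the queue.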

For more information about configuration parameters that Lambda supports for SQS event source mappings, see [Creating an SQS event source mapping](services-sqs-configure.md#events-sqs-eventsource).

## Using provisioned mode with Amazon SQS event source mappings


For workloads where you need to fine-tune the throughput of your event source mapping, you can use provisioned mode. In provisioned mode, you define minimum and maximum limits for the number of provisioned event pollers. These provisioned event pollers are dedicated to your event source mapping, and can handle unexpected message spikes through responsive autoscaling. An Amazon SQS event source mapping configured with provisioned mode scales up to 3 times faster (up to 1,000 concurrent invokes per minute) and supports 16 times higher concurrency (up to 20,000 concurrent invokes) than the default Amazon SQS event source mapping capability. We recommend provisioned mode for Amazon SQS event-driven workloads that have strict performance requirements, such as financial services firms processing market data feeds, e-commerce platforms providing real-time personalized recommendations, and gaming companies managing live player interactions. Using provisioned mode incurs additional costs. For detailed pricing, see [Amazon Lambda pricing](https://aws.amazon.com/lambda/pricing/).

Each event poller in provisioned mode can handle up to 1 MB/s of throughput, up to 10 concurrent invokes, or up to 10 Amazon SQS polling API calls per second. The range of accepted values for the minimum number of event pollers (`MinimumPollers`) is between 2 and 200, with a default of 2. The range of accepted values for the maximum number of event pollers (`MaximumPollers`) is between 2 and 2,000, with a default of 200. `MaximumPollers` must be greater than or equal to `MinimumPollers`.

### Determining required event pollers


To estimate the number of event pollers required for optimal message processing performance when using provisioned mode for an Amazon SQS event source mapping, gather the following metrics for your application: peak SQS events per second requiring low-latency processing, average SQS event payload size, average Lambda function duration, and configured batch size.

First, estimate the number of SQS events per second (EPS) that a single event poller can support for your workload using the following formula:

```
EPS per event poller =
    min(
        ceiling(1024 / average event size in KB),
        ceiling(10 / average function duration in seconds) * batch size,
        min(100, 10 * batch size)
    )
```

Then, calculate the minimum number of event pollers required using the following formula. This calculation ensures that you provision sufficient capacity to handle your peak traffic requirements.

```
Required event pollers = (Peak number of events per second in Queue) / EPS per event poller
```

Consider a workload with a default batch size of 10, average event size of 3 KB, average function duration of 100 ms, and a requirement to handle 1,000 events per second. In this scenario, each event poller will support approximately 100 events per second (EPS). Therefore, you should set minimum pollers to 10 to adequately handle your peak traffic requirements. If your workload has the same characteristics but with average function duration of 1 second, each poller will support only 10 EPS, requiring you to configure 100 minimum pollers to support 1,000 events per second at low latency.
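You can sanity-check the first worked example by expressing the two formulas directly in code. The helper names below are illustrative, not part of any AWS API:

```
import math

def eps_per_poller(avg_event_kb, avg_duration_s, batch_size):
    """Events per second one provisioned event poller can support."""
    return min(
        # Throughput limit: 1 MB/s per poller
        math.ceil(1024 / avg_event_kb),
        # Concurrency limit: 10 concurrent invokes per poller
        math.ceil(10 / avg_duration_s) * batch_size,
        # Polling limit: 10 SQS polling API calls per second, capped at 100
        min(100, 10 * batch_size),
    )

def required_pollers(peak_eps, avg_event_kb, avg_duration_s, batch_size):
    """Minimum event pollers needed to sustain the peak event rate."""
    return math.ceil(peak_eps / eps_per_poller(avg_event_kb, avg_duration_s, batch_size))
```

For the scenario above (batch size 10, 3 KB events, 100 ms duration, 1,000 events per second), `eps_per_poller(3, 0.1, 10)` returns 100, so `required_pollers(1000, 3, 0.1, 10)` returns 10.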

We recommend using the default batch size of 10 or higher to maximize the efficiency of provisioned mode event pollers. Higher batch sizes allow each poller to process more events per invocation, for improved throughput and cost efficiency. When planning your event poller capacity, account for potential traffic spikes and consider setting your `MinimumPollers` value slightly higher than the calculated minimum to provide a buffer. Additionally, monitor your workload characteristics over time, as changes in message size, function duration, or traffic patterns may necessitate adjustments to your event poller configuration to maintain optimal performance and cost efficiency. For precise capacity planning, we recommend testing your specific workload to determine the actual EPS each event poller can drive.

## Configuring provisioned mode for Amazon SQS event source mapping


You can configure provisioned mode for your Amazon SQS event source mapping using the console or the Lambda API.

**To configure provisioned mode for an existing Amazon SQS event source mapping (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the function with the Amazon SQS event source mapping that you want to configure provisioned mode for.

1. Choose **Configuration**, then choose **Triggers**.

1. Choose the Amazon SQS event source mapping that you want to configure provisioned mode for, then choose **Edit**.

1. Under **Event source mapping configuration**, choose **Configure provisioned mode**.
   + For **Minimum event pollers**, enter a value between 2 and 200. If you don't specify a value, Lambda chooses a default value of 2.
   + For **Maximum event pollers**, enter a value between 2 and 2,000. This value must be greater than or equal to your value for **Minimum event pollers**. If you don't specify a value, Lambda chooses a default value of 200.

1. Choose **Save**.

You can configure provisioned mode programmatically using the `ProvisionedPollerConfig` object in your `EventSourceMappingConfiguration`. For example, the following `UpdateEventSourceMapping` CLI command configures a `MinimumPollers` value of 5, and a `MaximumPollers` value of 100.

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --provisioned-poller-config '{"MinimumPollers": 5, "MaximumPollers": 100}'
```

After configuring provisioned mode, you can observe the usage of event pollers for your workload by monitoring the `ProvisionedPollers` metric. For more information, see Event source mapping metrics.

To disable provisioned mode and return to default (on-demand) mode, you can use the following `UpdateEventSourceMapping` CLI command:

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --provisioned-poller-config '{}'
```

**Note**  
Provisioned mode cannot be used in conjunction with the maximum concurrency setting. When using provisioned mode, you control maximum concurrency through the maximum number of event pollers.

For more information on configuring provisioned mode, see [Creating and configuring an Amazon SQS event source mapping](services-sqs-configure.md).

## Example standard queue message event


**Example Amazon SQS message event (standard queue)**  

```
{
    "Records": [
        {
            "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
            "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...",
            "body": "Test message.",
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1545082649183",
                "SenderId": "AIDAIENQZJOLO23YVJ4VO",
                "ApproximateFirstReceiveTimestamp": "1545082649185"
            },
            "messageAttributes": {
                "myAttribute": {
                    "stringValue": "myValue", 
                    "stringListValues": [], 
                    "binaryListValues": [], 
                    "dataType": "String"
                }
            },
            "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws-cn:sqs:us-west-2:123456789012:my-queue",
            "awsRegion": "us-west-2"
        },
        {
            "messageId": "2e1424d4-f796-459a-8184-9c92662be6da",
            "receiptHandle": "AQEBzWwaftRI0KuVm4tP+/7q1rGgNqicHq...",
            "body": "Test message.",
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1545082650636",
                "SenderId": "AIDAIENQZJOLO23YVJ4VO",
                "ApproximateFirstReceiveTimestamp": "1545082650649"
            },
            "messageAttributes": {},
            "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws-cn:sqs:us-west-2:123456789012:my-queue",
            "awsRegion": "us-west-2"
        }
    ]
}
```

By default, Lambda polls up to 10 messages in your queue at once and sends that batch to your function. To avoid invoking the function with a small number of records, you can configure the event source to buffer records for up to 5 minutes by configuring a batch window. Before invoking the function, Lambda continues to poll messages from the standard queue until the batch window expires, the [invocation payload size quota](gettingstarted-limits.md) is reached, or the configured maximum batch size is reached.

If you're using a batch window and your SQS queue contains very low traffic, Lambda might wait for up to 20 seconds before invoking your function. This is true even if you set a batch window lower than 20 seconds. 

**Note**  
In Java, you might experience null pointer errors when deserializing JSON. This can be caused by how the JSON object mapper converts the case of "Records" and "eventSourceARN".

## Example FIFO queue message event


For FIFO queues, records contain additional attributes that are related to deduplication and sequencing.

**Example Amazon SQS message event (FIFO queue)**  

```
{
    "Records": [
        {
            "messageId": "11d6ee51-4cc7-4302-9e22-7cd8afdaadf5",
            "receiptHandle": "AQEBBX8nesZEXmkhsmZeyIE8iQAMig7qw...",
            "body": "Test message.",
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1573251510774",
                "SequenceNumber": "18849496460467696128",
                "MessageGroupId": "1",
                "SenderId": "AIDAIO23YVJENQZJOL4VO",
                "MessageDeduplicationId": "1",
                "ApproximateFirstReceiveTimestamp": "1573251510774"
            },
            "messageAttributes": {},
            "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws-cn:sqs:us-west-2:123456789012:fifo.fifo",
            "awsRegion": "us-west-2"
        }
    ]
}
```

# Creating and configuring an Amazon SQS event source mapping
Create mapping

To process Amazon SQS messages with Lambda, configure your queue with the appropriate settings, then create a Lambda event source mapping.

**Topics**
+ [Configuring a queue to use with Lambda](#events-sqs-queueconfig)
+ [Setting up Lambda execution role permissions](#events-sqs-permissions)
+ [Creating an SQS event source mapping](#events-sqs-eventsource)

## Configuring a queue to use with Lambda


If you don't already have an existing Amazon SQS queue, [create one](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-create-queue.html) to serve as an event source for your Lambda function. The Lambda function and the Amazon SQS queue must be in the same Amazon Web Services Region, although they can be in [different Amazon Web Services accounts](with-sqs-cross-account-example.md).

To allow your function time to process each batch of records, set the source queue's [ visibility timeout](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html) to at least six times the [timeout](configuration-timeout.md) that you configure on your function. The extra time allows Lambda to retry if your function is throttled while processing a previous batch.

**Note**  
Your function's timeout must be less than or equal to the queue's visibility timeout. Lambda validates this requirement when you create or update an event source mapping and will return an error if the function timeout exceeds the queue's visibility timeout.

By default, if Lambda encounters an error at any point while processing a batch, all messages in that batch return to the queue. After the [ visibility timeout](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html), the messages become visible to Lambda again. You can configure your event source mapping to use [ partial batch responses](services-sqs-errorhandling.md#services-sqs-batchfailurereporting) to return only the failed messages back to the queue. In addition, if your function fails to process a message multiple times, Amazon SQS can send it to a [ dead-letter queue](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html). We recommend setting the `maxReceiveCount` on your source queue's [ redrive policy](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html#policies-for-dead-letter-queues) to at least 5. This gives Lambda a few chances to retry before sending failed messages directly to the dead-letter queue.
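As a quick sketch of the guidance above (the helper name is illustrative), you can derive a starting visibility timeout and redrive setting from your function's timeout and batch window:

```
def recommended_queue_settings(function_timeout_s, batch_window_s=0):
    """Starting-point queue settings per the guidance above:
    visibility timeout of at least six times the function timeout
    (plus any batch window), and a redrive policy that allows
    several retries before messages go to the dead-letter queue."""
    return {
        "VisibilityTimeout": 6 * function_timeout_s + batch_window_s,
        "maxReceiveCount": 5,
    }
```

For example, a function with a 30-second timeout and no batch window yields a 180-second visibility timeout.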

## Setting up Lambda execution role permissions


The [ AWSLambdaSQSQueueExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaSQSQueueExecutionRole.html) Amazon managed policy includes the permissions that Lambda needs to read from your Amazon SQS queue. You can add this managed policy to your function's [execution role](lambda-intro-execution-role.md).

Optionally, if you're using an encrypted queue, you also need to add the following permission to your execution role:
+ [kms:Decrypt](https://docs.amazonaws.cn/kms/latest/APIReference/API_Decrypt.html)

## Creating an SQS event source mapping


Create an event source mapping to tell Lambda to send items from your queue to a Lambda function. You can create multiple event source mappings to process items from multiple queues with a single function. When Lambda invokes the target function, the event can contain multiple items, up to a configurable maximum *batch size*.

To configure your function to read from Amazon SQS, attach the [ AWSLambdaSQSQueueExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaSQSQueueExecutionRole.html) Amazon managed policy to your execution role. Then, create an **SQS** event source mapping from the console using the following steps.

**To add permissions and create a trigger**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function.

1. Choose the **Configuration** tab, and then choose **Permissions**.

1. Under **Role name**, choose the link to your execution role. This link opens the role in the IAM console.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/execution-role.png)

1. Choose **Add permissions**, and then choose **Attach policies**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/attach-policies.png)

1. In the search field, enter `AWSLambdaSQSQueueExecutionRole`. Add this policy to your execution role. This is an Amazon managed policy that contains the permissions your function needs to read from an Amazon SQS queue. For more information about this policy, see [ AWSLambdaSQSQueueExecutionRole](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/AWSLambdaSQSQueueExecutionRole.html) in the *Amazon Managed Policy Reference*.

1. Go back to your function in the Lambda console. Under **Function overview**, choose **Add trigger**.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/add-trigger.png)

1. Choose a trigger type.

1. Configure the required options, and then choose **Add**.

Lambda supports the following configuration options for Amazon SQS event sources:

**SQS queue**  
The Amazon SQS queue to read records from. The Lambda function and the Amazon SQS queue must be in the same Amazon Web Services Region, although they can be in [different Amazon Web Services accounts](with-sqs-cross-account-example.md).

**Enable trigger**  
The status of the event source mapping. **Enable trigger** is selected by default.

**Batch size**  
The maximum number of records to send to the function in each batch. For a standard queue, this can be up to 10,000 records. For a FIFO queue, the maximum is 10. For a batch size over 10, you must also set the batch window (`MaximumBatchingWindowInSeconds`) to at least 1 second.  
Configure your [ function timeout](https://serverlessland.com/content/service/lambda/guides/aws-lambda-operator-guide/configurations#timeouts) to allow enough time to process an entire batch of items. If items take a long time to process, choose a smaller batch size. A large batch size can improve efficiency for workloads that are very fast or have a lot of overhead. If you configure [reserved concurrency](configuration-concurrency.md) on your function, set a minimum of five concurrent executions to reduce the chance of throttling errors when Lambda invokes your function.  
Lambda passes all of the records in the batch to the function in a single call, as long as the total size of the events doesn't exceed the [ invocation payload size quota](gettingstarted-limits.md) for synchronous invocation (6 MB). Both Lambda and Amazon SQS generate metadata for each record. This additional metadata is counted towards the total payload size and can cause the total number of records sent in a batch to be lower than your configured batch size. The metadata fields that Amazon SQS sends can be variable in length. For more information about the Amazon SQS metadata fields, see the [ReceiveMessage](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API operation documentation in the *Amazon Simple Queue Service API Reference*.

**Batch window**  
The maximum amount of time to gather records before invoking the function, in seconds. This applies only to standard queues.  
If you're using a batch window greater than 0 seconds, you must account for the increased processing time in your queue's [ visibility timeout](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html). We recommend setting your queue's visibility timeout to six times your [function timeout](configuration-timeout.md), plus the value of `MaximumBatchingWindowInSeconds`. This allows time for your Lambda function to process each batch of events and to retry in the event of a throttling error.  
When messages become available, Lambda starts processing messages in batches. Lambda starts processing five batches at a time with five concurrent invocations of your function. If messages are still available, Lambda adds up to 300 concurrent invokes of your function a minute, up to a maximum of 1,250 concurrent invokes. When using provisioned mode, each event poller can handle up to 1 MB/s of throughput, up to 10 concurrent invokes, or up to 10 Amazon SQS polling API calls per second. Lambda scales the number of event pollers between your configured minimum and maximum, quickly adding up to 1,000 concurrent invokes per minute to provide low-latency processing of your Amazon SQS events. You control scaling and concurrency through these minimum and maximum event poller settings. To learn more about function scaling and concurrency, see [Understanding Lambda function scaling](lambda-concurrency.md).  
To process more messages, you can optimize your Lambda function for higher throughput. For more information, see [Understanding how Amazon Lambda scales with Amazon SQS standard queues](https://amazonaws-china.com/blogs/compute/understanding-how-aws-lambda-scales-when-subscribed-to-amazon-sqs-queues/).
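The visibility-timeout recommendation above is simple arithmetic. As an illustrative sketch (the helper name and example timeout values are ours, not part of any SDK):

```python
def recommended_visibility_timeout(function_timeout_s: int, batch_window_s: int = 0) -> int:
    """Recommended SQS visibility timeout: six times the function timeout,
    plus the value of MaximumBatchingWindowInSeconds."""
    return 6 * function_timeout_s + batch_window_s

# Example: a 30-second function timeout with a 5-second batch window
print(recommended_visibility_timeout(30, 5))  # 185
```

You would then set the resulting value as the queue's `VisibilityTimeout` attribute.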

**Filter criteria**  
Add filter criteria to control which events Lambda sends to your function for processing. For more information, see [Control which events Lambda sends to your function](invocation-eventfiltering.md).

**Maximum concurrency**  
The maximum number of concurrent function instances that the event source can invoke. This setting cannot be used when Provisioned Mode is enabled. For more information, see [Configuring maximum concurrency for Amazon SQS event sources](services-sqs-scaling.md#events-sqs-max-concurrency).

**Provisioned Mode**  
When enabled, allocates dedicated polling resources for your event source mapping. You can configure the minimum (2-200) and maximum (2-2000) number of event pollers. Each event poller can handle up to 1 MB/sec of throughput, up to 10 concurrent invokes, or up to 10 Amazon SQS polling API calls per second.  
Note: You cannot use Provisioned Mode and Maximum concurrency together. When Provisioned Mode is enabled, use the maximum pollers setting to control concurrency.
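The per-poller limits above imply upper bounds on what a given poller configuration can deliver. A back-of-the-envelope sketch (this helper and its names are illustrative, not part of any SDK):

```python
def provisioned_mode_ceilings(max_pollers: int) -> dict:
    """Upper bounds implied by the per-poller limits: each event poller
    handles up to 1 MB/s of throughput and up to 10 concurrent invokes."""
    if not (2 <= max_pollers <= 2000):
        raise ValueError("maximum pollers must be between 2 and 2,000")
    return {
        "max_throughput_mb_per_s": max_pollers * 1,
        "max_concurrent_invokes": max_pollers * 10,
    }

# Example: 100 maximum pollers -> up to 100 MB/s and 1,000 concurrent invokes
print(provisioned_mode_ceilings(100))
```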

# Configuring scaling behavior for SQS event source mappings
Scaling behavior

You can control the scaling behavior of your Amazon SQS event source mappings either through maximum concurrency settings or by enabling provisioned mode. These are mutually exclusive options.

By default, Lambda automatically scales event pollers based on message volume. When you enable provisioned mode, you allocate a minimum and maximum number of dedicated polling resources that remain ready to handle expected traffic patterns. You can configure your event source mapping's scaling in one of two modes:
+ Standard mode (Default): Lambda automatically manages scaling, starting with a small number of pollers and scaling up or down based on workload.
+ Provisioned mode: You configure dedicated polling resources with minimum and maximum limits, enabling 3 times faster scaling and up to 16 times higher processing capacity.

For standard queues, Lambda uses [ long polling](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html#sqs-long-polling) to poll a queue until it becomes active. When messages are available, Lambda starts processing five batches at a time with five concurrent invocations of your function. If messages are still available, Lambda increases the number of processes that are reading batches by up to 300 more concurrent invokes per minute. The maximum number of invokes that an event source mapping can process simultaneously is 1,250. When traffic is low, Lambda scales back the processing to five concurrent invokes, and can optimize to as few as 2 concurrent invokes to reduce the Amazon SQS calls and corresponding costs. However, this optimization is not available when you enable the maximum concurrency setting.
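The ramp-up described above can be modeled roughly as a function of elapsed time. This is a simplification for intuition only; actual scaling depends on message volume and your function's behavior:

```python
def approx_max_concurrent_invokes(minutes: int) -> int:
    """Rough upper bound on concurrent invokes for a standard queue:
    start at 5, add up to 300 per minute, cap at 1,250."""
    return min(5 + 300 * minutes, 1250)

print(approx_max_concurrent_invokes(0))   # 5
print(approx_max_concurrent_invokes(2))   # 605
print(approx_max_concurrent_invokes(10))  # 1250
```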

For FIFO queues, Lambda sends messages to your function in the order that it receives them. When you send a message to a FIFO queue, you specify a [message group ID](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html). Amazon SQS ensures that messages in the same group are delivered to Lambda in order. When Lambda reads your messages into batches, each batch may contain messages from more than one message group, but the order of the messages is maintained. If your function returns an error, the function attempts all retries on the affected messages before Lambda receives additional messages from the same group.

When using provisioned mode, each event poller can handle up to 1 MB/sec of throughput, up to 10 concurrent invokes, or up to 10 Amazon SQS polling API calls per second. Lambda scales the number of event pollers between your configured minimum and maximum, quickly adding up to 1,000 concurrent invokes per minute to provide consistent, low-latency processing of your Amazon SQS events. Using provisioned mode incurs additional costs. For detailed pricing, see [Amazon Lambda pricing](https://aws.amazon.com/lambda/pricing/). Each event poller uses [long polling](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-short-and-long-polling.html) on your SQS queue with up to 10 polls per second, which incurs SQS API request costs. See [Amazon SQS pricing](https://aws.amazon.com/sqs/pricing/) for details. You control scaling and concurrency through the minimum and maximum event poller settings, rather than the maximum concurrency setting, as these options cannot be used together.

**Note**  
You cannot use the maximum concurrency setting and provisioned mode at the same time. When provisioned mode is enabled, you control the scaling and concurrency of your Amazon SQS event source mapping through the minimum and maximum number of event pollers.

## Configuring maximum concurrency for Amazon SQS event sources
Maximum concurrency

You can use the maximum concurrency setting to control scaling behavior for your SQS event sources. Note that maximum concurrency cannot be used when provisioned mode is enabled. The maximum concurrency setting limits the number of concurrent instances of the function that an Amazon SQS event source can invoke. Maximum concurrency is an event source-level setting. If you have multiple Amazon SQS event sources mapped to one function, each event source can have a separate maximum concurrency setting. You can use maximum concurrency to prevent one queue from using all of the function's [reserved concurrency](configuration-concurrency.md) or the rest of the [account's concurrency quota](gettingstarted-limits.md). There is no charge for configuring maximum concurrency on an Amazon SQS event source.

Importantly, maximum concurrency and reserved concurrency are two independent settings. Don't set maximum concurrency higher than the function's reserved concurrency. If you configure maximum concurrency, make sure that your function's reserved concurrency is greater than or equal to the total maximum concurrency for all Amazon SQS event sources on the function. Otherwise, Lambda may throttle your messages.
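The check described above is worth making explicit. A minimal sketch (the helper name is ours, for illustration only):

```python
def max_concurrency_is_safe(reserved_concurrency: int, per_source_max: list) -> bool:
    """True when the function's reserved concurrency covers the total
    maximum concurrency across all SQS event sources on the function.
    If False, Lambda may throttle your messages."""
    return sum(per_source_max) <= reserved_concurrency

# Two event sources with maximum concurrency 40 and 50, reserved concurrency 100
print(max_concurrency_is_safe(100, [40, 50]))  # True
print(max_concurrency_is_safe(100, [60, 50]))  # False
```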

When your account's concurrency quota is set to the default value of 1,000, an Amazon SQS event source mapping can scale to invoke function instances up to this value, unless you specify a maximum concurrency.

If you receive an increase to your account's default concurrency quota, Lambda may not be able to invoke concurrent function instances up to your new quota. By default, Lambda can scale to invoke up to 1,250 concurrent function instances for an Amazon SQS event source mapping. If this is insufficient for your use case, contact Amazon support to discuss an increase to your account's Amazon SQS event source mapping concurrency.

**Note**  
For FIFO queues, concurrent invocations are capped either by the number of [message group IDs](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html) (`messageGroupId`) or the maximum concurrency setting—whichever is lower. For example, if you have six message group IDs and maximum concurrency is set to 10, your function can have a maximum of six concurrent invocations.
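The cap in the note above is simply the lower of the two values. An illustrative sketch mirroring the example:

```python
def fifo_max_concurrency(num_message_groups: int, maximum_concurrency: int) -> int:
    """For FIFO queues, concurrent invocations are capped by the number of
    message group IDs or the maximum concurrency setting, whichever is lower."""
    return min(num_message_groups, maximum_concurrency)

# Six message group IDs with maximum concurrency set to 10 -> at most 6
print(fifo_max_concurrency(6, 10))  # 6
```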

You can configure maximum concurrency on new and existing Amazon SQS event source mappings.

**Configure maximum concurrency using the Lambda console**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function.

1. Under **Function overview**, choose **SQS**. This opens the **Configuration** tab.

1. Select the Amazon SQS trigger and choose **Edit**.

1. For **Maximum concurrency**, enter a number between 2 and 1,000. To turn off maximum concurrency, leave the box empty.

1. Choose **Save**.

**Configure maximum concurrency using the Amazon Command Line Interface (Amazon CLI)**  
Use the [update-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-event-source-mapping.html) command with the `--scaling-config` option. Example:

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --scaling-config '{"MaximumConcurrency":5}'
```

To turn off maximum concurrency, enter an empty value for `--scaling-config`:

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --scaling-config "{}"
```

**Configure maximum concurrency using the Lambda API**  
Use the [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) or [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) action with a [ScalingConfig](https://docs.amazonaws.cn/lambda/latest/api/API_ScalingConfig.html) object.

# Handling errors for an SQS event source in Lambda
Error handling

To handle errors related to an SQS event source, Lambda automatically retries failed invocations using a backoff strategy. You can also customize error handling behavior by configuring your SQS event source mapping to return [partial batch responses](#services-sqs-batchfailurereporting).

## Backoff strategy for failed invocations


When an invocation fails, Lambda attempts to retry the invocation while implementing a backoff strategy. The backoff strategy differs slightly depending on whether Lambda encountered the failure due to an error in your function code, or due to throttling.
+ If your **function code** caused the error, Lambda stops processing and retrying the invocation. In the meantime, Lambda gradually backs off, reducing the amount of concurrency allocated to your Amazon SQS event source mapping. After your queue's visibility timeout expires, the message reappears in the queue. 
+ If the invocation fails due to **throttling**, Lambda gradually backs off retries by reducing the amount of concurrency allocated to your Amazon SQS event source mapping. Lambda continues to retry the message until the message's timestamp exceeds your queue's visibility timeout, at which point Lambda drops the message.

## Implementing partial batch responses


When your Lambda function encounters an error while processing a batch, all messages in that batch become visible in the queue again by default, including messages that Lambda processed successfully. As a result, your function can end up processing the same message several times.

To avoid reprocessing successfully processed messages in a failed batch, you can configure your event source mapping to make only the failed messages visible again. This is called a partial batch response. To turn on partial batch responses, include `ReportBatchItemFailures` in the [FunctionResponseTypes](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html#lambda-UpdateEventSourceMapping-request-FunctionResponseTypes) list when configuring your event source mapping. This lets your function return a partial success, which can help reduce the number of unnecessary retries on records.

**Note**  
The [Batch Processor utility](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/) from Powertools for Amazon Lambda handles all of the partial batch response logic automatically. This utility simplifies implementing batch processing patterns and reduces the custom code needed to handle batch item failures correctly. It is available for Python, Java, TypeScript, and .NET.

When `ReportBatchItemFailures` is activated, Lambda doesn't [scale down message polling](#services-sqs-backoff-strategy) when function invocations fail. If you expect some messages to fail—and you don't want those failures to impact the message processing rate—use `ReportBatchItemFailures`.

**Note**  
Keep the following in mind when using partial batch responses:  
If your function throws an exception, the entire batch is considered a complete failure.
If you're using this feature with a FIFO queue, your function should stop processing messages after the first failure and return all failed and unprocessed messages in `batchItemFailures`. This helps preserve the ordering of messages in your queue.
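The FIFO guidance in the note above can be sketched in Python. This is an illustrative pattern, not the official sample shown later; the `process` helper and its failure condition are hypothetical:

```python
def process(record):
    # Hypothetical business logic: treat a body of "fail" as an error.
    if record["body"] == "fail":
        raise ValueError("processing failed")

def lambda_handler(event, context):
    records = event["Records"]
    batch_item_failures = []
    for i, record in enumerate(records):
        try:
            process(record)
        except Exception:
            # First failure: stop and report this message plus every
            # message after it, so FIFO ordering is preserved.
            batch_item_failures = [
                {"itemIdentifier": r["messageId"]} for r in records[i:]
            ]
            break
    return {"batchItemFailures": batch_item_failures}
```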

**To activate partial batch reporting**

1. Review the [Best practices for implementing partial batch responses](https://docs.amazonaws.cn/prescriptive-guidance/latest/lambda-event-filtering-partial-batch-responses-for-sqs/best-practices-partial-batch-responses.html).

1. Run the following command to activate `ReportBatchItemFailures` for your function. To retrieve your event source mapping's UUID, run the [list-event-source-mappings](https://docs.amazonaws.cn/cli/latest/reference/lambda/list-event-source-mappings.html) Amazon CLI command.

   ```
   aws lambda update-event-source-mapping \
   --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
   --function-response-types "ReportBatchItemFailures"
   ```

1. Update your function code to catch all exceptions and return failed messages in a `batchItemFailures` JSON response. The `batchItemFailures` response must include a list of message IDs, as `itemIdentifier` JSON values.

   For example, suppose you have a batch of five messages, with message IDs `id1`, `id2`, `id3`, `id4`, and `id5`. Your function successfully processes `id1`, `id3`, and `id5`. To make messages `id2` and `id4` visible again in your queue, your function should return the following response: 

   ```
   { 
     "batchItemFailures": [ 
           {
               "itemIdentifier": "id2"
           },
           {
               "itemIdentifier": "id4"
           }
       ]
   }
   ```

   Here are some examples of function code that return the list of failed message IDs in the batch:

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-sqs-report-batch-item-failures) repository. 
Reporting SQS batch item failures with Lambda using .NET.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   using Amazon.Lambda.Core;
   using Amazon.Lambda.SQSEvents;
   
   // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
   [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]
   namespace sqsSample;
   
   public class Function
   {
       public async Task<SQSBatchResponse> FunctionHandler(SQSEvent evnt, ILambdaContext context)
       {
           List<SQSBatchResponse.BatchItemFailure> batchItemFailures = new List<SQSBatchResponse.BatchItemFailure>();
           foreach(var message in evnt.Records)
           {
               try
               {
                   //process your message
                   await ProcessMessageAsync(message, context);
               }
               catch (System.Exception)
               {
                   //Add failed message identifier to the batchItemFailures list
                   batchItemFailures.Add(new SQSBatchResponse.BatchItemFailure{ItemIdentifier=message.MessageId}); 
               }
           }
           return new SQSBatchResponse(batchItemFailures);
       }
   
       private async Task ProcessMessageAsync(SQSEvent.SQSMessage message, ILambdaContext context)
       {
           if (String.IsNullOrEmpty(message.Body))
           {
               throw new Exception("No Body in SQS Message.");
           }
           context.Logger.LogInformation($"Processed message {message.Body}");
           // TODO: Do interesting work based on the new message
           await Task.CompletedTask;
       }
   }
   ```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-sqs-report-batch-item-failures) repository. 
Reporting SQS batch item failures with Lambda using Go.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   package main
   
   import (
   	"context"
   	"fmt"
   	"github.com/aws/aws-lambda-go/events"
   	"github.com/aws/aws-lambda-go/lambda"
   )
   
   func handler(ctx context.Context, sqsEvent events.SQSEvent) (map[string]interface{}, error) {
   	batchItemFailures := []map[string]interface{}{}
   
   	for _, message := range sqsEvent.Records {
   		if len(message.Body) > 0 {
   			// Your message processing condition here
   			fmt.Printf("Successfully processed message: %s\n", message.Body)
   		} else {
   			// Message processing failed
   			fmt.Printf("Failed to process message %s\n", message.MessageId)
   			batchItemFailures = append(batchItemFailures, map[string]interface{}{"itemIdentifier": message.MessageId})
   		}
   	}
   
   	sqsBatchResponse := map[string]interface{}{
   		"batchItemFailures": batchItemFailures,
   	}
   	return sqsBatchResponse, nil
   }
   
   func main() {
   	lambda.Start(handler)
   }
   ```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-sqs-report-batch-item-failures) repository. 
Reporting SQS batch item failures with Lambda using Java.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   import com.amazonaws.services.lambda.runtime.Context;
   import com.amazonaws.services.lambda.runtime.RequestHandler;
   import com.amazonaws.services.lambda.runtime.events.SQSEvent;
   import com.amazonaws.services.lambda.runtime.events.SQSBatchResponse;
    
   import java.util.ArrayList;
   import java.util.List;
    
   public class ProcessSQSMessageBatch implements RequestHandler<SQSEvent, SQSBatchResponse> {
       @Override
       public SQSBatchResponse handleRequest(SQSEvent sqsEvent, Context context) {
            List<SQSBatchResponse.BatchItemFailure> batchItemFailures = new ArrayList<SQSBatchResponse.BatchItemFailure>();
   
            for (SQSEvent.SQSMessage message : sqsEvent.getRecords()) {
                try {
                    //process your message
                } catch (Exception e) {
                    //Add failed message identifier to the batchItemFailures list
                    batchItemFailures.add(new SQSBatchResponse.BatchItemFailure(message.getMessageId()));
                }
            }
            return new SQSBatchResponse(batchItemFailures);
        }
   }
   ```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-sqs-report-batch-item-failures) repository. 
Reporting SQS batch item failures with Lambda using JavaScript.  

   ```
   // Node.js 20.x Lambda runtime, AWS SDK for Javascript V3
   export const handler = async (event, context) => {
       const batchItemFailures = [];
       for (const record of event.Records) {
           try {
               await processMessageAsync(record, context);
           } catch (error) {
               batchItemFailures.push({ itemIdentifier: record.messageId });
           }
       }
       return { batchItemFailures };
   };
   
   async function processMessageAsync(record, context) {
       if (record.body && record.body.includes("error")) {
           throw new Error("There is an error in the SQS Message.");
       }
       console.log(`Processed message: ${record.body}`);
   }
   ```
Reporting SQS batch item failures with Lambda using TypeScript.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   import { SQSEvent, SQSBatchResponse, Context, SQSBatchItemFailure, SQSRecord } from 'aws-lambda';
   
   export const handler = async (event: SQSEvent, context: Context): Promise<SQSBatchResponse> => {
       const batchItemFailures: SQSBatchItemFailure[] = [];
   
       for (const record of event.Records) {
           try {
               await processMessageAsync(record);
           } catch (error) {
               batchItemFailures.push({ itemIdentifier: record.messageId });
           }
       }
   
       return {batchItemFailures: batchItemFailures};
   };
   
   async function processMessageAsync(record: SQSRecord): Promise<void> {
       if (record.body && record.body.includes("error")) {
           throw new Error('There is an error in the SQS Message.');
       }
       console.log(`Processed message ${record.body}`);
   }
   ```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-sqs-report-batch-item-failures) repository. 
Reporting SQS batch item failures with Lambda using PHP.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   <?php
   
   use Bref\Context\Context;
   use Bref\Event\Sqs\SqsEvent;
   use Bref\Event\Sqs\SqsHandler;
   use Bref\Logger\StderrLogger;
   
   require __DIR__ . '/vendor/autoload.php';
   
   class Handler extends SqsHandler
   {
       private StderrLogger $logger;
       public function __construct(StderrLogger $logger)
       {
           $this->logger = $logger;
       }
   
       /**
        * @throws JsonException
        * @throws \Bref\Event\InvalidLambdaEvent
        */
       public function handleSqs(SqsEvent $event, Context $context): void
       {
           $this->logger->info("Processing SQS records");
           $records = $event->getRecords();
   
           foreach ($records as $record) {
               try {
                   // Assuming the SQS message is in JSON format
                   $message = json_decode($record->getBody(), true);
                   $this->logger->info(json_encode($message));
                   // TODO: Implement your custom processing logic here
               } catch (Exception $e) {
                   $this->logger->error($e->getMessage());
                   // failed processing the record
                   $this->markAsFailed($record);
               }
           }
           $totalRecords = count($records);
           $this->logger->info("Successfully processed $totalRecords SQS records");
       }
   }
   
   $logger = new StderrLogger();
   return new Handler($logger);
   ```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-sqs-report-batch-item-failures) repository. 
Reporting SQS batch item failures with Lambda using Python.  

   ```
   # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   # SPDX-License-Identifier: Apache-2.0
   
   def lambda_handler(event, context):
       if event:
           batch_item_failures = []
           sqs_batch_response = {}
        
           for record in event["Records"]:
               try:
                   print(f"Processed message: {record['body']}")
               except Exception as e:
                   batch_item_failures.append({"itemIdentifier": record['messageId']})
           
           sqs_batch_response["batchItemFailures"] = batch_item_failures
           return sqs_batch_response
   ```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda-with-batch-item-handling) repository. 
Reporting SQS batch item failures with Lambda using Ruby.  

   ```
   # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   # SPDX-License-Identifier: Apache-2.0
   require 'json'
   
   def lambda_handler(event:, context:)
     if event
       batch_item_failures = []
       sqs_batch_response = {}
   
       event["Records"].each do |record|
         begin
           # process message
         rescue StandardError => e
           batch_item_failures << {"itemIdentifier" => record['messageId']}
         end
       end
   
       sqs_batch_response["batchItemFailures"] = batch_item_failures
       return sqs_batch_response
     end
   end
   ```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/lambda-function-sqs-report-batch-item-failures) repository. 
Reporting SQS batch item failures with Lambda using Rust.  

   ```
   // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
   // SPDX-License-Identifier: Apache-2.0
   use aws_lambda_events::{
       event::sqs::{SqsBatchResponse, SqsEvent},
       sqs::{BatchItemFailure, SqsMessage},
   };
   use lambda_runtime::{run, service_fn, Error, LambdaEvent};
   
   async fn process_record(_: &SqsMessage) -> Result<(), Error> {
       Err(Error::from("Error processing message"))
   }
   
   async fn function_handler(event: LambdaEvent<SqsEvent>) -> Result<SqsBatchResponse, Error> {
       let mut batch_item_failures = Vec::new();
       for record in event.payload.records {
           match process_record(&record).await {
               Ok(_) => (),
               Err(_) => batch_item_failures.push(BatchItemFailure {
                   item_identifier: record.message_id.unwrap(),
               }),
           }
       }
   
       Ok(SqsBatchResponse {
           batch_item_failures,
       })
   }
   
   #[tokio::main]
   async fn main() -> Result<(), Error> {
       run(service_fn(function_handler)).await
   }
   ```

------

If the failed events do not return to the queue, see [How do I troubleshoot Lambda function SQS ReportBatchItemFailures?](https://aws.amazon.com/premiumsupport/knowledge-center/lambda-sqs-report-batch-item-failures/) in the Amazon Knowledge Center.

### Success and failure conditions


Lambda treats a batch as a complete success if your function returns any of the following:
+ An empty `batchItemFailures` list
+ A null `batchItemFailures` list
+ An empty `EventResponse`
+ A null `EventResponse`

Lambda treats a batch as a complete failure if your function returns any of the following:
+ An invalid JSON response
+ An empty string `itemIdentifier`
+ A null `itemIdentifier`
+ An `itemIdentifier` with a bad key name
+ An `itemIdentifier` value with a message ID that doesn't exist
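Lambda evaluates these conditions itself; the sketch below only restates them as a validation helper for intuition (the function name and the `"partial"` label for a valid partial-failure report are ours):

```python
def classify_response(response, batch_message_ids):
    """Classify an already-parsed function response per the conditions above.
    Returns 'success', 'failure', or 'partial' (a valid partial batch report)."""
    if response is None or response == {}:
        return "success"  # null or empty EventResponse
    if not isinstance(response, dict):
        return "failure"  # invalid response shape
    failures = response.get("batchItemFailures")
    if failures in (None, []):
        return "success"  # null or empty batchItemFailures list
    for item in failures:
        ident = item.get("itemIdentifier") if isinstance(item, dict) else None
        if not ident or ident not in batch_message_ids:
            return "failure"  # empty/null identifier, bad key, or unknown ID
    return "partial"

print(classify_response({"batchItemFailures": [{"itemIdentifier": "id2"}]},
                        {"id1", "id2"}))  # partial
```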

### CloudWatch metrics


To determine whether your function is correctly reporting batch item failures, you can monitor the `NumberOfMessagesDeleted` and `ApproximateAgeOfOldestMessage` Amazon SQS metrics in Amazon CloudWatch.
+ `NumberOfMessagesDeleted` tracks the number of messages removed from your queue. If this metric drops to 0, it's a sign that your function response is not correctly returning failed messages.
+ `ApproximateAgeOfOldestMessage` tracks how long the oldest message has stayed in your queue. A sharp increase in this metric can indicate that your function is not correctly returning failed messages.

### Using Powertools for Amazon Lambda batch processor


The batch processor utility from Powertools for Amazon Lambda automatically handles partial batch response logic, reducing the complexity of implementing batch failure reporting. Here are examples using the batch processor:

**Python**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.powertools.aws.dev/lambda/python/latest/utilities/batch/).
Processing Amazon SQS messages with Amazon Lambda batch processor.  

```
import json
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.batch import BatchProcessor, EventType, process_partial_response
from aws_lambda_powertools.utilities.data_classes import SQSEvent
from aws_lambda_powertools.utilities.typing import LambdaContext

processor = BatchProcessor(event_type=EventType.SQS)
logger = Logger()

def record_handler(record):
    logger.info(record)
    # Your business logic here
    # Raise an exception to mark this record as failed
    
def lambda_handler(event, context: LambdaContext):
    return process_partial_response(
        event=event, 
        record_handler=record_handler, 
        processor=processor,
        context=context
    )
```

**TypeScript**  
For complete examples and setup instructions, see the [batch processor documentation](https://docs.aws.amazon.com/powertools/typescript/latest/features/batch/).
Processing Amazon SQS messages with Amazon Lambda batch processor.  

```
import { BatchProcessor, EventType, processPartialResponse } from '@aws-lambda-powertools/batch';
import { Logger } from '@aws-lambda-powertools/logger';
import type { SQSEvent, Context } from 'aws-lambda';

const processor = new BatchProcessor(EventType.SQS);
const logger = new Logger();

const recordHandler = async (record: any): Promise<void> => {
    logger.info('Processing record', { record });
    // Your business logic here
    // Throw an error to mark this record as failed
};

export const handler = async (event: SQSEvent, context: Context) => {
    return processPartialResponse(event, recordHandler, processor, {
        context,
    });
};
```

# Lambda parameters for Amazon SQS event source mappings
Parameters

All Lambda event source types share the same [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) and [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) API operations. However, only some of the parameters apply to Amazon SQS.


| Parameter | Required | Default | Notes | 
| --- | --- | --- | --- | 
|  BatchSize  |  N  |  10  |  For standard queues, the maximum is 10,000. For FIFO queues, the maximum is 10.  | 
|  Enabled  |  N  |  true  | none  | 
|  EventSourceArn  |  Y  | N/A |  The ARN of the Amazon SQS queue  | 
|  FunctionName  |  Y  | N/A  | none  | 
|  FilterCriteria  |  N  |  N/A   |  [Control which events Lambda sends to your function](invocation-eventfiltering.md)  | 
|  FunctionResponseTypes  |  N  | N/A  |  To let your function report specific failures in a batch, include the value `ReportBatchItemFailures` in `FunctionResponseTypes`. For more information, see [Implementing partial batch responses](services-sqs-errorhandling.md#services-sqs-batchfailurereporting).  | 
|  MaximumBatchingWindowInSeconds  |  N  |  0  | Batching window is not supported for FIFO queues | 
|  ProvisionedPollerConfig  |  N  |  N/A  |  Configures the minimum (2-200) and maximum (2-2000) number of dedicated event pollers for the SQS event source mapping. Each poller can handle up to 1 MB/sec of throughput and 10 concurrent invokes.  | 
|  ScalingConfig  |  N  |  N/A   |  [Configuring maximum concurrency for Amazon SQS event sources](services-sqs-scaling.md#events-sqs-max-concurrency)  | 

# Using event filtering with an Amazon SQS event source
Event filtering

You can use event filtering to control which records from a stream or queue Lambda sends to your function. For general information about how event filtering works, see [Control which events Lambda sends to your function](invocation-eventfiltering.md).

This section focuses on event filtering for Amazon SQS event sources.

**Note**  
Amazon SQS event source mappings only support filtering on the `body` key.

**Topics**
+ [Amazon SQS event filtering basics](#filtering-SQS)

## Amazon SQS event filtering basics


Suppose your Amazon SQS queue contains messages in the following JSON format.

```
{
    "RecordNumber": 1234,
    "TimeStamp": "yyyy-mm-ddThh:mm:ss",
    "RequestCode": "AAAA"
}
```

An example record for this queue would look as follows.

```
{
    "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
    "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...",
    "body": "{\n "RecordNumber": 1234,\n "TimeStamp": "yyyy-mm-ddThh:mm:ss",\n "RequestCode": "AAAA"\n}",
    "attributes": {
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1545082649183",
        "SenderId": "AIDAIENQZJOLO23YVJ4VO",
        "ApproximateFirstReceiveTimestamp": "1545082649185"
        },
    "messageAttributes": {},
    "md5OfBody": "e4e68fb7bd0e697a0ae8f1bb342846b3",
    "eventSource": "aws:sqs",
    "eventSourceARN": "arn:aws:sqs:us-west-2:123456789012:my-queue",
    "awsRegion": "us-west-2"
}
```

To filter based on the contents of your Amazon SQS messages, use the `body` key in the Amazon SQS message record. Suppose you want to process only those records where the `RequestCode` in your Amazon SQS message is “BBBB.” The `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"body\" : { \"RequestCode\" : [ \"BBBB\" ] } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON. 

```
{
    "body": {
        "RequestCode": [ "BBBB" ]
        }
}
```
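To make the matching concrete, here is a minimal Python sketch (an illustration only, not Lambda's actual filtering engine) that applies the equality pattern above to a record's JSON-encoded `body`:

```python
import json

def matches_request_code(record, allowed_codes):
    """Illustration only: return True if the record's JSON body has a
    RequestCode that appears in allowed_codes."""
    try:
        body = json.loads(record["body"])
    except (ValueError, TypeError):
        return False  # a plain-string body can't satisfy a JSON pattern
    return body.get("RequestCode") in allowed_codes

record = {"body": '{"RecordNumber": 1234, "RequestCode": "BBBB"}'}
print(matches_request_code(record, ["BBBB"]))  # True
print(matches_request_code(record, ["AAAA"]))  # False
```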

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "body" : { "RequestCode" : [ "BBBB" ] } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:us-east-2:123456789012:my-queue \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RequestCode\" : [ \"BBBB\" ] } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RequestCode\" : [ \"BBBB\" ] } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "body" : { "RequestCode" : [ "BBBB" ] } }'
```

------

Suppose you want your function to process only those records where `RecordNumber` is greater than 9999. The `FilterCriteria` object would be as follows.

```
{
    "Filters": [
        {
            "Pattern": "{ \"body\" : { \"RecordNumber\" : [ { \"numeric\": [ \">\", 9999 ] } ] } }"
        }
    ]
}
```

For added clarity, here is the value of the filter's `Pattern` expanded in plain JSON. 

```
{
    "body": {
        "RecordNumber": [
            {
                "numeric": [ ">", 9999 ]
            }
        ]
    }
}
```
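As a sketch of how the `numeric` comparator behaves (an illustration only; the filter grammar also supports ranges and other forms not shown here):

```python
import json
import operator

# Illustration only: evaluate a simple two-element numeric condition
# such as [">", 9999] against a value taken from the message body.
OPS = {">": operator.gt, ">=": operator.ge,
       "<": operator.lt, "<=": operator.le, "=": operator.eq}

def numeric_matches(value, condition):
    op_symbol, threshold = condition
    return isinstance(value, (int, float)) and OPS[op_symbol](value, threshold)

body = json.loads('{"RecordNumber": 12000, "RequestCode": "AAAA"}')
print(numeric_matches(body["RecordNumber"], [">", 9999]))  # True
print(numeric_matches(9999, [">", 9999]))                  # False
```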

You can add your filter using the console, the Amazon CLI, or an Amazon SAM template.

------
#### [ Console ]

To add this filter using the console, follow the instructions in [Attaching filter criteria to an event source mapping (console)](invocation-eventfiltering.md#filtering-console) and enter the following string for the **Filter criteria**.

```
{ "body" : { "RecordNumber" : [ { "numeric": [ ">", 9999 ] } ] } }
```

------
#### [ Amazon CLI ]

To create a new event source mapping with these filter criteria using the Amazon Command Line Interface (Amazon CLI), run the following command.

```
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:us-east-2:123456789012:my-queue \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RecordNumber\" : [ { \"numeric\": [ \">\", 9999 ] } ] } }"}]}'
```

To add these filter criteria to an existing event source mapping, run the following command.

```
aws lambda update-event-source-mapping \
    --uuid "a1b2c3d4-5678-90ab-cdef-11111EXAMPLE" \
    --filter-criteria '{"Filters": [{"Pattern": "{ \"body\" : { \"RecordNumber\" : [ { \"numeric\": [ \">\", 9999 ] } ] } }"}]}'
```

------
#### [ Amazon SAM ]

To add this filter using Amazon SAM, add the following snippet to the YAML template for your event source.

```
FilterCriteria:
  Filters:
    - Pattern: '{ "body" : { "RecordNumber" : [ { "numeric": [ ">", 9999 ] } ] } }'
```

------

For Amazon SQS, the message body can be any string. However, this can be problematic if your `FilterCriteria` expects `body` to be in a valid JSON format. The reverse scenario is also true: if the incoming message body is in JSON format but your filter criteria expects `body` to be a plain string, this can lead to unintended behavior.

To avoid this issue, ensure that the format of `body` in your `FilterCriteria` matches the expected format of `body` in messages that you receive from your queue. Before filtering your messages, Lambda automatically evaluates the format of the incoming message body and of your filter pattern for `body`. If there is a mismatch, Lambda drops the message. The following table summarizes this evaluation:


| Incoming message `body` format | Filter pattern `body` format | Resulting action | 
| --- | --- | --- | 
|  Plain string  |  Plain string  |  Lambda filters based on your filter criteria.  | 
|  Plain string  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Plain string  |  Valid JSON  |  Lambda drops the message.  | 
|  Valid JSON  |  Plain string  |  Lambda drops the message.  | 
|  Valid JSON  |  No filter pattern for data properties  |  Lambda filters (on the other metadata properties only) based on your filter criteria.  | 
|  Valid JSON  |  Valid JSON  |  Lambda filters based on your filter criteria.  | 
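The evaluation in the table above can be sketched as follows (a plain illustration of the documented behavior, not Lambda's internal code):

```python
def body_filter_action(message_body_format, pattern_body_format):
    """Formats are "string" or "json"; pattern_body_format is None when the
    filter has no pattern for data properties (metadata-only filtering)."""
    if pattern_body_format is None:
        return "filter on metadata properties only"
    if message_body_format != pattern_body_format:
        return "drop message"
    return "apply filter criteria to body"

print(body_filter_action("string", "json"))  # drop message
print(body_filter_action("json", None))      # filter on metadata properties only
print(body_filter_action("json", "json"))    # apply filter criteria to body
```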

# Tutorial: Using Lambda with Amazon SQS
Tutorial

In this tutorial, you create a Lambda function that consumes messages from an [Amazon Simple Queue Service (Amazon SQS)](https://docs.amazonaws.cn/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html) queue. The Lambda function runs whenever a new message is added to the queue. The function writes the messages to an Amazon CloudWatch Logs stream. The following diagram shows the Amazon resources you use to complete the tutorial.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_resources.png)


To complete this tutorial, you carry out the following steps:

1. Create a Lambda function that writes messages to CloudWatch Logs.

1. Create an Amazon SQS queue.

1. Create a Lambda event source mapping. The event source mapping reads the Amazon SQS queue and invokes your Lambda function when a new message is added.

1. Test the setup by adding messages to your queue and monitoring the results in CloudWatch Logs.

## Prerequisites


### Install the Amazon Command Line Interface


If you have not yet installed the Amazon Command Line Interface, follow the steps at [Installing or updating the latest version of the Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

**Note**  
In Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). 

## Create the execution role


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_steps1.png)


An [execution role](lambda-intro-execution-role.md) is an Amazon Identity and Access Management (IAM) role that grants a Lambda function permission to access Amazon Web Services services and resources. To allow your function to read items from Amazon SQS, attach the **AWSLambdaSQSQueueExecutionRole** permissions policy.

**To create an execution role and attach an Amazon SQS permissions policy**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Choose **Create role**.

1. For **Trusted entity type**, choose **Amazon service**.

1. For **Use case**, choose **Lambda**.

1. Choose **Next**.

1. In the **Permissions policies** search box, enter **AWSLambdaSQSQueueExecutionRole**.

1. Select the **AWSLambdaSQSQueueExecutionRole** policy, and then choose **Next**.

1. Under **Role details**, for **Role name**, enter **lambda-sqs-role**, then choose **Create role**.

After role creation, note down the Amazon Resource Name (ARN) of your execution role. You'll need it in later steps.

## Create the function


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_steps2.png)


Create a Lambda function that processes your Amazon SQS messages. The function code logs the body of the Amazon SQS message to CloudWatch Logs.

This tutorial uses the Node.js 24 runtime, but we've also provided example code in other runtime languages. You can select the tab in the following box to see code for the runtime you're interested in. The JavaScript code you'll use in this step is in the first example shown in the **JavaScript** tab.

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using .NET.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SQSEvents;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace SqsIntegrationSampleCode
{
    public class Function
    {
        public async Task FunctionHandler(SQSEvent evnt, ILambdaContext context)
        {
            foreach (var message in evnt.Records)
            {
                await ProcessMessageAsync(message, context);
            }

            context.Logger.LogInformation("done");
        }

        private async Task ProcessMessageAsync(SQSEvent.SQSMessage message, ILambdaContext context)
        {
            try
            {
                context.Logger.LogInformation($"Processed message {message.Body}");

                // TODO: Do interesting work based on the new message
                await Task.CompletedTask;
            }
            catch (Exception)
            {
                // You can configure a dead-letter queue (DLQ) to handle failures.
                context.Logger.LogError("An error occurred");
                throw;
            }
        }
    }
}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using Go.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package integration_sqs_to_lambda

import (
	"fmt"
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(event events.SQSEvent) error {
	for _, record := range event.Records {
		err := processMessage(record)
		if err != nil {
			return err
		}
	}
	fmt.Println("done")
	return nil
}

func processMessage(record events.SQSMessage) error {
	fmt.Printf("Processed message %s\n", record.Body)
	// TODO: Do interesting work based on the new message
	return nil
}

func main() {
	lambda.Start(handler)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using Java.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import com.amazonaws.services.lambda.runtime.events.SQSEvent.SQSMessage;

public class Function implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent sqsEvent, Context context) {
        for (SQSMessage msg : sqsEvent.getRecords()) {
            processMessage(msg, context);
        }
        context.getLogger().log("done");
        return null;
    }

    private void processMessage(SQSMessage msg, Context context) {
        try {
            context.getLogger().log("Processed message " + msg.getBody());

            // TODO: Do interesting work based on the new message

        } catch (Exception e) {
            context.getLogger().log("An error occurred");
            throw e;
        }

    }
}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/blob/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using JavaScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
exports.handler = async (event, context) => {
  for (const message of event.Records) {
    await processMessageAsync(message);
  }
  console.info("done");
};

async function processMessageAsync(message) {
  try {
    console.log(`Processed message ${message.body}`);
    // TODO: Do interesting work based on the new message
    await Promise.resolve(1); //Placeholder for actual async work
  } catch (err) {
    console.error("An error occurred");
    throw err;
  }
}
```
Consuming an SQS event with Lambda using TypeScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { SQSEvent, Context, SQSHandler, SQSRecord } from "aws-lambda";

export const functionHandler: SQSHandler = async (
  event: SQSEvent,
  context: Context
): Promise<void> => {
  for (const message of event.Records) {
    await processMessageAsync(message);
  }
  console.info("done");
};

async function processMessageAsync(message: SQSRecord): Promise<any> {
  try {
    console.log(`Processed message ${message.body}`);
    // TODO: Do interesting work based on the new message
    await Promise.resolve(1); //Placeholder for actual async work
  } catch (err) {
    console.error("An error occurred");
    throw err;
  }
}
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using PHP.  

```
<?php

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

// Using bref/bref and bref/logger for simplicity.

use Bref\Context\Context;
use Bref\Event\InvalidLambdaEvent;
use Bref\Event\Sqs\SqsEvent;
use Bref\Event\Sqs\SqsHandler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends SqsHandler
{
    private StderrLogger $logger;
    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    /**
     * @throws InvalidLambdaEvent
     */
    public function handleSqs(SqsEvent $event, Context $context): void
    {
        foreach ($event->getRecords() as $record) {
            $body = $record->getBody();
            // TODO: Do interesting work based on the new message
        }
    }
}

$logger = new StderrLogger();
return new Handler($logger);
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using Python.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
def lambda_handler(event, context):
    for message in event['Records']:
        process_message(message)
    print("done")

def process_message(message):
    try:
        print(f"Processed message {message['body']}")
        # TODO: Do interesting work based on the new message
    except Exception as err:
        print("An error occurred")
        raise err
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using Ruby.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
def lambda_handler(event:, context:)
  event['Records'].each do |message|
    process_message(message)
  end
  puts "done"
end

def process_message(message)
  begin
    puts "Processed message #{message['body']}"
    # TODO: Do interesting work based on the new message
  rescue StandardError => err
    puts "An error occurred"
    raise err
  end
end
```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sqs-to-lambda) repository. 
Consuming an SQS event with Lambda using Rust.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::sqs::SqsEvent;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

async fn function_handler(event: LambdaEvent<SqsEvent>) -> Result<(), Error> {
    event.payload.records.iter().for_each(|record| {
        // process the record
        tracing::info!("Message body: {}", record.body.as_deref().unwrap_or_default())
    });

    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line.
        .with_target(false)
        // disabling time is handy because CloudWatch will add the ingestion time.
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
```

------

**To create a Node.js Lambda function**

1. Create a directory for the project, and then switch to that directory.

   ```
   mkdir sqs-tutorial
   cd sqs-tutorial
   ```

1. Copy the sample JavaScript code into a new file named `index.js`.

1. Create a deployment package using the following `zip` command.

   ```
   zip function.zip index.js
   ```

1. Create a Lambda function using the [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) Amazon CLI command. For the `role` parameter, enter the ARN of the execution role that you created earlier.
**Note**  
The Lambda function and the Amazon SQS queue must be in the same Amazon Web Services Region.

   ```
   aws lambda create-function --function-name ProcessSQSRecord \
   --zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
   --role arn:aws-cn:iam::111122223333:role/lambda-sqs-role
   ```

## Test the function


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_steps3.png)


Invoke your Lambda function manually using the `invoke` Amazon CLI command and a sample Amazon SQS event.

**To invoke the Lambda function with a sample event**

1. Save the following JSON as a file named `input.json`. This JSON simulates an event that Amazon SQS might send to your Lambda function, where `"body"` contains the actual message from the queue. In this example, the message is `"test"`.  
**Example Amazon SQS event**  

   This is a test event—you don't need to change the message or the account number.

   ```
   {
       "Records": [
           {
               "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
               "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...",
               "body": "test",
               "attributes": {
                   "ApproximateReceiveCount": "1",
                   "SentTimestamp": "1545082649183",
                   "SenderId": "AIDAIENQZJOLO23YVJ4VO",
                   "ApproximateFirstReceiveTimestamp": "1545082649185"
               },
               "messageAttributes": {},
               "md5OfBody": "098f6bcd4621d373cade4e832627b4f6",
               "eventSource": "aws:sqs",
               "eventSourceARN": "arn:aws-cn:sqs:us-east-1:111122223333:my-queue",
               "awsRegion": "us-east-1"
           }
       ]
   }
   ```

1. Run the following [invoke](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/invoke.html) Amazon CLI command. This command returns CloudWatch logs in the response. For more information about retrieving logs, see [Access logs with the Amazon CLI](monitoring-cloudwatchlogs-view.md#monitoring-cloudwatchlogs-cli).

   ```
   aws lambda invoke --function-name ProcessSQSRecord --payload file://input.json out --log-type Tail \
   --query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 --decode
   ```

   The **cli-binary-format** option is required if you're using Amazon CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [Amazon CLI supported global command line options](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *Amazon Command Line Interface User Guide for Version 2*.

1. Find the `INFO` log in the response. This is where the Lambda function logs the message body. You should see logs that look like this:

   ```
   2023-09-11T22:45:04.271Z	348529ce-2211-4222-9099-59d07d837b60	INFO	Processed message test
   2023-09-11T22:45:04.288Z	348529ce-2211-4222-9099-59d07d837b60	INFO	done
   ```

## Create an Amazon SQS queue


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_steps4.png)


Create an Amazon SQS queue that the Lambda function can use as an event source. The Lambda function and the Amazon SQS queue must be in the same Amazon Web Services Region.

**To create a queue**

1. Open the [Amazon SQS console](https://console.amazonaws.cn/sqs).

1. Choose **Create queue**.

1. Enter a name for the queue. Leave all other options at the default settings.

1. Choose **Create queue**.

After creating the queue, note down its ARN. You need this in the next step when you associate the queue with your Lambda function.

## Configure the event source


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_steps5.png)


Connect the Amazon SQS queue to your Lambda function by creating an [event source mapping](invocation-eventsourcemapping.md). The event source mapping reads the Amazon SQS queue and invokes your Lambda function when a new message is added.

To create a mapping between your Amazon SQS queue and your Lambda function, use the [create-event-source-mapping](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-event-source-mapping.html) Amazon CLI command. Example:

```
aws lambda create-event-source-mapping --function-name ProcessSQSRecord  --batch-size 10 \
--event-source-arn arn:aws-cn:sqs:us-east-1:111122223333:my-queue
```

To get a list of your event source mappings, use the [list-event-source-mappings](https://awscli.amazonaws.com/v2/documentation/api/2.1.29/reference/lambda/list-event-source-mappings.html) command. Example:

```
aws lambda list-event-source-mappings --function-name ProcessSQSRecord
```

## Send a test message


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_steps6.png)


**To send an Amazon SQS message to the Lambda function**

1. Open the [Amazon SQS console](https://console.amazonaws.cn/sqs).

1. Choose the queue that you created earlier.

1. Choose **Send and receive messages**.

1. Under **Message body**, enter a test message, such as "this is a test message."

1. Choose **Send message**.

Lambda polls the queue for updates. When there is a new message, Lambda invokes your function with this new event data from the queue. If the function handler returns without exceptions, Lambda considers the message successfully processed and begins reading new messages in the queue. After successfully processing a message, Lambda automatically deletes it from the queue. If the handler throws an exception, Lambda considers the batch of messages not successfully processed and invokes the function again with the same batch.
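If you don't want one failing message to cause the entire batch to be retried, your handler can report partial batch failures instead. This requires including `ReportBatchItemFailures` in the event source mapping's `FunctionResponseTypes` (see [Implementing partial batch responses](services-sqs-errorhandling.md#services-sqs-batchfailurereporting)). A minimal Python sketch, in which `process_message` is a placeholder that assumes JSON message bodies:

```python
import json

def lambda_handler(event, context):
    # Collect the IDs of messages that fail, and return them so that
    # Lambda retries only those messages instead of the whole batch.
    batch_item_failures = []
    for message in event["Records"]:
        try:
            process_message(message)
        except Exception:
            batch_item_failures.append({"itemIdentifier": message["messageId"]})
    return {"batchItemFailures": batch_item_failures}

def process_message(message):
    # Placeholder work: parse the body as JSON; raises if the body isn't JSON.
    payload = json.loads(message["body"])
    print(f"Processed message {payload}")
```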

## Check the CloudWatch logs
Check the logs

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sqs_tut_steps7.png)


**To confirm that the function processed the message**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the **ProcessSQSRecord** function.

1. Choose **Monitor**.

1. Choose **View CloudWatch logs**.

1. In the CloudWatch console, choose the **Log stream** for the function.

1. Find the `INFO` log. This is where the Lambda function logs the message body. You should see the message that you sent from the Amazon SQS queue. Example:

   ```
   2023-09-11T22:49:12.730Z b0c41e9c-0556-5a8b-af83-43e59efeec71 INFO Processed message this is a test message.
   ```

## Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the Amazon SQS queue**

1. Sign in to the Amazon Web Services Management Console and open the Amazon SQS console at [https://console.amazonaws.cn/sqs/](https://console.amazonaws.cn/sqs/).

1. Select the queue you created.

1. Choose **Delete**.

1. Enter **confirm** in the text input field.

1. Choose **Delete**.

# Tutorial: Using a cross-account Amazon SQS queue as an event source
SQS cross-account tutorial

In this tutorial, you create a Lambda function that consumes messages from an Amazon Simple Queue Service (Amazon SQS) queue in a different Amazon account. This tutorial involves two Amazon accounts: **Account A** refers to the account that contains your Lambda function, and **Account B** refers to the account that contains the Amazon SQS queue.

## Prerequisites


### Install the Amazon Command Line Interface


If you have not yet installed the Amazon Command Line Interface, follow the steps at [Installing or updating the latest version of the Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

**Note**  
In Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). 

## Create the execution role (Account A)


In **Account A**, create an [execution role](lambda-intro-execution-role.md) that gives your function permission to access the required Amazon resources.

**To create an execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) in the Amazon Identity and Access Management (IAM) console.

1. Choose **Create role**.

1. Create a role with the following properties.
   + **Trusted entity** – **Amazon Lambda**
   + **Permissions** – **AWSLambdaSQSQueueExecutionRole**
   + **Role name** – **cross-account-lambda-sqs-role**

The **AWSLambdaSQSQueueExecutionRole** policy has the permissions that the function needs to read items from Amazon SQS and to write logs to Amazon CloudWatch Logs.

## Create the function (Account A)


In **Account A**, create a Lambda function that processes your Amazon SQS messages. The Lambda function and the Amazon SQS queue must be in the same Amazon Web Services Region.

The following Node.js code example writes each message to a log in CloudWatch Logs.

**Example index.mjs**  

```
export const handler = async function(event, context) {
  event.Records.forEach(record => {
    const { body } = record;
    console.log(body);
  });
  return {};
}
```

**To create the function**
**Note**  
Following these steps creates a Node.js function. For other languages, the steps are similar, but some details are different.

1. Save the code example as a file named `index.mjs`.

1. Create a deployment package.

   ```
   zip function.zip index.mjs
   ```

1. Create the function using the `create-function` Amazon Command Line Interface (Amazon CLI) command. Replace `arn:aws-cn:iam::111122223333:role/cross-account-lambda-sqs-role` with the ARN of the execution role that you created earlier.

   ```
   aws lambda create-function --function-name CrossAccountSQSExample \
   --zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
   --role arn:aws-cn:iam::111122223333:role/cross-account-lambda-sqs-role
   ```

## Test the function (Account A)


In **Account A**, test your Lambda function manually using the `invoke` Amazon CLI command and a sample Amazon SQS event.

If the handler returns normally without exceptions, Lambda considers the message successfully processed and begins reading new messages in the queue. After successfully processing a message, Lambda automatically deletes it from the queue. If the handler throws an exception, Lambda considers the batch of messages not successfully processed and invokes the function again with the same batch.

1. Save the following JSON as a file named `input.txt`.

   ```
   {
       "Records": [
           {
               "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
               "receiptHandle": "AQEBwJnKyrHigUMZj6rYigCgxlaS3SLy0a...",
               "body": "test",
               "attributes": {
                   "ApproximateReceiveCount": "1",
                   "SentTimestamp": "1545082649183",
                   "SenderId": "AIDAIENQZJOLO23YVJ4VO",
                   "ApproximateFirstReceiveTimestamp": "1545082649185"
               },
               "messageAttributes": {},
               "md5OfBody": "098f6bcd4621d373cade4e832627b4f6",
               "eventSource": "aws:sqs",
               "eventSourceARN": "arn:aws-cn:sqs:us-east-1:111122223333:example-queue",
               "awsRegion": "us-east-1"
           }
       ]
   }
   ```

   The preceding JSON simulates an event that Amazon SQS might send to your Lambda function, where `"body"` contains the actual message from the queue.

1. Run the following `invoke` Amazon CLI command.

   ```
   aws lambda invoke --function-name CrossAccountSQSExample \
   --cli-binary-format raw-in-base64-out \
   --payload file://input.txt outputfile.txt
   ```

   The **cli-binary-format** option is required if you're using Amazon CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [Amazon CLI supported global command line options](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *Amazon Command Line Interface User Guide for Version 2*.

1. Verify the output in the file `outputfile.txt`.
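To see the retry behavior described above in action, you can extend the handler so that it throws for messages it can't process. The following sketch (the empty-body check is a hypothetical validation rule, not part of the tutorial) causes Lambda to return the whole batch to the queue when validation fails:

```
export const handler = async (event) => {
  for (const record of event.Records) {
    // Hypothetical validation: treat an empty body as a processing failure.
    if (!record.body) {
      // Throwing causes Lambda to leave the batch in the queue for retry.
      throw new Error(`Invalid message: ${record.messageId}`);
    }
    console.log(record.body);
  }
  // Returning normally tells Lambda the batch succeeded, so the messages are deleted.
  return {};
};
```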

## Create an Amazon SQS queue (Account B)


In **Account B**, create an Amazon SQS queue that the Lambda function in **Account A** can use as an event source. The Lambda function and the Amazon SQS queue must be in the same Amazon Web Services Region.

**To create a queue**

1. Open the [Amazon SQS console](https://console.amazonaws.cn/sqs).

1. Choose **Create queue**.

1. Create a queue with the following properties.
   + **Type** – **Standard**
   + **Name** – **LambdaCrossAccountQueue**
   + **Configuration** – Keep the default settings.
   + **Access policy** – Choose **Advanced**. Paste in the following JSON policy. Replace the following values:
     + `111122223333`: Amazon Web Services account ID for **Account A**
     + `444455556666`: Amazon Web Services account ID for **Account B**

------
#### [ JSON ]


     ```
     {
         "Version":"2012-10-17",		 	 	 
         "Id": "Queue1_Policy_UUID",
         "Statement": [
             {
                 "Sid": "Queue1_AllActions",
                 "Effect": "Allow",
                 "Principal": {
                     "AWS": [
                         "arn:aws-cn:iam::111122223333:role/cross-account-lambda-sqs-role"
                     ]
                 },
                 "Action": "sqs:*",
                 "Resource": "arn:aws-cn:sqs:us-east-1:444455556666:LambdaCrossAccountQueue"
             }
         ]
     }
     ```

------

     This policy grants the Lambda execution role in **Account A** permissions to consume messages from this Amazon SQS queue.

1. After creating the queue, record its Amazon Resource Name (ARN). You need this in the next step when you associate the queue with your Lambda function.

## Configure the event source (Account A)


In **Account A**, create an event source mapping between the Amazon SQS queue in **Account B** and your Lambda function by running the following `create-event-source-mapping` Amazon CLI command. Replace `arn:aws-cn:sqs:us-east-1:444455556666:LambdaCrossAccountQueue` with the ARN of the Amazon SQS queue that you created in the previous step.

```
aws lambda create-event-source-mapping --function-name CrossAccountSQSExample --batch-size 10 \
--event-source-arn arn:aws-cn:sqs:us-east-1:444455556666:LambdaCrossAccountQueue
```

To get a list of your event source mappings, run the following command.

```
aws lambda list-event-source-mappings --function-name CrossAccountSQSExample \
--event-source-arn arn:aws-cn:sqs:us-east-1:444455556666:LambdaCrossAccountQueue
```

## Test the setup


You can now test the setup as follows:

1. In **Account B**, open the [Amazon SQS console](https://console.amazonaws.cn/sqs).

1. Choose **LambdaCrossAccountQueue**, which you created earlier.

1. Choose **Send and receive messages**.

1. Under **Message body**, enter a test message.

1. Choose **Send message**.

Your Lambda function in **Account A** should receive the message. Lambda continues to poll the queue for new messages. When a new message arrives, Lambda invokes your function with the event data from the queue. Your function runs and writes logs to Amazon CloudWatch. You can view the logs in the [CloudWatch console](https://console.amazonaws.cn/cloudwatch).

## Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

In **Account A**, clean up your execution role and Lambda function.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

In **Account B**, clean up the Amazon SQS queue.

**To delete the Amazon SQS queue**

1. Sign in to the Amazon Web Services Management Console and open the Amazon SQS console at [https://console.amazonaws.cn/sqs/](https://console.amazonaws.cn/sqs/).

1. Select the queue you created.

1. Choose **Delete**.

1. Enter **confirm** in the text input field.

1. Choose **Delete**.

# Orchestrating Lambda functions with Step Functions
Step Functions

Amazon Step Functions provides visual workflow orchestration for coordinating Lambda functions with other Amazon services. With native integrations to over 220 Amazon services and fully managed, zero-maintenance infrastructure, Step Functions is ideal when you need visual workflow design and fully managed service integrations.

For orchestration using standard programming languages within Lambda where workflow logic lives alongside business logic, consider [Lambda durable functions](durable-functions.md). For help choosing between these options, see [Durable functions or Step Functions](durable-step-functions.md).

For example, processing an order might require validating the order details, checking inventory levels, processing payment, and generating an invoice. Write separate Lambda functions for each task and use Step Functions to manage the workflow. Step Functions coordinates the flow of data between your functions and handles errors at each step. This separation makes your workflows easier to visualize, modify, and maintain as they grow more complex.

## When to use Step Functions with Lambda
When to use Step Functions

The following scenarios are good examples of when Step Functions is a particularly good fit for orchestrating Lambda-based applications.
+ [Sequential processing](#sequential-processing)
+ [Complex error handling](#complex-error-handling)
+ [Conditional workflows and human approvals](#conditional-workflows-human-approvals)
+ [Parallel processing](#parallel-processing)

### Sequential processing


Sequential processing is when one task must complete before the next task can begin. For example, in an order processing system, payment processing can't begin until order validation is complete, and invoice generation must wait for payment confirmation. Write separate Lambda functions for each task and use Step Functions to manage the sequence and handle data flow between functions.

#### Anti-pattern example


A single Lambda function manages the entire order processing workflow by:
+ Invoking other Lambda functions in sequence
+ Parsing and validating responses from each function
+ Implementing error handling and recovery logic
+ Managing the flow of data between functions

#### Recommended approach


Use two Lambda functions: one to validate the order and one to process the payment. Step Functions coordinates these functions by:
+ Running tasks in the correct sequence
+ Passing data between functions
+ Implementing error handling at each step
+ Using [Choice](https://docs.amazonaws.cn/step-functions/latest/dg/state-choice.html) states to ensure only valid orders proceed to payment

**Example workflow graph**  

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/sequential_workflow.png)


**Note**  
**Code-first alternative:** For sequential processing with code-based checkpointing and retry, see [Lambda durable functions steps](durable-basic-concepts.md).

### Complex error handling


While Lambda provides [retry capabilities for asynchronous invocations and event source mappings](invocation-retries.md), Step Functions offers more sophisticated error handling for complex workflows. You can [configure automatic retries](https://docs.amazonaws.cn/step-functions/latest/dg/concepts-error-handling.html#error-handling-retrying-after-an-error) with exponential backoff and set different retry policies for different types of errors. When retries are exhausted, use `Catch` to route errors to a [fallback state](https://docs.amazonaws.cn/step-functions/latest/dg/concepts-error-handling.html#error-handling-fallback-states). This is particularly useful when you need workflow-level error handling that coordinates multiple functions and services.

To learn more about handling Lambda function errors in a state machine, see [Handling errors](https://catalog.workshops.aws/stepfunctions/handling-errors) in *The Amazon Step Functions Workshop*.
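In a state machine definition, that combination of `Retry` with exponential backoff and a `Catch` fallback looks like the following sketch (the state names, the fallback, and the function ARN are illustrative):

```
{
    "StartAt": "ProcessPayment",
    "States": {
        "ProcessPayment": {
            "Type": "Task",
            "Resource": "arn:aws-cn:lambda:us-east-1:111122223333:function:ProcessPayment",
            "Retry": [
                {
                    "ErrorEquals": ["Lambda.ServiceException", "Lambda.TooManyRequestsException"],
                    "IntervalSeconds": 2,
                    "MaxAttempts": 3,
                    "BackoffRate": 2.0
                }
            ],
            "Catch": [
                {
                    "ErrorEquals": ["States.ALL"],
                    "Next": "PaymentFallback"
                }
            ],
            "End": true
        },
        "PaymentFallback": {
            "Type": "Pass",
            "Result": "Payment failed; routed to fallback",
            "End": true
        }
    }
}
```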

#### Anti-pattern example


A single Lambda function handles all of the following:
+ Attempts to call a payment processing service
+ Waits and tries again later if the payment service is unavailable
+ Implements custom exponential backoff logic for the wait time
+ Catches the error and routes to another flow after all attempts fail

#### Recommended approach


Use a single Lambda function focused solely on payment processing. Step Functions manages error handling by:
+ Automatically [retrying failed tasks with configurable backoff periods](https://docs.amazonaws.cn/step-functions/latest/dg/concepts-error-handling.html#error-handling-retrying-after-an-error)
+ Applying different retry policies based on error types
+ Routing different types of errors to appropriate fallback states
+ Maintaining error handling state and history

**Example workflow graph**  

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/error_handling_workflow.png)


**Note**  
**Code-first alternative:** Durable functions provide try-catch error handling with configurable retry strategies. See [Error handling in durable functions](durable-execution-sdk-retries.md).

### Conditional workflows and human approvals


Use the Step Functions [Choice state](https://docs.amazonaws.cn/step-functions/latest/dg/state-choice.html) to route workflows based on function output and the [waitForTaskToken suffix](https://docs.amazonaws.cn/step-functions/latest/dg/connect-to-resource.html#connect-wait-token) to pause workflows for human decisions. For example, to process a credit limit increase request, use a Lambda function to evaluate risk factors. Then, use Step Functions to route high-risk requests to manual approval and low-risk requests to automatic approval.

To deploy an example workflow that uses a callback task token integration pattern, see [Callback with Task Token](https://catalog.workshops.aws/stepfunctions/integrating-services/3-callback-token) in *The Amazon Step Functions Workshop*. 
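As a sketch, the routing and pause described above combine a `Choice` state with a task token callback (the state names, the `$.riskLevel` field, and the `NotifyApprover` function are illustrative):

```
"CheckRiskLevel": {
    "Type": "Choice",
    "Choices": [
        {
            "Variable": "$.riskLevel",
            "StringEquals": "HIGH",
            "Next": "WaitForManualApproval"
        }
    ],
    "Default": "AutoApprove"
},
"WaitForManualApproval": {
    "Type": "Task",
    "Resource": "arn:aws-cn:states:::lambda:invoke.waitForTaskToken",
    "Parameters": {
        "FunctionName": "NotifyApprover",
        "Payload": { "TaskToken.$": "$$.Task.Token" }
    },
    "End": true
}
```

The workflow pauses at `WaitForManualApproval` until something calls `SendTaskSuccess` or `SendTaskFailure` with the token.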

#### Anti-pattern example


A single Lambda function manages a complex approval workflow by:
+ Implementing nested conditional logic to evaluate credit requests
+ Invoking different approval functions based on request amounts
+ Managing multiple approval paths and decision points
+ Tracking the state of pending approvals
+ Implementing timeout and notification logic for approvals

#### Recommended approach


Use three Lambda functions: one to evaluate the risk of each request, one to approve low-risk requests, and one to route high-risk requests to a manager for review. Step Functions manages the workflow by:
+ Using [Choice](https://docs.amazonaws.cn/step-functions/latest/dg/state-choice.html) states to route requests based on amount and risk level
+ Pausing execution while waiting for human approval
+ Managing timeouts for pending approvals
+ Providing visibility into the current state of each request

**Example workflow graph**  

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/conditional_workflow.png)


**Note**  
**Code-first alternative:** Durable functions support callbacks for human-in-the-loop workflows. See [Callbacks in durable functions](durable-execution-sdk.md).

### Parallel processing


Step Functions provides three ways to handle parallel processing:
+ The [Parallel state](https://docs.amazonaws.cn/step-functions/latest/dg/state-parallel.html) executes multiple branches of your workflow simultaneously. Use this when you need to run different functions in parallel, such as generating thumbnails while extracting image metadata.
+ The [Inline Map state](https://docs.amazonaws.cn/step-functions/latest/dg/state-map-inline.html) processes arrays of data with up to 40 concurrent iterations. Use this for small to medium datasets where you need to perform the same operation on each item.
+ The [Distributed Map state](https://docs.amazonaws.cn/step-functions/latest/dg/state-map-distributed.html) handles large-scale parallel processing with up to 10,000 concurrent executions, supporting both JSON arrays and Amazon Simple Storage Service (Amazon S3) data sources. Use this when processing large datasets or when you need higher concurrency.
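For the first option, a `Parallel` state runs each branch at the same time and collects the results into an ordered array. A minimal sketch (function names and ARNs are illustrative):

```
"ProcessImage": {
    "Type": "Parallel",
    "Branches": [
        {
            "StartAt": "CreateThumbnail",
            "States": {
                "CreateThumbnail": {
                    "Type": "Task",
                    "Resource": "arn:aws-cn:lambda:us-east-1:111122223333:function:CreateThumbnail",
                    "End": true
                }
            }
        },
        {
            "StartAt": "ExtractMetadata",
            "States": {
                "ExtractMetadata": {
                    "Type": "Task",
                    "Resource": "arn:aws-cn:lambda:us-east-1:111122223333:function:ExtractMetadata",
                    "End": true
                }
            }
        }
    ],
    "End": true
}
```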

#### Anti-pattern example


A single Lambda function attempts to manage parallel processing by:
+ Simultaneously invoking multiple image processing functions
+ Implementing custom parallel execution logic
+ Managing timeouts and error handling for each parallel task
+ Collecting and aggregating results from all functions

#### Recommended approach


Use three Lambda functions: one to create a thumbnail image, one to add a watermark, and one to extract the metadata. Step Functions manages these functions by:
+ Running all functions simultaneously using the [Parallel](https://docs.amazonaws.cn/step-functions/latest/dg/state-parallel.html) state
+ Collecting results from each function into an ordered array
+ Managing timeouts and error handling across all parallel executions
+ Proceeding only when all parallel branches complete

**Example workflow graph**  

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/parallel_workflow.png)


**Note**  
**Code-first alternative:** Durable functions provide `parallel()` and `map()` operations. See [Parallel execution](durable-execution-sdk.md).

## When not to use Step Functions with Lambda
When not to use Step Functions

Not all Lambda-based applications benefit from using Step Functions. Consider these scenarios when choosing your application architecture.
+ [Simple applications](#simple-applications)
+ [Complex data processing](#complex-data-processing)
+ [CPU-intensive workloads](#cpu-intensive)

### Simple applications


**Note**  
For workflows that don't require visual design or extensive service integrations, [Lambda durable functions](durable-functions.md) may be a simpler alternative that keeps workflow logic in code within Lambda.

For applications that don't require complex orchestration, using Step Functions might add unnecessary complexity. For example, if you're simply processing messages from an Amazon SQS queue or responding to Amazon EventBridge events, you can configure these services to invoke your Lambda functions directly. Similarly, if your application consists of only one or two Lambda functions with straightforward error handling, direct Lambda invocation or event-driven architectures might be simpler to deploy and maintain.

### Complex data processing


You can use the Step Functions [Distributed Map](https://docs.amazonaws.cn/step-functions/latest/dg/state-map-distributed.html) state to concurrently process large Amazon S3 datasets with Lambda functions. This is effective for many large-scale parallel workloads, including processing semi-structured data like JSON or CSV files. However, for more complex data transformations or advanced analytics, consider these alternatives:
+ **Data transformation pipelines**: Use Amazon Glue for ETL jobs that process structured or semi-structured data from multiple sources. Amazon Glue is particularly useful when you need built-in data catalog and schema management capabilities.
+ **Data analytics:** Use Amazon EMR for petabyte-scale data analytics, especially when you need Apache Hadoop ecosystem tools or for machine learning workloads that exceed Lambda's [memory](configuration-memory.md) limits.

### CPU-intensive workloads


While Step Functions can orchestrate CPU-intensive tasks, Lambda functions may not be suitable for these workloads due to their limited CPU resources. For computationally intensive operations within your workflows, consider these alternatives:
+ **Container orchestration:** Use Step Functions to manage Amazon Elastic Container Service (Amazon ECS) tasks for more consistent and scalable compute resources.
+ **Batch processing:** Integrate Amazon Batch with Step Functions for managing compute-intensive batch jobs that require sustained CPU usage.

# Invoke a Lambda function with Amazon S3 batch events
S3 Batch

You can use Amazon S3 batch operations to invoke a Lambda function on a large set of Amazon S3 objects. Amazon S3 tracks the progress of batch operations, sends notifications, and stores a completion report that shows the status of each action. 

To run a batch operation, you create an Amazon S3 [batch operations job](https://docs.amazonaws.cn/AmazonS3/latest/dev/batch-ops-operations.html). When you create the job, you provide a manifest (the list of objects) and configure the action to perform on those objects. 

When the batch job starts, Amazon S3 invokes the Lambda function [synchronously](invocation-sync.md) for each object in the manifest. The event parameter includes the names of the bucket and the object. 

The following example shows the event that Amazon S3 sends to the Lambda function for an object that is named **customerImage1.jpg** in the **amzn-s3-demo-bucket** bucket.

**Example Amazon S3 batch request event**  

```
{
"invocationSchemaVersion": "1.0",
    "invocationId": "YXNkbGZqYWRmaiBhc2RmdW9hZHNmZGpmaGFzbGtkaGZza2RmaAo",
    "job": {
        "id": "f3cc4f60-61f6-4a2b-8a21-d07600c373ce"
    },
    "tasks": [
        {
            "taskId": "dGFza2lkZ29lc2hlcmUK",
            "s3Key": "customerImage1.jpg",
            "s3VersionId": "1",
            "s3BucketArn": "arn:aws:s3:::amzn-s3-demo-bucket"
        }
    ]  
}
```

Your Lambda function must return a JSON object with the fields as shown in the following example. You can copy the `invocationId` and `taskId` from the event parameter. You can return a string in the `resultString` field. Amazon S3 saves the `resultString` values in the completion report. 

**Example Amazon S3 batch request response**  

```
{
  "invocationSchemaVersion": "1.0",
  "treatMissingKeysAs" : "PermanentFailure",
  "invocationId" : "YXNkbGZqYWRmaiBhc2RmdW9hZHNmZGpmaGFzbGtkaGZza2RmaAo",
  "results": [
    {
      "taskId": "dGFza2lkZ29lc2hlcmUK",
      "resultCode": "Succeeded",
      "resultString": "[\"Alice\", \"Bob\"]"
    }
  ]
}
```
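The following Node.js sketch ties the two shapes together: it reads the task from the event and echoes back the identifiers Amazon S3 expects. The per-object work is hypothetical; replace it with your own processing logic.

```
export const handler = async (event) => {
  const task = event.tasks[0];
  let resultCode = "Succeeded";
  let resultString = "";
  try {
    // Hypothetical per-object work; replace with your own processing logic.
    resultString = `Processed ${task.s3Key} in ${task.s3BucketArn}`;
  } catch (err) {
    // "TemporaryFailure" tells Amazon S3 to retry this object;
    // "PermanentFailure" marks it as failed in the completion report.
    resultCode = "TemporaryFailure";
    resultString = String(err);
  }
  return {
    invocationSchemaVersion: "1.0",
    treatMissingKeysAs: "PermanentFailure",
    invocationId: event.invocationId,
    results: [
      {
        taskId: task.taskId,
        resultCode,
        resultString,
      },
    ],
  };
};
```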

## Invoking Lambda functions from Amazon S3 batch operations


You can invoke the Lambda function with an unqualified or qualified function ARN. If you want to use the same function version for the entire batch job, configure a specific function version in the `FunctionARN` parameter when you create your job. If you configure an alias or the `$LATEST` qualifier, the batch job immediately starts calling the new version of the function if the alias or `$LATEST` is updated during the job execution. 

Note that you can't reuse an existing Amazon S3 event-based function for batch operations. This is because the Amazon S3 batch operation passes a different event parameter to the Lambda function and expects a return message with a specific JSON structure.

In the [resource-based policy](access-control-resource-based.md) that you create for the Amazon S3 batch job, ensure that you set permission for the job to invoke your Lambda function.

In the execution role for the function, set a [trust policy for Amazon S3 to assume the role when it runs your function](https://docs.amazonaws.cn/AmazonS3/latest/userguide/batch-ops-iam-role-policies.html).

If your function uses the Amazon SDK to manage Amazon S3 resources, you need to add Amazon S3 permissions in the execution role. 

When the job runs, Amazon S3 starts multiple function instances to process the Amazon S3 objects in parallel, up to the [concurrency limit](lambda-concurrency.md) of the function. Amazon S3 limits the initial ramp-up of instances to avoid excess cost for smaller jobs. 

If the Lambda function returns a `TemporaryFailure` response code, Amazon S3 retries the operation. 

For more information about Amazon S3 batch operations, see [Performing batch operations](https://docs.amazonaws.cn/AmazonS3/latest/dev/batch-ops.html) in the *Amazon S3 Developer Guide*. 

For an example of how to use a Lambda function in Amazon S3 batch operations, see [Invoking a Lambda function from Amazon S3 batch operations](https://docs.amazonaws.cn/AmazonS3/latest/dev/batch-ops-invoke-lambda.html) in the *Amazon S3 Developer Guide*. 

# Invoking Lambda functions with Amazon SNS notifications
SNS

You can use a Lambda function to process Amazon Simple Notification Service (Amazon SNS) notifications. Amazon SNS supports Lambda functions as a target for messages sent to a topic. You can subscribe your function to topics in the same account or in other Amazon accounts. For a detailed walkthrough, see [Tutorial: Using Amazon Lambda with Amazon Simple Notification Service](with-sns-example.md).

Lambda supports SNS triggers for standard SNS topics only. FIFO topics aren't supported.

Lambda processes SNS messages asynchronously by queuing the messages and handling retries. If Amazon SNS can't reach Lambda or the message is rejected, Amazon SNS retries at increasing intervals over several hours. For details, see [Reliability](https://www.amazonaws.cn/sns/faqs/#Reliability) in the Amazon SNS FAQs.

**Warning**  
Lambda asynchronous invocations process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see [How do I make my Lambda function idempotent?](https://repost.aws/knowledge-center/lambda-function-idempotent) in the Amazon Knowledge Center.

## Idempotency utility from Powertools for Amazon Lambda


The idempotency utility from Powertools for Amazon Lambda makes your Lambda functions idempotent. It is available for Python, TypeScript, Java, and .NET. For more information, see [Idempotency utility](https://docs.powertools.aws.dev/lambda/python/latest/utilities/idempotency/) in the *Powertools for Amazon Lambda (Python) documentation*, [Idempotency Utility](https://docs.aws.amazon.com/powertools/typescript/2.1.1/utilities/idempotency/) in the *Powertools for Amazon Lambda (TypeScript) documentation*, [Idempotency Utility](https://docs.powertools.aws.dev/lambda/java/latest/utilities/idempotency/) in the *Powertools for Amazon Lambda (Java) documentation*, and [Idempotency Utility](https://docs.powertools.aws.dev/lambda/dotnet/utilities/idempotency/) in the *Powertools for Amazon Lambda (.NET) documentation*.
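As a minimal illustration of the idea, the following sketch skips SNS records whose `MessageId` it has already seen. The in-memory `Set` is for demonstration only; because Lambda execution environments are recycled, production code needs a durable persistence layer such as DynamoDB, which is what the Powertools utility provides.

```
const processed = new Set(); // Demonstration only; use a durable store in production.

export const handler = async (event) => {
  let handled = 0;
  for (const record of event.Records) {
    const id = record.Sns.MessageId;
    if (processed.has(id)) {
      continue; // Duplicate delivery: skip the record.
    }
    console.log(record.Sns.Message); // Do the actual work here.
    processed.add(id);
    handled += 1;
  }
  return handled;
};
```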

**Topics**
+ [Idempotency utility from Powertools for Amazon Lambda](#services-sns-powertools-idempotency)
+ [Adding an Amazon SNS topic trigger for a Lambda function using the console](#sns-trigger-console)
+ [Manually adding an Amazon SNS topic trigger for a Lambda function](#sns-trigger-manual)
+ [Sample SNS event shape](#sns-sample-event)
+ [Tutorial: Using Amazon Lambda with Amazon Simple Notification Service](with-sns-example.md)

## Adding an Amazon SNS topic trigger for a Lambda function using the console


The easiest way to add an SNS topic as a trigger for a Lambda function is to use the Lambda console. When you add the trigger in the console, Lambda automatically sets up the necessary permissions and subscriptions so that your function starts receiving events from the SNS topic.

**To add an SNS topic as a trigger for a Lambda function (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of a function you want to add the trigger for.

1. Choose **Configuration**, and then choose **Triggers**.

1. Choose **Add trigger**.

1. Under **Trigger configuration**, in the dropdown menu, choose **SNS**.

1. For **SNS topic**, choose the SNS topic to subscribe to.

## Manually adding an Amazon SNS topic trigger for a Lambda function


To set up an SNS trigger for a Lambda function manually, you need to complete the following steps:
+ Define a resource-based policy for your function to allow SNS to invoke it.
+ Subscribe your Lambda function to the Amazon SNS topic.
**Note**  
If your SNS topic and your Lambda function are in different Amazon accounts, you also need to grant extra permissions to allow cross-account subscriptions to the SNS topic. For more information, see [Grant cross-account permission for Amazon SNS subscription](with-sns-example.md#with-sns-subscription-grant-permission).

You can use the Amazon Command Line Interface (Amazon CLI) to complete both of these steps. First, to define a resource-based policy for a Lambda function that allows SNS invocations, use the following Amazon CLI command. Be sure to replace the value of `--function-name` with your Lambda function name, and the value of `--source-arn` with your SNS topic ARN.

```
aws lambda add-permission --function-name example-function \
    --source-arn arn:aws:sns:us-east-1:123456789012:sns-topic-for-lambda \
    --statement-id function-with-sns --action "lambda:InvokeFunction" \
    --principal sns.amazonaws.com
```

To subscribe your function to the SNS topic, use the following Amazon CLI command. Replace the value of `--topic-arn` with your SNS topic ARN, and the value of `--notification-endpoint` with your Lambda function ARN.

```
aws sns subscribe --protocol lambda \
    --region us-east-1 \
    --topic-arn arn:aws:sns:us-east-1:123456789012:sns-topic-for-lambda \
    --notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:example-function
```

## Sample SNS event shape


Amazon SNS invokes your function [asynchronously](invocation-async.md) with an event that contains a message and metadata.

**Example Amazon SNS message event**  

```
{
  "Records": [
    {
      "EventVersion": "1.0",
      "EventSubscriptionArn": "arn:aws-cn:sns:cn-north-1:123456789012:sns-lambda:21be56ed-a058-49f5-8c98-aedd2564c486",
      "EventSource": "aws:sns",
      "Sns": {
        "SignatureVersion": "1",
        "Timestamp": "2019-01-02T12:45:07.000Z",
        "Signature": "tcc6faL2yUC6dgZdmrwh1Y4cGa/ebXEkAi6RibDsvpi+tE/1+82j...65r==",
        "SigningCertUrl": "https://sns.cn-north-1.amazonaws.com.cn/SimpleNotificationService-ac565b8b1a6c5d002d285f9598aa1d9b.pem",
        "MessageId": "95df01b4-ee98-5cb9-9903-4c221d41eb5e",
        "Message": "Hello from SNS!",
        "MessageAttributes": {
          "Test": {
            "Type": "String",
            "Value": "TestString"
          },
          "TestBinary": {
            "Type": "Binary",
            "Value": "TestBinary"
          }
        },
        "Type": "Notification",
        "UnsubscribeUrl": "https://sns.cn-north-1.amazonaws.com.cn/?Action=Unsubscribe&amp;SubscriptionArn=arn:aws-cn:sns:cn-north-1:123456789012:test-lambda:21be56ed-a058-49f5-8c98-aedd2564c486",
        "TopicArn":"arn:aws-cn:sns:cn-north-1:123456789012:sns-lambda",
        "Subject": "TestInvoke"
      }
    }
  ]
}
```
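A handler for this event shape typically unpacks `Sns.Message` from each record, as in the following sketch (returning the collected messages is just for illustration):

```
export const handler = async (event) => {
  const messages = event.Records.map((record) => record.Sns.Message);
  for (const message of messages) {
    console.log(message); // Written to CloudWatch Logs
  }
  return messages;
};
```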

# Tutorial: Using Amazon Lambda with Amazon Simple Notification Service
Tutorial

In this tutorial, you use a Lambda function in one Amazon Web Services account to subscribe to an Amazon Simple Notification Service (Amazon SNS) topic in a separate Amazon Web Services account. When you publish messages to your Amazon SNS topic, your Lambda function reads the contents of the message and outputs it to Amazon CloudWatch Logs. To complete this tutorial, you use the Amazon Command Line Interface (Amazon CLI).

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_resources.png)


To complete this tutorial, you perform the following steps:
+ In **account A**, create an Amazon SNS topic.
+ In **account B**, create a Lambda function that will read messages from the topic.
+ In **account B**, create a subscription to the topic.
+ Publish messages to the Amazon SNS topic in **account A** and confirm that the Lambda function in **account B** outputs them to CloudWatch Logs.

By completing these steps, you will learn how to configure an Amazon SNS topic to invoke a Lambda function. You will also learn how to create an Amazon Identity and Access Management (IAM) policy that gives permission for a resource in another Amazon Web Services account to invoke Lambda.

In the tutorial, you use two separate Amazon Web Services accounts. The Amazon CLI commands illustrate this by using two named profiles called `accountA` and `accountB`, each configured for use with a different Amazon Web Services account. To learn how to configure the Amazon CLI to use different profiles, see [Configuration and credential file settings](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-files.html) in the *Amazon Command Line Interface User Guide for Version 2*. Be sure to configure the same default Amazon Web Services Region for both profiles.

If the Amazon CLI profiles you create for the two Amazon Web Services accounts use different names, or if you use the default profile and one named profile, modify the Amazon CLI commands in the following steps as needed.

## Prerequisites


### Install the Amazon Command Line Interface


If you have not yet installed the Amazon Command Line Interface, follow the steps at [Installing or updating the latest version of the Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html) to install it.

The tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

**Note**  
In Windows, some Bash CLI commands that you commonly use with Lambda (such as `zip`) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, [install the Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/install-win10). 

## Create an Amazon SNS topic (account A)


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_steps_1.png)


**To create the topic**
+ In **account A**, create an Amazon SNS standard topic using the following Amazon CLI command.

  ```
  aws sns create-topic --name sns-topic-for-lambda --profile accountA
  ```

  You should see output similar to the following.

  ```
  {
      "TopicArn": "arn:aws:sns:us-west-2:123456789012:sns-topic-for-lambda"
  }
  ```

  Make a note of the Amazon Resource Name (ARN) of your topic. You’ll need it later in the tutorial when you add permissions to your Lambda function to subscribe to the topic.

## Create a function execution role (account B)


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_steps_2.png)


An execution role is an IAM role that grants a Lambda function permission to access Amazon Web Services services and resources. Before you create your function in **account B**, you create a role that gives the function basic permissions to write logs to CloudWatch Logs. You add the permissions related to your Amazon SNS topic in a later step.

**To create an execution role**

1. In **account B**, open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) in the IAM console.

1. Choose **Create role**.

1. For **Trusted entity type**, choose **Amazon service**.

1. For **Use case**, choose **Lambda**.

1. Choose **Next**.

1. Add a basic permissions policy to the role by doing the following:

   1. In the **Permissions policies** search box, enter **AWSLambdaBasicExecutionRole**.

   1. Choose **Next**.

1. Finalize the role creation by doing the following:

   1. Under **Role details**, enter **lambda-sns-role** for **Role name**.

   1. Choose **Create role**.

## Create a Lambda function (account B)


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_steps_3.png)


Create a Lambda function that processes your Amazon SNS messages. The function code logs the message contents of each record to Amazon CloudWatch Logs.

This tutorial uses the Node.js 24 runtime, but we've also provided example code in other runtime languages. You can select the tab in the following box to see code for the runtime you're interested in. The JavaScript code you'll use in this step is in the first example shown in the **JavaScript** tab.

------
#### [ .NET ]

**Amazon SDK for .NET**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using .NET.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;


// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace SnsIntegration;

public class Function
{
    public async Task FunctionHandler(SNSEvent evnt, ILambdaContext context)
    {
        foreach (var record in evnt.Records)
        {
            await ProcessRecordAsync(record, context);
        }
        context.Logger.LogInformation("done");
    }

    private async Task ProcessRecordAsync(SNSEvent.SNSRecord record, ILambdaContext context)
    {
        try
        {
            context.Logger.LogInformation($"Processed record {record.Sns.Message}");

            // TODO: Do interesting work based on the new message
            await Task.CompletedTask;
        }
        catch (Exception e)
        {
            // You can handle failures by configuring a dead-letter queue (DLQ) for the Lambda function.
            context.Logger.LogError($"An error occurred: {e.Message}");
            throw;
        }
    }
}
```

------
#### [ Go ]

**SDK for Go V2**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using Go.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context, snsEvent events.SNSEvent) {
	for _, record := range snsEvent.Records {
		processMessage(record)
	}
	fmt.Println("done")
}

func processMessage(record events.SNSEventRecord) {
	message := record.SNS.Message
	fmt.Printf("Processed message: %s\n", message)
	// TODO: Process your record here
}

func main() {
	lambda.Start(handler)
}
```

------
#### [ Java ]

**SDK for Java 2.x**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using Java.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;
import com.amazonaws.services.lambda.runtime.events.SNSEvent.SNSRecord;


import java.util.Iterator;
import java.util.List;

public class SNSEventHandler implements RequestHandler<SNSEvent, Boolean> {
    LambdaLogger logger;

    @Override
    public Boolean handleRequest(SNSEvent event, Context context) {
        logger = context.getLogger();
        List<SNSRecord> records = event.getRecords();
        if (!records.isEmpty()) {
            Iterator<SNSRecord> recordsIter = records.iterator();
            while (recordsIter.hasNext()) {
                processRecord(recordsIter.next());
            }
        }
        return Boolean.TRUE;
    }

    public void processRecord(SNSRecord record) {
        try {
            String message = record.getSNS().getMessage();
            logger.log("message: " + message);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

}
```

------
#### [ JavaScript ]

**SDK for JavaScript (v3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/blob/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using JavaScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
exports.handler = async (event, context) => {
  for (const record of event.Records) {
    await processMessageAsync(record);
  }
  console.info("done");
};

async function processMessageAsync(record) {
  try {
    const message = record.Sns.Message;
    console.log(`Processed message ${message}`);
    await Promise.resolve(1); //Placeholder for actual async work
  } catch (err) {
    console.error("An error occurred");
    throw err;
  }
}
```
Consuming an SNS event with Lambda using TypeScript.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { SNSEvent, Context, SNSHandler, SNSEventRecord } from "aws-lambda";

export const functionHandler: SNSHandler = async (
  event: SNSEvent,
  context: Context
): Promise<void> => {
  for (const record of event.Records) {
    await processMessageAsync(record);
  }
  console.info("done");
};

async function processMessageAsync(record: SNSEventRecord): Promise<any> {
  try {
    const message: string = record.Sns.Message;
    console.log(`Processed message ${message}`);
    await Promise.resolve(1); //Placeholder for actual async work
  } catch (err) {
    console.error("An error occurred");
    throw err;
  }
}
```

------
#### [ PHP ]

**SDK for PHP**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using PHP.  

```
<?php
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

/* 
Lambda does not provide a native PHP runtime, so this example uses Bref's PHP functions runtime for AWS Lambda.
For more information on Bref's PHP runtime for Lambda, refer to: https://bref.sh/docs/runtimes/function

Another approach would be to create a custom runtime. 
A practical example can be found here: https://aws.amazon.com/blogs/apn/aws-lambda-custom-runtime-for-php-a-practical-example/
*/

// Additional composer packages may be required when using Bref or any other PHP functions runtime.
// require __DIR__ . '/vendor/autoload.php';

use Bref\Context\Context;
use Bref\Event\Sns\SnsEvent;
use Bref\Event\Sns\SnsHandler;

class Handler extends SnsHandler
{
    public function handleSns(SnsEvent $event, Context $context): void
    {
        foreach ($event->getRecords() as $record) {
            $message = $record->getMessage();

            // TODO: Implement your custom processing logic here
            // Any exception thrown will be logged and the invocation will be marked as failed

            echo "Processed Message: $message" . PHP_EOL;
        }
    }
}

return new Handler();
```

------
#### [ Python ]

**SDK for Python (Boto3)**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using Python.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
def lambda_handler(event, context):
    for record in event['Records']:
        process_message(record)
    print("done")

def process_message(record):
    try:
        message = record['Sns']['Message']
        print(f"Processed message {message}")
        # TODO: Process your record here
        
    except Exception as e:
        print("An error occurred")
        raise e
```

------
#### [ Ruby ]

**SDK for Ruby**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using Ruby.  

```
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
def lambda_handler(event:, context:)
  event['Records'].map { |record| process_message(record) }
end

def process_message(record)
  message = record['Sns']['Message']
  puts("Processing message: #{message}")
rescue StandardError => e
  puts("Error processing message: #{e}")
  raise
end
```

------
#### [ Rust ]

**SDK for Rust**  
 There's more on GitHub. Find the complete example and learn how to set up and run in the [Serverless examples](https://github.com/aws-samples/serverless-snippets/tree/main/integration-sns-to-lambda) repository. 
Consuming an SNS event with Lambda using Rust.  

```
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::sns::SnsEvent;
use aws_lambda_events::sns::SnsRecord;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use tracing::info;

// Built with the following dependencies:
//  aws_lambda_events = { version = "0.10.0", default-features = false, features = ["sns"] }
//  lambda_runtime = "0.8.1"
//  tokio = { version = "1", features = ["macros"] }
//  tracing = { version = "0.1", features = ["log"] }
//  tracing-subscriber = { version = "0.3", default-features = false, features = ["fmt"] }

async fn function_handler(event: LambdaEvent<SnsEvent>) -> Result<(), Error> {
    for event in event.payload.records {
        process_record(&event)?;
    }
    
    Ok(())
}

fn process_record(record: &SnsRecord) -> Result<(), Error> {
    info!("Processing SNS Message: {}", record.sns.message);

    // Implement your record handling code here.

    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    run(service_fn(function_handler)).await
}
```

------
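Whichever runtime you choose, Lambda passes the handler an event containing a `Records` array, and each record's `Sns` object holds the published message. As a quick local sanity check (not part of the tutorial deployment), you can feed a hand-written, abbreviated event of that shape to the same logic the Python example uses:

```
# Abbreviated SNS event of the shape Lambda delivers to the function.
# Real events carry additional fields (MessageId, Timestamp, Signature, and so on).
sample_event = {
    "Records": [
        {
            "EventSource": "aws:sns",
            "Sns": {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:sns-topic-for-lambda",
                "Subject": "Test",
                "Message": "Hello World",
            },
        }
    ]
}

def lambda_handler(event, context):
    # Same traversal as the Python example above: log each record's message.
    # (This variant also collects the messages so the result is easy to check.)
    messages = []
    for record in event["Records"]:
        message = record["Sns"]["Message"]
        print(f"Processed message {message}")
        messages.append(message)
    print("done")
    return messages

messages = lambda_handler(sample_event, None)
```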

**To create the function**

1. Create a directory for the project, and then switch to that directory.

   ```
   mkdir sns-tutorial
   cd sns-tutorial
   ```

1. Copy the sample JavaScript code into a new file named `index.js`.

1. Create a deployment package using the following `zip` command.

   ```
   zip function.zip index.js
   ```

1. Run the following Amazon CLI command to create your Lambda function in **account B**.

   ```
   aws lambda create-function --function-name Function-With-SNS \
       --zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
       --role arn:aws:iam::<AccountB_ID>:role/lambda-sns-role  \
       --timeout 60 --profile accountB
   ```

   You should see output similar to the following.

   ```
   {
       "FunctionName": "Function-With-SNS",
       "FunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:Function-With-SNS",
       "Runtime": "nodejs24.x",
       "Role": "arn:aws:iam::123456789012:role/lambda-sns-role",
       "Handler": "index.handler",
       ...
       "RuntimeVersionConfig": {
           "RuntimeVersionArn": "arn:aws:lambda:us-east-1::runtime:7d5f06b69c951da8a48b926ce280a9daf2e8bb1a74fc4a2672580c787d608206"
       }
   }
   ```

1. Record the Amazon Resource Name (ARN) of your function. You’ll need it later in the tutorial when you add permissions to allow Amazon SNS to invoke your function.

## Add permissions to function (account B)


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_steps_4.png)


For Amazon SNS to invoke your function, you must grant it permission in a statement on the function’s [resource-based policy](access-control-resource-based.md). You add this statement using the Amazon CLI `add-permission` command.

**To grant Amazon SNS permission to invoke your function**
+ In **account B**, run the following Amazon CLI command, using the ARN of the Amazon SNS topic that you recorded earlier.

  ```
  aws lambda add-permission --function-name Function-With-SNS \
      --source-arn arn:aws:sns:us-east-1:<AccountA_ID>:sns-topic-for-lambda \
      --statement-id function-with-sns --action "lambda:InvokeFunction" \
      --principal sns.amazonaws.com --profile accountB
  ```

  You should see output similar to the following.

  ```
  {
      "Statement": "{\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":
        \"arn:aws:sns:us-east-1:<AccountA_ID>:sns-topic-for-lambda\"}},
        \"Action\":[\"lambda:InvokeFunction\"],
        \"Resource\":\"arn:aws:lambda:us-east-1:<AccountB_ID>:function:Function-With-SNS\",
        \"Effect\":\"Allow\",\"Principal\":{\"Service\":\"sns.amazonaws.com\"},
        \"Sid\":\"function-with-sns\"}"
  }
  ```

**Note**  
If the account with the Amazon SNS topic is hosted in an [opt-in Amazon Web Services Region](https://docs.amazonaws.cn/accounts/latest/reference/manage-acct-regions.html), you need to specify the region in the principal. For example, if you're working with an Amazon SNS topic in the Asia Pacific (Hong Kong) region, you need to specify `sns.ap-east-1.amazonaws.com` instead of `sns.amazonaws.com` for the principal. 
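The `Statement` value in the `add-permission` output is returned as an escaped JSON string. If you want to inspect the policy statement, you can decode it; a sketch in Python, using placeholder account IDs:

```
import json

# Escaped policy statement of the shape returned by add-permission
# (placeholder account IDs).
response = {
    "Statement": "{\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:sns:us-east-1:111111111111:sns-topic-for-lambda\"}},\"Action\":[\"lambda:InvokeFunction\"],\"Resource\":\"arn:aws:lambda:us-east-1:222222222222:function:Function-With-SNS\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"sns.amazonaws.com\"},\"Sid\":\"function-with-sns\"}"
}

# Decode the inner JSON string into a dictionary for readable printing.
statement = json.loads(response["Statement"])
print(json.dumps(statement, indent=2))
```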

## Grant cross-account permission for Amazon SNS subscription (account A)


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_steps_5.png)


For your Lambda function in **account B** to subscribe to the Amazon SNS topic you created in **account A**, you must grant **account B** permission to subscribe to the topic. You grant this permission using the Amazon CLI `add-permission` command.

**To grant permission for account B to subscribe to the topic**
+ In **account A**, run the following Amazon CLI command. Use the ARN for the Amazon SNS topic you recorded earlier.

  ```
  aws sns add-permission --label lambda-access --aws-account-id <AccountB_ID> \
      --topic-arn arn:aws:sns:us-east-1:<AccountA_ID>:sns-topic-for-lambda \
      --action-name Subscribe ListSubscriptionsByTopic --profile accountA
  ```

## Create a subscription (account B)


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_steps_6.png)


In **account B**, you now subscribe your Lambda function to the Amazon SNS topic you created at the beginning of the tutorial in **account A**. When a message is sent to this topic (`sns-topic-for-lambda`), Amazon SNS invokes your Lambda function `Function-With-SNS` in **account B**. 

**To create a subscription**
+ In **account B**, run the following Amazon CLI command. Use the Amazon Web Services Region in which you created your topic, and the ARNs of your topic and Lambda function.

  ```
  aws sns subscribe --protocol lambda \
      --region us-east-1 \
      --topic-arn arn:aws:sns:us-east-1:<AccountA_ID>:sns-topic-for-lambda \
      --notification-endpoint arn:aws:lambda:us-east-1:<AccountB_ID>:function:Function-With-SNS \
      --profile accountB
  ```

  You should see output similar to the following.

  ```
  {
      "SubscriptionArn": "arn:aws:sns:us-east-1:<AccountA_ID>:sns-topic-for-lambda:5d906xxxx-7c8x-45dx-a9dx-0484e31c98xx"
  }
  ```

## Publish messages to topic (account A and account B)


![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/services-sns-tutorial/sns_tut_steps_7.png)


Now that your Lambda function in **account B** is subscribed to your Amazon SNS topic in **account A**, it’s time to test your setup by publishing messages to your topic. To confirm that Amazon SNS has invoked your Lambda function, you use CloudWatch Logs to view your function’s output.

**To publish a message to your topic and view your function's output**

1. Enter `Hello World` into a text file and save it as `message.txt`.

1. From the same directory you saved your text file in, run the following Amazon CLI command in **account A**. Use the ARN for your own topic.

   ```
   aws sns publish --message file://message.txt --subject Test \
       --topic-arn arn:aws:sns:us-east-1:<AccountA_ID>:sns-topic-for-lambda \
       --profile accountA
   ```

   This command returns a unique message ID, indicating that Amazon SNS has accepted the message. Amazon SNS then attempts to deliver the message to the topic’s subscribers. To confirm that Amazon SNS has invoked your Lambda function, use CloudWatch Logs to view your function’s output:

1. In **account B**, open the [Log groups](https://console.amazonaws.cn/cloudwatch/home#logsV2:log-groups) page of the Amazon CloudWatch console.

1. Choose the log group for your function (`/aws/lambda/Function-With-SNS`).

1. Choose the most recent log stream.

1. If your function was correctly invoked, you’ll see output similar to the following showing the contents of the message you published to your topic.

   ```
   2023-07-31T21:42:51.250Z c1cba6b8-ade9-4380-aa32-d1a225da0e48 INFO Processed message Hello World
   2023-07-31T21:42:51.250Z c1cba6b8-ade9-4380-aa32-d1a225da0e48 INFO done
   ```

## Clean up your resources


You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting Amazon resources that you're no longer using, you prevent unnecessary charges to your Amazon Web Services account.

In **Account A**, clean up your Amazon SNS topic.

**To delete the Amazon SNS topic**

1. Open the [Topics page](https://console.amazonaws.cn//sns/home#topics:) of the Amazon SNS console.

1. Select the topic you created.

1. Choose **Delete**.

1. Enter **delete me** in the text input field.

1. Choose **Delete**.

In **Account B**, clean up your execution role, Lambda function, and Amazon SNS subscription.

**To delete the execution role**

1. Open the [Roles page](https://console.amazonaws.cn/iam/home#/roles) of the IAM console.

1. Select the execution role that you created.

1. Choose **Delete**.

1. Enter the name of the role in the text input field and choose **Delete**.

**To delete the Lambda function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select the function that you created.

1. Choose **Actions**, **Delete**.

1. Type **confirm** in the text input field and choose **Delete**.

**To delete the Amazon SNS subscription**

1. Open the [Subscriptions page](https://console.amazonaws.cn//sns/home#subscriptions:) of the Amazon SNS console.

1. Select the subscription you created.

1. Choose **Delete**, **Delete**.