

# Monitoring, debugging, and troubleshooting Lambda functions
Monitoring and debugging functions

Amazon Lambda integrates with other Amazon Web Services services to help you monitor and troubleshoot your Lambda functions. Lambda automatically monitors Lambda functions on your behalf and reports metrics through Amazon CloudWatch. To help you monitor your code when it runs, Lambda automatically tracks the number of requests, the invocation duration per request, and the number of requests that result in an error. 

You can use other Amazon Web Services services to troubleshoot your Lambda functions. This section describes how to use these Amazon Web Services services to monitor, trace, debug, and troubleshoot your Lambda functions and applications. For details about function logging and errors in each runtime, see individual runtime sections. 

**Topics**
+ [

## Pricing
](#monitoring-console-metrics-pricing)
+ [

# Using CloudWatch metrics with Lambda
](monitoring-metrics.md)
+ [

# Working with Lambda function logs
](monitoring-logs.md)
+ [

# Logging Amazon Lambda API calls using Amazon CloudTrail
](logging-using-cloudtrail.md)
+ [

# Visualize Lambda function invocations using Amazon X-Ray
](services-xray.md)
+ [

# Monitor function performance with Amazon CloudWatch Lambda Insights
](monitoring-insights.md)
+ [

# Monitoring Lambda applications
](applications-console-monitoring.md)
+ [

# Monitor application performance with Amazon CloudWatch Application Signals
](monitoring-application-signals.md)
+ [

# Remotely debug Lambda functions with Visual Studio Code
](debugging.md)

## Pricing


CloudWatch has a perpetual free tier. Beyond the free tier threshold, CloudWatch charges for metrics, dashboards, alarms, logs, and insights. For more information, see [Amazon CloudWatch pricing](https://www.amazonaws.cn/cloudwatch/pricing/#Vended_Logs).

# Using CloudWatch metrics with Lambda
Function metrics

When your Amazon Lambda function finishes processing an event, Lambda automatically sends metrics about the invocation to Amazon CloudWatch. You don't need to grant any additional permissions to your execution role to receive function metrics, and there's no additional charge for these metrics.

There are many types of metrics associated with Lambda functions. These include invocation metrics, performance metrics, concurrency metrics, asynchronous invocation metrics, and event source mapping metrics. For more information, see [Types of metrics for Lambda functions](monitoring-metrics-types.md).

In the CloudWatch console, you can [view these metrics](monitoring-metrics-view.md) and build graphs and dashboards with them. You can also set alarms to respond to changes in utilization, performance, or error rates. Lambda sends metric data to CloudWatch in 1-minute intervals. For more immediate insight into your Lambda function, you can create [high-resolution custom metrics](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/publishingMetrics.html). Charges apply for custom metrics and CloudWatch alarms. For more information, see [Amazon CloudWatch Pricing](https://www.amazonaws.cn/cloudwatch/pricing/).
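For example, you can set a CloudWatch alarm that fires whenever a function reports errors. The following is a minimal sketch using the AWS SDK for Python (boto3); the function name, alarm name, and threshold are illustrative assumptions:

```python
# Sketch: alarm on the Errors metric for a single function.
# The function and alarm names are assumptions, not real resources.
alarm_params = {
    "AlarmName": "my-function-errors",
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}],
    "Statistic": "Sum",
    "Period": 60,  # matches Lambda's 1-minute metric intervals
    "EvaluationPeriods": 1,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "TreatMissingData": "notBreaching",  # no invocations means no data points
}

if __name__ == "__main__":
    import boto3  # requires AWS credentials when actually run

    boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

Because `TreatMissingData` is `notBreaching`, periods with no invocations (and therefore no data points) don't trigger the alarm.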

# Viewing metrics for Lambda functions
View function metrics

Use the CloudWatch console to view metrics for your Lambda functions. In the console, you can filter and sort function metrics by function name, alias, version, or event source mapping UUID.

**To view metrics on the CloudWatch console**

1. Open the [Metrics page](https://console.amazonaws.cn/cloudwatch/home?region=us-east-1#metricsV2:graph=~();namespace=~'AWS*2fLambda) (`AWS/Lambda` namespace) of the CloudWatch console.

1. On the **Browse** tab, under **Metrics**, choose any of the following dimensions:
   + **By Function Name** (`FunctionName`) – View aggregate metrics for all versions and aliases of a function.
   + **By Resource** (`Resource`) – View metrics for a version or alias of a function.
   + **By Executed Version** (`ExecutedVersion`) – View metrics for a combination of alias and version. Use the `ExecutedVersion` dimension to compare error rates for two versions of a function that are both targets of a [weighted alias](configuration-aliases.md).
   + **By Event Source Mapping UUID** (`EventSourceMappingUUID`) – View metrics for an event source mapping.
   + **Across All Functions** (none) – View aggregate metrics for all functions in the current Amazon Web Services Region.

1. Choose a metric. The metric should automatically appear in the visual graph, as well as under the **Graphed metrics** tab.

By default, graphs use the `Sum` statistic for all metrics. To choose a different statistic and customize the graph, use the options on the **Graphed metrics** tab.

**Note**  
The timestamp on a metric reflects when the function was invoked. Depending on the duration of the invocation, this can be several minutes before the metric is emitted. For example, if your function has a 10-minute timeout, then look more than 10 minutes in the past for accurate metrics.

For more information about CloudWatch, see the [Amazon CloudWatch User Guide](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html).

# Types of metrics for Lambda functions
Metric types

This section describes the types of Lambda metrics available in the CloudWatch console.

**Topics**
+ [

## Invocation metrics
](#invocation-metrics)
+ [

## Deployment metrics
](#deployment-metrics)
+ [

## Performance metrics
](#performance-metrics)
+ [

## Concurrency metrics
](#concurrency-metrics)
+ [

## Asynchronous invocation metrics
](#async-invocation-metrics)
+ [

## Event source mapping metrics
](#event-source-mapping-metrics)

## Invocation metrics


Invocation metrics are binary indicators of the outcome of a Lambda function invocation. View these metrics with the `Sum` statistic. For example, if the function returns an error, then Lambda sends the `Errors` metric with a value of 1. To get a count of the number of function errors that occurred each minute, view the `Sum` of the `Errors` metric with a period of 1 minute.
+ `Invocations` – The number of times that your function code is invoked, including successful invocations and invocations that result in a function error. Invocations aren't recorded if the invocation request is throttled or otherwise results in an invocation error. The value of `Invocations` equals the number of requests billed.
+ `Errors` – The number of invocations that result in a function error. Function errors include exceptions that your code throws and exceptions that the Lambda runtime throws. The runtime returns errors for issues such as timeouts and configuration errors. To calculate the error rate, divide the value of `Errors` by the value of `Invocations`. Note that the timestamp on an error metric reflects when the function was invoked, not when the error occurred.
+ `DeadLetterErrors` – For [asynchronous invocation](invocation-async.md), the number of times that Lambda attempts to send an event to a dead-letter queue (DLQ) but fails. Dead-letter errors can occur due to incorrectly set resources or size limits.
+ `DestinationDeliveryFailures` – For asynchronous invocation and supported [event source mappings](https://docs.amazonaws.cn/lambda/latest/dg/invocation-eventsourcemapping.html), the number of times that Lambda attempts to send an event to a [destination](invocation-async-retain-records.md#invocation-async-destinations) but fails. For event source mappings, Lambda supports destinations for stream sources (DynamoDB and Kinesis). Delivery errors can occur due to permissions errors, incorrectly configured resources, or size limits. Errors can also occur if the destination you have configured is an unsupported type such as an Amazon SQS FIFO queue or an Amazon SNS FIFO topic.
+ `Throttles` – The number of invocation requests that are throttled. When all function instances are processing requests and no concurrency is available to scale up, Lambda rejects additional requests with a `TooManyRequestsException` error. Throttled requests and other invocation errors don't count as either `Invocations` or `Errors`.
**Note**  
With [Lambda Managed Instances](lambda-managed-instances.md), Lambda provides granular throttle metrics that identify the specific constraint causing the throttle. When a throttle occurs on the execution environment, exactly one of the following sub-metrics is emitted with a value of 1, while the remaining three are emitted with a value of 0. The `Throttles` metric is always emitted alongside these sub-metrics.  
`CPUThrottles` – Invocations throttled due to CPU exhaustion on the execution environment.
`MemoryThrottles` – Invocations throttled due to memory exhaustion on the execution environment.
`DiskThrottles` – Invocations throttled due to disk exhaustion on the execution environment.
`ConcurrencyThrottles` – Invocations throttled when the execution environment concurrency limit is reached.
+ `OversizedRecordCount` – For Amazon DocumentDB event sources, the number of events your function receives from your change stream that are over 6 MB in size. Lambda drops the message and emits this metric.
+ `ProvisionedConcurrencyInvocations` – The number of times that your function code is invoked using [provisioned concurrency](provisioned-concurrency.md).
+ `ProvisionedConcurrencySpilloverInvocations` – The number of times that your function code is invoked using standard concurrency when all provisioned concurrency is in use.
+ `RecursiveInvocationsDropped` – The number of times that Lambda has stopped invocation of your function because it has detected that your function is part of an infinite recursive loop. Recursive loop detection monitors how many times a function is invoked as part of a chain of requests by tracking metadata added by supported Amazon SDKs. By default, if your function is invoked as part of a chain of requests approximately 16 times, Lambda drops the next invocation. If you disable recursive loop detection, this metric is not emitted. For more information about this feature, see [Use Lambda recursive loop detection to prevent infinite loops](invocation-recursion.md).
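As noted above, you can derive an error rate by dividing the value of `Errors` by the value of `Invocations`. One way to do this is with CloudWatch metric math; the following sketch uses the AWS SDK for Python (boto3), and the function name and time window are assumptions:

```python
from datetime import datetime, timedelta, timezone

# Metric math: (Errors / Invocations) * 100 for one function.
dimensions = [{"Name": "FunctionName", "Value": "my-function"}]
queries = [
    {"Id": "errors",
     "MetricStat": {"Metric": {"Namespace": "AWS/Lambda",
                               "MetricName": "Errors",
                               "Dimensions": dimensions},
                    "Period": 60, "Stat": "Sum"},
     "ReturnData": False},
    {"Id": "invocations",
     "MetricStat": {"Metric": {"Namespace": "AWS/Lambda",
                               "MetricName": "Invocations",
                               "Dimensions": dimensions},
                    "Period": 60, "Stat": "Sum"},
     "ReturnData": False},
    # Only the derived expression is returned.
    {"Id": "errorRate", "Expression": "(errors / invocations) * 100",
     "Label": "Error rate (%)"},
]

if __name__ == "__main__":
    import boto3  # requires AWS credentials when actually run

    now = datetime.now(timezone.utc)
    result = boto3.client("cloudwatch").get_metric_data(
        MetricDataQueries=queries,
        StartTime=now - timedelta(hours=1),
        EndTime=now,
    )
    print(result["MetricDataResults"])
```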

## Deployment metrics


Deployment metrics provide information about Lambda function deployment events and related validation processes.
+ `SignatureValidationErrors` – The number of times a code package deployment has occurred with signature validation failures when the code signing configuration policy is set to `Warn`. This metric is emitted when the expiry, mismatch, or revocation checks fail but the deployment is still allowed due to the `Warn` policy setting. For more information about code signing, see [Using code signing to verify code integrity with Lambda](configuration-codesigning.md).

## Performance metrics


Performance metrics provide performance details about a single function invocation. For example, the `Duration` metric indicates the amount of time in milliseconds that your function spends processing an event. To get a sense of how fast your function processes events, view these metrics with the `Average` or `Max` statistic.
+ `Duration` – The amount of time that your function code spends processing an event. The billed duration for an invocation is the value of `Duration` rounded up to the nearest millisecond. `Duration` does not include cold start time.
+ `PostRuntimeExtensionsDuration` – The cumulative amount of time that the runtime spends running code for extensions after the function code has completed.
+ `IteratorAge` – For DynamoDB, Kinesis, and Amazon DocumentDB event sources, the age of the last record in the event in milliseconds. This metric measures the time between when a stream receives the record and when the event source mapping sends the event to the function.
+ `OffsetLag` – For self-managed Apache Kafka and Amazon Managed Streaming for Apache Kafka (Amazon MSK) event sources, the difference in offset between the last record written to a topic and the last record that your function's consumer group processed. Though a Kafka topic can have multiple partitions, this metric measures the offset lag at the topic level.

`Duration` also supports percentile (`p`) statistics. Use percentiles to exclude outlier values that skew `Average` and `Maximum` statistics. For example, the `p95` statistic shows the maximum duration of 95 percent of invocations, excluding the slowest 5 percent. For more information, see [Percentiles](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Percentiles) in the *Amazon CloudWatch User Guide*.
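To see why percentiles are useful, consider this local illustration using a nearest-rank percentile (a simplification; CloudWatch's exact percentile algorithm may differ):

```python
import math

def nearest_rank_percentile(values, p):
    """Nearest-rank percentile: the value at 1-based rank ceil(p/100 * n)."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

# 19 typical invocations around 100 ms, plus one 5000 ms outlier.
durations = [100] * 19 + [5000]

average = sum(durations) / len(durations)     # 345.0, skewed by the outlier
p95 = nearest_rank_percentile(durations, 95)  # 100, the outlier is excluded
```

A single slow invocation more than triples the `Average`, while `p95` still reflects typical behavior.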

## Concurrency metrics


Lambda reports concurrency metrics as an aggregate count of the number of instances processing events across a function, version, alias, or Amazon Web Services Region. To see how close you are to hitting [concurrency limits](lambda-concurrency.md#concurrency-quotas), view these metrics with the `Max` statistic.
+ `ConcurrentExecutions` – The number of function instances that are processing events. If this number reaches your [concurrent executions quota](gettingstarted-limits.md#compute-and-storage) for the Region, or the [reserved concurrency](configuration-concurrency.md) limit on the function, then Lambda throttles additional invocation requests.
+ `ProvisionedConcurrentExecutions` – The number of function instances that are processing events using [provisioned concurrency](provisioned-concurrency.md). For each invocation of an alias or version with provisioned concurrency, Lambda emits the current count. If your function is inactive or not receiving requests, Lambda doesn't emit this metric.
+ `ProvisionedConcurrencyUtilization` – For a version or alias, the value of `ProvisionedConcurrentExecutions` divided by the total amount of provisioned concurrency configured. For example, if you configure a provisioned concurrency of 10 for your function, and your `ProvisionedConcurrentExecutions` is 7, then your `ProvisionedConcurrencyUtilization` is 0.7.

  If your function is inactive or not receiving requests, Lambda doesn't emit this metric because it is based on `ProvisionedConcurrentExecutions`. Keep this in mind if you use `ProvisionedConcurrencyUtilization` as the basis for CloudWatch alarms.
+ `UnreservedConcurrentExecutions` – For a Region, the number of events that functions without reserved concurrency are processing.
+ `ClaimedAccountConcurrency` – For a Region, the amount of concurrency that is unavailable for on-demand invocations. `ClaimedAccountConcurrency` is equal to `UnreservedConcurrentExecutions` plus the amount of allocated concurrency (i.e. the total reserved concurrency plus total provisioned concurrency). For more information, see [Working with the `ClaimedAccountConcurrency` metric](monitoring-concurrency.md#claimed-account-concurrency).
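The two formulas above can be checked with simple arithmetic; all of the numbers below are illustrative:

```python
# ProvisionedConcurrencyUtilization =
#     ProvisionedConcurrentExecutions / configured provisioned concurrency
provisioned_configured = 10
provisioned_executing = 7
utilization = provisioned_executing / provisioned_configured  # 0.7

# ClaimedAccountConcurrency =
#     UnreservedConcurrentExecutions + allocated concurrency
#     (total reserved concurrency plus total provisioned concurrency)
unreserved_concurrent_executions = 40
total_reserved = 25
total_provisioned = 10
claimed = unreserved_concurrent_executions + (total_reserved + total_provisioned)  # 75
```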

## Asynchronous invocation metrics


Asynchronous invocation metrics provide details about asynchronous invocations from event sources and direct invocations. You can set thresholds and alarms to notify you of certain changes, such as an undesired increase in the number of events queued for processing (`AsyncEventsReceived`) or an event that has been waiting a long time to be processed (`AsyncEventAge`).
+ `AsyncEventsReceived` – The number of events that Lambda successfully queues for processing. This metric provides insight into the number of events that a Lambda function receives. Monitor this metric and set threshold alarms to check for issues, such as an undesirable number of events sent to Lambda, or to quickly diagnose issues resulting from incorrect trigger or function configurations. Mismatches between `AsyncEventsReceived` and `Invocations` can indicate a disparity in processing, events being dropped, or a potential queue backlog.
+ `AsyncEventAge` – The time between when Lambda successfully queues the event and when the function is invoked. The value of this metric increases when events are being retried due to invocation failures or throttling. Monitor this metric and set alarms for thresholds on different statistics for when a queue buildup occurs. To troubleshoot an increase in this metric, look at the `Errors` metric to identify function errors and the `Throttles` metric to identify concurrency issues.
+ `AsyncEventsDropped` – The number of events that are dropped without successfully executing the function. If you configure a dead-letter queue (DLQ) or `OnFailure` destination, then events are sent there before they're dropped. Events are dropped for various reasons. For example, events can exceed the maximum event age or exhaust the maximum retry attempts, or reserved concurrency might be set to 0. To troubleshoot why events are dropped, look at the `Errors` metric to identify function errors and the `Throttles` metric to identify concurrency issues.

## Event source mapping metrics


Event source mapping metrics provide insights into the processing behavior of your event source mapping.

Currently, event source mapping metrics are available for Amazon SQS, Kinesis, DynamoDB, Amazon MSK, and self-managed Apache Kafka event sources.

For event source mappings with a metrics configuration, you can also view all of the related metrics on the **Monitor** tab of the mapping's detail page in the Lambda console, under **Lambda** > **Additional resources** > **Event source mappings**.

**To enable metrics for an event source mapping (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the function you want to enable metrics for.

1. Choose **Configuration**, then choose **Triggers**.

1. Choose the event source mapping that you want to enable metrics for, then choose **Edit**.

1. Under **Event source mapping configuration**, choose **Enable metrics** or select the metrics you want from the **Metrics** dropdown list.

1. Choose **Save**.

Alternatively, you can enable metrics for your event source mapping programmatically using the [EventSourceMappingMetricsConfig](https://docs.amazonaws.cn/lambda/latest/api/API_EventSourceMappingMetricsConfig.html) object in your [EventSourceMappingConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_EventSourceMappingConfiguration.html). For example, the following [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) CLI command enables metrics for an event source mapping:

```
aws lambda update-event-source-mapping \
    --uuid a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
    --metrics-config Metrics=EventCount
```

There are three metric groups: `EventCount`, `ErrorCount`, and `KafkaMetrics`, and each group contains multiple metrics. Not every metric is available for every event source. The following table summarizes the supported metrics for each type of event source.

You must opt in to a metric group to receive its metrics. For example, set `EventCount` in your metrics configuration to receive `PolledEventCount`, `FilteredOutEventCount`, `InvokedEventCount`, `FailedInvokeEventCount`, `DroppedEventCount`, `OnFailureDestinationDeliveredEventCount`, and `DeletedEventCount`.


| Event source mapping metric | Metric group | Amazon SQS | Kinesis and DynamoDB streams | Amazon MSK and self-managed Apache Kafka | 
| --- | --- | --- | --- | --- | 
|  `PolledEventCount`  |  `EventCount`  |  Yes  |  Yes  |  Yes  | 
|  `FilteredOutEventCount`  |  `EventCount`  |  Yes  |  Yes  |  Yes  | 
|  `InvokedEventCount`  |  `EventCount`  |  Yes  |  Yes  |  Yes  | 
|  `FailedInvokeEventCount`  |  `EventCount`  |  Yes  |  Yes  |  Yes  | 
|  `DroppedEventCount`  |  `EventCount`  |  No  |  Yes  |  Yes  | 
|  `OnFailureDestinationDeliveredEventCount`  |  `EventCount`  |  No  |  Yes  |  Yes  | 
|  `DeletedEventCount`  |  `EventCount`  |  Yes  |  No  |  No  | 
|  `CommittedEventCount`  |  `EventCount`  |  No  |  No  |  Yes  | 
|  `PollingErrorCount`  |  `ErrorCount`  |  No  |  No  |  Yes  | 
|  `InvokeErrorCount`  |  `ErrorCount`  |  No  |  No  |  Yes  | 
|  `OnFailureDestinationDeliveryErrorCount`  |  `ErrorCount`  |  No  |  No  |  Yes  | 
|  `SchemaRegistryErrorCount`  |  `ErrorCount`  |  No  |  No  |  Yes  | 
|  `CommitErrorCount`  |  `ErrorCount`  |  No  |  No  |  Yes  | 
|  `MaxOffsetLag`  |  `KafkaMetrics`  |  No  |  No  |  Yes  | 
|  `SumOffsetLag`  |  `KafkaMetrics`  |  No  |  No  |  Yes  | 

In addition, if your event source mapping is in [provisioned mode](invocation-eventsourcemapping.md#invocation-eventsourcemapping-provisioned-mode), Lambda provides the following metrics:
+ `ProvisionedPollers` – For event source mappings in provisioned mode, the number of event pollers that are actively running. View this metric using the `Max` statistic.
+ (Amazon MSK and self-managed Apache Kafka event sources only) `EventPollerUnit` – For event source mappings in provisioned mode, the number of event poller units that are actively running. View this metric using the `Sum` statistic.
+ (Amazon MSK and self-managed Apache Kafka event sources only) `EventPollerThroughputInBytes` – For event source mappings in provisioned mode, the total size of the records that event pollers poll from the event source. Use it to gauge your current polling throughput. View this metric using the `Sum` statistic.

Here is more detail about each metric:
+ `PolledEventCount` – The number of events that Lambda reads successfully from the event source. If Lambda polls for events but receives an empty poll (no new records), Lambda emits a 0 value for this metric. Use this metric to detect whether your event source mapping is correctly polling for new events.
+ `FilteredOutEventCount` – For event source mapping with a [filter criteria](invocation-eventfiltering.md), the number of events filtered out by that filter criteria. Use this metric to detect whether your event source mapping is properly filtering out events. For events that match the filter criteria, Lambda emits a 0 metric.
+ `InvokedEventCount` – The number of events that invoked your Lambda function. Use this metric to verify that events are properly invoking your function. If an event results in a function error or throttling, `InvokedEventCount` may count multiple times for the same polled event due to automatic retries.
**Warning**  
Lambda event source mappings process each event at least once, and duplicate processing of records can occur. Because of this, events may be counted multiple times in metrics that involve event counts.
+ `FailedInvokeEventCount` – The number of events that Lambda tried to invoke your function with, but failed. Invocations can fail due to reasons such as network configuration issues, incorrect permissions, or a deleted Lambda function, version, or alias. If your event source mapping has [partial batch responses](services-sqs-errorhandling.md#services-sqs-batchfailurereporting) enabled, `FailedInvokeEventCount` includes any event with a non-empty `BatchItemFailures` in the response.
**Note**  
The timestamp for the `FailedInvokeEventCount` metric represents the end of the function invocation. This behavior differs from other Lambda invocation error metrics, which are timestamped at the start of the function invocation.
+ `DroppedEventCount` – The number of events that Lambda dropped due to expiry or retry exhaustion. Specifically, this is the number of records that exceed your configured values for `MaximumRecordAgeInSeconds` or `MaximumRetryAttempts`. Importantly, this doesn't include the number of records that expire due to exceeding your event source's retention settings. This count also excludes events that Lambda sends to an [on-failure destination](invocation-async-retain-records.md). Use this metric to detect an increasing backlog of events.
+ `OnFailureDestinationDeliveredEventCount` – For event source mappings with an [on-failure destination](invocation-async-retain-records.md) configured, the number of events sent to that destination. Use this metric to monitor for function errors related to invocations from this event source. If delivery to the destination fails, Lambda handles metrics as follows:
  + Lambda doesn't emit the `OnFailureDestinationDeliveredEventCount` metric.
  + For the `DestinationDeliveryFailures` metric, Lambda emits a 1.
  + For the `DroppedEventCount` metric, Lambda emits a number equal to the number of events that failed delivery.
+ `DeletedEventCount` – The number of events that Lambda successfully deletes after processing. If Lambda tries to delete an event but fails, Lambda emits a 0 metric. Use this metric to ensure that successfully processed events are deleted from your event source.
+ `CommittedEventCount` – The number of events that Lambda successfully committed after processing. This is the sum, across all partitions in the Kafka event source mapping, of the difference between the current and previously committed offsets.
+ `PollingErrorCount` – The number of polling requests to the event source that failed. Lambda emits this metric only when an error occurs.
+ `InvokeErrorCount` – The number of function invocations that failed. Because Lambda invokes your function with batches of records, this count is at the batch level, not the record level. Lambda emits this metric only when an error occurs.
+ `SchemaRegistryErrorCount` – The number of times that Lambda failed to fetch a schema from the schema registry, or failed to deserialize a record with that schema. Lambda emits this metric only when an error occurs.
+ `CommitErrorCount` – The number of offset commits to the Kafka cluster that failed. Lambda emits this metric only when an error occurs.
+ `MaxOffsetLag` – The maximum offset lag (the difference between the latest and committed offsets) across all partitions in the event source mapping.
+ `SumOffsetLag` – The sum of the offset lags across all partitions in the event source mapping.

If your event source mapping is disabled, you won't receive event source mapping metrics. You may also see missing metrics if CloudWatch or Lambda is experiencing degraded availability.

# Working with Lambda function logs
Function logs

To help you troubleshoot failures, Amazon Lambda automatically monitors Lambda functions on your behalf. You can view logs for Lambda functions using the Lambda console, the CloudWatch console, the Amazon Command Line Interface (Amazon CLI), or the CloudWatch API. You can also configure Lambda to send logs to Amazon S3 and Firehose.

As long as your function's [execution role](lambda-intro-execution-role.md) has the necessary permissions, Lambda captures logs for all requests handled by your function and sends them to Amazon CloudWatch Logs, which is the default destination. You can also use the Lambda console to configure Amazon S3 or Firehose as logging destinations.
+ **CloudWatch Logs** is the default logging destination for Lambda functions. CloudWatch Logs provides real-time log viewing and analysis capabilities, with support for creating metrics and alarms based on your log data.
+ **Amazon S3** is economical for long-term storage, and services like Athena can be used to analyze logs. Latency is typically higher.
+ **Firehose** offers managed streaming of logs to various destinations. If you need to send logs to other Amazon services (for example, OpenSearch Service or Redshift Data API) or third-party platforms (like Datadog, New Relic, or Splunk), Firehose simplifies that process by providing pre-built integrations. You can also stream to custom HTTP endpoints without setting up additional infrastructure.

## Choosing a service destination to send logs to


Consider the following key factors when choosing a destination for your function logs:
+ **Cost management varies by service.** Amazon S3 typically provides the most economical option for long-term storage, while CloudWatch Logs allows you to view logs, process logs, and set up alerts in real time. Firehose costs include both the streaming service itself and the costs associated with the destinations you configure it to stream to.
+ **Analysis capabilities differ across services.** CloudWatch Logs excels at real-time monitoring and integrates natively with other CloudWatch features, such as Logs Insights and Live Tail. Amazon S3 works well with analysis tools like Athena and can integrate with various services, though it may require additional setup. Firehose simplifies direct streaming to specific Amazon services (like OpenSearch Service and Redshift Data API) and supported third-party platforms (such as Datadog and Splunk) by providing pre-built integrations, potentially reducing configuration work.
+ **Setup and ease of use vary by service.** CloudWatch Logs is the default log destination - it works immediately with no additional configuration and provides straightforward log viewing and analysis through the CloudWatch console. If you need logs sent to Amazon S3, you'll need to do some initial setup in the Lambda console and configure bucket permissions. If you need logs sent directly to services like OpenSearch Service or third-party analytics platforms, Firehose can simplify that process.

## Configuring log destinations


Amazon Lambda supports multiple destinations for your function logs. This guide explains the available logging destinations and helps you choose the right option for your needs. Regardless of your chosen destination, Lambda provides options to control log format, filtering, and delivery.

Lambda supports both JSON and plain text formats for your function's logs. JSON structured logs provide enhanced searchability and enable automated analysis, while plain text logs offer simplicity and potentially reduced storage costs. You can control which logs Lambda sends to your chosen destination by configuring log levels for both system and application logs. Filtering helps you manage storage costs and makes it easier to find relevant log entries during debugging.

For detailed setup instructions for each destination, refer to the following sections:
+ [Sending Lambda function logs to CloudWatch Logs](monitoring-cloudwatchlogs.md)
+ [Sending Lambda function logs to Firehose](logging-with-firehose.md)
+ [Sending Lambda function logs to Amazon S3](logging-with-s3.md)

## Configuring advanced logging controls for Lambda functions


To give you more control over how your function logs are captured, processed, and consumed, Lambda offers the following logging configuration options:
+ **Log format** - select between plain text and structured JSON format for your function’s logs.
+ **Log level** - for JSON structured logs, choose the detail level of the logs Lambda sends to CloudWatch, such as `FATAL`, `ERROR`, `WARN`, `INFO`, `DEBUG`, and `TRACE`.
+ **Log group** - choose the CloudWatch log group your function sends logs to.
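These options can also be set programmatically. The following is a sketch using the AWS SDK for Python (boto3); the function name and log group are assumptions:

```python
# Sketch: advanced logging controls for a function.
# The function name and log group below are illustrative.
logging_config = {
    "LogFormat": "JSON",
    "ApplicationLogLevel": "DEBUG",  # log-level filtering requires JSON format
    "SystemLogLevel": "INFO",
    "LogGroup": "/custom/my-function-logs",
}

if __name__ == "__main__":
    import boto3  # requires AWS credentials when actually run

    boto3.client("lambda").update_function_configuration(
        FunctionName="my-function",
        LoggingConfig=logging_config,
    )
```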

To learn more about configuring advanced logging controls, refer to the following sections:
+ [Configuring JSON and plain text log formats](monitoring-cloudwatchlogs-logformat.md)
+ [Log-level filtering](monitoring-cloudwatchlogs-log-level.md)
+ [Configuring CloudWatch log groups](monitoring-cloudwatchlogs-loggroups.md)

# Configuring JSON and plain text log formats
Log formats

Capturing your log outputs as JSON key value pairs makes it easier to search and filter when debugging your functions. With JSON formatted logs, you can also add tags and contextual information to your logs. This can help you to perform automated analysis of large volumes of log data. Unless your development workflow relies on existing tooling that consumes Lambda logs in plain text, we recommend that you select JSON for your log format.

**Lambda Managed Instances**  
Lambda Managed Instances only support JSON log format. When you create a Managed Instances function, Lambda automatically configures the log format to JSON and you cannot change it to plain text. For more information about Managed Instances, see [Lambda Managed Instances](lambda-managed-instances.md).

For all Lambda managed runtimes, you can choose whether your function's system logs are sent to CloudWatch Logs in unstructured plain text or JSON format. System logs are the logs that Lambda generates and are sometimes known as platform event logs.

For [supported runtimes](#monitoring-cloudwatchlogs-logformat-supported), when you use one of the supported built-in logging methods, Lambda can also output your function's application logs (the logs your function code generates) in structured JSON format. When you configure your function's log format for these runtimes, the configuration you choose applies to both system and application logs.

For supported runtimes, if your function uses a supported logging library or method, you don't need to make any changes to your existing code for Lambda to capture logs in structured JSON.

**Note**  
Using JSON log formatting adds metadata and encodes log messages as JSON objects containing a series of key value pairs. Because of this, the size of your function's log messages can increase.

## Supported runtimes and logging methods


 Lambda currently supports the option to output JSON structured application logs for the following runtimes. 


| Language | Supported versions | 
| --- | --- | 
| Java | All Java runtimes except Java 8 on Amazon Linux 1 | 
| .NET | .NET 8 and later | 
| Node.js | Node.js 16 and later | 
| Python | Python 3.8 and later | 
| Rust | n/a | 

For Lambda to send your function's application logs to CloudWatch in structured JSON format, your function must use the following built-in logging tools to output logs:
+ **Java**: The `LambdaLogger` logger or Log4j2. For more information, see [Log and monitor Java Lambda functions](java-logging.md).
+ **.NET**: The `ILambdaLogger` instance on the context object. For more information, see [Log and monitor C# Lambda functions](csharp-logging.md).
+ **Node.js**: The console methods `console.trace`, `console.debug`, `console.log`, `console.info`, `console.error`, and `console.warn`. For more information, see [Log and monitor Node.js Lambda functions](nodejs-logging.md).
+ **Python**: The standard Python `logging` library. For more information, see [Log and monitor Python Lambda functions](python-logging.md).
+ **Rust**: The `tracing` crate. For more information, see [Log and monitor Rust Lambda functions](rust-logging.md).
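For instance, with the standard Python `logging` library, a function like the following needs no modification for Lambda to capture its output as structured JSON (the handler logic and event fields here are illustrative):

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # With the JSON log format selected, Lambda captures these standard
    # logging calls as structured JSON objects with "timestamp", "level",
    # "message", and "requestId" keys -- no code changes are required.
    logger.info("Processing order %s", event.get("orderId"))
    logger.warning("Inventory low for item %s", event.get("itemId"))
    return {"statusCode": 200}
```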

For other managed Lambda runtimes, Lambda currently only natively supports capturing system logs in structured JSON format. However, you can still capture application logs in structured JSON format in any runtime by using logging tools such as Powertools for Amazon Lambda that output JSON formatted log outputs.

## Default log formats


Currently, the default log format for all Lambda runtimes is plain text. For Lambda Managed Instances, the log format is always JSON and cannot be changed.

If you’re already using logging libraries like Powertools for Amazon Lambda to generate your function logs in JSON structured format, you don’t need to change your code if you select JSON log formatting. Lambda doesn’t double-encode any logs that are already JSON encoded, so your function’s application logs will continue to be captured as before.

## JSON format for system logs


When you configure your function's log format as JSON, each system log item (platform event) is captured as a JSON object that contains key value pairs with the following keys:
+ `"time"` - the time the log message was generated
+ `"type"` - the type of event being logged
+ `"record"` - the contents of the log output

The format of the `"record"` value varies according to the type of event being logged. For more information see [Telemetry API `Event` object types](telemetry-schema-reference.md#telemetry-api-events). For more information about the log levels assigned to system log events, see [System log level event mapping](monitoring-cloudwatchlogs-log-level.md#monitoring-cloudwatchlogs-log-level-mapping).

For comparison, the following two examples show the same log output in both plain text and structured JSON formats. Note that in most cases, system log events contain more information when output in JSON format than when output in plain text.

**Example plain text:**  

```
2024-03-13 18:56:24.046000 fbe8c1   INIT_START  Runtime Version: python:3.12.v18  Runtime Version ARN: arn:aws-cn:lambda:eu-west-1::runtime:edb5a058bfa782cb9cedc6d534ac8b8c193bc28e9a9879d9f5ebaaf619cd0fc0
```

**Example structured JSON:**  

```
{
  "time": "2024-03-13T18:56:24.046Z",
  "type": "platform.initStart",
  "record": {
    "initializationType": "on-demand",
    "phase": "init",
    "runtimeVersion": "python:3.12.v18",
    "runtimeVersionArn": "arn:aws-cn:lambda:eu-west-1::runtime:edb5a058bfa782cb9cedc6d534ac8b8c193bc28e9a9879d9f5ebaaf619cd0fc0"
  }
}
```

**Note**  
The [Lambda Telemetry API](telemetry-api.md) always emits platform events such as `START` and `REPORT` in JSON format. Configuring the format of the system logs Lambda sends to CloudWatch doesn’t affect Lambda Telemetry API behavior.

## JSON format for application logs


When you configure your function's log format as JSON, application log outputs written using supported logging libraries and methods are captured as a JSON object that contains key value pairs with the following keys.
+ `"timestamp"` - the time the log message was generated
+ `"level"` - the log level assigned to the message
+ `"message"` - the contents of the log message
+ `"requestId"` (Python, .NET, and Node.js) or `"AWSrequestId"` (Java) - the unique request ID for the function invocation

Depending on the runtime and logging method that your function uses, this JSON object may also contain additional key value pairs. For example, in Node.js, if your function uses `console` methods to log error objects using multiple arguments, the JSON object will contain extra key value pairs with the keys `errorMessage`, `errorType`, and `stackTrace`. To learn more about JSON formatted logs in different Lambda runtimes, see [Log and monitor Python Lambda functions](python-logging.md), [Log and monitor Node.js Lambda functions](nodejs-logging.md), and [Log and monitor Java Lambda functions](java-logging.md).

**Note**  
The key Lambda uses for the timestamp value is different for system logs and application logs. For system logs, Lambda uses the key `"time"` to maintain consistency with Telemetry API. For application logs, Lambda follows the conventions of the supported runtimes and uses `"timestamp"`.

For comparison, the following two examples show the same log output in both plain text and structured JSON formats.

**Example plain text:**  

```
2024-10-27T19:17:45.586Z 79b4f56e-95b1-4643-9700-2807f4e68189 INFO some log message
```

**Example structured JSON:**  

```
{
    "timestamp":"2024-10-27T19:17:45.586Z",
    "level":"INFO",
    "message":"some log message",
    "requestId":"79b4f56e-95b1-4643-9700-2807f4e68189"
}
```

## Setting your function's log format


To configure the log format for your function, you can use the Lambda console or the Amazon Command Line Interface (Amazon CLI). You can also configure a function’s log format using the [CreateFunction](https://docs.amazonaws.cn/lambda/latest/api/API_CreateFunction.html) and [UpdateFunctionConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionConfiguration.html) Lambda API commands, the Amazon Serverless Application Model (Amazon SAM) [AWS::Serverless::Function](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/sam-resource-function.html) resource, and the Amazon CloudFormation [AWS::Lambda::Function](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html) resource.

Changing your function’s log format doesn’t affect existing logs stored in CloudWatch Logs. Only new logs will use the updated format.

If you change your function's log format to JSON and do not set the log level, then Lambda automatically sets your function's application log level and system log level to INFO. This means that Lambda sends only log outputs of level INFO and lower to CloudWatch Logs. To learn more about application and system log-level filtering, see [Log-level filtering](monitoring-cloudwatchlogs-log-level.md).

**Note**  
For Python runtimes, when your function's log format is set to plain text, the default log-level setting is WARN. This means that Lambda only sends log outputs of level WARN and lower to CloudWatch Logs. Changing your function's log format to JSON changes this default behavior. To learn more about logging in Python, see [Log and monitor Python Lambda functions](python-logging.md).

For Node.js functions that emit embedded metric format (EMF) logs, changing your function's log format to JSON could result in CloudWatch being unable to recognize your metrics.

**Important**  
If your function uses Powertools for Amazon Lambda (TypeScript) or the open-sourced EMF client libraries to emit EMF logs, update your [Powertools](https://github.com/aws-powertools/powertools-lambda-typescript) and [EMF](https://www.npmjs.com/package/aws-embedded-metrics) libraries to the latest versions to ensure that CloudWatch can continue to parse your logs correctly. If you switch to the JSON log format, we also recommend that you carry out testing to ensure compatibility with your function's embedded metrics. For further advice about Node.js functions that emit EMF logs, see [Using embedded metric format (EMF) client libraries with structured JSON logs](nodejs-logging.md#nodejs-logging-advanced-emf).

**To configure a function’s log format (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. On the function configuration page, choose **Monitoring and operations tools**.

1. In the **Logging configuration** pane, choose **Edit**.

1. Under **Log content**, for **Log format** select either **Text** or **JSON**.

1. Choose **Save**.

**To change the log format of an existing function (Amazon CLI)**
+ To change the log format of an existing function, use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command. Set the `LogFormat` option in `LoggingConfig` to either `JSON` or `Text`.

  ```
  aws lambda update-function-configuration \
    --function-name myFunction \
    --logging-config LogFormat=JSON
  ```

**To set log format when you create a function (Amazon CLI)**
+ To configure log format when you create a new function, use the `--logging-config` option in the [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) command. Set `LogFormat` to either `JSON` or `Text`. The following example command creates a Node.js function that outputs logs in structured JSON.

  If you don’t specify a log format when you create a function, Lambda will use the default log format for the runtime version you select. For information about default logging formats, see [Default log formats](#monitoring-cloudwatchlogs-format-default).

  ```
  aws lambda create-function \
    --function-name myFunction \
    --runtime nodejs24.x \
    --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws-cn:iam::123456789012:role/LambdaRole \
    --logging-config LogFormat=JSON
  ```

# Log-level filtering


Lambda can filter your function's logs so that only logs of a certain detail level or lower are sent to CloudWatch Logs. You can configure log-level filtering separately for your function's system logs (the logs that Lambda generates) and application logs (the logs that your function code generates).

For [Supported runtimes and logging methods](monitoring-cloudwatchlogs-logformat.md#monitoring-cloudwatchlogs-logformat-supported), you don't need to make any changes to your function code for Lambda to filter your function's application logs.

For all other runtimes and logging methods, your function code must output log events to `stdout` or `stderr` as JSON formatted objects that contain a key value pair with the key `"level"`. For example, Lambda interprets the following output to `stdout` as a DEBUG level log.

```
print('{"level": "debug", "msg": "my debug log", "timestamp": "2024-11-02T16:51:31.587199Z"}')
```

If the `"level"` value is invalid or missing, Lambda will assign the log output the level INFO. For Lambda to use the timestamp field, you must specify the time in valid [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) timestamp format. If you don't supply a valid timestamp, Lambda will assign the log the level INFO and add a timestamp for you.

When naming the timestamp key, follow the conventions of the runtime you are using. Lambda supports most common naming conventions used by the managed runtimes.
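Extending the `print` example above, a minimal helper sketch (the `emit_log` name is illustrative, not a Lambda API) that attaches a `"level"` key and a valid RFC 3339 timestamp to each entry:

```python
import json
from datetime import datetime, timezone

def emit_log(level, msg):
    """Emit a JSON log entry that Lambda can filter by level.

    isoformat() on a timezone-aware UTC datetime produces a valid
    RFC 3339 timestamp, so Lambda uses it instead of adding its own.
    """
    entry = {
        "level": level,
        "msg": msg,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    line = json.dumps(entry)
    print(line)  # Lambda reads log events from stdout
    return line

emit_log("debug", "cache miss for key user-42")
```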

**Note**  
To use log-level filtering, your function must be configured to use the JSON log format. The default log format for all Lambda managed runtimes is currently plain text. To learn how to configure your function's log format to JSON, see [Setting your function's log format](monitoring-cloudwatchlogs-logformat.md#monitoring-cloudwatchlogs-set-format).

For application logs (the logs generated by your function code), you can choose between the following log levels.


| Log level | Standard usage | 
| --- | --- | 
| TRACE (most detail) | The most fine-grained information used to trace the path of your code's execution | 
| DEBUG | Detailed information for system debugging | 
| INFO | Messages that record the normal operation of your function | 
| WARN | Messages about potential errors that may lead to unexpected behavior if unaddressed | 
| ERROR | Messages about problems that prevent the code from performing as expected | 
| FATAL (least detail) | Messages about serious errors that cause the application to stop functioning | 

When you select a log level, Lambda sends logs at that level and lower to CloudWatch Logs. For example, if you set a function’s application log level to WARN, Lambda doesn’t send log outputs at the INFO, DEBUG, and TRACE levels. The default application log level for log filtering is INFO.

When Lambda filters your function’s application logs, log messages with no level will be assigned the log level INFO.

For system logs (the logs generated by the Lambda service), you can choose between the following log levels.


| Log level | Usage | 
| --- | --- | 
| DEBUG (most detail) | Detailed information for system debugging | 
| INFO | Messages that record the normal operation of your function | 
| WARN (least detail) | Messages about potential errors that may lead to unexpected behavior if unaddressed | 

When you select a log level, Lambda sends logs at that level and lower. For example, if you set a function’s system log level to INFO, Lambda doesn’t send log outputs at the DEBUG level.

By default, Lambda sets the system log level to INFO. With this setting, Lambda automatically sends `"start"` and `"report"` log messages to CloudWatch. To receive more or less detailed system logs, change the log level to DEBUG or WARN. To see a list of the log levels that Lambda maps different system log events to, see [System log level event mapping](#monitoring-cloudwatchlogs-log-level-mapping).

## Configuring log-level filtering


To configure application and system log-level filtering for your function, you can use the Lambda console or the Amazon Command Line Interface (Amazon CLI). You can also configure a function’s log level using the [CreateFunction](https://docs.amazonaws.cn/lambda/latest/api/API_CreateFunction.html) and [UpdateFunctionConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionConfiguration.html) Lambda API commands, the Amazon Serverless Application Model (Amazon SAM) [AWS::Serverless::Function](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/sam-resource-function.html) resource, and the Amazon CloudFormation [AWS::Lambda::Function](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html) resource.

Note that if you set your function's log level in your code, this setting takes precedence over any other log level settings you configure. For example, if you use the Python `logging` `setLevel()` method to set your function's logging level to INFO, this setting takes precedence over a setting of WARN that you configure using the Lambda console.
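A sketch of that precedence rule using the Python `logging` library (run outside Lambda here, so the WARN console setting is only described in comments):

```python
import logging

# Suppose the function's application log level is set to WARN in the
# Lambda console. Setting the level in code takes precedence:
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# INFO-level records are now emitted even though the console setting is WARN
logger.info("sent despite the WARN console setting")
```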

**To configure an existing function’s application or system log level (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. On the function configuration page, choose **Monitoring and operations tools**.

1. In the **Logging configuration** pane, choose **Edit**.

1. Under **Log content**, for **Log format** ensure **JSON** is selected.

1. Using the radio buttons, select your desired **Application log level** and **System log level** for your function.

1. Choose **Save**.

**To configure an existing function’s application or system log level (Amazon CLI)**
+ To change the application or system log level of an existing function, use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command. Use `--logging-config` to set `SystemLogLevel` to one of `DEBUG`, `INFO`, or `WARN`. Set `ApplicationLogLevel` to one of `DEBUG`, `INFO`, `WARN`, `ERROR`, or `FATAL`. 

  ```
  aws lambda update-function-configuration \
    --function-name myFunction \
    --logging-config LogFormat=JSON,ApplicationLogLevel=ERROR,SystemLogLevel=WARN
  ```

**To configure log-level filtering when you create a function**
+ To configure log-level filtering when you create a new function, use `--logging-config` to set the `SystemLogLevel` and `ApplicationLogLevel` keys in the [create-function](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/create-function.html) command. Set `SystemLogLevel` to one of `DEBUG`, `INFO`, or `WARN`. Set `ApplicationLogLevel` to one of `DEBUG`, `INFO`, `WARN`, `ERROR`, or `FATAL`.

  ```
  aws lambda create-function \
    --function-name myFunction \
    --runtime nodejs24.x \
    --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws-cn:iam::123456789012:role/LambdaRole \
    --logging-config LogFormat=JSON,ApplicationLogLevel=ERROR,SystemLogLevel=WARN
  ```

## System log level event mapping


For system level log events generated by Lambda, the following table defines the log level assigned to each event. To learn more about the events listed in the table, see [Lambda Telemetry API `Event` schema reference](telemetry-schema-reference.md).


| Event name | Condition | Assigned log level | 
| --- | --- | --- | 
| initStart | runtimeVersion is set | INFO | 
| initStart | runtimeVersion is not set | DEBUG | 
| initRuntimeDone | status=success | DEBUG | 
| initRuntimeDone | status!=success | WARN | 
| initReport | initializationType!=on-demand | INFO | 
| initReport | initializationType=on-demand | DEBUG | 
| initReport | status!=success | WARN | 
| restoreStart | runtimeVersion is set | INFO | 
| restoreStart | runtimeVersion is not set | DEBUG | 
| restoreRuntimeDone | status=success | DEBUG | 
| restoreRuntimeDone | status!=success | WARN | 
| restoreReport | status=success | INFO | 
| restoreReport | status!=success | WARN | 
| start | - | INFO | 
| runtimeDone | status=success | DEBUG | 
| runtimeDone | status!=success | WARN | 
| report | status=success | INFO | 
| report | status!=success | WARN | 
| extension | state=success | INFO | 
| extension | state!=success | WARN | 
| logSubscription | - | INFO | 
| telemetrySubscription | - | INFO | 
| logsDropped | - | WARN | 

**Note**  
The [Lambda Telemetry API](telemetry-api.md) always emits the complete set of platform events. Configuring the level of the system logs Lambda sends to CloudWatch doesn’t affect Lambda Telemetry API behavior.

## Application log-level filtering with custom runtimes


When you configure application log-level filtering for your function, behind the scenes Lambda sets the application log level in the runtime using the `AWS_LAMBDA_LOG_LEVEL` environment variable. Lambda also sets your function's log format using the `AWS_LAMBDA_LOG_FORMAT` environment variable. You can use these variables to integrate Lambda advanced logging controls into a [custom runtime](runtimes-custom.md).

To make the logging settings of a function that uses a custom runtime configurable with the Lambda console, Amazon CLI, and Lambda APIs, configure your custom runtime to check the values of these environment variables. You can then configure your runtime's loggers in accordance with the log format and log levels you select.

# Sending Lambda function logs to CloudWatch Logs
Log with CloudWatch Logs

By default, Lambda automatically captures logs for all function invocations and sends them to CloudWatch Logs, provided your function's execution role has the necessary permissions. These logs are, by default, stored in a log group named /aws/lambda/*<function-name>*. To enhance debugging, you can insert custom logging statements into your code, which Lambda will seamlessly integrate with CloudWatch Logs. If needed, you can configure your function to send logs to a different group using the Lambda console, Amazon CLI, or Lambda API. See [Configuring CloudWatch log groups](monitoring-cloudwatchlogs-loggroups.md) to learn more.

You can view logs for Lambda functions using the Lambda console, the CloudWatch console, the Amazon Command Line Interface (Amazon CLI), or the CloudWatch API. For more information, see [Viewing CloudWatch logs for Lambda functions](monitoring-cloudwatchlogs-view.md).

**Note**  
It may take 5 to 10 minutes for logs to show up after a function invocation.

## Required IAM permissions


Your [execution role](lambda-intro-execution-role.md) needs the following permissions to upload logs to CloudWatch Logs:
+ `logs:CreateLogGroup`
+ `logs:CreateLogStream`
+ `logs:PutLogEvents`

To learn more, see [Using identity-based policies (IAM policies) for CloudWatch Logs](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/iam-identity-based-access-control-cwl.html) in the *Amazon CloudWatch User Guide*.

You can add these CloudWatch Logs permissions using the `AWSLambdaBasicExecutionRole` Amazon managed policy provided by Lambda. To add this policy to your role, run the following command:

```
aws iam attach-role-policy --role-name your-role --policy-arn arn:aws-cn:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```

For more information, see [Working with Amazon managed policies in the execution role](permissions-managed-policies.md).

## Pricing


There is no additional charge for using Lambda logs; however, standard CloudWatch Logs charges apply. For more information, see [CloudWatch pricing](http://www.amazonaws.cn/cloudwatch/pricing/).

# Configuring CloudWatch log groups
Configure CloudWatch function logs

By default, CloudWatch automatically creates a log group named `/aws/lambda/<function name>` for your function when it's first invoked. To configure your function to send logs to an existing log group, or to create a new log group for your function, you can use the Lambda console or the Amazon CLI. You can also configure custom log groups using the [CreateFunction](https://docs.amazonaws.cn/lambda/latest/api/API_CreateFunction.html) and [UpdateFunctionConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionConfiguration.html) Lambda API commands and the Amazon Serverless Application Model (Amazon SAM) [AWS::Serverless::Function](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/sam-resource-function.html) resource.

You can configure multiple Lambda functions to send logs to the same CloudWatch log group. For example, you could use a single log group to store logs for all of the Lambda functions that make up a particular application. When you use a custom log group for a Lambda function, the log streams Lambda creates include the function name and function version. This ensures that the mapping between log messages and functions is preserved, even if you use the same log group for multiple functions.

The log stream naming format for custom log groups follows this convention:

```
YYYY/MM/DD/<function_name>[<function_version>][<execution_environment_GUID>]
```

Note that when configuring a custom log group, the name you select for your log group must follow the [CloudWatch Logs naming rules](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html). Additionally, custom log group names must not begin with the string `aws/`. If you create a custom log group beginning with `aws/`, Lambda won't be able to create the log group. As a result, your function's logs won't be sent to CloudWatch.
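As a quick sketch of that restriction, a hypothetical pre-deployment check (the helper name is illustrative; only the `aws/` prefix rule described above is checked, not the full CloudWatch Logs naming rules on length and characters):

```python
def is_valid_custom_log_group(name: str) -> bool:
    # Custom log group names must not begin with "aws/"; Lambda can't
    # create such a group, so function logs would not be delivered.
    return not name.startswith("aws/")

print(is_valid_custom_log_group("myAppLogs"))      # True
print(is_valid_custom_log_group("aws/lambda/fn"))  # False
```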

**To change a function’s log group (console)**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. On the function configuration page, choose **Monitoring and operations tools**.

1. In the **Logging configuration** pane, choose **Edit**.

1. In the **Logging group** pane, for **CloudWatch log group**, choose **Custom**.

1. Under **Custom log group**, enter the name of the CloudWatch log group you want your function to send logs to. If you enter the name of an existing log group, then your function will use that group. If no log group exists with the name that you enter, then Lambda will create a new log group for your function with that name.

**To change a function's log group (Amazon CLI)**
+ To change the log group of an existing function, use the [update-function-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/update-function-configuration.html) command.

  ```
  aws lambda update-function-configuration \
    --function-name myFunction \
    --logging-config LogGroup=myLogGroup
  ```

**To specify a custom log group when you create a function (Amazon CLI)**
+ To specify a custom log group when you create a new Lambda function using the Amazon CLI, use the `--logging-config` option. The following example command creates a Node.js Lambda function that sends logs to a log group named `myLogGroup`.

  ```
  aws lambda create-function \
    --function-name myFunction \
    --runtime nodejs24.x \
    --handler index.handler \
    --zip-file fileb://function.zip \
    --role arn:aws-cn:iam::123456789012:role/LambdaRole \
    --logging-config LogGroup=myLogGroup
  ```

## Execution role permissions


For your function to send logs to CloudWatch Logs, it must have the [logs:PutLogEvents](https://docs.amazonaws.cn/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html) permission. When you configure your function's log group using the Lambda console, Lambda will add this permission to the role under the following conditions:
+ The service destination is set to CloudWatch Logs
+ Your function's execution role doesn't have permissions to upload logs to CloudWatch Logs (the default destination)

**Note**  
Lambda does not add any Put permission for Amazon S3 or Firehose log destinations.

When Lambda adds this permission, it gives the function permission to send logs to any CloudWatch Logs log group.

To prevent Lambda from automatically updating the function's execution role and edit it manually instead, expand **Permissions** and uncheck **Add required permissions**.

When you configure your function's log group using the Amazon CLI, Lambda won't automatically add the `logs:PutLogEvents` permission. Add the permission to your function's execution role if it doesn't already have it. This permission is included in the [AWSLambdaBasicExecutionRole](https://console.amazonaws.cn/iam/home#/policies/arn:aws-cn:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole$jsonEditor) managed policy.

## CloudWatch logging for Lambda Managed Instances


When using [Lambda Managed Instances](lambda-managed-instances.md), there are additional considerations for sending logs to CloudWatch Logs:

### VPC networking requirements


Lambda Managed Instances run on customer-owned EC2 instances within your VPC. To send logs to CloudWatch Logs and traces to X-Ray, you must ensure that these Amazon APIs are routable from your VPC. You have several options:
+ **Amazon PrivateLink (recommended)**: Use [Amazon PrivateLink](https://docs.amazonaws.cn/vpc/latest/privatelink/what-is-privatelink.html) to create VPC endpoints for CloudWatch Logs and X-Ray services. This allows your instances to access these services privately without requiring an internet gateway or NAT gateway. For more information, see [Using CloudWatch Logs with interface VPC endpoints](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html).
+ **NAT Gateway**: Configure a NAT gateway to allow outbound internet access from your private subnets.
+ **Internet Gateway**: For public subnets, ensure your VPC has an internet gateway configured.

If CloudWatch Logs or X-Ray APIs are not routable from your VPC, your function logs and traces will not be delivered.

### Concurrent invocations and log attribution


Lambda Managed Instances execution environments can process multiple invocations concurrently. When multiple invocations run simultaneously, their log entries are interleaved in the same log stream. To effectively filter and analyze logs from concurrent invocations, you should ensure each log entry includes the Amazon request ID.

We recommend one of the following approaches:
+ **Use default Lambda runtime loggers (recommended)**: The default logging libraries provided by Lambda managed runtimes automatically include the request ID in each log entry.
+ **Implement structured JSON logging**: If you're building a [custom runtime](runtimes-custom.md) or need custom logging, implement JSON-formatted logs that include the request ID in each entry. Lambda Managed Instances only support the JSON log format. Include the `requestId` field in your JSON logs to enable filtering by invocation:

  ```
  {
    "timestamp": "2025-01-15T10:30:00.000Z",
    "level": "INFO",
    "requestId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "message": "Processing request"
  }
  ```

With request ID attribution, you can filter CloudWatch Logs log entries for a specific invocation using CloudWatch Logs Insights queries. For example:

```
fields @timestamp, @message
| filter requestId = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
| sort @timestamp asc
```

For more information about Lambda Managed Instances logging requirements, see [Understanding the Lambda Managed Instances execution environment](lambda-managed-instances-execution-environment.md).

# Viewing CloudWatch logs for Lambda functions
View function logs

You can view Amazon CloudWatch logs for your Lambda function using the Lambda console, the CloudWatch console, or the Amazon Command Line Interface (Amazon CLI). Follow the instructions in the following sections to access your function's logs.

## Stream function logs with CloudWatch Logs Live Tail


Amazon CloudWatch Logs Live Tail helps you quickly troubleshoot your functions by displaying a streaming list of new log events directly in the Lambda console. You can view and filter ingested logs from your Lambda functions in real time, helping you to detect and resolve issues quickly.

**Note**  
Live Tail sessions incur costs by session usage time, per minute. For more information about pricing, see [Amazon CloudWatch Pricing](http://www.amazonaws.cn/cloudwatch/pricing/).

### Comparing Live Tail and --log-type Tail


There are several differences between CloudWatch Logs Live Tail and the [LogType: Tail](https://docs.amazonaws.cn/lambda/latest/api/API_Invoke.html#lambda-Invoke-request-LogType) option in the Lambda API (`--log-type Tail` in the Amazon CLI):
+ `--log-type Tail` returns only the first 4 KB of the invocation logs. Live Tail does not share this limit, and can receive up to 500 log events per second.
+ `--log-type Tail` captures and sends the logs with the response, which can impact the function's response latency. Live Tail does not affect function response latency.
+ `--log-type Tail` supports synchronous invocations only. Live Tail works for both synchronous and asynchronous invocations.

**Note**  
[Lambda Managed Instances](lambda-managed-instances.md) does not support the `--log-type Tail` option. Use CloudWatch Logs Live Tail or query CloudWatch Logs directly to view logs for Managed Instances functions.

### Permissions


The following permissions are required to start and stop CloudWatch Logs Live Tail sessions:
+ `logs:DescribeLogGroups`
+ `logs:StartLiveTail`
+ `logs:StopLiveTail`

### Start a Live Tail session in the Lambda console
Start a session

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose the name of the function.

1. Choose the **Test** tab.

1. In the **Test event** pane, choose **CloudWatch Logs Live Tail**.

1. For **Select log groups**, the function's log group is selected by default. You can select up to five log groups at a time.

1. (Optional) To display only log events that contain certain words or other strings, enter the word or string in the **Add filter pattern** box. The filters field is case-sensitive. You can include multiple terms and pattern operators in this field, including regular expressions (regex). For more information about pattern syntax, see [Filter pattern syntax](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) in the *Amazon CloudWatch Logs User Guide*.

1. Choose **Start**. Matching log events begin appearing in the window.

1. To stop the Live Tail session, choose **Stop**.
**Note**  
The Live Tail session automatically stops after 15 minutes of inactivity or when the Lambda console session times out.

## Access function logs using the console


1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Select a function.

1. Choose the **Monitor** tab.

1. Choose **View CloudWatch logs** to open the CloudWatch console.

1. Scroll down and choose the **Log stream** for the function invocations you want to look at.  
![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/log-stream.png)

Each instance of a Lambda function has a dedicated log stream. If a function scales up, each concurrent instance has its own log stream. Each time Lambda creates a new execution environment in response to an invocation, it also creates a new log stream. The naming convention for log streams is:

```
YYYY/MM/DD/[Function version][Execution environment GUID]
```

A single execution environment writes to the same log stream during its lifetime. The log stream contains messages from that execution environment, along with any output from your Lambda function's code. Every message is timestamped, including your custom logs. Even if your function code does not log any output, Lambda generates three minimal log statements per invocation (START, END, and REPORT):

![\[monitoring observability figure 3\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/monitoring-observability-figure-3.png)


These logs show:
+  **RequestId** – this is a unique ID generated per request. If the Lambda function retries a request, this ID does not change and appears in the logs for each subsequent retry.
+  **Start/End** – these bookmark a single invocation, so every log line between these belongs to the same invocation.
+  **Duration** – the total invocation time for the handler function, excluding `INIT` code.
+  **Billed Duration** – applies rounding logic for billing purposes.
+  **Memory Size** – the amount of memory allocated to the function.
+  **Max Memory Used** – the maximum amount of memory used during the invocation.
+  **Init Duration** – the time taken to run the `INIT` section of code, outside of the main handler.
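
Because the REPORT line is tab-delimited `Key: Value` text, you can also parse it programmatically. The following Node.js sketch, using an illustrative sample line, extracts the fields into an object:

```javascript
// Parse a Lambda REPORT log line into an object.
// The sample line below is illustrative.
const reportLine =
  'REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8\t' +
  'Duration: 79.67 ms\tBilled Duration: 80 ms\t' +
  'Memory Size: 128 MB\tMax Memory Used: 73 MB';

const parseReport = (line) =>
  Object.fromEntries(
    line
      .replace(/^REPORT\s+/, '')   // drop the REPORT prefix
      .split('\t')                 // fields are tab-separated
      .filter(Boolean)
      .map((pair) => {
        const [key, ...rest] = pair.split(': ');
        return [key.trim(), rest.join(': ').trim()];
      })
  );

console.log(parseReport(reportLine));
```

This kind of parsing can be useful in log-processing functions subscribed to a Lambda log group; for ad hoc analysis, CloudWatch Logs Insights already exposes these values as `@duration`, `@billedDuration`, and similar fields.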

## Access logs with the Amazon CLI


The Amazon CLI is an open-source tool that enables you to interact with Amazon services using commands in your command line shell. To complete the steps in this section, you must have the [Amazon CLI version 2](https://docs.amazonaws.cn/cli/latest/userguide/getting-started-install.html).

You can use the [Amazon CLI](https://docs.amazonaws.cn/cli/latest/userguide/cli-chap-welcome.html) to retrieve logs for an invocation using the `--log-type` command option. The response contains a `LogResult` field that contains up to 4 KB of base64-encoded logs from the invocation.

**Example retrieve a log ID**  
The following example shows how to retrieve a *log ID* from the `LogResult` field for a function named `my-function`.  

```
aws lambda invoke --function-name my-function out --log-type Tail
```
You should see the following output:  

```
{
    "StatusCode": 200,
    "LogResult": "U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOC1mMTU0LTExZTgtOGNkYS0yOTc0YzVlNGZiMjEgVmVyc2lvb...",
    "ExecutedVersion": "$LATEST"
}
```

**Example decode the logs**  
In the same command prompt, use the `base64` utility to decode the logs. The following example shows how to retrieve base64-encoded logs for `my-function`.  

```
aws lambda invoke --function-name my-function out --log-type Tail \
--query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 --decode
```
The **cli-binary-format** option is required if you're using Amazon CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [Amazon CLI supported global command line options](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *Amazon Command Line Interface User Guide for Version 2*.  
You should see the following output:  

```
START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST
"AWS_SESSION_TOKEN": "AgoJb3JpZ2luX2VjELj...", "_X_AMZN_TRACE_ID": "Root=1-5d02e5ca-f5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0"",ask/lib:/opt/lib",
END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8
REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8  Duration: 79.67 ms      Billed Duration: 80 ms         Memory Size: 128 MB     Max Memory Used: 73 MB
```
The `base64` utility is available on Linux, macOS, and [Ubuntu on Windows](https://docs.microsoft.com/en-us/windows/wsl/install-win10). macOS users may need to use `base64 -D`.
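
If you invoke the function with an Amazon SDK rather than the Amazon CLI, you receive the same base64-encoded `LogResult` value and can decode it yourself. In Node.js, for example, `Buffer` handles the decoding; the sample value below is constructed locally for illustration:

```javascript
// Decode a base64-encoded LogResult value, as returned by Invoke
// with LogType: Tail. The sample string below is illustrative.
const sampleLogResult = Buffer.from(
  'START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST\n'
).toString('base64');

// In a real application, sampleLogResult would come from the Invoke response.
const decoded = Buffer.from(sampleLogResult, 'base64').toString('utf8');
console.log(decoded);
```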



**Example get-logs.sh script**  
In the same command prompt, use the following script to download the last five log events. The script invokes the function and reads the log stream name from the output file, so the function must return its log stream name (for example, the value of `context.logStreamName`) in its response. The script uses `sed` to remove quotes from the output file, and sleeps for 15 seconds to allow time for the logs to become available. The output includes the response from Lambda and the output from the `get-log-events` command.   
Copy the contents of the following code sample and save in your Lambda project directory as `get-logs.sh`.  
The **cli-binary-format** option is required if you're using Amazon CLI version 2. To make this the default setting, run `aws configure set cli-binary-format raw-in-base64-out`. For more information, see [Amazon CLI supported global command line options](https://docs.amazonaws.cn/cli/latest/userguide/cli-configure-options.html#cli-configure-options-list) in the *Amazon Command Line Interface User Guide for Version 2*.  

```
#!/bin/bash
aws lambda invoke --function-name my-function --cli-binary-format raw-in-base64-out --payload '{"key": "value"}' out
sed -i'' -e 's/"//g' out
sleep 15
aws logs get-log-events --log-group-name /aws/lambda/my-function --log-stream-name $(cat out) --limit 5
```

**Example macOS and Linux (only)**  
In the same command prompt, macOS and Linux users may need to run the following command to ensure the script is executable.  

```
chmod 755 get-logs.sh
```

**Example retrieve the last five log events**  
In the same command prompt, run the following script to get the last five log events.  

```
./get-logs.sh
```
You should see the following output:  

```
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
{
    "events": [
        {
            "timestamp": 1559763003171,
            "message": "START RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf Version: $LATEST\n",
            "ingestionTime": 1559763003309
        },
        {
            "timestamp": 1559763003173,
            "message": "2019-06-05T19:30:03.173Z\t4ce9340a-b765-490f-ad8a-02ab3415e2bf\tINFO\tENVIRONMENT VARIABLES\r{\r  \"AWS_LAMBDA_FUNCTION_VERSION\": \"$LATEST\",\r ...",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003173,
            "message": "2019-06-05T19:30:03.173Z\t4ce9340a-b765-490f-ad8a-02ab3415e2bf\tINFO\tEVENT\r{\r  \"key\": \"value\"\r}\n",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003218,
            "message": "END RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf\n",
            "ingestionTime": 1559763018353
        },
        {
            "timestamp": 1559763003218,
            "message": "REPORT RequestId: 4ce9340a-b765-490f-ad8a-02ab3415e2bf\tDuration: 26.73 ms\tBilled Duration: 27 ms \tMemory Size: 128 MB\tMax Memory Used: 75 MB\t\n",
            "ingestionTime": 1559763018353
        }
    ],
    "nextForwardToken": "f/34783877304859518393868359594929986069206639495374241795",
    "nextBackwardToken": "b/34783877303811383369537420289090800615709599058929582080"
}
```

## Parsing logs and structured logging


With CloudWatch Logs Insights, you can search and analyze log data using a specialized [query syntax](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html). It performs queries over multiple log groups and provides powerful filtering using [glob](https://en.wikipedia.org/wiki/Glob_(programming)) and [regular expressions](https://en.wikipedia.org/wiki/Regular_expression) pattern matching.

You can take advantage of these capabilities by implementing structured logging in your Lambda functions. Structured logging organizes your logs into a pre-defined format, making them easier to query. Using log levels is an important first step in generating filter-friendly logs that separate informational messages from warnings or errors. For example, consider the following Node.js code:

```
exports.handler = async (event) => {
    console.log("console.log - Application is fine")
    console.info("console.info - This is the same as console.log")
    console.warn("console.warn - Application provides a warning")
    console.error("console.error - An error occurred")
}
```

The resulting CloudWatch log file contains a separate field specifying the log level:

![\[monitoring observability figure 10\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/monitoring-observability-figure-10.png)


A CloudWatch Logs Insights query can then filter on log level. For example, to query for errors only, you can use the following query:

```
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
```

### JSON structured logging


JSON is commonly used to provide structure for application logs. In the following example, the logs have been converted to JSON to output three distinct values:

![\[monitoring observability figure 11\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/monitoring-observability-figure-11.png)


The CloudWatch Logs Insights feature automatically discovers values in JSON output and parses the messages as fields, without the need for custom glob or regular expression patterns. Using JSON-structured logs, the following query finds invocations where the uploaded file was larger than 1 MB, the upload time was more than 1 second, and the invocation was not a cold start:

```
fields @message
| filter @message like /INFO/
| filter uploadedBytes > 1000000
| filter uploadTimeMS > 1000
| filter invocation != 1
```

This query might produce the following result:

![\[monitoring observability figure 12\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/monitoring-observability-figure-12.png)


The discovered fields in JSON are automatically populated in the *Discovered fields* menu on the right side. Standard fields emitted by the Lambda service are prefixed with `@`, and you can query on these fields in the same way. Lambda logs always include the fields `@timestamp`, `@logStream`, `@message`, `@requestId`, `@duration`, `@billedDuration`, `@type`, `@maxMemoryUsed`, and `@memorySize`. If X-Ray is enabled for a function, logs also include `@xrayTraceId` and `@xraySegmentId`.

When an Amazon event source such as Amazon S3, Amazon SQS, or Amazon EventBridge invokes your function, the entire event is provided to the function as a JSON object input. By logging this event in the first line of the function, you can then query on any of the nested fields using CloudWatch Logs Insights.

### Useful Insights queries


The following table shows example Insights queries that can be useful for monitoring Lambda functions.


| Description | Example query syntax | 
| --- | --- | 
|  The last 100 errors  |  

```
fields Timestamp, LogLevel, Message
\| filter LogLevel == "ERR"
\| sort @timestamp desc
\| limit 100
```  | 
|  The top 100 highest billed invocations  |  

```
filter @type = "REPORT"
\| fields @requestId, @billedDuration
\| sort by @billedDuration desc
\| limit 100
```  | 
|  Percentage of cold starts in total invocations  |  

```
filter @type = "REPORT"
\| stats sum(strcontains(@message, "Init Duration"))/count(*) * 100 as
  coldStartPct, avg(@duration)
  by bin(5m)
```  | 
|  Percentile report of Lambda duration  |  

```
filter @type = "REPORT"
\| stats
    avg(@billedDuration) as Average,
    percentile(@billedDuration, 99) as NinetyNinth,
    percentile(@billedDuration, 95) as NinetyFifth,
    percentile(@billedDuration, 90) as Ninetieth
    by bin(30m)
```  | 
|  Percentile report of Lambda memory usage  |  

```
filter @type="REPORT"
\| stats avg(@maxMemoryUsed/1024/1024) as mean_MemoryUsed,
    min(@maxMemoryUsed/1024/1024) as min_MemoryUsed,
    max(@maxMemoryUsed/1024/1024) as max_MemoryUsed,
    percentile(@maxMemoryUsed/1024/1024, 95) as Percentile95
```  | 
|  Invocations using 100% of assigned memory  |  

```
filter @type = "REPORT" and @maxMemoryUsed=@memorySize
\| stats
    count_distinct(@requestId)
    by bin(30m)
```  | 
|  Average memory used across invocations  |  

```
filter @type = "REPORT"
\| stats avg(@maxMemoryUsed / 1024 / 1024) as avgMemoryUsedMB,
    avg(@maxMemoryUsed / @memorySize) * 100 as avgMemoryUsedPERC,
    avg(@billedDuration) as avgDurationMS
    by bin(5m)
```  | 
|  Visualization of memory statistics  |  

```
filter @type = "REPORT"
\| stats
    max(@maxMemoryUsed / 1024 / 1024) as maxMemMB,
    avg(@maxMemoryUsed / 1024 / 1024) as avgMemMB,
    min(@maxMemoryUsed / 1024 / 1024) as minMemMB,
    (avg(@maxMemoryUsed / 1024 / 1024) / max(@memorySize / 1024 / 1024)) * 100 as avgMemUsedPct,
    avg(@billedDuration) as avgDurationMS
    by bin(30m)
```  | 
|  Invocations where Lambda exited  |  

```
filter @message like /Process exited/
\| stats count() by bin(30m)
```  | 
|  Invocations that timed out  |  

```
filter @message like /Task timed out/
\| stats count() by bin(30m)
```  | 
|  Latency report  |  

```
filter @type = "REPORT"
\| stats avg(@duration), max(@duration), min(@duration)
  by bin(5m)
```  | 
|  Over-provisioned memory  |  

```
filter @type = "REPORT"
\| stats max(@memorySize / 1024 / 1024) as provisionedMemMB,
        min(@maxMemoryUsed / 1024 / 1024) as smallestMemReqMB,
        avg(@maxMemoryUsed / 1024 / 1024) as avgMemUsedMB,
        max(@maxMemoryUsed / 1024 / 1024) as maxMemUsedMB,
        provisionedMemMB - maxMemUsedMB as overProvisionedMB
```  | 

## Log visualization and dashboards


For any CloudWatch Logs Insights query, you can export the results to markdown or CSV format. In some cases, it might be more useful to create [visualizations from queries](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_Insights-Visualizing-Log-Data.html), provided there is at least one aggregation function. The `stats` function allows you to define aggregations and grouping.

The previous *logInsightsJSON* example filtered on upload size and upload time and excluded first invocations, which resulted in a table of data. For monitoring a production system, it may be more useful to visualize minimum, maximum, and average file sizes to find outliers. To do this, apply the `stats` function with the required aggregates and group on a time value, such as every minute.

For example, consider the following query. This is the same example query from the [JSON structured logging](#querying-logs-json) section, but with additional aggregation functions:

```
fields @message
| filter @message like /INFO/
| filter uploadedBytes > 1000000
| filter uploadTimeMS > 1000
| filter invocation != 1
| stats min(uploadedBytes), avg(uploadedBytes), max(uploadedBytes) by bin (1m)
```

These aggregates chart the minimum, maximum, and average file sizes over time, making outliers easy to spot. You can view the results in the **Visualization** tab:

![\[monitoring observability figure 14\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/monitoring-observability-figure-14.png)


After you have finished building the visualization, you can optionally add the graph to a CloudWatch dashboard. To do this, choose **Add to dashboard** above the visualization. This adds the query as a widget and enables you to select automatic refresh intervals, making it easier to continuously monitor the results:

![\[monitoring observability figure 15\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/monitoring-observability-figure-15.png)


# Sending Lambda function logs to Firehose
Log with Firehose

The Lambda console offers the option to send function logs to Firehose. This enables real-time streaming of your logs to various destinations supported by Firehose, including third-party analytics tools and custom endpoints.

**Note**  
You can configure Lambda function logs to be sent to Firehose using the Lambda console, Amazon CLI, Amazon CloudFormation, and all Amazon SDKs.

## Pricing


For details on pricing, see [Amazon CloudWatch pricing](https://www.amazonaws.cn/cloudwatch/pricing/#Vended_Logs).

## Required permissions for Firehose log destination


When using the Lambda console to configure Firehose as your function's log destination, you need:

1. The [required IAM permissions](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-prereqs) to use CloudWatch Logs with Lambda.

1. To [set up subscription filters with Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample). This filter defines which log events are delivered to your Firehose stream.

## Sending Lambda function logs to Firehose


In the Lambda console, you can send function logs directly to Firehose after creating a new function. To do this, complete these steps:

1. Sign in to the Amazon Management Console and open the Lambda console.

1. Choose your function's name.

1. Choose the **Configuration** tab.

1. Choose the **Monitoring and operations tools** tab.

1. In the "Logging configuration" section, choose **Edit**.

1. In the "Log content" section, select a log format.

1. In the "Log destination" section, complete the following steps:

   1. Select a destination service.

   1. Choose to **Create a new log group** or use an **Existing log group**.
**Note**  
If you choose an existing log group for a Firehose destination, ensure that it uses the `Delivery` log group class.

   1. Choose a Firehose stream.

   1. The CloudWatch `Delivery` log group will appear.

1. Choose **Save**.

**Note**  
If the IAM role provided in the console doesn't have the required permissions, the destination setup will fail. To fix this, refer to Required permissions for Firehose log destination to provide the required permissions.

## Cross-Account Logging
Cross-Account Logs

You can configure Lambda to send logs to Firehose delivery stream in a different Amazon account. This requires setting up a destination and configuring appropriate permissions in both accounts.

For detailed instructions on setting up cross-account logging, including required IAM roles and policies, see [Setting up a new cross-account subscription](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CrossAccountSubscriptions.html) in the CloudWatch Logs documentation.

# Sending Lambda function logs to Amazon S3
Log with Amazon S3

You can configure your Lambda function to send logs directly to Amazon S3 using the Lambda console. This feature provides a cost-effective solution for long-term log storage and enables powerful analysis options using services like Athena.

**Note**  
You can configure Lambda function logs to be sent to Amazon S3 using the Lambda console, Amazon CLI, Amazon CloudFormation, and all Amazon SDKs.

## Pricing


For details on pricing, see [Amazon CloudWatch pricing](https://www.amazonaws.cn/cloudwatch/pricing/#Vended_Logs).

## Required permissions for Amazon S3 log destination


When using the Lambda console to configure Amazon S3 as your function's log destination, you need:

1. The [required IAM permissions](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html#monitoring-cloudwatchlogs-prereqs) to use CloudWatch Logs with Lambda.

1. To [Set up a CloudWatch Logs subscriptions filter to send Lambda function logs to Amazon S3](#using-cwl-subscription-filter-lambda-s3). This filter defines which log events are delivered to your Amazon S3 bucket.

## Set up a CloudWatch Logs subscriptions filter to send Lambda function logs to Amazon S3
Send Lambda logs to Amazon S3

To send logs from CloudWatch Logs to Amazon S3, you need to create a subscription filter. This filter defines which log events are delivered to your Amazon S3 bucket. Your Amazon S3 bucket must be in the same Region as your log group.

### To create a subscription filter for Amazon S3


1. Create an Amazon Simple Storage Service (Amazon S3) bucket. We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, skip to step 2.

   Run the following command, replacing the placeholder Region with the Region you want to use:

   ```
   aws s3api create-bucket --bucket amzn-s3-demo-bucket2 --create-bucket-configuration LocationConstraint=region
   ```
**Note**  
`amzn-s3-demo-bucket2` is an example Amazon S3 bucket name. It is *reserved*. For this procedure to work, you must replace it with your own unique Amazon S3 bucket name.

   The following is example output:

   ```
   {
       "Location": "/amzn-s3-demo-bucket2"
   }
   ```

1. Create the IAM role that grants CloudWatch Logs permission to put data into your Amazon S3 bucket. This policy includes an `aws:SourceArn` global condition context key to help prevent the confused deputy security issue. For more information, see [Confused deputy prevention](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions-confused-deputy.html).

   1. Use a text editor to create a trust policy in a file `~/TrustPolicyForCWL.json` as follows:

      ```
      {
          "Statement": {
              "Effect": "Allow",
              "Principal": { "Service": "logs.amazonaws.com" },
              "Condition": { 
                  "StringLike": {
                      "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
                  } 
               },
              "Action": "sts:AssumeRole"
          } 
      }
      ```

   1. Use the create-role command to create the IAM role, specifying the trust policy file. Note the returned `Role.Arn` value; you will need it in a later step:

      ```
      aws iam create-role \
       --role-name CWLtoS3Role \
       --assume-role-policy-document file://~/TrustPolicyForCWL.json
      {
          "Role": {
              "AssumeRolePolicyDocument": {
                  "Statement": {
                      "Action": "sts:AssumeRole",
                      "Effect": "Allow",
                      "Principal": {
                          "Service": "logs.amazonaws.com"
                      },
                      "Condition": { 
                          "StringLike": {
                              "aws:SourceArn": "arn:aws:logs:region:123456789012:*"
                          } 
                      }
                  }
              },
              "RoleId": "AAOIIAH450GAB4HC5F431",
              "CreateDate": "2015-05-29T13:46:29.431Z",
              "RoleName": "CWLtoS3Role",
              "Path": "/",
              "Arn": "arn:aws:iam::123456789012:role/CWLtoS3Role"
          }
      }
      ```

1. Create a permissions policy to define what actions CloudWatch Logs can do on your account. First, use a text editor to create a permissions policy in a file `~/PermissionsForCWL.json`:

   ```
   {
     "Statement": [
       {
         "Effect": "Allow",
         "Action": ["s3:PutObject"],
         "Resource": ["arn:aws:s3:::amzn-s3-demo-bucket2/*"]
       }
     ]
   }
   ```

   Associate the permissions policy with the role using the following `put-role-policy` command:

   ```
   aws iam put-role-policy --role-name CWLtoS3Role --policy-name Permissions-Policy-For-S3 --policy-document file://~/PermissionsForCWL.json
   ```

1. Create a `Delivery` log group or use an existing `Delivery` log group.

   ```
   aws logs create-log-group --log-group-name my-logs --log-group-class DELIVERY --region REGION_NAME
   ```

1. Use the `put-subscription-filter` command to set up the destination:

   ```
   aws logs put-subscription-filter \
   --log-group-name my-logs \
   --filter-name my-lambda-delivery \
   --filter-pattern "" \
   --destination-arn arn:aws:s3:::amzn-s3-demo-bucket2 \
   --role-arn arn:aws:iam::123456789012:role/CWLtoS3Role \
   --region REGION_NAME
   ```

## Sending Lambda function logs to Amazon S3


In the Lambda console, you can send function logs directly to Amazon S3 after creating a new function. To do this, complete these steps:

1. Sign in to the Amazon Management Console and open the Lambda console.

1. Choose your function's name.

1. Choose the **Configuration** tab.

1. Choose the **Monitoring and operations tools** tab.

1. In the "Logging configuration" section, choose **Edit**.

1. In the "Log content" section, select a log format.

1. In the "Log destination" section, complete the following steps:

   1. Select a destination service.

   1. Choose to **Create a new log group** or use an **Existing log group**.
**Note**  
If you choose an existing log group for an Amazon S3 destination, ensure that it uses the `Delivery` log group class.

   1. Choose an Amazon S3 bucket to be the destination for your function logs.

   1. The CloudWatch `Delivery` log group will appear.

1. Choose **Save**.

**Note**  
If the IAM role provided in the console doesn't have the required permissions, then the destination setup will fail. To fix this, refer to [Required permissions for Amazon S3 log destination](#logging-s3-permissions).

## Cross-Account Logging
Cross-Account Logs

You can configure Lambda to send logs to an Amazon S3 bucket in a different Amazon account. This requires setting up a destination and configuring appropriate permissions in both accounts.

For detailed instructions on setting up cross-account logging, including required IAM roles and policies, see [Setting up a new cross-account subscription](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CrossAccountSubscriptions.html) in the CloudWatch Logs documentation.

# Logging Amazon Lambda API calls using Amazon CloudTrail
CloudTrail logs

Amazon Lambda is integrated with [Amazon CloudTrail](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-user-guide.html), a service that provides a record of actions taken by a user, role, or an Amazon Web Services service. CloudTrail captures API calls for Lambda as events. The calls captured include calls from the Lambda console and code calls to the Lambda API operations. Using the information collected by CloudTrail, you can determine the request that was made to Lambda, the IP address from which the request was made, when it was made, and additional details.

Every event or log entry contains information about who generated the request. The identity information helps you determine the following:
+ Whether the request was made with root user or user credentials.
+ Whether the request was made on behalf of an IAM Identity Center user.
+ Whether the request was made with temporary security credentials for a role or federated user.
+ Whether the request was made by another Amazon Web Services service.

CloudTrail is active in your Amazon Web Services account when you create the account and you automatically have access to the CloudTrail **Event history**. The CloudTrail **Event history** provides a viewable, searchable, downloadable, and immutable record of the past 90 days of recorded management events in an Amazon Web Services Region. For more information, see [Working with CloudTrail Event history](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/view-cloudtrail-events.html) in the *Amazon CloudTrail User Guide*. There are no CloudTrail charges for viewing the **Event history**.

For an ongoing record of events in your Amazon Web Services account past 90 days, create a trail or a [CloudTrail Lake](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake.html) event data store.

**CloudTrail trails**  
A *trail* enables CloudTrail to deliver log files to an Amazon S3 bucket. All trails created using the Amazon Web Services Management Console are multi-Region. You can create a single-Region or a multi-Region trail by using the Amazon CLI. Creating a multi-Region trail is recommended because you capture activity in all Amazon Web Services Regions in your account. If you create a single-Region trail, you can view only the events logged in the trail's Amazon Web Services Region. For more information about trails, see [Creating a trail for your Amazon Web Services account](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html) and [Creating a trail for an organization](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/creating-trail-organization.html) in the *Amazon CloudTrail User Guide*.  
You can deliver one copy of your ongoing management events to your Amazon S3 bucket at no charge from CloudTrail by creating a trail, however, there are Amazon S3 storage charges. For more information about CloudTrail pricing, see [Amazon CloudTrail Pricing](https://www.amazonaws.cn/cloudtrail/pricing/). For information about Amazon S3 pricing, see [Amazon S3 Pricing](https://www.amazonaws.cn/s3/pricing/).

**CloudTrail Lake event data stores**  
*CloudTrail Lake* lets you run SQL-based queries on your events. CloudTrail Lake converts existing events in row-based JSON format to [ Apache ORC](https://orc.apache.org/) format. ORC is a columnar storage format that is optimized for fast retrieval of data. Events are aggregated into *event data stores*, which are immutable collections of events based on criteria that you select by applying [advanced event selectors](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake-concepts.html#adv-event-selectors). The selectors that you apply to an event data store control which events persist and are available for you to query. For more information about CloudTrail Lake, see [Working with Amazon CloudTrail Lake](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake.html) in the *Amazon CloudTrail User Guide*.  
CloudTrail Lake event data stores and queries incur costs. When you create an event data store, you choose the [pricing option](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-lake-manage-costs.html#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For more information about CloudTrail pricing, see [Amazon CloudTrail Pricing](https://www.amazonaws.cn/cloudtrail/pricing/).

## Lambda data events in CloudTrail


[Data events](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events) provide information about the resource operations performed on or in a resource (for example, reading or writing to an Amazon S3 object). These are also known as data plane operations. Data events are often high-volume activities. By default, CloudTrail doesn’t log most data events, and the CloudTrail **Event history** doesn't record them.

One CloudTrail data event that is logged by default for supported services is `LambdaESMDisabled`. To learn more about using this event to help troubleshoot issues with Lambda event source mappings, see [Using CloudTrail to troubleshoot disabled Lambda event sources](#cloudtrail-ESM-troubleshooting).

Additional charges apply for data events. For more information about CloudTrail pricing, see [Amazon CloudTrail Pricing](https://www.amazonaws.cn/cloudtrail/pricing/).

You can log data events for the `AWS::Lambda::Function` resource type by using the CloudTrail console, Amazon CLI, or CloudTrail API operations. For more information about how to log data events, see [Logging data events with the Amazon Web Services Management Console](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#logging-data-events-console) and [Logging data events with the Amazon Command Line Interface](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html#creating-data-event-selectors-with-the-AWS-CLI) in the *Amazon CloudTrail User Guide*.

The following table lists the Lambda resource type for which you can log data events. The **Data event type (console)** column shows the value to choose from the **Data event type** list on the CloudTrail console. The **resources.type value** column shows the `resources.type` value, which you would specify when configuring advanced event selectors using the Amazon CLI or CloudTrail APIs. The **Data APIs logged to CloudTrail** column shows the API calls logged to CloudTrail for the resource type. 


| Data event type (console) | resources.type value | Data APIs logged to CloudTrail | 
| --- | --- | --- | 
| Lambda |  AWS::Lambda::Function  |  [Invoke](https://docs.amazonaws.cn/lambda/latest/api/API_Invoke.html)  | 

You can configure advanced event selectors to filter on the `eventName`, `readOnly`, and `resources.ARN` fields to log only those events that are important to you. The following example is the JSON view of a data event configuration that logs events for a specific function only. For more information about these fields, see [AdvancedFieldSelector](https://docs.amazonaws.cn/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html) in the *Amazon CloudTrail API Reference*.

```
[
  {
    "name": "function-invokes",
    "fieldSelectors": [
      {
        "field": "eventCategory",
        "equals": [
          "Data"
        ]
      },
      {
        "field": "resources.type",
        "equals": [
          "AWS::Lambda::Function"
        ]
      },
      {
        "field": "resources.ARN",
        "equals": [
          "arn:aws:lambda:us-east-1:111122223333:function:hello-world"
        ]
      }
    ]
  }
]
```
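If you manage this configuration programmatically rather than pasting JSON into the console, the same selector can be assembled in code. The following is a minimal Python sketch that mirrors the JSON view above (the function ARN is a placeholder; note that the CloudTrail `PutEventSelectors` API itself expects PascalCase field names such as `FieldSelectors`, while the console JSON view shown here uses camelCase):

```python
import json

def invoke_data_event_selector(function_arn, name="function-invokes"):
    """Build an advanced event selector (console JSON view shape) that
    logs Invoke data events for a single Lambda function."""
    return {
        "name": name,
        "fieldSelectors": [
            {"field": "eventCategory", "equals": ["Data"]},
            {"field": "resources.type", "equals": ["AWS::Lambda::Function"]},
            {"field": "resources.ARN", "equals": [function_arn]},
        ],
    }

# Example: a selector for a single hypothetical function
selector = invoke_data_event_selector(
    "arn:aws:lambda:us-east-1:111122223333:function:hello-world"
)
print(json.dumps([selector], indent=2))
```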

## Lambda management events in CloudTrail


[Management events](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html#logging-management-events) provide information about management operations that are performed on resources in your Amazon Web Services account. These are also known as control plane operations. By default, CloudTrail logs management events.

Lambda supports logging the following actions as management events in CloudTrail log files.

**Note**  
In the CloudTrail log file, the `eventName` might include date and version information, but it is still referring to the same public API action. For example, the `GetFunction` action appears as `GetFunction20150331v2`. The following list specifies when the event name differs from the API action name.
+ [AddLayerVersionPermission](https://docs.amazonaws.cn/lambda/latest/api/API_AddLayerVersionPermission.html)
+ [AddPermission](https://docs.amazonaws.cn/lambda/latest/api/API_AddPermission.html) (event name: `AddPermission20150331v2`)
+ [CreateAlias](https://docs.amazonaws.cn/lambda/latest/api/API_CreateAlias.html) (event name: `CreateAlias20150331`)
+ [CreateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_CreateEventSourceMapping.html) (event name: `CreateEventSourceMapping20150331`)
+ [CreateFunction](https://docs.amazonaws.cn/lambda/latest/api/API_CreateFunction.html) (event name: `CreateFunction20150331`)

  (The `Environment` and `ZipFile` parameters are omitted from the CloudTrail logs for `CreateFunction`.)
+ [CreateFunctionUrlConfig](https://docs.amazonaws.cn/lambda/latest/api/API_CreateFunctionUrlConfig.html)
+ [DeleteAlias](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteAlias.html) (event name: `DeleteAlias20150331`)
+ [DeleteCodeSigningConfig](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteCodeSigningConfig.html)
+ [DeleteEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteEventSourceMapping.html) (event name: `DeleteEventSourceMapping20150331`)
+ [DeleteFunction](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteFunction.html) (event name: `DeleteFunction20150331`)
+ [DeleteFunctionConcurrency](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteFunctionConcurrency.html) (event name: `DeleteFunctionConcurrency20171031`)
+ [DeleteFunctionUrlConfig](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteFunctionUrlConfig.html)
+ [DeleteProvisionedConcurrencyConfig](https://docs.amazonaws.cn/lambda/latest/api/API_DeleteProvisionedConcurrencyConfig.html)
+ [GetAlias](https://docs.amazonaws.cn/lambda/latest/api/API_GetAlias.html) (event name: `GetAlias20150331`)
+ [GetEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_GetEventSourceMapping.html)
+ [GetFunction](https://docs.amazonaws.cn/lambda/latest/api/API_GetFunction.html)
+ [GetFunctionUrlConfig](https://docs.amazonaws.cn/lambda/latest/api/API_GetFunctionUrlConfig.html)
+ [GetFunctionConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_GetFunctionConfiguration.html)
+ [GetLayerVersionPolicy](https://docs.amazonaws.cn/lambda/latest/api/API_GetLayerVersionPolicy.html)
+ [GetPolicy](https://docs.amazonaws.cn/lambda/latest/api/API_GetPolicy.html)
+ [ListEventSourceMappings](https://docs.amazonaws.cn/lambda/latest/api/API_ListEventSourceMappings.html)
+ [ListFunctions](https://docs.amazonaws.cn/lambda/latest/api/API_ListFunctions.html)
+ [ListFunctionUrlConfigs](https://docs.amazonaws.cn/lambda/latest/api/API_ListFunctionUrlConfigs.html)
+ [PublishLayerVersion](https://docs.amazonaws.cn/lambda/latest/api/API_PublishLayerVersion.html) (event name: `PublishLayerVersion20181031`)

  (The `ZipFile` parameter is omitted from the CloudTrail logs for `PublishLayerVersion`.)
+ [PublishVersion](https://docs.amazonaws.cn/lambda/latest/api/API_PublishVersion.html) (event name: `PublishVersion20150331`)
+ [PutFunctionConcurrency](https://docs.amazonaws.cn/lambda/latest/api/API_PutFunctionConcurrency.html) (event name: `PutFunctionConcurrency20171031`)
+ [PutFunctionCodeSigningConfig](https://docs.amazonaws.cn/lambda/latest/api/API_PutFunctionCodeSigningConfig.html)
+ [PutFunctionEventInvokeConfig](https://docs.amazonaws.cn/lambda/latest/api/API_PutFunctionEventInvokeConfig.html)
+ [PutProvisionedConcurrencyConfig](https://docs.amazonaws.cn/lambda/latest/api/API_PutProvisionedConcurrencyConfig.html)
+ [PutRuntimeManagementConfig](https://docs.amazonaws.cn/lambda/latest/api/API_PutRuntimeManagementConfig.html)
+ [RemovePermission](https://docs.amazonaws.cn/lambda/latest/api/API_RemovePermission.html) (event name: `RemovePermission20150331v2`)
+ [TagResource](https://docs.amazonaws.cn/lambda/latest/api/API_TagResource.html) (event name: `TagResource20170331v2`)
+ [UntagResource](https://docs.amazonaws.cn/lambda/latest/api/API_UntagResource.html) (event name: `UntagResource20170331v2`)
+ [UpdateAlias](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateAlias.html) (event name: `UpdateAlias20150331`)
+ [UpdateCodeSigningConfig](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateCodeSigningConfig.html) 
+ [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) (event name: `UpdateEventSourceMapping20150331`)
+ [UpdateFunctionCode](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionCode.html) (event name: `UpdateFunctionCode20150331v2`)

  (The `ZipFile` parameter is omitted from the CloudTrail logs for `UpdateFunctionCode`.)
+ [UpdateFunctionConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionConfiguration.html) (event name: `UpdateFunctionConfiguration20150331v2`)

  (The `Environment` parameter is omitted from the CloudTrail logs for `UpdateFunctionConfiguration`.)
+ [UpdateFunctionEventInvokeConfig](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionEventInvokeConfig.html)
+ [UpdateFunctionUrlConfig](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionUrlConfig.html)

## Using CloudTrail to troubleshoot disabled Lambda event sources


When you change the state of an event source mapping using the [UpdateEventSourceMapping](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateEventSourceMapping.html) API action, the API call is logged as a management event in CloudTrail. Event source mappings can also transition directly to the `Disabled` state due to errors.

For the following services, Lambda publishes the `LambdaESMDisabled` data event to CloudTrail when your event source transitions to the Disabled state:
+ Amazon Simple Queue Service (Amazon SQS)
+ Amazon DynamoDB
+ Amazon Kinesis

Lambda doesn't support this event for any other event source mapping types.

To receive alerts when event source mappings for supported services transition to the `Disabled` state, set up an alarm in Amazon CloudWatch using the `LambdaESMDisabled` CloudTrail event. To learn more about setting up a CloudWatch alarm, see [Creating CloudWatch alarms for CloudTrail events: examples](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html).

The `serviceEventDetails` entity in the `LambdaESMDisabled` event message contains one of the following error codes.

**`RESOURCE_NOT_FOUND`**  
The resource specified in the request does not exist.

**`FUNCTION_NOT_FOUND`**  
The function attached to the event source does not exist.

**`REGION_NAME_NOT_VALID`**  
A Region name provided to the event source or function is invalid.

**`AUTHORIZATION_ERROR`**  
Permissions have not been set, or are misconfigured.

**`FUNCTION_IN_FAILED_STATE`**  
The function code does not compile, has encountered an unrecoverable exception, or a bad deployment has occurred.
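When you process `LambdaESMDisabled` events programmatically, you can map the error code to a remediation hint. The following is a minimal sketch (the mapping and hint text are illustrative summaries of the descriptions above, not an official API):

```python
# Hints keyed by the error codes documented for the
# serviceEventDetails entity of the LambdaESMDisabled event.
ESM_DISABLED_HINTS = {
    "RESOURCE_NOT_FOUND": "Verify that the resource specified in the request exists.",
    "FUNCTION_NOT_FOUND": "Verify that the function attached to the event source exists.",
    "REGION_NAME_NOT_VALID": "Check the Region name in the event source or function ARN.",
    "AUTHORIZATION_ERROR": "Review and correct the permissions configuration.",
    "FUNCTION_IN_FAILED_STATE": "Inspect the function code and the last deployment.",
}

def esm_disabled_hint(error_code):
    """Return a troubleshooting hint for a LambdaESMDisabled error code."""
    return ESM_DISABLED_HINTS.get(error_code, "Unknown error code: " + error_code)
```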

## Lambda event examples


An event represents a single request from any source and includes information about the requested API operation, the date and time of the operation, request parameters, and so on. CloudTrail log files aren't an ordered stack trace of the public API calls, so events don't appear in any specific order.

The following example shows CloudTrail log entries for the `GetFunction` and `DeleteFunction` actions.

**Note**  
The `eventName` might include date and version information, such as `"GetFunction20150331"`, but it is still referring to the same public API. 

```
{
  "Records": [
    {
      "eventVersion": "1.03",
      "userIdentity": {
        "type": "IAMUser",
        "principalId": "A1B2C3D4E5F6G7EXAMPLE",
        "arn": "arn:aws-cn:iam::111122223333:user/myUserName",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "userName": "myUserName"
      },
      "eventTime": "2015-03-18T19:03:36Z",
      "eventSource": "lambda.amazonaws.com",
      "eventName": "GetFunction",
      "awsRegion": "us-east-1",
      "sourceIPAddress": "127.0.0.1",
      "userAgent": "Python-httplib2/0.8 (gzip)",
      "errorCode": "AccessDenied",
      "errorMessage": "User: arn:aws-cn:iam::111122223333:user/myUserName is not authorized to perform: lambda:GetFunction on resource: arn:aws-cn:lambda:us-west-2:111122223333:function:other-acct-function",
      "requestParameters": null,
      "responseElements": null,
      "requestID": "7aebcd0f-cda1-11e4-aaa2-e356da31e4ff",
      "eventID": "e92a3e85-8ecd-4d23-8074-843aabfe89bf",
      "eventType": "AwsApiCall",
      "recipientAccountId": "111122223333"
    },
    {
      "eventVersion": "1.03",
      "userIdentity": {
        "type": "IAMUser",
        "principalId": "A1B2C3D4E5F6G7EXAMPLE",
        "arn": "arn:aws-cn:iam::111122223333:user/myUserName",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "userName": "myUserName"
      },
      "eventTime": "2015-03-18T19:04:42Z",
      "eventSource": "lambda.amazonaws.com",
      "eventName": "DeleteFunction20150331",
      "awsRegion": "us-east-1",
      "sourceIPAddress": "127.0.0.1",
      "userAgent": "Python-httplib2/0.8 (gzip)",
      "requestParameters": {
        "functionName": "basic-node-task"
      },
      "responseElements": null,
      "requestID": "a2198ecc-cda1-11e4-aaa2-e356da31e4ff",
      "eventID": "20b84ce5-730f-482e-b2b2-e8fcc87ceb22",
      "eventType": "AwsApiCall",
      "recipientAccountId": "111122223333"
    }
  ]
}
```
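As the notes above mention, an `eventName` may carry a date suffix (and sometimes a version, such as `v2`) while still referring to the same public API action. When grouping records by action, it can help to strip that suffix. A minimal sketch, assuming the suffix is always an eight-digit date optionally followed by `vN`, as in the management events list above:

```python
import re

# Trailing eight-digit date, optionally followed by a version like "v2"
_VERSION_SUFFIX = re.compile(r"\d{8}(v\d+)?$")

def normalize_event_name(event_name):
    """Strip a date/version suffix, e.g. DeleteFunction20150331 -> DeleteFunction."""
    return _VERSION_SUFFIX.sub("", event_name)
```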

For information about CloudTrail record contents, see [CloudTrail record contents](https://docs.amazonaws.cn/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html) in the *Amazon CloudTrail User Guide*.

# Visualize Lambda function invocations using Amazon X-Ray
Amazon X-Ray

You can use Amazon X-Ray to visualize the components of your application, identify performance bottlenecks, and troubleshoot requests that resulted in an error. Your Lambda functions send trace data to X-Ray, and X-Ray processes the data to generate a service map and searchable trace summaries.

Lambda supports two tracing modes for X-Ray: `Active` and `PassThrough`. With `Active` tracing, Lambda automatically creates trace segments for function invocations and sends them to X-Ray. `PassThrough` mode, on the other hand, simply propagates the tracing context to downstream services. If you've enabled `Active` tracing for your function, Lambda automatically sends traces to X-Ray for sampled requests. Typically, an upstream service, such as Amazon API Gateway or an application hosted on Amazon EC2 that is instrumented with the X-Ray SDK, decides whether incoming requests should be traced, then adds that sampling decision as a tracing header. Lambda uses that header to decide to send traces or not. Traces from upstream message producers, such as Amazon SQS, are automatically linked to traces from downstream Lambda functions, creating an end-to-end view of the entire application. For more information, see [Tracing event-driven applications](https://docs.amazonaws.cn//xray/latest/devguide/xray-tracelinking.html) in the *Amazon X-Ray Developer Guide*.

**Note**  
X-Ray tracing is currently not supported for Lambda functions with Amazon Managed Streaming for Apache Kafka (Amazon MSK), self-managed Apache Kafka, Amazon MQ with ActiveMQ and RabbitMQ, or Amazon DocumentDB event source mappings.
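The sampling decision described above travels in the `X-Amzn-Trace-Id` tracing header, which is a semicolon-separated list of `key=value` fields (for example, `Root=1-5759e988-bd862e3fe1be46a994272793;Parent=53995c3f42cd8ad8;Sampled=1`). The following is a minimal sketch that reads the sampling decision from such a header; it is a simplified parser for illustration, not the X-Ray SDK's implementation:

```python
def parse_trace_header(header):
    """Parse an X-Amzn-Trace-Id header into a dict of its key=value fields."""
    fields = {}
    for part in header.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key] = value
    return fields

def is_sampled(header):
    """Return True if the upstream service decided to sample the request."""
    return parse_trace_header(header).get("Sampled") == "1"
```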

To toggle active tracing on your Lambda function with the console, follow these steps:

**To turn on active tracing**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose a function.

1. Choose **Configuration** and then choose **Monitoring and operations tools**.

1. Under **Additional monitoring tools**, choose **Edit**.

1. Under **CloudWatch Application Signals and Amazon X-Ray**, choose **Enable** for **Lambda service traces**.

1. Choose **Save**.

Your function needs permission to upload trace data to X-Ray. When you activate tracing in the Lambda console, Lambda adds the required permissions to your function's [execution role](lambda-intro-execution-role.md). Otherwise, add the [AWSXRayDaemonWriteAccess](https://console.amazonaws.cn/iam/home#/policies/arn:aws-cn:iam::aws:policy/AWSXRayDaemonWriteAccess) policy to the execution role.

X-Ray doesn't trace all requests to your application. X-Ray applies a sampling algorithm to ensure that tracing is efficient, while still providing a representative sample of all requests. The sampling rate is 1 request per second and 5 percent of additional requests. You can't configure the X-Ray sampling rate for your functions.

## Understanding X-Ray traces


In X-Ray, a *trace* records information about a request that is processed by one or more *services*. Lambda records two segments per trace, which creates two nodes on the service graph. The following image highlights these two nodes:

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/xray-servicemap-function.png)


The first node on the left represents the Lambda service, which receives the invocation request. The second node represents your specific Lambda function.

The segment recorded for the Lambda service, `AWS::Lambda`, covers all the steps required to prepare the Lambda execution environment. This includes scheduling the microVM, creating or unfreezing an execution environment with the resources you have configured, and downloading your function code and all layers.

The `AWS::Lambda::Function` segment is for the work done by the function.

**Note**  
Amazon is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your Amazon Web Services account.  
This change affects the subsegments of the function segment. The following paragraphs describe both the old and new formats for these subsegments.  
These changes will be implemented during the coming weeks, and all functions in all Amazon Web Services Regions except the China and GovCloud regions will transition to use the new-format log messages and trace segments.

**Old-style Amazon X-Ray Lambda segment structure**  
The old-style X-Ray structure for the `AWS::Lambda` segment looks like the following:

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/V2_sandbox_images/v1_XRay_structure.png)


In this format, the function segment has subsegments for `Initialization`, `Invocation`, and `Overhead`. For [Lambda SnapStart](snapstart.md) only, there is also a `Restore` subsegment (not shown on this diagram). 

The `Initialization` subsegment represents the init phase of the Lambda execution environment lifecycle. During this phase, Lambda initializes extensions, initializes the runtime, and runs the function's initialization code.

The `Invocation` subsegment represents the invoke phase, where Lambda invokes the function handler. This phase begins with runtime and extension registration, and ends when the runtime is ready to send the response.

(Lambda SnapStart only) The `Restore` subsegment shows the time it takes for Lambda to restore a snapshot, load the runtime, and run any after-restore [runtime hooks](snapstart-runtime-hooks.md). The process of restoring snapshots can include time spent on activities outside the MicroVM. This time is reported in the `Restore` subsegment. You aren't charged for the time spent outside the microVM to restore a snapshot.

The `Overhead` subsegment represents the phase that occurs between the time when the runtime sends the response and the signal for the next invoke. During this time, the runtime finishes all tasks related to an invoke and prepares to freeze the sandbox.

**Important**  
You can use the X-Ray SDK to extend the `Invocation` subsegment with additional subsegments for downstream calls, annotations, and metadata. You can't access the function segment directly or record work done outside of the handler invocation scope.

For more information about Lambda execution environment phases, see [Understanding the Lambda execution environment lifecycle](lambda-runtime-environment.md).

An example trace using the old-style X-Ray structure is shown in the following diagram.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/V2_sandbox_images/my-function-2-v1.png)


Note the two segments in the example. Both are named **my-function**, but one has an origin of `AWS::Lambda` and the other has an origin of `AWS::Lambda::Function`. If the `AWS::Lambda` segment shows an error, the Lambda service had an issue. If the `AWS::Lambda::Function` segment shows an error, your function had an issue.

**Note**  
Occasionally, you may notice a large gap between the function initialization and invocation phases in your X-Ray traces. For functions using [provisioned concurrency](provisioned-concurrency.md), this is because Lambda initializes your function instances well in advance of invocation. For functions using [unreserved (on-demand) concurrency](lambda-concurrency.md), Lambda may proactively initialize a function instance, even if there's no invocation. Visually, both of these cases show up as a time gap between the initialization and invocation phases.

**New-style Amazon X-Ray Lambda segment structure**  
The new-style X-Ray structure for the `AWS::Lambda` segment looks like the following:

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/V2_sandbox_images/v2_XRay_structure.png)


In this new format, the `Init` subsegment represents the init phase of the Lambda execution environment lifecycle, as before.

There is no `Invocation` subsegment in the new format. Instead, customer subsegments are attached directly to the `AWS::Lambda::Function` segment. This segment contains the following metrics as annotations:
+ `aws.responseLatency` - the time taken for the function to run
+ `aws.responseDuration` - the time taken to transfer the response to the customer
+ `aws.runtimeOverhead` - the amount of additional time the runtime needed to finish
+ `aws.extensionOverhead` - the amount of additional time the extensions needed to finish
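Taken together, the last three annotations account for time spent outside the function's own run time. The following is a minimal sketch that sums them from a function segment's annotations (the annotation values here are illustrative placeholders, and the segment layout is an assumption for the example):

```python
# Annotation keys from the new-style AWS::Lambda::Function segment
# that fall outside aws.responseLatency (the function's run time).
OVERHEAD_ANNOTATIONS = (
    "aws.responseDuration",
    "aws.runtimeOverhead",
    "aws.extensionOverhead",
)

def non_handler_time(annotations):
    """Sum the annotation metrics that represent time outside the handler."""
    return sum(annotations.get(key, 0) for key in OVERHEAD_ANNOTATIONS)

# Illustrative annotation values only
example = {
    "aws.responseLatency": 120,
    "aws.responseDuration": 5,
    "aws.runtimeOverhead": 2,
    "aws.extensionOverhead": 3,
}
```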

An example trace using the new-style X-Ray structure is shown in the following diagram.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/V2_sandbox_images/my-function-2-v2.png)


Note the two segments in the example. Both are named **my-function**, but one has an origin of `AWS::Lambda` and the other has an origin of `AWS::Lambda::Function`. If the `AWS::Lambda` segment shows an error, the Lambda service had an issue. If the `AWS::Lambda::Function` segment shows an error, your function had an issue.

See the following topics for a language-specific introduction to tracing in Lambda:
+ [Instrumenting Node.js code in Amazon Lambda](nodejs-tracing.md)
+ [Instrumenting Python code in Amazon Lambda](python-tracing.md)
+ [Instrumenting Ruby code in Amazon Lambda](ruby-tracing.md)
+ [Instrumenting Java code in Amazon Lambda](java-tracing.md)
+ [Instrumenting Go code in Amazon Lambda](golang-tracing.md)
+ [Instrumenting C# code in Amazon Lambda](csharp-tracing.md)

For a full list of services that support active instrumentation, see [Supported Amazon Web Services services](https://docs.amazonaws.cn/xray/latest/devguide/xray-usage.html#xray-usage-codechanges) in the Amazon X-Ray Developer Guide.

## Default tracing behavior in Lambda


If you do not have `Active` tracing turned on, Lambda defaults to `PassThrough` tracing mode.

In `PassThrough` mode, Lambda forwards the X-Ray tracing header to downstream services, but does not send traces automatically. This is true even if the tracing header contains a decision to sample the request. If the upstream service does not provide an X-Ray tracing header, Lambda generates a header and makes the decision not to sample. However, you can send your own traces by calling tracing libraries from your function code. 

**Note**  
 Previously, Lambda would send traces automatically when upstream services, such as Amazon API Gateway, added a tracing header. By not sending traces automatically, Lambda gives you the control to trace the functions that are important to you. If your solution depends on this passive tracing behavior, switch to `Active` tracing. 

## Execution role permissions


Lambda needs the following permissions to send trace data to X-Ray. Add them to your function's [execution role](lambda-intro-execution-role.md).
+ [xray:PutTraceSegments](https://docs.amazonaws.cn/xray/latest/api/API_PutTraceSegments.html)
+ [xray:PutTelemetryRecords](https://docs.amazonaws.cn/xray/latest/api/API_PutTelemetryRecords.html)

These permissions are included in the [AWSXRayDaemonWriteAccess](https://console.amazonaws.cn/iam/home?#/policies/arn:aws-cn:iam::aws:policy/AWSXRayDaemonWriteAccess) managed policy.
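If you prefer an inline policy over the managed policy, a minimal equivalent looks like the following sketch (these X-Ray actions do not support resource-level permissions, so the resource is `*`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords"
      ],
      "Resource": "*"
    }
  ]
}
```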

## Enabling `Active` tracing with the Lambda API


To manage tracing configuration with the Amazon CLI or Amazon SDK, use the following API operations:
+ [UpdateFunctionConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_UpdateFunctionConfiguration.html)
+ [GetFunctionConfiguration](https://docs.amazonaws.cn/lambda/latest/api/API_GetFunctionConfiguration.html)
+ [CreateFunction](https://docs.amazonaws.cn/lambda/latest/api/API_CreateFunction.html)

The following example Amazon CLI command enables active tracing on a function named **my-function**.

```
aws lambda update-function-configuration --function-name my-function \
--tracing-config Mode=Active
```

Tracing mode is part of the version-specific configuration when you publish a version of your function. You can't change the tracing mode on a published version.

## Enabling `Active` tracing with Amazon CloudFormation


To activate tracing on an `AWS::Lambda::Function` resource in an Amazon CloudFormation template, use the `TracingConfig` property.

**Example [function-inline.yml](https://github.com/awsdocs/aws-lambda-developer-guide/blob/master/templates/function-inline.yml) – Tracing configuration**  

```
Resources:
  function:
    Type: [AWS::Lambda::Function](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-function.html)
    Properties:
      TracingConfig:
        Mode: Active
      ...
```

For an Amazon Serverless Application Model (Amazon SAM) `AWS::Serverless::Function` resource, use the `Tracing` property.

**Example [template.yml](https://github.com/awsdocs/aws-lambda-developer-guide/tree/main/sample-apps/blank-nodejs/template.yml) – Tracing configuration**  

```
Resources:
  function:
    Type: [AWS::Serverless::Function](https://docs.amazonaws.cn/serverless-application-model/latest/developerguide/sam-resource-function.html)
    Properties:
      Tracing: Active
      ...
```

# Monitor function performance with Amazon CloudWatch Lambda Insights
Function insights

Amazon CloudWatch Lambda Insights collects and aggregates Lambda function runtime performance metrics and logs for your serverless applications. This page describes how to enable and use Lambda Insights to diagnose issues with your Lambda functions.

**Topics**
+ [

## How Lambda Insights monitors serverless applications
](#monitoring-insights-how)
+ [

## Pricing
](#monitoring-insights-pricing)
+ [

## Supported runtimes
](#monitoring-insights-runtimes)
+ [

## Enabling Lambda Insights in the Lambda console
](#monitoring-insights-enabling-console)
+ [

## Enabling Lambda Insights programmatically
](#monitoring-insights-enabling-programmatically)
+ [

## Using the Lambda Insights dashboard
](#monitoring-insights-multifunction)
+ [

## Example workflow to detect function anomalies
](#monitoring-insights-anomalies)
+ [

## Example workflow using queries to troubleshoot a function
](#monitoring-insights-queries)
+ [

## What's next?
](#monitoring-console-next-up)

## How Lambda Insights monitors serverless applications
How it works

CloudWatch Lambda Insights is a monitoring and troubleshooting solution for serverless applications running on Amazon Lambda. The solution collects, aggregates, and summarizes system-level metrics including CPU time, memory, disk and network usage. It also collects, aggregates, and summarizes diagnostic information such as cold starts and Lambda worker shutdowns to help you isolate issues with your Lambda functions and resolve them quickly.

Lambda Insights uses the CloudWatch Lambda Insights [extension](https://docs.amazonaws.cn/lambda/latest/dg/lambda-extensions.html), which is provided as a [Lambda layer](chapter-layers.md). When you enable this extension on a Lambda function for a supported runtime, it collects system-level metrics and emits a single performance log event for every invocation of that Lambda function. CloudWatch uses the embedded metric format to extract metrics from the log events. For more information, see [Using Amazon Lambda extensions](https://docs.amazonaws.cn/lambda/latest/dg/lambda-extensions.html).

The Lambda Insights layer issues `CreateLogStream` and `PutLogEvents` API calls to write its performance log events to the `/aws/lambda-insights/` log group.
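CloudWatch extracts metrics from log events written in the embedded metric format: a JSON object whose `_aws` metadata block declares which top-level keys are metrics. The following is a minimal sketch of building such an event (the namespace, dimension, and metric names are illustrative, not the exact ones Lambda Insights emits):

```python
import json
import time

def emf_event(function_name, memory_used_mb):
    """Build a log event in the CloudWatch embedded metric format."""
    return {
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # milliseconds since epoch
            "CloudWatchMetrics": [
                {
                    "Namespace": "ExampleInsights",        # illustrative namespace
                    "Dimensions": [["function_name"]],
                    "Metrics": [
                        {"Name": "memory_used_mb", "Unit": "Megabytes"}
                    ],
                }
            ],
        },
        # Top-level keys referenced by the metadata block above
        "function_name": function_name,
        "memory_used_mb": memory_used_mb,
    }

print(json.dumps(emf_event("my-function", 128)))
```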

## Pricing


When you enable Lambda Insights for your Lambda function, Lambda Insights reports 8 metrics per function, and every function invocation sends about 1 KB of log data to CloudWatch. You pay only for the metrics and logs reported for your function by Lambda Insights. There are no minimum fees or mandatory service usage policies. You do not pay for Lambda Insights if the function is not invoked. For a pricing example, see [Amazon CloudWatch pricing](https://www.amazonaws.cn/cloudwatch/pricing/). 

## Supported runtimes


You can use Lambda Insights with any of the runtimes that support [Lambda extensions](runtimes-extensions-api.md).

## Enabling Lambda Insights in the Lambda console
Enabling Lambda Insights in the console

You can enable Lambda Insights enhanced monitoring on new and existing Lambda functions. When you enable Lambda Insights on a function in the Lambda console for a supported runtime, Lambda adds the Lambda Insights [extension](https://docs.amazonaws.cn/lambda/latest/dg/lambda-extensions.html) as a layer to your function, and verifies or attempts to attach the [CloudWatchLambdaInsightsExecutionRolePolicy](https://console.amazonaws.cn/iam/home#/policies/arn:aws-cn:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy$jsonEditor) policy to your function’s [execution role](https://docs.amazonaws.cn/lambda/latest/dg/lambda-intro-execution-role.html).

**To enable Lambda Insights in the Lambda console**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose your function.

1. Choose the **Configuration** tab.

1. On the left menu, choose **Monitoring and operations tools**.

1. On the **Additional monitoring tools** pane, choose **Edit**.

1. Under **CloudWatch Lambda Insights**, turn on **Enhanced monitoring**.

1. Choose **Save**.

## Enabling Lambda Insights programmatically


You can also enable Lambda Insights using the Amazon Command Line Interface (Amazon CLI), Amazon Serverless Application Model (SAM) CLI, Amazon CloudFormation, or the Amazon Cloud Development Kit (Amazon CDK). When you enable Lambda Insights programmatically on a function for a supported runtime, CloudWatch attaches the [CloudWatchLambdaInsightsExecutionRolePolicy](https://console.amazonaws.cn/iam/home#/policies/arn:aws-cn:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy$jsonEditor) policy to your function’s [execution role](https://docs.amazonaws.cn/lambda/latest/dg/lambda-intro-execution-role.html).

For more information, see [Getting started with Lambda Insights](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/Lambda-Insights-Getting-Started.html) in the *Amazon CloudWatch User Guide*.
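The console and CLI steps have SDK equivalents. The following sketch shows what attaching the Lambda Insights extension layer through the Lambda API might look like; the `ACCOUNT` and `VERSION` placeholders and the `build_insights_update` helper are assumptions, not part of any SDK — look up the current layer ARN for your Region in the *Amazon CloudWatch User Guide*.

```python
# Sketch: enabling Lambda Insights programmatically by attaching the
# extension layer. The layer ARN is region-specific and versioned; ACCOUNT
# and VERSION are placeholders -- look up the real ARN for your Region.

REGION = "cn-north-1"  # assumption: your function's Region
INSIGHTS_LAYER_ARN = (
    f"arn:aws-cn:lambda:{REGION}:ACCOUNT:layer:LambdaInsightsExtension:VERSION"
)


def build_insights_update(function_name, existing_layers):
    """Return kwargs for lambda_client.update_function_configuration()."""
    layers = list(existing_layers)
    if INSIGHTS_LAYER_ARN not in layers:
        layers.append(INSIGHTS_LAYER_ARN)
    return {"FunctionName": function_name, "Layers": layers}


params = build_insights_update("my-function", [])
# With credentials configured (not run here):
# import boto3
# boto3.client("lambda").update_function_configuration(**params)
```

Because `UpdateFunctionConfiguration` replaces the entire `Layers` list, the helper preserves any layers that are already attached.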

## Using the Lambda Insights dashboard


The Lambda Insights dashboard has two views in the CloudWatch console: the multi-function overview and the single-function view. The multi-function overview aggregates the runtime metrics for the Lambda functions in the current Amazon account and Region. The single-function view shows the available runtime metrics for a single Lambda function.

You can use the Lambda Insights dashboard multi-function overview in the CloudWatch console to identify over- and under-utilized Lambda functions. You can use the Lambda Insights dashboard single-function view in the CloudWatch console to troubleshoot individual requests.

**To view the runtime metrics for all functions**

1. Open the [Multi-function](https://console.amazonaws.cn/cloudwatch/home#lambda-insights:performance) page in the CloudWatch console.

1. Choose from the predefined time ranges, or choose a custom time range.

1. (Optional) Choose **Add to dashboard** to add the widgets to your CloudWatch dashboard.  
![\[The multi-function overview on the Lambda Insights dashboard.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-multifunction-view.png)

**To view the runtime metrics of a single function**

1. Open the [Single-function](https://console.amazonaws.cn/cloudwatch/home#lambda-insights:functions) page in the CloudWatch console.

1. Choose from the predefined time ranges, or choose a custom time range.

1. (Optional) Choose **Add to dashboard** to add the widgets to your CloudWatch dashboard.  
![\[The single-function view on the Lambda Insights dashboard.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambainsights-singlefunction-view.png)

For more information, see [Creating and working with widgets on CloudWatch dashboards](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/create-and-work-with-widgets.html).

## Example workflow to detect function anomalies
Detecting function anomalies

You can use the multi-function overview on the Lambda Insights dashboard to detect memory anomalies in your functions. For example, if the multi-function overview indicates that a function is using a large amount of memory, you can view detailed memory utilization metrics in the **Memory Usage** pane. You can then go to the Metrics dashboard to enable anomaly detection or create an alarm.

**To enable anomaly detection for a function**

1. Open the [Multi-function](https://console.amazonaws.cn/cloudwatch/home#lambda-insights:performance) page in the CloudWatch console.

1. Under **Function summary**, choose your function's name.

   The single-function view opens with the function runtime metrics.  
![\[The function summary pane on the Lambda Insights dashboard.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-function-summary.png)

1. On the **Memory Usage** pane, choose the three vertical dots, and then choose **View in metrics** to open the **Metrics** dashboard.  
![\[The menu on the Memory Usage pane.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-memory-usage.png)

1. On the **Graphed metrics** tab, in the **Actions** column, choose the first icon to enable anomaly detection for the function.  
![\[The Graphed metrics tab of the Memory Usage pane.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-graphed-metrics.png)

For more information, see [Using CloudWatch Anomaly Detection](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/CloudWatch_Anomaly_Detection.html).
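Anomaly detection can also be enabled through the CloudWatch API rather than the console. This sketch builds parameters for `put_anomaly_detector`; the `LambdaInsights` namespace, `memory_utilization` metric, and `function_name` dimension are assumptions based on the Lambda Insights metric schema — verify the exact names in your account before relying on them.

```python
# Sketch: enabling anomaly detection on a Lambda Insights memory metric via
# the CloudWatch API. Namespace, metric, and dimension names are assumptions
# drawn from the Lambda Insights metric schema.

def build_anomaly_detector(function_name):
    """Return kwargs for cloudwatch.put_anomaly_detector()."""
    return {
        "SingleMetricAnomalyDetector": {
            "Namespace": "LambdaInsights",
            "MetricName": "memory_utilization",
            "Dimensions": [{"Name": "function_name", "Value": function_name}],
            "Stat": "Average",
        }
    }


params = build_anomaly_detector("my-function")
# With credentials configured (not run here):
# import boto3
# boto3.client("cloudwatch").put_anomaly_detector(**params)
```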

## Example workflow using queries to troubleshoot a function
Troubleshooting a function

You can use the single-function view on the Lambda Insights dashboard to identify the root cause of a spike in function duration. For example, if the multi-function overview indicates a large increase in function duration, you can pause on or choose each function in the **Duration** pane to determine which function is causing the increase. You can then go to the single-function view and review the **Application logs** to determine the root cause.

**To run queries on a function**

1. Open the [Multi-function](https://console.amazonaws.cn/cloudwatch/home#lambda-insights:performance) page in the CloudWatch console.

1. In the **Duration** pane, choose your function to filter the duration metrics.  
![\[A function chosen in the Duration pane.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-choose-function.png)

1. Open the [Single-function](https://console.amazonaws.cn/cloudwatch/home#lambda-insights:functions) page.

1. Choose the **Filter metrics by function name** dropdown list, and then choose your function.

1. To view the **Most recent 1000 application logs**, choose the **Application logs** tab.

1. Review the **Timestamp** and **Message** to identify the invocation request that you want to troubleshoot.  
![\[The Most recent 1000 application logs.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-application-logs.png)

1. To show the **Most recent 1000 invocations**, choose the **Invocations** tab.

1. Select the **Timestamp** or **Message** for the invocation request that you want to troubleshoot.  
![\[Selecting a recent invocation request.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-invocations-function-select.png)

1. Choose the **View logs** dropdown list, and then choose **View performance logs**.

   An autogenerated query for your function opens in the **Logs Insights** dashboard.

1. Choose **Run query** to generate a **Logs** message for the invocation request.  
![\[Querying the selected function in the Logs Insights dashboard.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/lambdainsights-query.png)
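If you prefer to run a similar query directly with the CloudWatch Logs API, the sketch below builds `StartQuery` parameters. The log group name and the query's field names are assumptions; the console's autogenerated query is the authoritative version.

```python
import time

# Sketch: parameters for logs.start_query() against the Lambda Insights
# performance log group. The log group name and query fields are
# assumptions; copy the exact autogenerated query from the console if in doubt.

def build_query_params(function_name, hours=1):
    query = (
        "fields @timestamp, @message "
        f"| filter function_name = '{function_name}' "
        "| sort @timestamp desc "
        "| limit 20"
    )
    now = int(time.time())
    return {
        "logGroupName": "/aws/lambda-insights",
        "startTime": now - hours * 3600,
        "endTime": now,
        "queryString": query,
    }


params = build_query_params("my-function")
# With credentials configured (not run here):
# import boto3
# query_id = boto3.client("logs").start_query(**params)["queryId"]
```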

## What's next?

+ Learn how to create a CloudWatch Logs dashboard in [Create a Dashboard](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/create_dashboard.html) in the *Amazon CloudWatch User Guide*.
+ Learn how to add queries to a CloudWatch Logs dashboard in [Add Query to Dashboard or Export Query Results](https://docs.amazonaws.cn/AmazonCloudWatch/latest/logs/CWL_ExportQueryResults.html) in the *Amazon CloudWatch User Guide*.

# Monitoring Lambda applications
View application metrics

The **Applications** section of the Lambda console includes a **Monitoring** tab where you can review an Amazon CloudWatch dashboard with aggregate metrics for the resources in your application.

**To monitor a Lambda application**

1. Open the Lambda console [Applications page](https://console.amazonaws.cn/lambda/home#/applications).

1. Choose **Monitoring**.

1. To see more details about the metrics in any graph, choose **View in metrics** from the drop-down menu.  
![\[A monitoring widget.\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/applications-monitoring-widget.png)

   The graph appears in a new tab, with the relevant metrics listed below the graph. You can customize your view of this graph, changing the metrics and resources shown, the statistic, the period, and other factors to get a better understanding of the current situation.

By default, the Lambda console shows a basic dashboard. You can customize this page by adding one or more Amazon CloudWatch dashboards to your application template with the [AWS::CloudWatch::Dashboard](https://docs.amazonaws.cn/AWSCloudFormation/latest/UserGuide/aws-properties-cw-dashboard.html) resource type. When your template includes one or more dashboards, the page shows your dashboards instead of the default dashboard. You can switch between dashboards with the drop-down menu on the top right of the page. The following example creates a dashboard with a single widget that graphs the number of invocations of a function named `my-function`.

**Example function dashboard template**  

```
Resources:
  MyDashboard:
    Type: AWS::CloudWatch::Dashboard
    Properties:
      DashboardName: my-dashboard
      DashboardBody: |
        {
            "widgets": [
                {
                    "type": "metric",
                    "width": 12,
                    "height": 6,
                    "properties": {
                        "metrics": [
                            [
                                "AWS/Lambda",
                                "Invocations",
                                "FunctionName",
                                "my-function",
                                {
                                    "stat": "Sum",
                                    "label": "MyFunction"
                                }
                            ],
                            [
                                {
                                    "expression": "SUM(METRICS())",
                                    "label": "Total Invocations"
                                }
                            ]
                        ],
                        "region": "us-east-1",
                        "title": "Invocations",
                        "view": "timeSeries",
                        "stacked": false
                    }
                }
            ]
        }
```

For more information about authoring CloudWatch dashboards and widgets, see [Dashboard body structure and syntax](https://docs.amazonaws.cn/AmazonCloudWatch/latest/APIReference/CloudWatch-Dashboard-Body-Structure.html) in the *Amazon CloudWatch API Reference*.
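When a dashboard grows beyond one widget, generating the `DashboardBody` JSON programmatically avoids hand-editing JSON embedded in a YAML string. This sketch rebuilds the example widget above in Python; the commented `put_dashboard` call shows one way to push it outside of CloudFormation.

```python
import json

# Sketch: generating the DashboardBody JSON from the template above
# programmatically, mirroring the example widget (per-function Invocations
# plus a SUM() expression).

widget = {
    "type": "metric",
    "width": 12,
    "height": 6,
    "properties": {
        "metrics": [
            ["AWS/Lambda", "Invocations", "FunctionName", "my-function",
             {"stat": "Sum", "label": "MyFunction"}],
            [{"expression": "SUM(METRICS())", "label": "Total Invocations"}],
        ],
        "region": "us-east-1",
        "title": "Invocations",
        "view": "timeSeries",
        "stacked": False,
    },
}

dashboard_body = json.dumps({"widgets": [widget]}, indent=4)
# dashboard_body can be used as the DashboardBody property, or pushed
# directly (credentials required; not run here):
# import boto3
# boto3.client("cloudwatch").put_dashboard(
#     DashboardName="my-dashboard", DashboardBody=dashboard_body)
```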

# Monitor application performance with Amazon CloudWatch Application Signals
Application Signals

Amazon CloudWatch Application Signals is an application performance monitoring (APM) solution that enables developers and operators to monitor the health and performance of their serverless applications built using Lambda. You can enable Application Signals with one click from the Lambda console, and you don't need to add any instrumentation code or external dependencies to your Lambda function. After you enable Application Signals, you can view all collected metrics and traces in the CloudWatch console. This page describes how to enable and view Application Signals telemetry data for your applications.

**Topics**
+ [

## How Application Signals integrates with Lambda
](#monitoring-application-signals-how)
+ [

## Pricing
](#monitoring-application-signals-pricing)
+ [

## Supported runtimes
](#monitoring-application-signals-runtimes)
+ [

## Enabling Application Signals in the Lambda console
](#monitoring-application-signals-console)
+ [

## Using the Application Signals dashboard
](#monitoring-application-signals-dashboard)

## How Application Signals integrates with Lambda


Application Signals automatically instruments your Lambda functions using enhanced [Amazon Distro for OpenTelemetry (ADOT)](https://aws-otel.github.io/) libraries, provided via a [Lambda layer](https://docs.amazonaws.cn/lambda/latest/dg/chapter-layers.html). Application Signals reads data collected by the layer and generates dashboards with key performance metrics for your applications.

You can attach this layer with one click by [enabling Application Signals](#monitoring-application-signals-console) in the Lambda console. When you enable Application Signals from the console, Lambda does the following on your behalf:
+ Updates your function's execution role to include the `CloudWatchLambdaApplicationSignalsExecutionRolePolicy`. [This policy](https://docs.amazonaws.cn/aws-managed-policy/latest/reference/CloudWatchLambdaApplicationSignalsExecutionRolePolicy.html) provides write access to Amazon X-Ray and to the CloudWatch log groups used by Application Signals.
+ Adds a layer to your function which automatically instruments the function to capture telemetry data such as requests, availability, latency, errors, and faults. To ensure that Application Signals works properly, remove any existing X-Ray SDK instrumentation code from your function. Custom X-Ray SDK instrumentation code can interfere with the layer-provided instrumentation.
+ Adds the `AWS_LAMBDA_EXEC_WRAPPER` environment variable to your function, and sets its value to `/opt/otel-instrument`. This environment variable modifies your function's startup behavior to utilize the Application Signals layer, and is required for proper instrumentation. If this environment variable already exists, ensure that it's set to the required value.
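As an illustration of the last point, a small hypothetical pre-flight check could verify the wrapper variable against a configuration dict shaped like Lambda's `GetFunctionConfiguration` response; the helper name and return values are this sketch's own invention.

```python
# Sketch: a hypothetical pre-flight check that a function's configuration
# matches what Application Signals requires. The config dict mirrors the
# shape of Lambda's GetFunctionConfiguration response.

REQUIRED_WRAPPER = "/opt/otel-instrument"


def check_app_signals_wrapper(config):
    """Return 'ok' or a description of the misconfiguration."""
    env = config.get("Environment", {}).get("Variables", {})
    wrapper = env.get("AWS_LAMBDA_EXEC_WRAPPER")
    if wrapper is None:
        return "missing AWS_LAMBDA_EXEC_WRAPPER"
    if wrapper != REQUIRED_WRAPPER:
        return f"AWS_LAMBDA_EXEC_WRAPPER is {wrapper!r}, expected {REQUIRED_WRAPPER!r}"
    return "ok"
```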

## Pricing


Using Application Signals for your Lambda functions incurs costs. For pricing information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

## Supported runtimes


The Application Signals integration with Lambda works with the following runtimes:
+ .NET 8
+ Java 11
+ Java 17
+ Java 21
+ Python 3.10
+ Python 3.11
+ Python 3.12
+ Python 3.13
+ Node.js 18.x
+ Node.js 20.x
+ Node.js 22.x

## Enabling Application Signals in the Lambda console


You can enable Application Signals on any existing Lambda function that uses a [supported runtime](#monitoring-application-signals-runtimes). The following steps describe how to enable Application Signals with one click in the Lambda console.

**To enable Application Signals in the Lambda console**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose your function.

1. Choose the **Configuration** tab.

1. On the left menu, choose **Monitoring and operations tools**.

1. On the **Additional monitoring tools** pane, choose **Edit**.

1. Under **CloudWatch Application Signals and Amazon X-Ray**, for **Application Signals**, choose **Enable**.

1. Choose **Save**.

If this is your first time enabling Application Signals for your function, you must also do a one-time service discovery setup for Application Signals in the CloudWatch console. After you complete this one-time service discovery setup, Application Signals automatically discovers any additional Lambda functions that you enable Application Signals for, across all Regions.

**Note**  
After you invoke your updated function, it can take up to 10 minutes for service data to start appearing in the Application Signals dashboard in the CloudWatch console.

## Using the Application Signals dashboard


After you enable Application Signals for your function, you can visualize your application metrics in the CloudWatch console. You can quickly view the associated Application Signals dashboard from the Lambda console with the following steps:

**To view the Application Signals dashboard for your function**

1. Open the [Functions page](https://console.amazonaws.cn/lambda/home#/functions) of the Lambda console.

1. Choose your function.

1. Choose the **Monitor** tab.

1. Choose the **View Application Signals** button. This takes you directly to the Application Signals overview for your service in the CloudWatch console.

For example, the following screenshot shows metrics for latency, number of requests, availability, fault rate, and error rate for a function across a 10-minute time window.

![\[\]](http://docs.amazonaws.cn/en_us/lambda/latest/dg/images/monitoring-application-signals-dashboard.png)


To get the most out of your integration with Application Signals, you can create service-level objectives (SLOs) for your application. For example, you can create latency SLOs to ensure that your application responds quickly to user requests, and availability SLOs to track uptime. SLOs can help you detect performance degradation or outages before they impact your users. For more information, see [Service level objectives (SLOs)](https://docs.amazonaws.cn/AmazonCloudWatch/latest/monitoring/CloudWatch-ServiceLevelObjectives.html) in the *Amazon CloudWatch User Guide*.

# Remotely debug Lambda functions with Visual Studio Code
Debug with VS Code

With the remote debugging feature in the [Amazon Toolkit for Visual Studio Code](https://aws.amazon.com/visualstudiocode/), you can debug Lambda functions running directly in the Amazon cloud. This is useful when you're investigating issues that are difficult to reproduce locally or to diagnose from logs alone.

With remote debugging, you can:
+ Set breakpoints in your Lambda function code.
+ Step through code execution in real time.
+ Inspect variables and state during runtime.
+ Debug Lambda functions deployed to Amazon, including those in VPCs or with specific IAM permissions.

## Supported runtimes


Remote debugging is supported for the following runtimes:
+ Python (AL2023)
+ Java
+ JavaScript/Node.js (AL2023)

**Note**  
Remote debugging is supported for both x86_64 and arm64 architectures.

## Security and remote debugging


Remote debugging operates within existing Lambda security boundaries. Users who hold the `lambda:UpdateFunctionConfiguration` permission can already attach layers to a function and access its environment variables and configuration. Remote debugging doesn't extend beyond these existing permissions. Instead, it adds extra security controls through secure tunneling and automatic session management. Additionally, remote debugging is entirely a customer-controlled feature that requires explicit permissions and actions:
+ **IoT Secure Tunnel Creation**: The Amazon Toolkit must create an IoT secure tunnel, which only occurs with the user's explicit permission using `iot:OpenTunnel`.
+ **Debug Layer Attachment and Token Management**: The debugging process maintains security through these controls:
  + The debugging layer must be attached to the Lambda function and this process requires the following permissions: `lambda:UpdateFunctionConfiguration` and `lambda:GetLayerVersion`.
  + A security token (generated via `iot:OpenTunnel`) must be updated in the function environment variable before each debug session, which also requires `lambda:UpdateFunctionConfiguration`.
  + For security, this token is automatically rotated and the debug layer is automatically removed at the end of each debug session and cannot be reused.


## Prerequisites


Before you begin remote debugging, ensure you have the following:

1. A Lambda function deployed to your Amazon account.

1. Amazon Toolkit for Visual Studio Code. See [Setting up the Amazon Toolkit for Visual Studio Code](https://docs.amazonaws.cn/toolkit-for-vscode/latest/userguide/setup-toolkit.html) for installation instructions.

1. Version **3.69.0** or later of the Amazon Toolkit.

1. Amazon credentials configured in Amazon Toolkit for Visual Studio Code. For more information, see [Authentication and access control](foundation-iac-local-development.md#lambda-functions-vscode-authentication-and-access-control).

## Remotely debug Lambda functions


Follow these steps to start a remote debugging session:

1. Open the Amazon Explorer in VS Code by selecting the Amazon icon in the left sidebar.

1. Expand the Lambda section to see your functions.

1. Right-click on the function you want to debug.

1. From the context menu, select **Remotely invoke**.

1. In the invoke window that opens, check the box for **Enable debugging**.

1. Click **Invoke** to start the remote debugging session.

**Note**  
Lambda functions have a 250 MB combined limit for function code and all attached layers. The remote debugging layer adds approximately 40 MB to your function's size.

A remote debugging session ends when you:
+ Choose **Remove Debug Setup** from the Remote invoke configuration screen.
+ Select the disconnect icon in the VS Code debugging controls.
+ Select the handler file in the VS Code editor.

**Note**  
The debug layer is automatically removed after 60 seconds of inactivity following your last invoke.

## Disable remote debugging


There are three ways to disable this feature:
+ **Deny Function Updates**: Deny the `lambda:UpdateFunctionConfiguration` permission.
+ **Restrict IoT Permissions**: Deny IoT-related permissions, such as `iot:OpenTunnel`.
+ **Block Debug Layers**: Deny `lambda:GetLayerVersion` for the following ARNs:
  + `arn:aws:lambda:*:*:layer:LDKLayerX86:*`
  + `arn:aws:lambda:*:*:layer:LDKLayerArm64:*`

**Note**  
Disabling this feature prevents the debugging layer from being added during function configuration updates.
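The layer-blocking policy can be kept in code. This sketch builds the deny statement for the ARNs listed above as a Python dict (the `aws` partition is shown, as in the list; adjust for your partition, and note the `Sid` is this sketch's own label).

```python
import json

# Sketch: an IAM policy document that denies fetching the remote debugging
# layers, built as a Python dict so the layer ARNs stay in one place.

deny_debug_layers = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlockRemoteDebugLayers",  # hypothetical statement ID
            "Effect": "Deny",
            "Action": "lambda:GetLayerVersion",
            "Resource": [
                "arn:aws:lambda:*:*:layer:LDKLayerX86:*",
                "arn:aws:lambda:*:*:layer:LDKLayerArm64:*",
            ],
        }
    ],
}

print(json.dumps(deny_debug_layers, indent=2))
```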

## Additional information


For more information on using Lambda in VS Code, refer to [Developing Lambda functions locally with VS Code](foundation-iac-local-development.md).

For detailed instructions on troubleshooting, advanced use cases, and region availability, see [Remote debugging Lambda functions](https://docs.aws.amazon.com/toolkit-for-vscode/latest/userguide/lambda-remote-debug.html) in the Amazon Toolkit for Visual Studio Code User Guide.