
Configuring Application Signals

This section contains information about configuring CloudWatch Application Signals.

Trace sampling rate

By default, when you enable Application Signals, X-Ray centralized sampling is enabled using the default sampling rate settings of reservoir=1/s and fixed_rate=5%. The environment variables for the Amazon Distro for OpenTelemetry (ADOT) SDK agent are set as follows.

Environment variable      Value                                                      Note
OTEL_TRACES_SAMPLER       xray
OTEL_TRACES_SAMPLER_ARG   endpoint=http://cloudwatch-agent.amazon-cloudwatch:2000    Endpoint of the CloudWatch agent
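If your application runs on Kubernetes, these settings appear as ordinary environment variables in the container spec. The following fragment is an illustrative sketch only; Application Signals sets these values for you by default, so you would set them explicitly only if, for example, your CloudWatch agent listens at a different endpoint.

# Illustrative Kubernetes container spec fragment (not required for a default setup).
# Override OTEL_TRACES_SAMPLER_ARG only if the CloudWatch agent is reachable at a
# different address than the default shown here.
env:
  - name: OTEL_TRACES_SAMPLER
    value: "xray"
  - name: OTEL_TRACES_SAMPLER_ARG
    value: "endpoint=http://cloudwatch-agent.amazon-cloudwatch:2000"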

For information about changing the sampling configuration, see the following:

If you want to disable X-Ray centralized sampling and use local sampling instead, set the following values for the ADOT SDK Java agent. The following example sets the sampling rate to 5%.

Environment variable      Value
OTEL_TRACES_SAMPLER       parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG   0.05
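For example, if the application is deployed on Kubernetes, the local sampling settings could be added to the container definition as in the following sketch. The container and image names are hypothetical.

containers:
  - name: custom-app            # hypothetical container name
    image: custom-app:latest    # hypothetical image
    env:
      # Switch the ADOT SDK Java agent from X-Ray centralized sampling to
      # local, parent-based trace-ID-ratio sampling at 5%.
      - name: OTEL_TRACES_SAMPLER
        value: "parentbased_traceidratio"
      - name: OTEL_TRACES_SAMPLER_ARG
        value: "0.05"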

For information about more advanced sampling settings, see OTEL_TRACES_SAMPLER.

Enable trace log correlation

You can enable trace log correlation in Application Signals. This automatically injects trace IDs and span IDs into the relevant application logs. Then, when you open a trace detail page in the Application Signals console, the relevant log entries (if any) that correlate with the current trace automatically appear at the bottom of the page.

For example, suppose you notice a spike in a latency graph. You can choose the point on the graph to load the diagnostics information for that point in time. You then choose the relevant trace to get more information. When you view the trace information, you can scroll down to see the logs associated with the trace. These logs might reveal patterns or error codes associated with the issues causing the latency spike.

To achieve trace log correlation, Application Signals relies on the Logger MDC auto-instrumentation for Java and the OpenTelemetry Logging Instrumentation for Python, both provided by the OpenTelemetry community. Application Signals uses these to inject trace context, such as the trace ID and span ID, into application logs. To enable this, you must manually change your logging configuration to turn on the auto-instrumentation.

Depending on the architecture that your application runs on, you might also have to set an environment variable to enable trace log correlation, in addition to following the steps in this section.

After you enable trace log correlation, the log entries that correlate with a trace automatically appear at the bottom of that trace's detail page in the Application Signals console.

Trace log correlation setup examples

This section contains examples of setting up trace log correlation in several environments.

Spring Boot for Java

Suppose you have a Spring Boot application in a folder called custom-app. The application configuration is usually a YAML file named custom-app/src/main/resources/application.yml that might look like this:

spring:
  application:
    name: custom-app
  config:
    import: optional:configserver:${CONFIG_SERVER_URL:http://localhost:8888/}
  ...

To enable trace log correlation, add the following logging configuration.

spring:
  application:
    name: custom-app
  config:
    import: optional:configserver:${CONFIG_SERVER_URL:http://localhost:8888/}
  ...
logging:
  pattern:
    level: trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p

Logback for Java

In the logging configuration (such as logback.xml), insert the trace context trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p into the pattern of the encoder. For example, the following configuration prepends the trace context before the log message.

<appender name="FILE" class="ch.qos.logback.core.FileAppender"> <file>app.log</file> <append>true</append> <encoder> <pattern>trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p - %m%n</pattern> </encoder> </appender>

For more information about encoders in Logback, see Encoders in the Logback documentation.

Log4j2 for Java

In the logging configuration (such as log4j2.xml), insert the trace context trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p into the PatternLayout. For example, the following configuration prepends the trace context before the log message.

<Appenders>
  <File name="FILE" fileName="app.log">
    <PatternLayout pattern="trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p - %m%n"/>
  </File>
</Appenders>

For more information about pattern layouts in Log4j2, see Pattern Layout in the Log4j2 documentation.

Log4j for Java

In the logging configuration (such as log4j.xml), insert the trace context trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p into the PatternLayout. For example, the following configuration prepends the trace context before the log message.

<appender name="FILE" class="org.apache.log4j.FileAppender">; <param name="File" value="app.log"/>; <param name="Append" value="true"/>; <layout class="org.apache.log4j.PatternLayout">; <param name="ConversionPattern" value="trace_id=%mdc{trace_id} span_id=%mdc{span_id} trace_flags=%mdc{trace_flags} %5p - %m%n"/>; </layout>; </appender>;

For more information about pattern layouts in Log4j, see Class Pattern Layout in the Log4j documentation.

Python

Set the environment variable OTEL_PYTHON_LOG_CORRELATION to true while running your application. For more information, see Enable trace context injection in the Python OpenTelemetry documentation.
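For example, if the application runs in a container on Kubernetes, you might set the variable in the container spec, as in the following illustrative fragment.

# Illustrative Kubernetes container spec fragment: turn on trace context
# injection for the OpenTelemetry Python logging instrumentation.
env:
  - name: OTEL_PYTHON_LOG_CORRELATION
    value: "true"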

Manage high-cardinality operations

Application Signals includes settings in the CloudWatch agent that you can use to manage the cardinality of your operations and control metric export, helping you optimize costs. By default, the metric limiting function becomes active when the number of distinct operations for a service over time exceeds the default threshold of 500. You can tune this behavior by adjusting the configuration settings.

Determine if metric limiting is activated

You can use the following methods to determine whether the default metric limiting is happening. If it is, consider optimizing cardinality control by following the steps in the next section.

  • In the CloudWatch console, choose Application Signals, Services. If you see an Operation named AllOtherOperations or a RemoteOperation named AllOtherRemoteOperations, then metric limiting is happening.

  • If any metrics collected by Application Signals have the value AllOtherOperations for their Operation dimension, then metric limiting is happening.

  • If any metrics collected by Application Signals have the value AllOtherRemoteOperations for their RemoteOperation dimension, then metric limiting is happening.

Optimize cardinality control

To optimize your cardinality control, you can do the following:

  • Create custom rules to aggregate operations.

  • Configure your metric limiting policy.

Create custom rules to aggregate operations

High-cardinality operations can sometimes be caused by inappropriate unique values extracted from the context. For example, sending out HTTP/S requests that include user IDs or session IDs in the path can lead to hundreds of disparate operations. To resolve such issues, we recommend that you configure the CloudWatch agent with customization rules to rewrite these operations.

In cases where there is a surge in generating numerous different metrics through individual RemoteOperation calls, such as PUT /api/customer/owners/123, PUT /api/customer/owners/456, and similar requests, we recommend that you consolidate these operations into a single RemoteOperation. One approach is to standardize all RemoteOperation calls that start with PUT /api/customer/owners/ to a uniform format, specifically PUT /api/customer/owners/{ownerId}. The following example illustrates this. For information about other customization rules, see Enable CloudWatch Application Signals.

{ "logs":{ "metrics_collected":{ "application_signals":{ "rules":[ { "selectors":[ { "dimension":"RemoteOperation", "match":"PUT /api/customer/owners/*" } ], "replacements":[ { "target_dimension":"RemoteOperation", "value":"PUT /api/customer/owners/{ownerId}" } ], "action":"replace" } ] } } } }

In other cases, high-cardinality metrics might have been aggregated into AllOtherRemoteOperations, and it might be unclear which specific metrics are included. The CloudWatch agent can log the dropped operations. To identify them, use the configuration in the following example to activate logging until the problem resurfaces. Then inspect the CloudWatch agent logs (accessible through the container's stdout or the EC2 log files) and search for the keyword drop metric data.

{ "agent": { "config": { "agent": { "debug": true }, "traces": { "traces_collected": { "application_signals": { } } }, "logs": { "metrics_collected": { "application_signals": { "limiter": { "log_dropped_metrics": true } } } } } } }
Create your metric limiting policy

If the default metric limiting configuration doesn’t address the cardinality for your service, you can customize the metric limiter configuration. To do this, add a limiter section under the logs/metrics_collected/application_signals section in the CloudWatch Agent configuration file.

The following example lowers the metric limiting threshold from the default of 500 distinct operations to 100.

{ "logs": { "metrics_collected": { "application_signals": { "limiter": { "drop_threshold": 100 } } } } }