
Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink.


Using CloudWatch alarms with Amazon Managed Service for Apache Flink

Using Amazon CloudWatch metric alarms, you monitor a CloudWatch metric over a time period that you specify. The alarm performs one or more actions based on the value of the metric or expression relative to a threshold over a number of time periods. An example of an action is sending a notification to an Amazon Simple Notification Service (Amazon SNS) topic.

For more information about CloudWatch alarms, see Using Amazon CloudWatch Alarms.
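
As an illustration of the actions described above, the following minimal sketch uses the AWS SDK for Python (Boto3) to create the downtime alarm recommended in the list below and send its notifications to an SNS topic. The application name, Region, and topic ARN are placeholders, and the sketch assumes the AWS/KinesisAnalytics namespace and Application dimension that the service publishes its metrics under; adapt them to your environment.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="cn-north-1")  # placeholder Region

cloudwatch.put_metric_alarm(
    AlarmName="my-flink-application-downtime",               # placeholder alarm name
    AlarmDescription="Application downtime is greater than zero",
    Namespace="AWS/KinesisAnalytics",                         # assumed namespace for the service's metrics
    MetricName="downtime",
    Dimensions=[{"Name": "Application", "Value": "my-flink-application"}],  # placeholder application name
    Statistic="Average",
    Period=60,                                                # evaluate over 1-minute periods
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",                # alarm when downtime > 0
    TreatMissingData="breaching",                             # adjust to your needs
    AlarmActions=["arn:aws-cn:sns:cn-north-1:111122223333:flink-alarms"],   # placeholder SNS topic ARN
)
```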

This section contains recommended alarms for monitoring Managed Service for Apache Flink applications.

The following list describes the recommended alarms. Each entry includes the following fields:

  • Metric expression: The metric or metric expression to test against the threshold.

  • Statistic: The statistic used to check the metric, for example Average.

  • Threshold: To use an alarm, you need to determine a threshold that defines the limit of expected application performance. You determine this threshold by monitoring your application under normal conditions (see the baseline sketch after this list).

  • Description: Possible causes of triggering this alarm, and possible solutions.
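
Because the right threshold depends on what is normal for your workload, it can help to sample a metric's recent history before you create the alarm. The following sketch, again with Boto3 and a placeholder application name and Region, pulls a week of hourly heapMemoryUtilization statistics so you can see where the metric sits under normal conditions.

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="cn-north-1")  # placeholder Region

now = datetime.datetime.utcnow()
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/KinesisAnalytics",
    MetricName="heapMemoryUtilization",
    Dimensions=[{"Name": "Application", "Value": "my-flink-application"}],  # placeholder application name
    StartTime=now - datetime.timedelta(days=7),   # a week of normal operation
    EndTime=now,
    Period=3600,                                  # hourly data points
    Statistics=["Average", "Maximum"],
)

# Print the hourly values so you can judge what "normal" looks like before choosing a threshold.
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))
```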

  • Metric expression: downtime > 0
    Statistic: Average
    Threshold: 0
    Description: A downtime greater than zero indicates that the application has failed. If the value is larger than 0, the application is not processing any data. Recommended for all applications. The downtime metric measures the duration of an outage. For troubleshooting, see Application is restarting.

  • Metric expression: RATE(numberOfFailedCheckpoints) > 0
    Statistic: Average
    Threshold: 0
    Description: This metric counts the number of failed checkpoints since the application started. Depending on the application, it can be tolerable if checkpoints fail occasionally. But if checkpoints are regularly failing, the application is likely unhealthy and needs further attention. We recommend monitoring RATE(numberOfFailedCheckpoints) to alarm on the gradient and not on absolute values (see the metric math sketch after this list). Recommended for all applications. Use this metric to monitor application health and checkpointing progress. The application saves state data to checkpoints when it's healthy. Checkpointing can fail due to timeouts if the application isn't making progress in processing the input data. For troubleshooting, see Checkpointing is timing out.

  • Metric expression: Operator.numRecordsOutPerSecond < threshold
    Statistic: Average
    Threshold: The minimum number of records emitted from the application during normal conditions.
    Description: Recommended for all applications. Falling below this threshold can indicate that the application isn't making expected progress on the input data. For troubleshooting, see Throughput is too slow.

  • Metric expression: records_lag_max | millisBehindLatest > threshold
    Statistic: Maximum
    Threshold: The maximum expected latency during normal conditions.
    Description: If the application is consuming from Kinesis or Kafka, these metrics indicate whether the application is falling behind and needs to be scaled in order to keep up with the current load. This is a good generic metric that is easy to track for all kinds of applications, but it can only be used for reactive scaling, that is, when the application has already fallen behind. Recommended for all applications. Use the records_lag_max metric for a Kafka source, or millisBehindLatest for a Kinesis stream source. Rising above this threshold can indicate that the application isn't making expected progress on the input data. For troubleshooting, see Throughput is too slow.

  • Metric expression: lastCheckpointDuration > threshold
    Statistic: Maximum
    Threshold: The maximum expected checkpoint duration during normal conditions.
    Description: Monitors how much data is stored in state and how long it takes to take a checkpoint. If checkpoints grow or take longer, the application is continuously spending time on checkpointing and has fewer cycles for actual processing. At some point, checkpoints may grow too large or take so long that they fail. In addition to monitoring absolute values, consider also monitoring the change rate with RATE(lastCheckpointSize) and RATE(lastCheckpointDuration). If lastCheckpointDuration continuously increases, rising above this threshold can indicate that the application isn't making expected progress on the input data, or that there are problems with application health such as backpressure. For troubleshooting, see Unbounded state growth.

  • Metric expression: lastCheckpointSize > threshold
    Statistic: Maximum
    Threshold: The maximum expected checkpoint size during normal conditions.
    Description: Monitors how much data is stored in state and how long it takes to take a checkpoint. If checkpoints grow or take longer, the application is continuously spending time on checkpointing and has fewer cycles for actual processing. At some point, checkpoints may grow too large or take so long that they fail. In addition to monitoring absolute values, consider also monitoring the change rate with RATE(lastCheckpointSize) and RATE(lastCheckpointDuration). If lastCheckpointSize continuously increases, rising above this threshold can indicate that the application is accumulating state data. If the state data becomes too large, the application can run out of memory when recovering from a checkpoint, or recovering from a checkpoint might take too long. For troubleshooting, see Unbounded state growth.

  • Metric expression: heapMemoryUtilization > threshold
    Statistic: Maximum
    Threshold: The maximum expected heapMemoryUtilization during normal conditions, with a recommended value of 90 percent.
    Description: This gives a good indication of the overall resource utilization of the application and can be used for proactive scaling unless the application is I/O bound. You can use this metric to monitor the maximum memory utilization of task managers across the application. If the application reaches this threshold, you need to provision more resources. You do this by enabling automatic scaling or increasing the application parallelism. For more information about increasing resources, see Scaling.

  • Metric expression: cpuUtilization > threshold
    Statistic: Maximum
    Threshold: The maximum expected cpuUtilization during normal conditions, with a recommended value of 80 percent.
    Description: This gives a good indication of the overall resource utilization of the application and can be used for proactive scaling unless the application is I/O bound. You can use this metric to monitor the maximum CPU utilization of task managers across the application. If the application reaches this threshold, you need to provision more resources. You do this by enabling automatic scaling or increasing the application parallelism. For more information about increasing resources, see Scaling.

  • Metric expression: threadsCount > threshold
    Statistic: Maximum
    Threshold: The maximum expected threadsCount during normal conditions.
    Description: You can use this metric to watch for thread leaks in task managers across the application. If this metric reaches the threshold, check your application code for threads being created without being closed.

  • Metric expression: (oldGarbageCollectionTime * 100) / 60_000 over a 1-minute period > threshold
    Statistic: Maximum
    Threshold: The maximum expected oldGarbageCollectionTime duration. We recommend setting a threshold such that typical garbage collection time is 60 percent of the specified threshold, but the correct threshold for your application will vary.
    Description: If this metric is continually increasing, it can indicate that there is a memory leak in task managers across the application.

  • Metric expression: RATE(oldGarbageCollectionCount) > threshold
    Statistic: Maximum
    Threshold: The maximum expected oldGarbageCollectionCount under normal conditions. The correct threshold for your application will vary.
    Description: If this metric is continually increasing, it can indicate that there is a memory leak in task managers across the application.

  • Metric expression: Operator.currentOutputWatermark - Operator.currentInputWatermark > threshold
    Statistic: Minimum
    Threshold: The minimum expected watermark increment under normal conditions. The correct threshold for your application will vary.
    Description: If this metric is continually increasing, it can indicate that either the application is processing increasingly older events, or that an upstream subtask has not sent a watermark in an increasingly long time.
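
The RATE(numberOfFailedCheckpoints) and RATE(oldGarbageCollectionCount) alarms above test a metric math expression rather than a raw metric. One way to set this up, sketched here with Boto3 and placeholder names, is to pass the expression through the Metrics parameter of put_metric_alarm; the query with ReturnData set to true is the one the alarm compares against the threshold.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="cn-north-1")  # placeholder Region

cloudwatch.put_metric_alarm(
    AlarmName="my-flink-application-failed-checkpoints",     # placeholder alarm name
    AlarmDescription="Checkpoints are failing",
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws-cn:sns:cn-north-1:111122223333:flink-alarms"],   # placeholder SNS topic ARN
    Metrics=[
        {
            # Raw metric: failed checkpoints since the application started.
            "Id": "failed_checkpoints",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/KinesisAnalytics",
                    "MetricName": "numberOfFailedCheckpoints",
                    "Dimensions": [{"Name": "Application", "Value": "my-flink-application"}],
                },
                "Period": 60,
                "Stat": "Average",
            },
            "ReturnData": False,
        },
        {
            # Metric math: rate of change of the counter; the alarm evaluates this expression.
            "Id": "failed_checkpoints_rate",
            "Expression": "RATE(failed_checkpoints)",
            "Label": "Rate of failed checkpoints",
            "ReturnData": True,
        },
    ],
)
```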