
Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs

You can configure your Aurora PostgreSQL DB cluster to export log data to Amazon CloudWatch Logs on a regular basis. When you do so, events from your Aurora PostgreSQL DB cluster's PostgreSQL log are automatically published to Amazon CloudWatch Logs. In CloudWatch, you can find the exported log data in a log group for your Aurora PostgreSQL DB cluster. The log group contains one or more log streams, one for each DB instance in the cluster, that hold that instance's PostgreSQL log events.

Publishing the logs to CloudWatch Logs allows you to keep your cluster's PostgreSQL log records in highly durable storage. With the log data available in CloudWatch Logs, you can evaluate and improve your cluster's operations. You can also use CloudWatch to create alarms and view metrics. To learn more, see Monitoring log events in Amazon CloudWatch.

Note

Publishing your PostgreSQL logs to CloudWatch Logs consumes storage, and you incur charges for that storage. Be sure to delete any CloudWatch Logs that you no longer need.

Turning the log export option off for an existing Aurora PostgreSQL DB cluster doesn't affect any data that's already held in CloudWatch Logs. Existing logs remain available in CloudWatch Logs based on your log retention settings. To learn more about CloudWatch Logs, see What is Amazon CloudWatch Logs?

Aurora PostgreSQL supports publishing logs to CloudWatch Logs for the following versions.

  • 14.3 and higher 14 versions

  • 13.3 and higher 13 versions

  • 12.8 and higher 12 versions

  • 11.12 and higher 11 versions
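
If you want to verify that the PostgreSQL log can be exported for the engine version you plan to use, you can check the version's exportable log types with the describe-db-engine-versions Amazon CLI command. The following is a minimal sketch (shown in the Linux, macOS, or Unix form); the engine version is only an example.

# List the log types that this engine version can export to CloudWatch Logs
aws rds describe-db-engine-versions \
    --engine aurora-postgresql \
    --engine-version 14.3 \
    --query 'DBEngineVersions[].ExportableLogTypes'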

Turning on the option to publish logs to Amazon CloudWatch

To publish your Aurora PostgreSQL DB cluster's PostgreSQL log to CloudWatch Logs, choose the Log export option for the cluster. You can choose the Log export setting when you create your Aurora PostgreSQL DB cluster, or you can modify the cluster later on. When you modify an existing cluster, the PostgreSQL logs from each of its instances are published to CloudWatch Logs from that point on. For Aurora PostgreSQL, the PostgreSQL log (postgresql.log) is the only log that gets published to Amazon CloudWatch.

You can use the Amazon Web Services Management Console, the Amazon CLI, or the RDS API to turn on the Log export feature for your Aurora PostgreSQL DB cluster.

You choose the Log exports option to start publishing the PostgreSQL logs from your Aurora PostgreSQL DB cluster to CloudWatch Logs.

To turn on the Log export feature from the console
  1. Open the Amazon RDS console at https://console.amazonaws.cn/rds/.

  2. In the navigation pane, choose Databases.

  3. Choose the Aurora PostgreSQL DB cluster whose log data you want to publish to CloudWatch Logs.

  4. Choose Modify.

  5. In the Log exports section, choose PostgreSQL log.

  6. Choose Continue, and then choose Modify cluster on the summary page.

You can turn on the log export option to start publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs with the Amazon CLI. To do so, run the modify-db-cluster Amazon CLI command with the following options:

  • --db-cluster-identifier—The DB cluster identifier.

  • --cloudwatch-logs-export-configuration—The configuration setting for the log types to be set for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora PostgreSQL logs when you run an Amazon CLI command that creates the DB cluster, such as create-db-cluster. Run the command with the following options:

  • --db-cluster-identifier—The DB cluster identifier.

  • --engine—The database engine.

  • --enable-cloudwatch-logs-exports—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other options might be required depending on the Amazon CLI command that you run.

The following command creates an Aurora PostgreSQL DB cluster to publish log files to CloudWatch Logs.

For Linux, macOS, or Unix:

aws rds create-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --engine aurora-postgresql \
    --enable-cloudwatch-logs-exports postgresql

For Windows:

aws rds create-db-cluster ^
    --db-cluster-identifier my-db-cluster ^
    --engine aurora-postgresql ^
    --enable-cloudwatch-logs-exports postgresql

The following command modifies an existing Aurora PostgreSQL DB cluster to publish log files to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is EnableLogTypes, and its value is postgresql.

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql"]}'

For Windows:

aws rds modify-db-cluster ^
    --db-cluster-identifier my-db-cluster ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"postgresql\"]}"
Note

When using the Windows command prompt, make sure to escape double quotation marks (") in JSON code by prefixing them with a backslash (\).

The following example modifies an existing Aurora PostgreSQL DB cluster to disable publishing log files to CloudWatch Logs. The --cloudwatch-logs-export-configuration value is a JSON object. The key for this object is DisableLogTypes, and its value is postgresql.

For Linux, macOS, or Unix:

aws rds modify-db-cluster \
    --db-cluster-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["postgresql"]}'

For Windows:

aws rds modify-db-cluster ^
    --db-cluster-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"DisableLogTypes\":[\"postgresql\"]}"
Note

When using the Windows command prompt, you must escape double quotes (") in JSON code by prefixing them with a backslash (\).
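
To confirm which log types a cluster currently exports, you can check the cluster's EnabledCloudwatchLogsExports attribute. The following is a minimal sketch (Linux, macOS, or Unix form) that assumes the my-db-cluster identifier used in the earlier examples.

# Show the log types that the cluster currently exports
aws rds describe-db-clusters \
    --db-cluster-identifier my-db-cluster \
    --query 'DBClusters[0].EnabledCloudwatchLogsExports'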

You can turn on the log export option to start publishing Aurora PostgreSQL logs with the RDS API. To do so, run the ModifyDBCluster operation with the following options:

  • DBClusterIdentifier – The DB cluster identifier.

  • CloudwatchLogsExportConfiguration – The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora PostgreSQL logs when you run an RDS API operation that creates the DB cluster, such as CreateDBCluster. Run the operation with the following parameters:

  • DBClusterIdentifier—The DB cluster identifier.

  • Engine—The database engine.

  • EnableCloudwatchLogsExports—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other parameters might be required depending on the RDS API operation that you run.

Monitoring log events in Amazon CloudWatch

With Aurora PostgreSQL log events published and available as Amazon CloudWatch Logs, you can view and monitor events using Amazon CloudWatch. For more information about monitoring, see View log data sent to CloudWatch Logs.

When you turn on Log exports, a new log group is automatically created using the prefix /aws/rds/cluster/ with the name of your Aurora PostgreSQL DB cluster and the log type, as in the following pattern.

/aws/rds/cluster/your-cluster-name/postgresql

As an example, suppose that an Aurora PostgreSQL DB cluster named docs-lab-apg-small exports its log to Amazon CloudWatch Logs. Its log group name in Amazon CloudWatch is shown following.

/aws/rds/cluster/docs-lab-apg-small/postgresql

If a log group with the specified name exists, Aurora uses that log group to export log data for the Aurora DB cluster. Each DB instance in the Aurora PostgreSQL DB cluster uploads its PostgreSQL log to the log group as a distinct log stream. You can examine the log group and its log streams using the various graphical and analytical tools available in Amazon CloudWatch.
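
To see the individual log streams, one for each DB instance, you can list them with the describe-log-streams Amazon CLI command. The following is a minimal sketch (Linux, macOS, or Unix form) that reuses the example log group name from the preceding paragraphs.

# List the per-instance log streams in the cluster's log group
aws logs describe-log-streams \
    --log-group-name /aws/rds/cluster/docs-lab-apg-small/postgresql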

For example, you can search for information within the log events from your Aurora PostgreSQL DB cluster, and filter events by using the CloudWatch Logs console, the Amazon CLI, or the CloudWatch Logs API. For more information, see Searching and filtering log data in the Amazon CloudWatch Logs User Guide.
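
As a sketch of filtering from the Amazon CLI (Linux, macOS, or Unix form), the following filter-log-events command returns log events that contain the term ERROR. The log group name and filter pattern are placeholders to replace with your own.

# Return log events from the cluster's log group that contain the term ERROR
aws logs filter-log-events \
    --log-group-name /aws/rds/cluster/your-cluster-name/postgresql \
    --filter-pattern "ERROR"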

By default, new log groups are created using Never expire for their retention period. You can use the CloudWatch Logs console, the Amazon CLI, or the CloudWatch Logs API to change the log retention period. To learn more, see Change log data retention in CloudWatch Logs in the Amazon CloudWatch Logs User Guide.
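
For example, the following put-retention-policy Amazon CLI command (Linux, macOS, or Unix form) sets a 30-day retention period. The log group name and retention value are placeholders to replace with your own.

# Keep log events in this log group for 30 days
aws logs put-retention-policy \
    --log-group-name /aws/rds/cluster/your-cluster-name/postgresql \
    --retention-in-days 30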

Tip

You can use automated configuration, such as Amazon CloudFormation, to create log groups with predefined log retention periods, metric filters, and access permissions.

Analyzing PostgreSQL logs using CloudWatch Logs Insights

With the PostgreSQL logs from your Aurora PostgreSQL DB cluster published as CloudWatch Logs, you can use CloudWatch Logs Insights to interactively search and analyze your log data in Amazon CloudWatch Logs. CloudWatch Logs Insights includes a query language, sample queries, and other tools for analyzing your log data so that you can identify potential issues and verify fixes. To learn more, see Analyzing log data with CloudWatch Logs Insights in the Amazon CloudWatch Logs User Guide.

To analyze PostgreSQL logs with CloudWatch Logs Insights
  1. Open the CloudWatch console at https://console.amazonaws.cn/cloudwatch/.

  2. In the navigation pane, choose Logs, and then choose Logs Insights.

  3. In Select log group(s), select the log group for your Aurora PostgreSQL DB cluster.

  4. In the query editor, delete the query that is currently shown, enter the following, and then choose Run query.

    ##Autovacuum execution time in seconds per 5 minute
    fields @message
    | parse @message "elapsed: * s" as @duration_sec
    | filter @message like / automatic vacuum /
    | display @duration_sec
    | sort @timestamp
    | stats avg(@duration_sec) as avg_duration_sec, max(@duration_sec) as max_duration_sec by bin(5 min)
  5. Choose the Visualization tab.

  6. Choose Add to dashboard.

  7. In Select a dashboard, either select a dashboard or enter a name to create a new dashboard.

  8. In Widget type, choose a widget type for your visualization.

  9. (Optional) Add more widgets based on your log query results.

    1. Choose Add widget.

    2. Choose a widget type, such as Line.

    3. In the Add to this dashboard window, choose Logs.

    4. In Select log group(s), select the log group for your DB cluster.

    5. In the query editor, delete the query that is currently shown, enter the following, and then choose Run query.

      ##Autovacuum tuples statistics per 5 min
      fields @timestamp, @message
      | parse @message "tuples: " as @tuples_temp
      | parse @tuples_temp "* removed," as @tuples_removed
      | parse @tuples_temp "remain, * are dead but not yet removable, " as @tuples_not_removable
      | filter @message like / automatic vacuum /
      | sort @timestamp
      | stats avg(@tuples_removed) as avg_tuples_removed, avg(@tuples_not_removable) as avg_tuples_not_removable by bin(5 min)
    6. Choose Create widget.

      Your dashboard now shows two graphs, one for each autovacuum query.