Oracle Log Miner and MySQL Logs

Feature compatibility: Three star feature compatibility
Amazon SCT / Amazon DMS automation level: N/A
Amazon SCT action code index: N/A
Key differences: MySQL doesn't support LogMiner; a workaround is available.

Oracle Usage

Oracle Log Miner is a tool for querying the database Redo Logs and the Archived Redo Logs using an SQL interface. Using Log Miner, you can analyze the content of database transaction logs (online and archived redo logs) and gain historical insights on past database activity such as data modification by individual DML statements.

Examples

The following examples demonstrate how to use Log Miner to view DML statements that run on the employees table.

Find the current redo log file.

SELECT V$LOG.STATUS, MEMBER
FROM V$LOG, V$LOGFILE
WHERE V$LOG.GROUP# = V$LOGFILE.GROUP#
AND V$LOG.STATUS = 'CURRENT';

STATUS    MEMBER
CURRENT   /u01/app/oracle/oradata/orcl/redo02.log

Use the DBMS_LOGMNR.ADD_LOGFILE procedure. Pass the file path as a parameter to the Log Miner API.

BEGIN
DBMS_LOGMNR.ADD_LOGFILE('/u01/app/oracle/oradata/orcl/redo02.log');
END;
/

PL/SQL procedure successfully completed.

Start Log Miner using the DBMS_LOGMNR.START_LOGMNR procedure.

BEGIN
DBMS_LOGMNR.START_LOGMNR(options=>
dbms_logmnr.dict_from_online_catalog);
END;
/

PL/SQL procedure successfully completed.

Run a DML statement.

UPDATE HR.EMPLOYEES SET SALARY=SALARY+1000 WHERE EMPLOYEE_ID=116;
COMMIT;

Query the V$LOGMNR_CONTENTS view to see the DML commands captured by Log Miner.

SELECT TO_CHAR(TIMESTAMP,'mm/dd/yy hh24:mi:ss') TIMESTAMP,
SEG_NAME, OPERATION, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE TABLE_NAME = 'EMPLOYEES'
AND OPERATION = 'UPDATE';

TIMESTAMP           SEG_NAME   OPERATION
10/09/17 06:43:44   EMPLOYEES  UPDATE

SQL_REDO                                         SQL_UNDO
update "HR"."EMPLOYEES" set                      update "HR"."EMPLOYEES" set
"SALARY" = '3900' where "SALARY" = '2900'        "SALARY" = '2900' where "SALARY" = '3900'
and ROWID = 'AAAViUAAEAAABVvAAQ';                and ROWID = 'AAAViUAAEAAABVvAAQ';

For more information, see Using LogMiner to Analyze Redo Log Files in the Oracle documentation.

MySQL Usage

The mysqlbinlog utility is the MySQL equivalent of Oracle Log Miner. Because Log Miner can be used to search for many types of information, this topic covers all of the MySQL logs that are available so you can decide which log is best for your use case.

Aurora MySQL generates four logs that can be viewed by database administrators:

  • Error log — Contains information about errors and server start and stop events.

  • General query log — Contains a general record of MySQL operations such as connect, disconnect, queries, and so on.

  • Slow query log — Contains a log of slow SQL statements.

  • Binary log (binlog) — When enabled, contains a record of data changes at the statement level, row level, or both, depending on the binary log format.

The MySQL error log is generated by default. You can generate the slow query and general logs by setting parameters in the database parameter group. Amazon RDS rotates all MySQL log files.

You can monitor the MySQL logs directly through the Amazon RDS console, Amazon RDS API, Amazon CLI, or Amazon SDKs. You can also access MySQL logs by directing the logs to a database table in the main database and then querying that table. You can use the mysqlbinlog utility to download a binary log.
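
For example, when the logs are directed to database tables (log_output set to TABLE in the DB parameter group), you can query them with standard SQL. The following is a minimal sketch, assuming the general log is enabled; the columns are from the standard mysql.general_log table definition:

-- Assumes general_log=1 and log_output=TABLE in the DB parameter group.
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 20;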

Downloading MySQL Binlog Files

The binary log (binlog) in MySQL is used for replication. MySQL uses it to replicate changes from the master MySQL server to slave servers. You can read these logs using the mysqlbinlog utility.

The mysqlbinlog utility is equivalent to Oracle Log Miner and enables users to read the server’s binary log (similar to the Oracle redo log). The server’s binary log consists of files that describe modifications to database contents (events).

Although these logs do not contain as much information as Oracle Log Miner provides, they can supply the data needed for some use cases.

To download and read the binary log, first check whether the binlog is enabled by running the following command:

SHOW BINARY LOGS;
Note

If the binlog isn’t enabled, this command returns an error. If it is enabled, the command displays the list of binlog files.

After retrieving the list of binlog files, you can select a file to download by using the following command:

mysqlbinlog \
  --read-from-remote-server \
  --host=mysql-cluster1.cluster-crqdlsqqnpry.us-east-1.rds.amazonaws.com \
  --port=3306 \
  --user naya \
  --password \
  mysql-bin-changelog.0098

For more information, see MySQL database log files in the Amazon Relational Database Service User Guide.

The following is an example of the binlog output:

use `aws`/*!*/;
SET TIMESTAMP=1551125550/*!*/;
SET @@session.pseudo_thread_id=12/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=2097152/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
last_committed=1 sequence_number=2 rbr_only=no original_committed_timestamp=0 immediate_commit_timestamp=0 transaction_length=0
# original_commit_timestamp=0 (1969-12-31 19:00:00.000000 Eastern Standard Time)
# immediate_commit_timestamp=0 (1969-12-31 19:00:00.000000 Eastern Standard Time)
/*!80001 SET @@session.original_commit_timestamp=0*//*!*/;
/*!80014 SET @@session.original_server_version=0*//*!*/;
/*!80014 SET @@session.immediate_server_version=0*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 434
#190225 15:12:50 server id 565151648 end_log_pos 513 CRC32 0x1188c639 Query
thread_id=12 exec_time=0 error_code=0
SET TIMESTAMP=1551125570/*!*/;
BEGIN
/*!*/;
# at 513
#190225 15:12:50 server id 565151648 end_log_pos 669 CRC32 0x051c3800 Query
thread_id=12 exec_time=0 error_code=0
SET TIMESTAMP=1551125570/*!*/;
/* ApplicationName=mysql */ insert into test values (1),(1),(1)
/*!*/;
# at 669
#190225 15:12:50 server id 565151648 end_log_pos 700 CRC32 0x72697ff4 Xid = 5467
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file

For more information, see mysqlbinlog — Utility for Processing Binary Log Files in the MySQL documentation.

Accessing MySQL Error Logs

The MySQL error log is written to the mysql-error.log file. You can view mysql-error.log by using the Amazon RDS console or by retrieving the log using the Amazon RDS API, Amazon RDS CLI, or Amazon SDKs. mysql-error.log is flushed every 5 minutes and its contents are appended to mysql-error-running.log. The mysql-error-running.log file is then rotated every hour. The hourly files generated during the last 24 hours are retained. Each log file has the hour it was generated (in UTC) appended to its name. The log files also have a timestamp that helps you determine when the log entries were written.
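
If you need to confirm where the instance writes its error log, you can check the log_error system variable from a SQL session. This is a minimal sketch; the exact path returned depends on the instance configuration:

-- Returns the error log file path configured for this instance.
SELECT @@GLOBAL.log_error;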

MySQL writes to the error log only on startup, shutdown, and when it encounters errors. A database instance can go hours or days without new entries being written to the error log. If you see no recent entries, it’s because the server did not encounter an error that would result in a log entry.

Accessing the MySQL Slow Query and General Logs

The MySQL slow query log and the general log can be written to a file or a database table by setting parameters in the database parameter group. You must set these parameters before you can view the slow query log or general log in the Amazon RDS console, Amazon RDS API, Amazon RDS CLI, or Amazon SDKs.

You can control MySQL logging by using the following parameters:

  • slow_query_log — To create the slow query log, set to 1. The default is 0.

  • general_log — To create the general log, set to 1. The default is 0.

  • long_query_time — To prevent fast-running queries from being logged in the slow query log, specify a value for the shortest query run time in seconds to be logged. The default is 10 seconds; the minimum is 0. If log_output = FILE, you can specify a floating point value with a resolution of microseconds. If log_output = TABLE, make sure that you specify an integer value with a resolution of seconds. Only queries where the execution time exceeds the long_query_time value are logged. For example, setting long_query_time to 0.1 prevents a query that runs for less than 100 milliseconds from being logged.

  • log_queries_not_using_indexes — To log all queries that do not use an index to the slow query log, set to 1. The default is 0. Queries that do not use an index are logged even if their execution time is less than the value of the long_query_time parameter.

  • log_output — You can specify one of the following options for the log_output parameter.

    • TABLE — Write general queries to the mysql.general_log table, and write slow queries to the mysql.slow_log table. This is the default option.

    • FILE — Write both general and slow query logs to the file system. Log files are rotated hourly.

    • NONE — Turn off logging.
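
When log_output is set to TABLE, you can also query the slow query log directly from the mysql.slow_log table. The following is a minimal sketch, assuming slow_query_log is enabled; the columns are from the standard mysql.slow_log table definition:

-- Assumes slow_query_log=1 and log_output=TABLE in the DB parameter group.
SELECT start_time, user_host, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;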

You can configure a MySQL instance to publish log data to a log group in Amazon CloudWatch Logs. With CloudWatch Logs, you can perform real-time analysis of the log data, create alarms, and view metrics. You can use CloudWatch Logs to store your log records in highly durable storage. For more information, see MySQL Database Log Files in the Amazon Relational Database Service User Guide.

Amazon RDS normally purges a binary log as soon as possible, but the binary log must still be available on the instance to be accessed by mysqlbinlog. To specify the number of hours for RDS to retain binary logs, use the mysql.rds_set_configuration stored procedure and specify a period with enough time for you to download the logs. After you set the retention period, monitor storage usage for the database instance to ensure the retained binary logs don’t consume too much storage.

Examples

Determine the output location of the logs and whether slow query and general logging are turned on.

select @@GLOBAL.log_output, @@GLOBAL.slow_query_log, @@GLOBAL.general_log;

To view the logs using the Amazon Web Services Management Console:

  1. Sign in to your Amazon console and choose RDS.

  2. Choose your DB instance and scroll down to the Logs section.

  3. Choose a log to inspect or download.

The following example configures the binary log retention period (in hours). In this example, the binary logs are retained for one day.

call mysql.rds_set_configuration('binlog retention hours', 24);
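
To confirm the current retention setting, you can call the mysql.rds_show_configuration stored procedure, which lists the values set with mysql.rds_set_configuration:

-- Lists configuration values set with mysql.rds_set_configuration,
-- including 'binlog retention hours'.
CALL mysql.rds_show_configuration;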

For more information, see The Binary Log in the MySQL documentation and MySQL database log files in the Amazon Relational Database Service User Guide.